The Irrationality Game

post by Will_Newsome · 2010-10-03T02:43:35.917Z · LW · GW · Legacy · 932 comments

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
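
For those who want the heuristic written out, here is a minimal sketch of one way to code it up (the cutoffs are made-up illustrations; the actual instruction above is just to eyeball it):

```python
# A rough, optional illustration of the voting rule above.
# The 0.05 and 0.01 cutoffs are arbitrary assumptions, not part of the game.
def vote(their_probability: float, your_probability: float) -> str:
    gap = abs(their_probability - your_probability)
    if gap >= 0.05:
        return "upvote"    # clear disagreement, e.g. 99.9% vs 90%
    if gap <= 0.01:
        return "downvote"  # basic agreement; 99.9% vs 99.5% could go either way
    return "pass"          # genuinely unsure whether you basically agree
```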

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

932 comments

Comments sorted by top scores.

comment by PlaidX · 2010-10-03T05:09:52.186Z · LW(p) · GW(p)

Flying saucers are real. They are likely not nuts-and-bolts spacecraft, but they are actual physical things, the product of a superior science, and under the control of unknown entities. (95%)

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Replies from: Will_Newsome, AngryParsley, Will_Newsome, CronoDAS, Yvain, Will_Newsome, wedrifid, Jonathan_Graehl, CronoDAS
comment by Will_Newsome · 2010-10-06T15:39:53.752Z · LW(p) · GW(p)

Now that there's a top comments list, could you maybe edit your comment and add a note to the effect that this was part of The Irrationality Game? No offense, but newcomers who click on Top Comments and see yours as the record holder could make some very premature judgments about the local sanity waterline.

Replies from: wedrifid
comment by wedrifid · 2010-10-06T16:11:13.506Z · LW(p) · GW(p)

Given that most of the top comments are meta in one way or another it would seem that the 'top comments' list belongs somewhere other than on the front page. Can't we hide the link to it on the wiki somewhere?

Replies from: LukeStebbing
comment by Luke Stebbing (LukeStebbing) · 2010-10-10T02:08:17.936Z · LW(p) · GW(p)

The majority of the top comments are quite good, and it'd be a shame to lose a prominent link to them.

Jack's open thread test, RobinZ's polling karma balancer, Yvain's subreddit poll, and all top-level comments from The Irrationality Game are the only comments that don't seem to belong, but these are all examples of using the karma system for polling (should not contribute to karma and should not be ranked among normal comments) or, uh, para-karma (should contribute to karma but should not be ranked among normal comments).

comment by AngryParsley · 2010-10-03T06:16:25.380Z · LW(p) · GW(p)

Just to clarify: by "unknown entities" do you mean non-human intelligent beings?

Replies from: PlaidX
comment by PlaidX · 2010-10-03T07:09:50.886Z · LW(p) · GW(p)

Yes.

comment by Will_Newsome · 2011-12-27T22:05:22.302Z · LW(p) · GW(p)

I would like to announce that I have updated significantly in favor of this after examining the evidence and thinking somewhat carefully for a while (an important hint is "not nuts-and-bolts"). Props to PlaidX for being quicker than me.

comment by CronoDAS · 2010-10-08T19:44:10.594Z · LW(p) · GW(p)

I find it vaguely embarrassing that this post, taken out of context, now appears at the top of the "Top Comments" listing.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-08T19:46:16.419Z · LW(p) · GW(p)

I think "top comments" was an experiment with a negative result, and so should be removed.

comment by Scott Alexander (Yvain) · 2010-10-03T10:28:59.238Z · LW(p) · GW(p)

I upvoted you because 95% is way high, but I agree with you that it's non-negligible. There's way too much weirdness in some of the cases to be easily explainable by mass hysteria or hoaxes or any of that stuff - and I'm glad you pointed out Fatima, because that was the one that got me thinking, too.

That having been said, I don't know what they are. Best guess is easter eggs in the program that's simulating the universe.

Replies from: Will_Newsome, PlaidX
comment by Will_Newsome · 2010-10-03T21:27:54.562Z · LW(p) · GW(p)

Prior before having learned of Fatima, roughly? Best guess at current probability?

comment by PlaidX · 2010-10-03T20:39:57.090Z · LW(p) · GW(p)

I don't think that's a very good guess, but it's as good as any I've seen. I tried to phrase my belief statement to include things like this within its umbrella.

comment by Will_Newsome · 2010-10-03T05:32:15.360Z · LW(p) · GW(p)

Voted up, and you've made me really curious. Link or explanation?

Replies from: PlaidX
comment by PlaidX · 2010-10-03T06:16:20.087Z · LW(p) · GW(p)

This is what spurred me to give consideration to the idea initially, but what makes me confident is sifting through simply mountains of reports. To get an idea of the volume and typical content, here's a catalog of vehicle interference cases in Australia from 1958 to 2004. Most could be explained by a patchwork of mistakes and coincidences, some require more elaborate, "insanity or hoax" explanations, and if there are multiple witnesses, insanity falls away too. But there is no pattern that separates witnesses into a "hoax" and a "mistake" group, or even that separates them from the general population.

Replies from: erratio, Will_Newsome
comment by erratio · 2010-10-03T06:41:03.603Z · LW(p) · GW(p)

If there are multiple witnesses who can see each other's reactions, it's a good candidate for mass hysteria.

comment by Will_Newsome · 2010-10-03T06:30:46.565Z · LW(p) · GW(p)

I couldn't really understand the blog post: his theory is that there are terrestrial but nonhuman entities that like to impress the religious? But the vehicle interference cases you reference are generally not religious in nature, and vary widely in the actual form of the craft seen (some are red and blue, some are series of lights). What possible motivations for the entities could there be? Most agents with such advanced technology will aim to efficiently optimize for their preferences. If this is what optimizing for their preferences looks like, they have some very improbably odd preferences.

Replies from: Yvain, PlaidX, Will_Newsome
comment by Scott Alexander (Yvain) · 2010-10-03T10:38:53.142Z · LW(p) · GW(p)

To be fair to the aliens, the actions of Westerners probably seem equally weird to Sentinel Islanders. Coming every couple of years in giant ships or helicopters to watch them from afar, and then occasionally sneaking into abandoned houses and leaving gifts?

Replies from: JohannesDahlstrom
comment by JohannesDahlstrom · 2010-10-03T20:20:17.295Z · LW(p) · GW(p)

That was a fascinating article. Thank you.

comment by PlaidX · 2010-10-03T06:58:54.161Z · LW(p) · GW(p)

I agree with you entirely, and this is a great source of puzzlement to me, and to basically every serious investigator. They hide in the shadows with flashing lights. What could they want from us that they couldn't do for themselves, and if they wanted to influence us without detection, shouldn't it be within their power to do it COMPLETELY without detection?

I have no answers to these questions.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-03T15:45:31.664Z · LW(p) · GW(p)

That's assuming that what's going on is that entities based on the same lawful universe as us are running circles around humans. If what's going on is instead something like a weird universe, where reality makes sense most of the time, but not always, I imagine you might get something that looks a lot like some of the reported weirdness. Transient entities that don't make sense leaking through the seams, never quite leaving the causal trail which would incontrovertibly point to their existence.

comment by Will_Newsome · 2011-12-27T22:09:55.738Z · LW(p) · GW(p)

If I'd asked the above questions honestly rather than semi-rhetorically, I might have figured a few things out a lot sooner than I did. I might be being uncharitable to myself, especially as I did eventually ask them honestly, but the point still stands, I think.

comment by wedrifid · 2010-10-05T05:30:23.253Z · LW(p) · GW(p)

64 points! This is the highest voted comment that I can remember seeing. (A few posts have gone higher). Can anyone remember another, higher voted example?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-10-05T07:45:52.905Z · LW(p) · GW(p)

But the rules are different in this thread. 64 here means that 64 more voters disagree than agree.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-05T08:54:48.120Z · LW(p) · GW(p)

Tell that to the out-of-context list of all LW comments sorted by rating!

Replies from: wedrifid
comment by wedrifid · 2010-10-05T10:02:55.511Z · LW(p) · GW(p)

Hang on, we have one of those?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-06T06:54:18.154Z · LW(p) · GW(p)

-

comment by Jonathan_Graehl · 2010-10-03T07:07:21.671Z · LW(p) · GW(p)

I'd like to know what your prior is for the disjunction "unknown entities control saucers that ambiguously reveal themselves to a minority of people on Earth, for some purpose". While I'm sure you've looked more closely at the evidence than I have, I presume your prior for that disjunction must be much higher than mine to even look closely.

Replies from: PlaidX
comment by PlaidX · 2010-10-03T07:22:44.895Z · LW(p) · GW(p)

It certainly wasn't high... I went through most of my life never giving the idea a thought, stumbled onto the Miracle of Fatima one day, and said "well, clearly this wasn't a flying saucer, but what the heck was it?"

But the rabbit hole just kept going down. It is not a particularly pleasant feeling to me, as someone who used to think he had a fairly solid grip on the workings of the world.

Replies from: Perplexed, Will_Newsome
comment by Perplexed · 2010-10-03T18:35:59.376Z · LW(p) · GW(p)

stumbled onto the Miracle of Fatima one day, and said "well, clearly this wasn't a flying saucer, but what the heck was it?"

The sun, seen through moving clouds. Just exactly what it is described as being.

Replies from: PlaidX
comment by PlaidX · 2010-10-03T20:22:23.769Z · LW(p) · GW(p)

Here is one of many detailed accounts; this one is from Dr. José Maria de Almeida Garrett, a professor at the Faculty of Sciences of Coimbra, Portugal:

I was looking at the place of the apparitions, in a serene, if cold, expectation of something happening, and with diminishing curiosity, because a long time had passed without anything to excite my attention. Then I heard a shout from thousands of voices and saw the multitude suddenly turn its back and shoulders away from the point toward which up to now it had directed its attention, and turn to look at the sky on the opposite side.

It must have been nearly two o'clock by the legal time, and about midday by the sun. The sun, a few moments before, had broken through the thick layer of clouds which hid it, and shone clearly and intensely. I veered to the magnet which seemed to be drawing all eyes, and saw it as a disc with a clean-cut rim, luminous and shining, but which did not hurt the eyes. I do not agree with the comparison which I have heard made in Fatima---that of a dull silver disc. It was a clearer, richer, brighter colour, having something of the luster of a pearl. It did not in the least resemble the moon on a clear night because one saw it and felt it to be a living body. It was not spheric like the moon, nor did it have the same colour, tone, or shading. It looked like a glazed wheel made of mother-of-pearl. It could not be confused, either, with the sun seen through fog (for there was no fog at the time), because it was not opaque, diffused or veiled. In Fatima it gave light and heat and appeared clear-cut with a well-defined rim.

The sky was mottled with light cirrus clouds with the blue coming through here and there, but sometimes the sun stood out in patches of clear sky. The clouds passed from west to east and did not obscure the light of the sun, giving the impression of passing behind it, though sometimes these flecks of white took on tones of pink or diaphanous blue as they passed before the sun.

It was a remarkable fact that one could fix one's eyes on this brazier of heat and light without any pain in the eyes or blinding of the retina. The phenomenon, except for two interruptions when the sun seemed to send out rays of refulgent heat which obliged us to look away, must have lasted about ten minutes.

The sun's disc did not remain immobile. This was not the sparkling of a heavenly body, for it spun round on itself in a mad whirl. Then, suddenly, one heard a clamour, a cry of anguish breaking from all the people. The sun, whirling wildly, seemed to loosen itself from the firmament and advance threateningly upon the earth as if to crush us with its huge and fiery weight. The sensation during those moments was terrible.

During the solar phenomenon, which I have just described in detail, there were changes of colour in the atmosphere. Looking at the sun, I noticed that everything around was becoming darkened. I looked first at the nearest objects and then extended my glance further afield as far as the horizon. I saw everything an amethyst colour. Objects around me, the sky and the atmosphere, were of the same colour. An oak tree nearby threw a shadow of this colour on the ground.

Fearing that I was suffering from an affection of the retina, an improbable explanation because in that case one could not see things purple-colored, I turned away and shut my eyes, keeping my hands before them to intercept the light. With my back still turned, I opened my eyes and saw that the landscape was the same purple colour as before.

The impression was not that of an eclipse, and while looking at the sun I noticed that the atmosphere had cleared. Soon after I heard a peasant who was near me shout out in tones of astonishment: "Look, that lady is all yellow!"

And in fact everything, both near and far, had changed, taking on the colour of old yellow damask. People looked as if they were suffering from jaundice, and I recall a sensation of amusement at seeing them look so ugly and unattractive. My own hand was the same colour. All the phenomena which I have described were observed by me in a calm and serene state of mind, and without any emotional disturbance. It is for others to interpret and explain them.

comment by Will_Newsome · 2010-10-03T07:35:23.422Z · LW(p) · GW(p)

Do you think you could guess numerically what your prior probability was before learning of the Miracle of Fatima?

Replies from: PlaidX, Eugine_Nier
comment by PlaidX · 2010-10-03T08:01:11.636Z · LW(p) · GW(p)

Mmm, < .01%, it wasn't something I would've dignified with enough thought to give a number. Even as a kid, although I liked the idea of aliens, stereotypical flying saucer little green men stuff struck me as facile and absurd. A failure of the imagination as to how alien aliens would really be.

In hindsight I had not considered that their outward appearance and behavior could simply be a front, but even then my estimate would've been very low, and justifiably, I think.

comment by Eugine_Nier · 2010-10-03T21:31:17.611Z · LW(p) · GW(p)

Probably ~15% (learning about Fatima didn't change it much by the way). Basically because I can't think of a good reason why this should have an extremely low prior.

comment by CronoDAS · 2010-10-08T20:13:04.526Z · LW(p) · GW(p)

And do you believe in Santa Claus, too? :P

comment by Raemon · 2010-10-05T15:46:12.809Z · LW(p) · GW(p)

Google is deliberately taking over the internet (and by extension, the world) for the express purpose of making sure the Singularity happens under their control and is friendly. 75%

Replies from: jimrandomh
comment by jimrandomh · 2010-10-05T17:39:44.182Z · LW(p) · GW(p)

I wish. Google is the single most likely source of unfriendly AIs anywhere, and as far as I know they haven't done any research into friendliness.

Replies from: ata, magfrump
comment by ata · 2010-10-05T20:16:34.427Z · LW(p) · GW(p)

Agreed. I think they've explicitly denied that they're working on AGI, but I'm not too reassured. They could be doing it in secret, probably without much consideration of Friendliness, and even if not, they're probably among the entities most likely (along with, I'd say, DARPA and MIT) to stumble upon seed AI mostly by accident (which is pretty unlikely, but not completely negligible, I think).

Replies from: sketerpot
comment by sketerpot · 2010-10-06T01:38:42.836Z · LW(p) · GW(p)

If Google were to work on AGI in secret, I'm pretty sure that somebody in power there would want to make sure it was friendly. Peter Norvig, for example, talks about AI friendliness in the third edition of AI: A Modern Approach, and he has a link to the SIAI on his home page.

Personally, I doubt that they're working on AGI yet. They're getting a lot of mileage out of statistical approaches and clever tricks; AGI research would be a lot of work for very uncertain benefit.

Replies from: Kevin
comment by Kevin · 2010-10-08T09:32:58.059Z · LW(p) · GW(p)

Google has one employee working (sometimes) on AGI.

http://research.google.com/pubs/author37920.html

Replies from: khafra
comment by khafra · 2010-10-08T16:42:33.062Z · LW(p) · GW(p)

It's comforting, friendliness-wise, that one of his papers cites "personal communication with Steve Rayhawk."

comment by magfrump · 2010-10-06T23:30:14.421Z · LW(p) · GW(p)

If they've explicitly denied doing research into AGI, they would have no reason to talk about friendliness research; that isn't additional evidence. I do think the OP is extremely overconfident though.

Replies from: Raemon
comment by Raemon · 2010-10-07T15:14:11.711Z · LW(p) · GW(p)

I confess that I probably exaggerated the certainty. It's more like 55-60%.

I actually used to have a (mostly joking) theory about how Google would accidentally create a sentient internet that would have control over everything and send a robot army to destroy us. Someone gave me a book called "How to Survive a Robot Uprising", which described the series of events that would lead to a Terminator-like robot apocalypse, and Google was basically following it like a checklist.

Then I came here and learned more about nanotechnology and the singularity and the joke became a lot less funny. (The techniques described in the Robot Uprising are remarkably useless when you have about a day between noticing something is wrong and the whole world turning into paperclips.) It seems to me that with the number of extremely smart people in Google, there's gotta be at least some who are pondering this issue and thinking about it seriously. The actual evidence of Google being a genuinely idealistic company that just wants information to be free and to provide a good internet experience vs them having SOME kind of secret agenda seems about 50/50 to me - there's no way I can think of to tell the difference until they actually DO something with their massively accumulated power.

Given that I have no control over it, basically I just feel more comfortable believing they are doing something that a) uses their power in a way I can perceive as good or at least good-intentioned, which might actually help, and b) lines up with their particular set of capabilities and interests.

I'd also note that the type of Singularity I'm imagining isn't necessarily AI per se. More of the internet and humanity (or parts of it) merging into a superintelligent consciousness, gradually outsourcing certain brain functions to the increasingly massive processing power of computers.

Replies from: magfrump, NancyLebovitz
comment by magfrump · 2010-10-07T16:48:55.553Z · LW(p) · GW(p)

I do think it's possible and not unlikely that Google is purposefully trying to steer the future in a positive direction; although I think people there are likely to be more skeptical of "singularity" rhetoric than LWers (I know at least three people who have worked at Google and I have skirted the subject sufficiently to feel pretty strongly that they don't have a hidden agenda. This isn't very strong evidence but it's the only evidence I have).

I would assign up to a 30% probability or so of "Google is planning something which might be described as preparing to implement a positive singularity." But less than a 5% chance that I would describe it that way, due to more detailed definitions of "singularity" and "positive."

comment by NancyLebovitz · 2010-10-07T16:15:36.377Z · LW(p) · GW(p)

I don't entirely trust Google because they want everyone else's information to be available. Google is somewhat secretive about its own information. There are good commercial reasons for them to do that, but it does show a lack of consistency.

comment by JamesAndrix · 2010-10-03T21:45:50.380Z · LW(p) · GW(p)

Panpsychism: All matter has some kind of experience. Atoms have some kind of atomic qualia that add up to the things we experience. This seems obviously right to me, but stuff like this is confusing so I'll say 75%.

Please note that this comment has been upvoted because the members of lesswrong widely DISAGREE with it. See here for details.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-03T22:42:40.540Z · LW(p) · GW(p)

Can you rephrase this statement, tabooing the words "experience" and "qualia"?

Replies from: orthonormal
comment by orthonormal · 2010-10-04T02:48:45.057Z · LW(p) · GW(p)

If he could, he wouldn't be making that mistake in the first place.

comment by Will_Newsome · 2010-10-03T03:01:34.752Z · LW(p) · GW(p)

This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.

We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.

(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)

Replies from: Will_Newsome, LucasSloan, Nick_Tarleton, Mass_Driver, AlephNeil, timtyler, army1987, None, Liron, Perplexed, Kaj_Sotala
comment by Will_Newsome · 2010-10-03T06:11:06.448Z · LW(p) · GW(p)

I am shocked that more people believe in a 95% chance of advanced flying saucers than a 99.5% chance of not being in 'basement reality'. Really?! I still think all of you upvoters are irrational! Irrational I say!

Replies from: kodos96, LucasSloan
comment by kodos96 · 2010-10-07T18:34:18.570Z · LW(p) · GW(p)

Well, from a certain point of view you could see the two propositions as being essentially equivalent... i.e. the inhabitants of a higher-layer reality poking through the layers and toying with us (if you had a universe simulation running on your desktop, would you really be able to refrain from fucking with your sims' heads?). So whatever probability you assign to one proposition, your probability for the other shouldn't be too much different.

comment by LucasSloan · 2010-10-03T08:03:57.028Z · LW(p) · GW(p)

I certainly agree with you now, but it wasn't entirely certain what you meant by your statement. A qualifier might help.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T08:07:50.443Z · LW(p) · GW(p)

Most won't see the need for precision, but you're right, I should add a qualifier for those who'd (justifiably) like it.

Replies from: Perplexed
comment by Perplexed · 2010-10-04T00:31:56.637Z · LW(p) · GW(p)

Help! There is someone reasoning in terms of decision theoretic significantness ruining my fun by telling me that my disagreement with you is meaningless.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T01:13:57.676Z · LW(p) · GW(p)

Ahhh! Ahhhhh! I am extremely reluctant to go into long explanations here. Have you read the TDT manual though? I think it's up at the singinst.org website now, finally. It might dissolve confusions of interpretation, but no promises. Sorry, it's just a really tricky and confusing topic with lots of different intuitions to take into account and I really couldn't do it justice in a few paragraphs here. :(

comment by LucasSloan · 2010-10-03T07:09:48.530Z · LW(p) · GW(p)

What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation" in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation" in which case I'm not sure what it means to assign a probability to the statement.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T07:31:36.531Z · LW(p) · GW(p)

What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation" in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation" in which case I'm not sure what it means to assign a probability to the statement.

So you know which I must have meant, then. I do try to be almost certainly right. ;)

(Technically, we shouldn't really be thinking about probabilities here either because it's not important and may be meaningless decision theoretically, but I think LW is generally too irrational to have reached the level of sophistication such that many would pick that nit.)

comment by Nick_Tarleton · 2010-10-05T08:16:38.685Z · LW(p) · GW(p)

99.5%

I'm surprised to hear you say this. Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

Replies from: Will_Newsome, wedrifid
comment by Will_Newsome · 2010-10-05T22:48:02.787Z · LW(p) · GW(p)

Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

That is a good question. I feel like asking 'in what direction would structural uncertainty likely bend my thoughts?' leads me to think, from past trends, 'towards the world being bigger, weirder, and more complex than I'd reckoned'. This seems to push higher than 99.5%. If you keep piling on structural uncertainty, like if a lot of things I've learned since becoming a rationalist and hanging out at SIAI become unlearned, then this trend might be changed to a more scientific trend of 'towards the world being bigger, less weird, and simpler than I'd reckoned'. This would push towards lower than 99.5%.

What are your thoughts? I realize that probabilities aren't meaningful here, but they're worth naively talking about, I think. Before you consider what you can do decision theoretically you have to think about how much of you is in the hands of someone else, and what their goals might be, and whether or not you can go meta by appeasing those goals instead of your own and the like. (This is getting vaguely crazy, but I don't think that the craziness has warped my thinking too much.) Thus thinking about 'how much measure do I actually affect with these actions' is worth considering.

comment by wedrifid · 2010-10-05T10:07:31.873Z · LW(p) · GW(p)

Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

That's a good question. My impression is that it is somewhat. But in the figures we are giving here we seem to be trying to convey two distinct concepts (not just likelihoods).

comment by Mass_Driver · 2010-10-03T05:14:07.685Z · LW(p) · GW(p)

Propositions about the ultimate nature of reality should never be assigned probability greater than 90% by organic humans, because we don't have any meaningful capabilities for experimentation or testing.

Replies from: Will_Newsome, Jonathan_Graehl
comment by Will_Newsome · 2010-10-03T05:16:01.137Z · LW(p) · GW(p)

Pah! Real Bayesians don't need experiment or testing; Bayes transcends the epistemological realm of mere Science. We have way more than enough data to make very strong guesses.

Replies from: None
comment by [deleted] · 2010-10-03T05:26:03.665Z · LW(p) · GW(p)

This raises an interesting point: what do you think about the Presumptuous Philosopher thought experiment?

comment by Jonathan_Graehl · 2010-10-03T07:38:25.173Z · LW(p) · GW(p)

Yep. Over-reliance on anthropic arguments IMO.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T08:15:21.539Z · LW(p) · GW(p)

Huh, querying my reasons for thinking 99.5% is reasonable, few are related to anthropics. Most of it is antiprediction about the various implications of a big universe, as well as the antiprediction that we live in such a big universe.

(ETA: edited out 'if any', I do indeed have a few arguments from anthropics, but not in the sense of typical anthropic reasoning, and none that can be easily shared or explained. I know that sounds bad. Oh well.)

comment by AlephNeil · 2010-10-07T07:35:24.683Z · LW(p) · GW(p)

If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything. Even assuming it does, Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect.

On the other hand, if 'living in a simulation' is restricted to those scenarios where there is a two-way interaction between beings 'inside' and 'outside' the simulation then surely everything we know about science - the uniformity and universality of physical laws - suggests that this is false. At least, it wouldn't merit 99.5% confidence. (The counterarguments are essentially the same as those against the existence of a God who intervenes.)

Replies from: Will_Newsome, wedrifid
comment by Will_Newsome · 2010-10-07T10:28:34.205Z · LW(p) · GW(p)

If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything.

It's a nontrivial philosophical question whether 'means anything' means anything here. I would think 'means anything' should mean 'has decision theoretic significance'. In which case knowing that you're in a simulation could mean a lot.

First off, even if the simulators don't intervene, we still intervene on the simulators just by virtue of our existence. Decision theoretically it's still fair game, unless our utility function is bounded in a really contrived and inelegant way.

(Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).)

[S]urely everything we know about science - the uniformity and universality of physical laws - suggests that this is false.

What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?

Replies from: AlephNeil
comment by AlephNeil · 2010-10-07T13:11:09.651Z · LW(p) · GW(p)

It's a nontrivial philosophical question whether 'means anything' means anything here.

Oh sure - non-trivial philosophical questions are funny like that.

Anyway, my idea is that for any description of a universe, certain elements of that description will be ad hoc mathematical 'scaffolding' which could easily be changed without meaningfully altering the 'underlying reality'. A basic example of this would be a choice of co-ordinates in Newtonian physics. It doesn't mean anything to say that this body rather than that one is "at rest".

Now, specifying a manner in which the universe is being simulated is like 'choosing co-ordinates' in that, to do a simulation, you need to make a bunch of arbitrary ad hoc choices about how to represent things numerically (you might actually need to be able to say "this body is at rest"). Of course, you also need to specify the laws of physics of the 'outside universe' and how the simulation is being implemented and so on, but perhaps the difference between this and a simple 'choice of co-ordinates' is a difference in degree rather than in kind. (An 'opaque' chunk of physics wrapped in a 'transparent' mathematical skin of varying thickness.)

I'm not saying this account is unproblematic - just that these are some pretty tough metaphysical questions, and I see no grounds for (near-)certainty about their correct resolution.

(Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).)

He's not talking about ensemble vs 'single universe' models of reality, he's talking about reference - what's it's possible for someone to refer to. He may be wrong - I'm not sure - but even when he's wrong he's usually wrong in an interesting way. (Like this.)

What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?

I'm unmoved - it's trite to point out that even smart people tend to be overconfident in beliefs that they've (in some way) invested in. (And please note that the line you were responding to is specifically about the scenario where there is 'intervention'.)

comment by wedrifid · 2010-10-07T08:04:40.964Z · LW(p) · GW(p)

Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect.

Err... I'm not intimately acquainted with the sport myself... What's the approximate difficulty rating of that kind of verbal gymnastics stunt again? ;)

Replies from: AlephNeil
comment by AlephNeil · 2010-10-07T08:43:26.086Z · LW(p) · GW(p)

It's a tricky one - read the paper. I think what he's saying is that there's no way for a person in a simulation (assuming there is no intervention) to refer to the 'outside' world in which the simulation is taking place. Here's a crude analogy: Suppose you were a two-dimensional being living on a flat plane, embedded in an ambient 3D space. Then Putnam would want to say that you cannot possibly refer to "up" and "down". Even if you said "there is a sphere above me" and there was a sphere above you, you would be 'incorrect' (in the same paradoxical way).

Replies from: MugaSofer
comment by MugaSofer · 2012-09-17T14:00:30.078Z · LW(p) · GW(p)

But ... we can describe spaces with more than three dimensions.

comment by timtyler · 2010-10-03T18:58:46.007Z · LW(p) · GW(p)

So: you think there's a god who created the universe?!?

Care to lay out the evidence? Or is this not the place for that?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T21:14:02.570Z · LW(p) · GW(p)

Care to lay out the evidence? Or is this not the place for that?

I really couldn't; it's such a large burden of proof to justify 99.5% certainty that I would have to be extremely careful in laying out all of my disjunctions and explaining all of my intuitions and listing every smart rationalist who agreed with me, and that's just not something I can do in a blog comment.

comment by A1987dM (army1987) · 2012-09-17T18:50:15.062Z · LW(p) · GW(p)

Upvoted mainly because of the last sentence (though upvoting it does coincide with what I'd have to do according to the rules of the game).

comment by [deleted] · 2010-10-06T06:39:47.832Z · LW(p) · GW(p)

For those brave souls who reason in terms of measure

I'm confused about the justification for reasoning in terms of measure. While the MUH (or at least its cousin the CUH) seems to be preferred from complexity considerations, I'm unsure of how to account for the fact that it is unknown whether the cosmological measure problem is solvable.

Also, what exactly do you consider making up "your measure"? Just isomorphic computations?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-06T06:53:04.000Z · LW(p) · GW(p)

Also, what exactly do you consider making up "your measure"? Just isomorphic computations?

Naively, probabilistically isomorphic computations, where the important parts of the isomorphism are whatever my utility function values... such that, on a scale from 0 to 1, computations like Luke Grecki might be .9 'me' based on qualia valued by my utility function, or 1.3 'me' if Luke Grecki qualia are more like the qualia my utility function would like to have if I knew more, thought faster, and was better at meditation.

Replies from: None
comment by [deleted] · 2010-10-06T07:08:54.582Z · LW(p) · GW(p)

Ah, you just answered the easier part!

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-06T07:20:04.892Z · LW(p) · GW(p)

Yeah... I ain't a mathematician! If 'measure' turns out not to be the correct mathematical concept, then I think that something like it, some kind of 'reality fluid' as Eliezer calls it, will take its place.

comment by Liron · 2010-10-03T20:20:29.342Z · LW(p) · GW(p)

99.5% is just too certain. Even if you think piles of realities nested 100 deep are typical, you might only assign 99% to not being in the basement.

comment by Perplexed · 2010-10-03T18:48:44.609Z · LW(p) · GW(p)

a non-negligible fraction of my measure is in a simulation.

How is that different than "I believe that I am a simulation with non-negligible probability"?

I'm leaving you upvoted. I think the probability is negligible however you play with the ontology.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T20:52:35.268Z · LW(p) · GW(p)

How is that different than "I believe that I am a simulation with non-negligible probability"?

If the same computation is being run in so-called 'basement reality' and run on a simulator's computer, you're in both places; it's meaningless to talk about the probability of being in one or the other. But you can talk about the relative number of computations of you that are in 'basement reality' versus on simulators' computers.

This also breaks down when you start reasoning decision theoretically, but most LW people don't do that, so I'm not too worried about it.

In a dovetailed ensemble universe, it doesn't even really make sense to talk about any 'basement' reality, since the UTM computing the ensemble eventually computes itself, ad infinitum. So instead you start reasoning about 'basement' as computations that are the product of e.g. cosmological/natural selection-type optimization processes versus the product of agent-type optimization processes (like humans or AGIs).

The only reason you'd expect there to be humans in the first place is if they appeared in 'basement' level reality, and in a universal dovetailer computing via complexity, there's then a strong burden of proof on those who wish to postulate the extra complexity of all those non-basement agent-optimized Earths. Nonetheless I feel like I can bear the burden of proof quite well if I throw a few other disjunctions in. (As stated, it's meaningless decision theoretically, but meaningful if we're just talking about the structure of the ensemble from a naive human perspective.)

Replies from: Perplexed, None
comment by Perplexed · 2010-10-03T23:11:08.162Z · LW(p) · GW(p)

If the same computation is being run in so-called 'basement reality' and run on a simulator's computer, you're in both places; it's meaningless to talk about the probability of being in one or the other.

Why meaningless? It seems I can talk about one copy of me being here, now, and one copy of myself being off in the future in a simulation. Perhaps I do not know which one I am, but I don't think I am saying something meaningless to assert that I (this copy of me that you hear speaking) am the one in basement reality, and hence that no one in any reality knows in advance that I am about to close this sentence with a hash mark#

I'm not asking you to bear the burden of proving that non-basement versions are numerous. I'm asking you to justify your claim that when I use the word "I" in this universe, it is meaningless to say that I'm not talking about the fellow saying "I" in a simulation and that he is not talking (in part) about me. Surely "I" can be interpreted to mean the local instance.

Replies from: LucasSloan
comment by LucasSloan · 2010-10-03T23:35:10.441Z · LW(p) · GW(p)

Both copies will do exactly the same thing, right down to their thoughts, right? So to them, what does it matter which one they are? It isn't just that given that they have no way to test, this means they'll never know, it's more fundamental than that. It's kinda like how if there's an invisible, immaterial dragon in your garage, there might as well not be a dragon there at all, right? If there's no way, even in principle, to tell the difference between the two states, there might as well not be any difference at all.

Replies from: Perplexed
comment by Perplexed · 2010-10-03T23:53:47.844Z · LW(p) · GW(p)

I must be missing a subtlety here. I began by asking "Is saying X different from saying Y?" I seem to be getting the answer "Yes, they are different. X is meaningless because it can't be distinguished from Y."

Replies from: LucasSloan
comment by LucasSloan · 2010-10-03T23:59:51.847Z · LW(p) · GW(p)

Ah, I think I see your problem. You insist on seeing the universe from the perspective of the computer running the program - and in this case, we can say "yes, in memory position #31415926 there's a human in basement reality and in memory position #2718281828 there's an identical human in a deeper simulation". However, those humans can't tell that. They have no way of determining which is true of them, even if they know that there is a computer that could point to them in its memory, because they are identical. You are every (sufficiently) identical copy of yourself.

Replies from: Perplexed, Will_Newsome
comment by Perplexed · 2010-10-04T00:27:23.574Z · LW(p) · GW(p)

No, you don't see the problem. The problem is that Will_Newsome began by stating:

We are living in a simulation... Almost certain. >99.5%.

Which is fine. But now I am being told that my counter claim "I am not living in a simulation" is meaningless. Meaningless because I can't prove my statement empirically.

What we seem to have here is very similar to Godel's version of St. Anselm's "ontological" proof of the existence of a simulation (i.e. God).

Replies from: LucasSloan
comment by LucasSloan · 2010-10-04T00:37:03.415Z · LW(p) · GW(p)

Oh. Did you see my comment asking him to tell whether he meant "some of our measure is in a simulation" or "this particular me is in a simulation"? The first question is asking whether or not we believe that the computer exists (ie, if we were looking at the computer-that-runs-reality could we notice that some copies of us are in simulations or not) and the second is the one I have been arguing is meaningless (kinda).

comment by Will_Newsome · 2010-10-04T00:18:47.139Z · LW(p) · GW(p)

Right; I thought the intuitive gap here was only about ensemble universes, but it also seems that there's an intuitive gap that needs to be filled with UDT-like reasoning, where all of your decisions are also decisions for agents sufficiently like you in the relevant sense (which differs for every decision).

comment by [deleted] · 2010-10-06T23:25:25.191Z · LW(p) · GW(p)

In a dovetailed ensemble universe, it doesn't even really make sense to talk about any 'basement' reality, since the UTM computing the ensemble eventually computes itself, ad infinitum.

I don't get this. Consider the following ordering of programs; T' < T iff T can simulate T'. More precisely:

T' < T iff for each x' there exists an x such that T'(x') = T(x)

It's not immediately clear to me that this ordering shouldn't have any least elements. If it did, such elements could be thought of as basements. I don't have any idea about whether or not we could be part of such a basement computation.
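
Spelled out a little more formally (a sketch under my reading of the quantifiers; nothing here goes beyond restating the definition above):

```latex
% The proposed "can simulate" relation, restated:
\[
  T' \preceq T \;\iff\; \forall x'\, \exists x :\; T'(x') = T(x).
\]
% Taking x = x' gives reflexivity, and chaining the two existentials gives
% transitivity, so \preceq is a preorder. A least element T_0 would satisfy
% T_0 \preceq T for every program T, i.e. every program can reproduce each of
% T_0's outputs on some input; those are the candidate "basements".
```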

I still think your distinction between products of cosmological-type optimization processes and agent-type optimization processes is important though.

comment by Kaj_Sotala · 2010-10-03T17:29:15.905Z · LW(p) · GW(p)

My stance on the simulation hypothesis:

Presume that there is an infinite amount of "stuff" in the universe. This can be a Tegmarkian Level IV universe (all possible mathematical structures exist), or alternatively there might only be an infinite amount of matter in this universe. The main assumption we need is that there is an infinite amount of "stuff", enough that anything in the world gets duplicated an infinite number of times. (Alternatively, it could be finite but insanely huge.)

Now this means that there are an infinite number of Earths like ours. It also means that there is an infinite number of planets that are running different simulations. An infinite number of those simulations will, by coincidence or purpose, happen to be simulating the exact same Earth as ours.

This means that there exist an infinite number of Earths like ours that are in a simulation, and an infinite number of Earths like ours that are not in a simulation. Thus it becomes meaningless to ask whether or not we exist in a simulation. We exist in every possible world containing us that is a simulation, and exist in every possible world containing us that is not a simulation.

(I'm not sure if I should upvote or downvote you.)

Replies from: Eugine_Nier, Will_Newsome
comment by Eugine_Nier · 2010-10-03T18:00:55.407Z · LW(p) · GW(p)

This means that there exist an infinite number of Earths like ours that are in a simulation, and an infinite number of Earths like ours that are not in a simulation. Thus it becomes meaningless to ask whether or not we exist in a simulation. We exist in every possible world containing us that is a simulation, and exist in every possible world containing us that is not a simulation.

Just because a set is infinite doesn't mean it's meaningless to speak of measures on it.
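
A textbook example makes this concrete (standard measure theory, nothing specific to the simulation question):

```latex
% A probability measure on the countably infinite set of natural numbers:
\[
  \mu(\{n\}) = 2^{-(n+1)}, \qquad \sum_{n=0}^{\infty} 2^{-(n+1)} = 1.
\]
% Relative weights of infinite subsets are then perfectly well defined, e.g.
% \mu(\{\text{even } n\}) = 2/3 and \mu(\{\text{odd } n\}) = 1/3.
```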

Replies from: Perplexed
comment by Perplexed · 2010-10-04T00:38:03.324Z · LW(p) · GW(p)

Just because a set is infinite doesn't mean it's meaningless to speak of measures on it.

The infinite cardinality of the set doesn't preclude the bulk of the measure being attached to a single point of that set. For Solomonoff-like reasons, it certainly makes sense to me to attach the bulk of the measure to the "basement reality".
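
As a toy illustration of why a complexity prior concentrates the measure at the bottom (the per-level description cost here is a made-up assumption, not anyone's actual model):

```python
# Toy model: the hypothesis "we are at simulation depth k" is penalized by a
# Solomonoff-style prior, with a hypothetical 10 extra bits of description
# per level of nesting.
bits_per_level = 10
max_depth = 50

weights = [2.0 ** (-k * bits_per_level) for k in range(max_depth + 1)]
total = sum(weights)
posterior = [w / total for w in weights]

print(f"weight on depth 0 ('basement'): {posterior[0]:.6f}")   # ~0.999
print(f"weight on depth >= 1:           {1 - posterior[0]:.6f}")
```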

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T22:06:06.616Z · LW(p) · GW(p)

(FWIW I endorse this line of reasoning, and still think 99.5% is reasonable. Bwa ha ha.)

(That is, I also think it makes sense to attach the bulk of the measure to basement reality, but sense happens to be wrong here, and insanity happens to be right. The universe is weird. I continue to frustratingly refuse to provide arguments for this, though.)

(Also, though I, and I think most others, agree that measure should be assigned via some kind of complexity prior (universal or speed priors are commonly suggested), others like Tegmark are drawn towards a uniform prior. I forget why.)

Replies from: Perplexed
comment by Perplexed · 2010-10-04T23:21:31.643Z · LW(p) · GW(p)

... others like Tegmark are drawn towards a uniform prior.

I wouldn't have thought that a uniform prior would even make sense unless the underlying space has a metric (a bounded metric, in fact). Certainly, a Haar measure on a recursively nested space (simulations within simulations) would have to assign the bulk of its measure to the basement. Well, live and learn.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T23:55:08.150Z · LW(p) · GW(p)

I wouldn't have thought that a uniform prior would even make sense unless the underlying space has a metric (a bounded metric, in fact).

Yeah, I also don't understand Tegmark's reasoning (which might have changed anyway).

comment by Will_Newsome · 2010-10-03T20:40:34.210Z · LW(p) · GW(p)

Right, I agree with Eugine Nier: the relative measures are important. You are in tons of universes at once, but some portion of your measure is simulated, and some not. What's the portion?

comment by jferguson · 2010-10-28T05:00:36.135Z · LW(p) · GW(p)

The surface of Earth is actually a relatively flat disc accelerating through space "upward" at a rate of 9.8 m/s^2, not a globe. The north pole is at about the center of the disc, while Antarctica is the "pizza crust" on the outside. The rest of the universe is moving and accelerating such that all the observations seen today by amateur astronomers are produced. The true nature of the sun, moon, stars, other planets, etc. is not yet well-understood by science. A conspiracy involving NASA and other space agencies, all astronauts, and probably at least some professional astronomers is a necessary element. I'm pretty confident this isn't true, much more due to the conspiracy element than the astronomy element, but I don't immediately dismiss it where I imagine most LW-ers would, so let's say 1%.

The Flat Earth Society has more on this, if you're interested. It would probably benefit from a typical, interested LW participant. (This belief isn't the FES orthodoxy, but it's heavily based on a spate of discussion I had on the FES forums several years ago.)

Edit: On reflection, 1% is too high. Instead, let's say "Just the barest inkling more plausible than something immediately and rigorously disprovable with household items and a free rainy afternoon."

Replies from: Tuna-Fish, Jack
comment by Tuna-Fish · 2010-11-03T13:20:43.642Z · LW(p) · GW(p)

Discussing the probability of wacky conspiracies is absolutely the wrong way to disprove this. The correct method is a telescope, a quite wide sign with a distance scale drawn on it in very visible colours, and the closest 200m+ body of water you can find.

As long as you are close enough to the ground, the curvature of the Earth is very visible, even over surprisingly small distances. I did this as a child.
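
For a sense of scale, here is a rough back-of-the-envelope sketch of the geometry behind this test (it assumes the eye is essentially at water level and ignores atmospheric refraction, which in practice reduces the apparent drop a bit):

```python
# Approximate drop of a distant marker below the line of sight of an observer
# whose eye is essentially at water level: drop ~ d**2 / (2 * R).
EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def drop_below_sightline_m(distance_m: float) -> float:
    return distance_m ** 2 / (2 * EARTH_RADIUS_M)

for d_m in (200, 1_000, 2_000, 5_000, 10_000):
    print(f"{d_m:>6} m: ~{drop_below_sightline_m(d_m):.3f} m of the target hidden")
```

At a couple of kilometres the effect is already tens of centimetres, which is why a marked sign and a telescope are enough.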

comment by Jack · 2010-10-31T09:13:17.538Z · LW(p) · GW(p)

Even with the 1% credence this strikes me as the most wrong belief in this thread, way more off than 95% for UFOs. You're basically giving up science since Copernicus, picking an arbitrary spot in the remaining probability space and positing a massive and unmotivated conspiracy. Like many, I'm uncomfortable making precise predictions at very high and very low levels of confidence but I think you are overconfident by many orders of magnitude.

Upvoted.

Replies from: jferguson
comment by jferguson · 2010-10-31T15:52:17.829Z · LW(p) · GW(p)

The motive would be NASA's budget being mostly funneled into someone's pockets rather than materials and labor for actual spaceflight. I tried to take into account that people underestimate the power of orders of magnitude (~1 in 100 possible universes would have this be true--like a single person wearing a blue shirt in a decent-size crowd of red-shirted people).

Replies from: Jack
comment by Jack · 2010-10-31T21:47:32.498Z · LW(p) · GW(p)

One in a billion strikes me as too high. Rank ordering is easier for me. I'd put your hypothesis above the existence of the Biblical God but beneath the conjunction of "the 9/11 attack was a plot organized by elements of the US government", "the Loch Ness monster is a living plesiosaur", and "homeopathy works".

Replies from: Psy-Kosh, jferguson
comment by Psy-Kosh · 2010-12-12T14:57:19.244Z · LW(p) · GW(p)

Huh. My initial thought would be to simply put it at about the same order of improbability as "homeopathy is real" rather than far below.

A quick surface consideration would seem to imply both requiring the same sort of "stuff we thought we knew about the world is wrong, in a way that we'd strongly expect to make it look very different than it does, so in addition to that, it would need a whole lot of other tweaks to make it still look mostly the way it does look to us now".

(At least that's my initial instinctive thought. Didn't make the effort to try to actually compute specific probabilities yet.)

Replies from: Jack, Jack
comment by Jack · 2010-12-13T07:49:34.075Z · LW(p) · GW(p)

Like homeopathy, it is a belief that well-confirmed scientific theories are wrong. But more so than homeopathy, it specifies a particular scenario within that probability space (the earth is an accelerating disk) and a scenario for why the information we have is wrong (the conspiracy). I also think the disk-earth scenario requires more fundamental and better-confirmed theories to be wrong than homeopathy does. It calls into question gravitation, Newtonian physics, thermodynamics and geometry.

I may be overconfident regarding homeopathy, though. The disk-earth scenario might seem more improbable because it is bigger and would do more to shatter my conception of my place in the universe than memory water would. Would we have to topple all of science to acknowledge homeopathy? That's my sense of what we would have to do for the disk-earth thing.

Replies from: Psy-Kosh, David_Gerard, Jack
comment by Psy-Kosh · 2010-12-14T05:57:03.100Z · LW(p) · GW(p)

I was thinking homeopathy would essentially throw out much of what we think we know about chemistry. For the world to still look like it does even with the whole "you can dilute something to the point that there's hardly a molecule of the substance in question, but it can impose its energy signature onto the water molecules", etc, well... for that sort of thing to have a biological effect as far as being able to treat stuff, but not having any effect like throwing everything else about chemistry and bio out of whack would seem to be quite a stretch. Not to mention that, underneath all that, would probably require physics to work rather differently than the physics we know. And in noticeable ways rather than zillionth decimal place ways.

Possibly you're right, and it would be less of a stretch than flat-earth, but it doesn't seem that way to me. Specifying the additional detail of a NASA conspiracy as the reason the flat earth stays hidden may add enough complexity to drive it below homeopathy. But overall, I'd think of both as requiring improbabilities of a similar order of magnitude.

Replies from: Jack
comment by Jack · 2010-12-15T15:45:03.485Z · LW(p) · GW(p)

But can't homeopathy be represented as positing an additional chemical law -- the presence of some spiritual energy signature which water can carry? I'm not exactly familiar with homeopathy, but it seems like you could come up with a really kludgey theory that lets it work without actually having to get rid of theories of chemical bonding, valence electrons and so on. It doesn't seem as easy to do that with the disk-earth scenario.

Replies from: Desrtopa, Psy-Kosh
comment by Desrtopa · 2010-12-15T16:40:03.838Z · LW(p) · GW(p)

It's worse than that. Water having a memory, spiritual or otherwise, of things it used to carry would be downright simple compared to what homeopathy posits. Considering everything all the water on Earth has been through, you'd expect it to be full of memories of all sorts of stuff, not just the last homeopathic remedy you put in it. What homeopathy requires is that water has a memory of things that it has held, which has to be primed by a specific procedure, namely thumping the container of water against a leather pad stuffed with horse hair while the solute is still in it so the water will remember it. The process is called "succussion", and the inventor of homeopathy thought that it made his remedies stronger. Later advocates, though, realized the implications of the "water has a memory" hypothesis, and so rationalized it as necessary.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-12-15T18:53:37.239Z · LW(p) · GW(p)

Wow. I hadn't even heard of the very specific leather pad thing. (I'd heard it has to be shaken in specific ways, but not that.)

How is it that no matter how stupid I think it is, I keep hearing things that make homeopathy even more stupid than I previously thought?

comment by Psy-Kosh · 2010-12-15T18:46:05.538Z · LW(p) · GW(p)

What Desrtopa said.

But essentially, whatever kludge you come up with would still have to have biochemical consequences or it wouldn't be able to work at all. (Or you make the kludge super extra complex, which, again, crushes the probability.) And once you have those effects, you need an excuse for why they don't show up elsewhere in chemistry, why we don't see such things otherwise.

comment by David_Gerard · 2010-12-15T17:07:26.024Z · LW(p) · GW(p)

Would we have to topple all of science to acknowledge homeopathy?

Large chunks of it. You'd need to overturn pretty much all of chemistry and molecular biology, and I think physics would be severely affected too.

The reasons for homeopathy retaining popularity are in the realm of psychology.

comment by Jack · 2010-12-15T16:13:22.353Z · LW(p) · GW(p)

Quoting myself:

It calls into question... geometry.

Am I right about this? -- that we'd need a kind of radial geometry in order to explain, say, the distance around the Tropic of Cancer being approximately equal to the distance around the Tropic of Capricorn. Or, more blatantly, the similar distances around the Arctic and Antarctic circles. You'd have a center point, and circles around that point would get their circumference in proportion to their radius. Then, once the radius exceeded half the pole-to-pole distance, the circumference would have to get proportionally smaller, until the radius reached the full pole-to-pole distance and the circle collapsed to a point. On this Earth, airplanes near the periphery of the disk trying to get to the exact opposite side would first fly to the edge of the disk and then, in an instant, travel 40,000 kilometers around the rim. Momentarily, of course, the plane would be 40,000 kilometers long. Once on the opposite side, the plane would continue on to its destination.
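To put rough numbers on the Arctic/Antarctic point, here is a minimal sketch (assuming a spherical Earth of radius 6371 km versus the usual flat-disk layout with the north pole at the center and along-the-ground distances from the pole preserved; the function names are just for illustration):

```python
import math

R = 6371.0  # Earth's mean radius in km (spherical approximation)

def sphere_circumference(lat_deg):
    """Circumference of the circle of latitude lat_deg on a sphere."""
    return 2 * math.pi * R * math.cos(math.radians(lat_deg))

def disk_circumference(lat_deg):
    """The same circle on a flat disk centered on the north pole, where the
    radius on the disk is just the along-the-ground distance from the pole."""
    distance_from_north_pole = R * math.radians(90 - lat_deg)
    return 2 * math.pi * distance_from_north_pole

for lat in (66.5, -66.5):  # Arctic and Antarctic circles
    print(f"lat {lat:+.1f}: sphere {sphere_circumference(lat):9.0f} km, "
          f"disk {disk_circumference(lat):9.0f} km")
```

On the sphere both circles come out around 16,000 km; on the disk the Antarctic circle would have to be roughly 6.7 times longer than the Arctic one, which is exactly the kind of discrepancy measurements would expose.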

Replies from: komponisto, jferguson
comment by komponisto · 2010-12-15T18:29:57.024Z · LW(p) · GW(p)

If you accept measurements, it seems to me there's no way to save the flat-earth hypothesis except by supposing that our understanding of mathematics is wrong -- which seems rather less likely than measurements being wrong.

The most likely way that flat-earth could be true is that all the information we've been told about measurements (including, for example, the photos of the spherical earth) is a lie.

(Since you were fond of the Knox case discussion, I'll note that I have a similar view of the situation there: the most likely way that Knox and Sollecito could be guilty is that there is mundane but important information that has somehow never made it to the internet. In both cases, the most vulnerable beliefs underpinning the high-confidence conclusion are beliefs about the transmission of information among humans.)

comment by jferguson · 2010-12-16T01:29:19.419Z · LW(p) · GW(p)

The traditional response to this on the FES (Flat Earth Society) website is that airplanes aren't actually flying from one side of the disk to the other. They might go around the periphery to some extent, but outside the disk is probably either a lot of nothing or a very, very large, cold field of ice. So a trip from the Cape of Good Hope to Cape Horn would take much, much longer than a roughly spherical Earth would predict.

That's why I assign such a low probability to this--that, and the motion of the stars in the Northern and Southern hemispheres working exactly the way they would if the Earth were approximately spherical. If this disk Earth were the case, the stars in the Southern hemisphere would be rotating in the same direction as the stars in the Northern hemisphere, just with a wider radius of rotation, and there would be no axis that the stars rotate about near the south pole; and though I haven't personally observed this effect, I'm pretty confident that astronomers would have noticed this. (This whole objection got explained away by different "star clouds" in different hemispheres.)

Well, that and the conspiracy.

My initial probability given was probably too low.

comment by Jack · 2010-12-13T05:51:40.562Z · LW(p) · GW(p)

Well, I think jferguson's idea is more unlikely than just your everyday "stuff we thought we knew about the world is wrong, in a way that we'd strongly expect to make it look very different than it does, so in addition to that, it would need a whole lot of other tweaks to make it still look mostly the way it does look to us now". I may be overconfident regarding homeopathy, but my sense is that the idea is underspecified enough that it could be true without rendering false as many fundamental and important scientific theories as this variety of flat-earth does. If jferguson's idea were right we wouldn't have gravity as we know it. Somehow the Earth is accelerating; no force is specified in the comment, so I don't know if we're getting rid of Newton/thermodynamics or if there is a giant rocket on the dark side of the Earth. I don't even understand how basic things like the length of Southern Hemisphere plane flights would be explained. Every time I think about it for two seconds I think of more things that don't make sense about it.

So yes, 'stuff we know about the earth is wrong', but enough stuff that I'd say homeopathy is more probable. But it isn't just 'stuff is wrong'. If physics is wrong in the way the comment implies, lots of things could be true of the world: the earth could be an octagon with a mysterious force pushing down on us, whatever. But the comment picks out one specific option in all that probability space. When you say precisely what the 'clever tweaks' are, the possibility that you are right gets much smaller. This is especially the case when those clever tweaks involve a massive and, despite his second comment, basically unmotivated conspiracy.

comment by jferguson · 2010-11-01T00:04:58.709Z · LW(p) · GW(p)

I agree, at least with the first and last of the more-likely examples. 1% is probably too high.

How about "Just the barest inkling above not immediately dismissed" instead of a specific number.

comment by nick012000 · 2010-10-11T15:32:07.475Z · LW(p) · GW(p)

If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the use of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time-travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)

Replies from: RobinZ, Nick_Tarleton, Normal_Anomaly
comment by RobinZ · 2010-10-11T16:57:20.791Z · LW(p) · GW(p)

What reason do you have for assigning such high probability to time travel being possible?

Replies from: Perplexed, nick012000, rabidchicken
comment by Perplexed · 2010-10-11T23:18:28.586Z · LW(p) · GW(p)

And what reason do you have for assigning a high probability to an unfriendly AI coming into existence with Eliezer not involved in its creation?

;)

Edit: I meant what reason do you (nick012000) have? Not you (RobinZ). Sorry for the confusion.

Replies from: RobinZ
comment by RobinZ · 2010-10-11T23:28:27.411Z · LW(p) · GW(p)

I have not assigned a high probability to that outcome, but I would not find it surprising if someone else has assigned a probability as high as 95% - my set of data is small. On the other hand, time travel at all is such a flagrant violation of known physics that it seems positively ludicrous that it should be assigned a similarly high probability.

Edit: Of course, evidence for that 95%+ would be appreciated.

comment by nick012000 · 2010-10-12T05:32:24.380Z · LW(p) · GW(p)

Well, most of the arguments against it, to my knowledge, start with something along the lines of "If time travel exists, causality would be fucked up, and therefore time travel can't exist," though it might not be framed in quite those terms.

Also, if FTL travel exists, then either general relativity is wrong or time travel exists. It might be possible to create FTL travel by harnessing the Casimir effect, or something akin to it, on a larger scale; and if it is possible to do so, a recursively improving AI will figure out how to do so.

Replies from: RobinZ
comment by RobinZ · 2010-10-12T12:18:33.840Z · LW(p) · GW(p)

That ... doesn't seem quite like a reason to believe. Remember: as a general rule, any random hypothesis you consider is likely to be wrong unless you already have evidence for it. All you have to do is look at the gallery of failed atomic models to see how difficult it is to even invent the correct answer, however simple it appears in retrospect.

comment by rabidchicken · 2010-10-11T21:45:01.635Z · LW(p) · GW(p)

nick voted up, robin voted down... This feels pretty weird.

comment by Nick_Tarleton · 2010-10-11T22:43:18.958Z · LW(p) · GW(p)

If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?

comment by Normal_Anomaly · 2010-12-14T02:47:10.647Z · LW(p) · GW(p)

My P(this|time travel possible) is much higher than my P(this), but P(this) is still very low. Why wouldn't the UFAI have sent the assassins back to before he started spreading bad-for-the-UFAI memes (or just after, so it would know whom to kill)?

comment by nick012000 · 2010-10-11T15:08:48.847Z · LW(p) · GW(p)

God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)

Replies from: Swimmy, RobinZ
comment by Swimmy · 2010-10-16T20:04:12.016Z · LW(p) · GW(p)

You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.

Replies from: wedrifid
comment by wedrifid · 2010-10-16T20:24:56.956Z · LW(p) · GW(p)

Wait... was the grandparent serious? He's talking about the flaming swords of the angels being laser turrets! That's got to be tongue in cheek!

Replies from: RobinZ
comment by RobinZ · 2010-10-21T22:31:51.043Z · LW(p) · GW(p)

It is possible that nick012000 is violating Rule 4 - but his past posting history contains material which I found consistent with him being serious here. It would behoove him to confirm or deny this.

comment by RobinZ · 2010-10-11T16:55:14.246Z · LW(p) · GW(p)

I see in your posting history that you identify as a Christian - but this story contains more details than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-11T17:38:07.283Z · LW(p) · GW(p)

We should learn to present this argument correctly, since the complexity of a hypothesis doesn't imply its improbability. Furthermore, the prior argument drives probability through the floor, making 99% no more surprising than 1%, and is thus an incorrect argument if you wouldn't use it for 1% as well (would you?).

Replies from: RobinZ
comment by RobinZ · 2010-10-11T18:01:41.641Z · LW(p) · GW(p)

I don't feel like arguing about priors - good evidence will overwhelm ordinary priors in many circumstances - but in a story like the one he told, each of the following needs to be demonstrated:

  1. God exists.
  2. God created the universe.
  3. God prefers not to violate natural laws.
  4. The stories about people seeing angels are based on real events.
  5. The angels seen during these events were actually just robots.
  6. The angels seen during these events were wielding laser turrets.

Claims 4-6 are historical, and at best it is difficult to establish 99% confidence in that field for anything prior to - I think - the twentieth century. I don't even think people have 99% confidence in the current best-guess location of the podium where the Gettysburg Address was delivered. Even spotting him 1-3, the claim is overconfident, and that was what I meant when I gave my response.

But yes - I'm not good at arguing.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-11T18:07:06.077Z · LW(p) · GW(p)

I addressed the "burdensome detail" argument you invoked, not other possible arguments.

Replies from: RobinZ, RobinZ
comment by RobinZ · 2010-10-11T22:24:03.897Z · LW(p) · GW(p)

Edit: 99.8% assumes independence, which is certainly violated in the proposed case.

Here's the thing: in order for nick012000's stated confidence to be justified, every one of these six points must be justified to a level over 99% - and the geometric average must be over 99.8%. The difference between 99% and 99.8% may not be huge in the grand scheme of things, but for historical events it's far from negligible.
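A quick sketch of the arithmetic behind that 99.8% figure, under the independence assumption flagged in the edit above (the variable names are just for illustration):

```python
# If six claims were independent and their conjunction had to reach 99%,
# the geometric mean of the individual confidences would have to exceed
# 0.99 ** (1/6) ~= 0.9983, i.e. about 99.8% per claim.
target_conjunction = 0.99
n_claims = 6
required_geometric_mean = target_conjunction ** (1 / n_claims)
print(f"required geometric mean per claim: {required_geometric_mean:.4f}")
```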

comment by RobinZ · 2010-10-11T18:14:13.088Z · LW(p) · GW(p)

Is my elaboration of the "burdensome detail" argument faulty? How would you advise I revise it?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-11T18:18:13.864Z · LW(p) · GW(p)

Is my elaboration of the "burdensome detail" argument faulty?

"Burdensome detail" is wholly about priors, and you started the elaboration with "I don't feel like arguing about priors", and going on about updating on evidence. Thus, I don't see how you made any elaboration of burdensome detail argument, you've described a different argument instead.

Replies from: RobinZ
comment by RobinZ · 2010-10-11T18:47:02.953Z · LW(p) · GW(p)

I think I might see what you mean.

I don't want to argue about the priors for 1-3 specifically. Such arguments generally devolve into unproductive bickering about the assignment of the burden of proof. However, priors for arguments about specific historical events, such as the location of the podium from which the speeches were delivered at Gettysburg, are known to be of ordinarily small magnitude, and most evidence (e.g. written accounts) is of known weak strength in particular predictable ways*. In fact, I mentioned Gettysburg specifically because the best-guess location changed relatively recently due to new analysis of the written evidence and terrain. Purely in terms of my own curiosity, therefore, I anticipate more interesting discussion on 4-6 than 1-3, as I expect nick012000's evidence for the latter to be wrong in familiar ways.

* cf. Imaginary Positions - rounding to the nearest cliché is a standard failure mode.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-11T18:54:05.521Z · LW(p) · GW(p)

Such arguments generally devolve into unproductive bickering about the assignment of the burden of proof.

Russell's teapot seems quite settled, and most religions go the same way for similar reasons. This argument is quite strong. Anyway, this is what I referred to; I don't want to discuss evidence about religion.

Replies from: RobinZ
comment by RobinZ · 2010-10-11T20:47:08.896Z · LW(p) · GW(p)

I do wish to discuss evidence about religion - at least, I do today. I hope nick will oblige.

comment by dyokomizo · 2010-10-03T13:44:46.140Z · LW(p) · GW(p)

There's no way to create a non-vague, predictive model of human behavior, because most human behavior is (mostly) random reaction to stimuli.

Corollary 1: most models explain after the fact and require both the subject to be aware of the model's predictions and the predictions to be vague and underspecified enough to make astrology seem like spacecraft engineering.

Corollary 2: we'll spend most of our time in drama trying to understand the real reasons or the truth about our own or others' behavior, even when presented with evidence pointing to the randomness of our actions. After the fact we'll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.

Replies from: orthonormal, None, Perplexed
comment by orthonormal · 2010-10-04T02:43:14.915Z · LW(p) · GW(p)

This (modulo the chance it was made up) is pretty strong evidence that you're wrong. I wish it was professionally ethical for psychologists to do this kind of thing intentionally.

Replies from: Blueberry, AdeleneDawner
comment by Blueberry · 2011-01-22T02:49:56.579Z · LW(p) · GW(p)

Here's another case:

"Let me get this straight. We had sex. I wind up in the hospital and I can't remember anything?" Alice said. There was a slight pause. "You owe me a 30-carat diamond!" Alice quipped, laughing. Within minutes, she repeated the same questions in order, delivering the punch line in the exact tone and inflection. It was always a 30-carat diamond. "It was like a script or a tape," Scott said. "On the one hand, it was very funny. We were hysterical. It was scary as all hell." While doctors tried to determine what ailed Alice, Scott and other grim-faced relatives and friends gathered at the hospital. Surrounded by anxious loved ones, Alice blithely cracked jokes (the same ones) for hours.

comment by AdeleneDawner · 2010-10-04T02:48:49.881Z · LW(p) · GW(p)

I wish it was professionally ethical for psychologists to do this kind of thing intentionally.

They could probably do some relevant research by talking to Alzheimer's patients - they wouldn't get anything as clear as that, I think, but I expect they'd be able to get statistically-significant data.

comment by [deleted] · 2010-10-03T19:34:17.385Z · LW(p) · GW(p)

How detailed a model are you thinking of? It seems like there are at least easy and somewhat trivial predictions we could make, e.g. that a human will eat chocolate instead of motor oil.

Replies from: dyokomizo
comment by dyokomizo · 2010-10-03T19:47:20.205Z · LW(p) · GW(p)

I would classify such kinds of predictions as vague; after all, they match equally well for every human being in almost any condition.

Replies from: AdeleneDawner, Douglas_Knight
comment by AdeleneDawner · 2010-10-03T22:53:50.342Z · LW(p) · GW(p)

How about a prediction that a particular human will eat bacon instead of jalapeno peppers? (I'm particularly thinking of myself, for whom that's true, and a vegetarian friend, for whom the opposite is true.)

Replies from: dyokomizo
comment by dyokomizo · 2010-10-04T00:46:01.690Z · LW(p) · GW(p)

This model seems to be reducible to "people will eat what they prefer".

A good model should reduce the number of bits needed to describe a behavior; if the model requires keeping a log (e.g. of what particular humans prefer to eat) in order to predict something, it's not much less complex (in terms of bit encoding) than the behavior itself.

Replies from: AdeleneDawner, newerspeak
comment by AdeleneDawner · 2010-10-04T01:12:00.518Z · LW(p) · GW(p)

Maybe I've misunderstood.

It seems to me that your original prediction has to refer either to humans as a group, in which case Luke's counterexample is a good one, or humans as individuals, in which case my counterexample is a good one.

It also seems to me that either counterexample can be refined into a useful prediction: Humans in general don't eat petroleum products. I don't eat spicy food. Corvi doesn't eat meat. All of those classes of things can be described more efficiently than making lists of the members of the sets.

comment by newerspeak · 2010-10-05T18:38:17.286Z · LW(p) · GW(p)

"people eat what they prefer".

No, because preferences are revealed by behavior. Using revealed preferences is a good heuristic generally, but it's required if you're right that explanations for behavior are mostly post-hoc rationalizations.

So:

People eat what they prefer. What they prefer is what they wind up having eaten. Ergo, people eat what they eat.

Replies from: Strange7
comment by Strange7 · 2011-01-22T04:14:08.869Z · LW(p) · GW(p)

Consistency of preferences is at least some kind of a prediction.

comment by Douglas_Knight · 2010-10-04T00:37:16.328Z · LW(p) · GW(p)

I think "vague" is a poor word choice for that concept. "(not) informative" is a technical term with this meaning. There are probably words which are clearer to the layman.

Replies from: dyokomizo
comment by dyokomizo · 2010-10-04T00:41:50.725Z · LW(p) · GW(p)

I agree vague is not a good word choice. Irrelevant (using relevancy as it's used to describe search results) is a better word.

comment by Perplexed · 2010-10-03T18:32:14.184Z · LW(p) · GW(p)

Downvoted in agreement. But I think that the randomness comes from what programmers call "race conditions" in the timing of external stimuli vs internal stimuli. Still, these race conditions make prediction impossible as a practical matter.

comment by mattnewport · 2010-10-03T20:21:27.002Z · LW(p) · GW(p)
  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).

Replies from: wedrifid, DanielLC, None, Eugine_Nier
comment by wedrifid · 2010-10-04T04:38:46.141Z · LW(p) · GW(p)

I want to upvote each of these points a dozen times. Then another few for the first.

A Singleton AI is not a stable equilibrium

It's the most stable equilibrium I can conceive of, i.e. more stable than if all evidence of life was obliterated from the universe.

Replies from: mattnewport
comment by mattnewport · 2010-10-04T04:53:52.693Z · LW(p) · GW(p)

I guess I'm playing the game right then :)

I'm curious, do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Replies from: Mass_Driver, wedrifid
comment by Mass_Driver · 2010-10-06T05:59:03.368Z · LW(p) · GW(p)

Funny you should mention it; that's exactly what I was thinking. I have a friend (also named matt, incidentally) who I strongly believe is guilty of motivated cognition about the desirability of a singleton AI (he thinks it is likely, and therefore is biased toward thinking it would be good) and so I leaped naturally to the ad hominem attack you level against yourself. :-)

comment by wedrifid · 2010-10-04T06:26:32.324Z · LW(p) · GW(p)

I'm curious, do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Most of them, no. Some, yes. Particularly since the alternative is the inevitable loss of everything that is valuable to me in the universe.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T22:12:02.639Z · LW(p) · GW(p)

This is incredibly tangential, but I was talking to a friend earlier and I realized how difficult it is to instill in someone the desire for altruism. Her reasoning was basically, "Yeah... I feel like I should care about cancer, and I do care a little, but honestly, I don't really care." This sort of off-hand egoism is something I wasn't used to; most smart people try to rationalize selfishness with crazy beliefs. But it's hard to argue with "I just don't care" other than to say "I bet you will have wanted to have cared", which is grammatically horrible and a pretty terrible argument.

Replies from: Jordan
comment by Jordan · 2010-10-05T03:04:54.892Z · LW(p) · GW(p)

I respect blatant apathy a whole hell of a lot more than masked apathy, which is how I would qualify the average person's altruism.

comment by DanielLC · 2010-10-05T02:44:46.550Z · LW(p) · GW(p)

I agree with your second. Was your third supposed to be high or low? I think it's low, but not unreasonably so.

Replies from: mattnewport
comment by mattnewport · 2010-10-05T02:53:52.003Z · LW(p) · GW(p)

I expected the third to be higher than most less wrongers would estimate.

comment by [deleted] · 2010-10-03T21:29:45.695Z · LW(p) · GW(p)

I'm almost certainly missing some essential literature, but what does it mean for a mind to be a stable equilibrium?

Replies from: mattnewport
comment by mattnewport · 2010-10-03T21:43:56.315Z · LW(p) · GW(p)

Stable equilibrium here does not refer to a property of a mind. It refers to a state of the universe. I've elaborated on this view a little here before but I can't track the comment down at the moment.

Essentially my reasoning is that in order to dominate the physical universe an AI will need to deal with fundamental physical restrictions such as the speed of light. This means it will have spatially distributed sub-agents pursuing sub-goals intended to further its own goals. In some cases these sub-goals may involve conflict with other agents (this would be particularly true during the initial effort to become a singleton).

Maintaining strict control over sub-agents imposes restrictions on the design and capabilities of sub-agents which means it is likely that they will be less effective at achieving their sub-goals than sub-agents without such design restrictions. Sub-agents with significant autonomy may pursue actions that conflict with the higher level goals of the singleton.

Human (and biological) history is full of examples of this essential conflict. In military scenarios, for example, there is a tradeoff between tight centralized control and combat effectiveness - units that have a degree of authority to make decisions in the field without the delays or overhead imposed by communication times are generally more effective than those with very limited freedom to act without direct orders.

Essentially I don't think a singleton AI can get away from the principal-agent problem. Variations on this essential conflict exist throughout the human and natural worlds and appear to me to be fundamental consequences of the nature of our universe.

Replies from: orthonormal
comment by orthonormal · 2010-10-04T02:47:39.223Z · LW(p) · GW(p)

Ant colonies don't generally exhibit the principal-agent problem. I'd say with high certainty that the vast majority of our trouble with it is due to having the selfishness of an individual replicator hammered into each of us by our evolution.

Replies from: Eugine_Nier, mattnewport
comment by Eugine_Nier · 2010-10-04T03:02:33.984Z · LW(p) · GW(p)

I'm not a biologist, but given that animal bodies exhibit principal-agent problems, e.g. auto-immune diseases and cancers, I suspect ant colonies (and large AIs) would also have these problems.

Replies from: orthonormal
comment by orthonormal · 2010-10-04T03:24:31.016Z · LW(p) · GW(p)

Cancer is a case where an engineered genome could improve over an evolved one. We've managed to write software (for the most vital systems) that can copy without error, with such high probability that we expect never to see that part malfunction.

One reason that evolution hasn't constructed sufficiently good error correction is that the most obvious way to do this makes the genome totally incapable of new mutations, which works great until the niche changes.
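A minimal sketch of the verify-on-copy idea (illustrative only, not any particular system's implementation; the function names are made up):

```python
import hashlib

def copy_with_verification(data: bytes, unreliable_copy, max_retries: int = 10) -> bytes:
    """Copy data through a possibly unreliable channel, retrying until the
    checksum of the copy matches the original; an undetected error then
    requires a hash collision, which is astronomically unlikely."""
    expected = hashlib.sha256(data).hexdigest()
    for _ in range(max_retries):
        candidate = unreliable_copy(data)
        if hashlib.sha256(candidate).hexdigest() == expected:
            return candidate
    raise IOError("copy failed verification repeatedly")

# Demonstration with a perfectly reliable 'channel':
print(copy_with_verification(b"vital instructions", lambda d: bytes(d)))
```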

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-04T03:28:54.518Z · LW(p) · GW(p)

However, an AI-subagent would need to be able to adjust itself to unexpected conditions, and thus can't simply rely on digital copying to prevent malfunctions.

Replies from: orthonormal
comment by orthonormal · 2010-10-04T03:41:04.035Z · LW(p) · GW(p)

So you agree that it's possible in principle for a singleton AI to remain a singleton (provided it starts out alone in the cosmos), but you believe it would sacrifice significant adaptability and efficiency by doing so. Perhaps; I don't know either way.

But the AI might make that sacrifice if it concludes that (eventually) losing singleton status would cost its values far more than the sacrifice is worth (e.g. if losing singleton status consigns the universe to a Hansonian hardscrapple race to burn the cosmic commons(pdf) rather than a continued time of plenty).

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-04T04:54:10.457Z · LW(p) · GW(p)

I believe it would at the very least have to sacrifice all adaptability by doing so, as in only sending out nodes with all instructions in ROM, with instructions to periodically reset all non-ROM memory and self-destruct if they notice any failures of their triple-redundancy ROM, as well as an extremely strong directive against anything that would let nodes store long-term state.

Replies from: orthonormal
comment by orthonormal · 2010-10-04T19:38:27.304Z · LW(p) · GW(p)

Remember, you're the one trying to prove impossibility of a task here. Your inability to imagine a solution to the problem is only very weak evidence.

comment by mattnewport · 2010-10-04T04:32:14.212Z · LW(p) · GW(p)

I don't know whether ant colonies exhibit principal-agent problems (though I'd expect that they do to some degree) but I know there is evidence of nepotism in queen rearing in bee colonies where individuals are not all genetically identical (evidence of workers favouring the most closely related larvae when selecting larvae to feed royal jelly to create a queen).

The fact that ants from different colonies commonly exhibit aggression towards each other indicates limits to scaling such high levels of group cohesion. Though supercolonies do appear to exist, they have not come to total dominance.

The largest and most complex examples of group coordination we know of are large human organizations and these show much greater levels of internal goal conflicts than much simpler and more spatially concentrated insect colonies.

Replies from: orthonormal
comment by orthonormal · 2010-10-04T19:34:31.523Z · LW(p) · GW(p)

I'm analogizing a singleton to a single ant colony, not to a supercolony.

comment by Eugine_Nier · 2010-10-03T20:36:49.241Z · LW(p) · GW(p)

I agree with your first two, but am dubious about your third.

Replies from: mattnewport
comment by mattnewport · 2010-10-03T20:58:04.543Z · LW(p) · GW(p)

Two points that influence my thinking on that claim:

  1. Gains from trade have the potential to be greater with greater difference in values between the two trading agents.
  2. Destruction tends to be cheaper than creation. Intelligent agents that recognize this have an incentive to avoid violent conflict.
comment by Pavitra · 2010-10-04T07:23:51.716Z · LW(p) · GW(p)

75%: Large groups practicing Transcendental Meditation or TM-Sidhis measurably decrease crime rates.

At an additional 20% (net 15%): The effect size depends on the size of the group in a nonlinear fashion; specifically, there is a threshold at which most of the effect appears, and the threshold is at .01*pop (1% of the total population) for TM or sqrt(.01*pop) for TM-Sidhis.

(Edited for clarity.)

(Update: I no longer believe this. New estimates: 2% for the main hypothesis, additional 50% (net 1%) for the secondary.)

Replies from: Risto_Saarelma, magfrump
comment by Risto_Saarelma · 2010-10-04T07:56:01.643Z · LW(p) · GW(p)

Just to make sure, is this talking about something different from people committing fewer crimes when they are themselves practicing TM or in daily contact with someone who does?

I don't really understand the second paragraph. What are TM-Sidhis? Are they something distinct from regular TM (are these different types of practitioners)? And what's with the sqrt(1%)? One in ten people in the total population need to be TM-Sidhis for the crime rate reduction effect to kick in?

Replies from: Pavitra
comment by Pavitra · 2010-10-04T17:17:34.782Z · LW(p) · GW(p)

Just to make sure, is this talking about something different from people committing fewer crimes when they are themselves practicing TM or in daily contact with someone who does?

I'm not sure if personal contact with practitioners has an effect, but the studies I'm thinking of were on the level of cities -- put a group of meditators in Chicago, the Chicago crime rate goes down.

What are TM-Sidhis? Are they something distinct from regular TM (are these different types of practitioners)?

TM-Sidhis is a separate/additional practice that has TM as a dependency in the sense of package management. If you know TM, you can learn TM-Sidhis.

And what's with the sqrt(1%)? One in ten people in the total population need to be TM-Sidhis for the crime rate reduction effect to kick in?

Sorry, I meant sqrt(.01p) where p is the population of the group to be affected. For example, a city of one million people would require ten thousand TM meditators or 100 TM-Sidhis meditators.
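A minimal sketch of the claimed thresholds as described above (this just restates the doctrine's arithmetic, not an endorsement of it; the function names are my own):

```python
import math

def tm_threshold(population):
    """Claimed group size needed when practicing TM only: 1% of the population."""
    return 0.01 * population

def tm_sidhis_threshold(population):
    """Claimed group size needed when practicing TM-Sidhis: sqrt(1% of the population)."""
    return math.sqrt(0.01 * population)

for pop in (10_000, 1_000_000):
    print(f"population {pop:>9,}: TM {tm_threshold(pop):>8,.0f}, "
          f"TM-Sidhis {tm_sidhis_threshold(pop):>6,.0f}")
```

For a city of one million this reproduces the figures above (10,000 TM meditators or 100 TM-Sidhis meditators), and for a city of ten thousand it gives the 100-meditator threshold used in the example further down the thread.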

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-05T02:49:31.764Z · LW(p) · GW(p)

Sorry, I meant sqrt(.01p) where p is the population of the group to be affected. For example, a city of one million people would require ten thousand TM meditators or 100 TM-Sidhis meditators.

Right, thanks for the clarification. This definitely puts the claim into upvote territory for me.

comment by magfrump · 2010-10-05T08:26:24.918Z · LW(p) · GW(p)

No vote: I agree with the hypothesis that appropriate meditation practice could reduce crime rates, but I haven't the slightest idea how to evaluate the specific population figures.

Replies from: Pavitra
comment by Pavitra · 2010-10-06T17:43:27.761Z · LW(p) · GW(p)

Can you clarify the question, or does the whole statement seem meaningless?

Replies from: magfrump
comment by magfrump · 2010-10-06T18:52:50.014Z · LW(p) · GW(p)

I don't really have a question. You have a hypothesis:

Transcendental meditation practitioners will reduce the crime rate in their cities in a nonlinear fashion satisfying certain identities.

The statement I have written above I agree with, and would therefore normally downvote.

However, you posit specific figures for the reduction of the crime rate. I have no experience with city planning or crime statistics or population figures, and hence have no real basis to judge your more specific claim.

If I disagreed with it on a qualitative level, then I would upvote. If I had any sense of what your numbers meant, I might think that they were about right or too high or too low, but since I don't, I'm not able to evaluate it.

But not-evaluating because I don't know how to engage the numbers is different from not-evaluating because I didn't read it, so I wanted to make the difference clear, since the point of the game is to engage with ideas that may be controversial.

Replies from: Pavitra
comment by Pavitra · 2010-10-06T19:08:25.666Z · LW(p) · GW(p)

I'm still not sure I understand what you mean, but let me take a shot in the dark:

Out of the variance in crime rate that depends causally on the size of the meditating group, most of that variance depends on whether or not the size of the group is greater than a certain value that I'll call x. If the meditating group is practicing only TM, then x is equal to 1% of the size of the population to be affected, and if the meditating group is practicing TM-Sidhis, then x is equal to the square root of 1% of the population to be affected.

For example, with a TM-only group in a city of ten thousand people, increasing the size of the group from 85 to 95 meditators should have a relatively small effect on the city's crime rate, increasing from 95 to 105 should have a relatively large effect, and increasing from 105 to 115 should have a relatively small effect.

Edit: Or did you mean my confidence values? The second proposition (about the nonlinear relationship) I assign 20% confidence conditional on the truth of the first proposition. Since I assign the first proposition 75% confidence, and since the second proposition essentially implies the first, it follows that the second proposition receives a confidence of (0.2 * 0.75)=15%.

Replies from: magfrump
comment by magfrump · 2010-10-06T23:20:58.774Z · LW(p) · GW(p)

I understand what you meant by your proposition, I'm not trying to ask for clarification.

I assume you have some model of TM-practitioner behavior or social networking or something which justifies your idea that there is such a threshold in that place.

I do not have any models of: how TM is practiced, and by whom; how much TM affects someone's behavior, and consequently the behavior of those they interact with; how much priming effects like signs or posters for TM groups or instruction have on the general populace; how much the spread of TM practitioners increases the spread of advertisement.

I would not be hugely surprised if it were the case that, given 1% of the population practiced TM, this produced enough advertisement to reach nearly all of the population (i.e. a sign on the side of a couple well-traveled highways) or enough social connections that everyone in a city was within one or two degrees of separation of a TM practitioner.

But I also wouldn't be surprised if the threshold was 5%, or .1%, or if there was no threshold, or if there was a threshold in rural areas but not urban areas, or conservative-leaning areas but not liberal-leaning areas, or the reverse. I have no model of how these things would go about, so I don't feel comfortable agreeing or disagreeing.

Certainly fewer than 15% of the possible functions relating TM practice to crime are as you describe, and your hypothesis is certainly far more likely than the hypothesis that "even one TM-practitioner makes the crime rate 100%", but I don't know if it's 5 bits more relevant or 10 bits more relevant, and I don't know what my probabilities should be even if I knew how many bits of credence I should give a hypothesis.

If you know something more than I do (which is to say, anything at all) about social networking, advertising, or the meditation itself, or the people who practice it, then you might reasonably have a good hypothesis. But I don't, so I can only take the outside view, which tells me "more people actively relaxing/being mindful/spending their time on non-crime activity should reduce crime."

Replies from: Pavitra
comment by Pavitra · 2010-10-07T00:44:31.095Z · LW(p) · GW(p)

I understand now.

The causally primary reason for my belief is that while I was growing up in a TM-practicing community, I was told repeatedly that there were many scientific studies published in respectable journals demonstrating this effect, and the "square root of one percent" was a specific point of doctrine.

I've had some trouble finding the articles in question on academically respectable, non-paywalled sites (though I didn't try for more than five or ten minutes), but a non-neutrally-hosted bibliography-ish thing is here.

(Is there a general lack of non-paywalled academically respectable online archives of scientific papers?)

.

(Edited to add: if anyone decides to click any of the videos on that page, rather than just following text links, I'd assign Fred Travis the highest probability of saying anything worth hearing.)

.

(Edited again: I was going to say this when I first wrote this comment, but forgot: The obvious control would be against other meditation techniques. I don't think there are studies with this specific control on the particular effect in my top-level comment, but there are such studies on e.g. medical benefits.)

.

(Edited yet again: I've now actually watched the videos in question.

The unlabeled video at the top (John Hagelin) is a lay-level overview of studies that you can read for yourself through text links. (That is, you can read the studies, not the overview.)

Gary Kaplan is philosophizing with little to no substance in the sense of expectation-constraint, and conditional on the underlying phenomena being real his explanation is probably about as wrong as, say, quantum decoherence.

Nancy Lonsdorf is arguing rhetorically for ideas whose truth is almost entirely dependent on the validity of the studies in question and that follow from such validity in a trivial and straightforward fashion. Some people might need what she's saying pointed out to them, but probably not the readers of Less Wrong.

Fred Travis goes into more crunchy detail, about fewer studies, than any of the others, but still not as much detail as just reading the papers.)

Replies from: magfrump
comment by magfrump · 2010-10-07T02:21:41.836Z · LW(p) · GW(p)

Wow, that was a super in-depth response! Thanks, I'll check it out if I have time.

comment by Simon Fischer (SimonF) · 2010-10-05T16:17:18.115Z · LW(p) · GW(p)

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s). As a corollary, AI will not go FOOM. (80% confident)

EDIT: Quote from here

Replies from: wedrifid, timtyler, whpearson, ata, pengvado
comment by wedrifid · 2010-10-05T17:02:03.171Z · LW(p) · GW(p)

Do you apply this to yourself?

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-05T17:13:49.217Z · LW(p) · GW(p)

Yes!

Humans are "designed" to act intelligently in the physical world here on earth, we have complex adaptations for this environment. I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.

Replies from: RomanDavis, Risto_Saarelma
comment by RomanDavis · 2010-10-06T06:31:46.461Z · LW(p) · GW(p)

But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers. Not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have, with the brute force of many calculated iterations of whatever it is humans are using.

And this is before the computer uses its knowledge to reoptimize its optimization process.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T09:24:17.206Z · LW(p) · GW(p)

I understand the concept of recursive self-optimization and I don't consider it to be very implausible.

Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow for an effective search that would enable such an optimization?

I'm also not convinced that the human mind is a good counterexample; e.g. I do not know how much I could improve on the source code of a simulation of my brain once the simulation itself runs effectively.

Replies from: wedrifid
comment by wedrifid · 2010-10-06T10:25:40.294Z · LW(p) · GW(p)

Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow for an effective search that would enable such an optimization?

I count "algorithm-space is really really really big" as at least some form of evidence. ;)

Mind you by "is there any evidence?" you really mean "does the evidence lead to a high assigned probability?" That being the case "No Free Lunch" must also be considered. Even so NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid.

Considerations that lead me to believe that a general intelligence algorithm is likely include the observation that we can already see progressively more general problem-solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it; it's something that has been tacked on to our brains relatively recently, and it is far less efficient than our more specific problem-solving faculties. But the point is that we can do general intelligence of a form, eventually, if we dedicate ourselves to the problem.

comment by Risto_Saarelma · 2010-10-06T06:12:10.636Z · LW(p) · GW(p)

I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.

You're putting 'effectively' here in place of 'intelligently' in the original assertion.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T08:56:01.343Z · LW(p) · GW(p)

I understand "capable of behaving intelligently" to mean "capable of achieving complex goals in complex environments", do you disagree?

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-06T09:04:42.204Z · LW(p) · GW(p)

I don't disagree. Are you saying that humans aren't capable of achieving complex goals in the domains of quantum mechanics or computer programming?

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T09:16:22.547Z · LW(p) · GW(p)

This is of course a matter of degree, but basically yes!

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-06T09:34:37.978Z · LW(p) · GW(p)

Can you give any idea what these complex goals would look like? Or, conversely, describe some complex goals humans can achieve which are fundamentally beyond an entity with abstract reasoning capabilities similar to humans', but lacking some of humans' native capabilities for dealing more efficiently with certain types of problems?

The obvious examples are problems where a slow reaction time will lead to failure, but these don't seem to tell that much about the general complexity handling abilities of the agents.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T09:53:07.104Z · LW(p) · GW(p)

I'll try to give examples:

For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent.

For quantum mechanics: Design a high-temperature superconductor from scratch.

Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?

Replies from: wedrifid
comment by wedrifid · 2010-10-06T10:03:03.956Z · LW(p) · GW(p)

Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?

We have a way to use brute force to achieve general optimisation goals? That seems like a good start to me!

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T10:08:39.473Z · LW(p) · GW(p)

Not a good start if we are facing exponential search-spaces! If brute force worked, I imagine the AI problem would be solved?

Replies from: wedrifid
comment by wedrifid · 2010-10-06T10:23:11.694Z · LW(p) · GW(p)

Not a good start if we are facing exponential search-spaces!

Not particularly. :)

But it would constitute an in principle method of bootstrapping a more impressive kind of general intelligence. I actually didn't expect you would concede the ability to brute force 'general optimisation' - the ability to notice the brute forced solution is more than half the problem. From there it is just a matter of time to discover an algorithm that can do the search efficiently.

If brute force worked, I imagine the AI problem would be solved?

Not necessarily. Biases could easily have made humans worse than brute-force.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T10:31:11.112Z · LW(p) · GW(p)

Please give evidence that "a more impressive kind of general intelligence" actually exists!

Replies from: wedrifid
comment by wedrifid · 2010-10-06T11:02:38.536Z · LW(p) · GW(p)

Nod. I noticed your other comment after I wrote the grandparent. I replied there and I do actually consider your question there interesting, even though my conclusions are far different to yours.

Note that I've tried to briefly answer what I consider a much stronger variation of your fundamental question. I think that the question you have actually asked is relatively trivial compared to what you could have asked so I would be doing you and the topic a disservice by just responding to the question itself. Some notes for reference:

  • Demands of the general form "Where is the evidence for?" are somewhat of a hangover from traditional rational 'debate' mindsets where the game is one of social advocacy of a position. Finding evidence for something is easy but isn't the sort of habit I like to encourage in myself. Advocacy is bad for thinking (but good for creating least-bad justice systems given human limitations).
  • "More impressive than humans" is a ridiculously low bar. It would be absolutely dumbfoundingly surprising if humans just happened to be the best 'general intelligence' we could arrive at in the local area. We haven't had a chance to even reach a local minimum of optimising DNA and protein based mammalian general intelligences. Selection pressures are only superficially in favour of creating general intelligence and apart from that the flourishing of human civilisation and intellectual enquiry happened basically when we reached the minimum level to support it. Civilisation didn't wait until our brains reached the best level DNA could support before it kicked in.
  • A more interesting question is whether it is possible to create a general intelligence algorithm that can in principle handle most any problem, given unlimited resources and time to do so. This is as opposed to progressively more complex problems requiring algorithms of progressively more complexity even to solve in principle.
  • Being able to 'brute force' a solution to any problem is actually a significant step towards being generally intelligent. Even being able to construct ways to brute force stuff and tell whether the brute force solution is in fact a solution is possibly a more difficult thing to find in algorithm space than optimisations thereof.
Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T11:30:40.495Z · LW(p) · GW(p)

Finding evidence for something is easy but isn't the sort of habit I like to encourage in myself.

My intention was merely to point out where I don't follow your argument, but your criticism of my formulation is valid.

"More impressive than humans" is a ridiculously low bar.

I agree; we can probably build far better problem-solvers for many problems (including problems of great practical importance).

algorithm that can in principle handle most any problem, given unlimited resources

My concern is more about what we can do with limited resources; this is why I'm not impressed with the brute-force solution.

Even being able to construct ways to brute force stuff and tell whether the brute force solution is in fact a solution is possibly a more difficult thing to find in algorithm space than optimisations thereof.

This is true; I was mostly thinking about a pure search problem where evaluating the solution is simple. (The example was chess, where brute-forcing leads to perfect play given sufficient resources.)

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-10-06T11:54:50.811Z · LW(p) · GW(p)

The example was chess, where brute-forcing leads to perfect play given sufficient resources

It just occurred to me to wonder if this resource requirement is even finite. Is there a turn limit on the game? I suppose even "X turns without a piece being taken" would be sufficient, depending on how idiotic the 'brute force' is. Is such a rule in place?

Replies from: Apprentice
comment by Apprentice · 2010-10-06T12:04:55.986Z · LW(p) · GW(p)

Yes, the fifty-move rule. Though technically it only allows you to claim a draw, it doesn't force it.

Replies from: wedrifid
comment by wedrifid · 2010-10-06T12:10:40.672Z · LW(p) · GW(p)

OK, thanks. In that case brute force doesn't actually produce perfect play in chess, and doesn't terminate if it tries.

(Incidentally, this observation strengthens SimonF's position.)

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-06T12:26:54.836Z · LW(p) · GW(p)

But the number of possible board positions is finite, and there is a rule that forces a draw if the same position comes up three times. (Here)

This claims that generalized chess is EXPTIME-complete, which is in agreement with the above.
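A back-of-the-envelope sketch of why those two facts bound the brute-force search (the 13**64 figure is a deliberately loose over-count, not an exact position count):

```python
# Very loose upper bound on chess positions: each of the 64 squares is either
# empty or holds one of 12 piece/colour combinations, so at most 13**64 boards
# (side to move, castling and en passant rights only multiply this by a small constant).
positions_upper_bound = 13 ** 64

# If threefold repetition is treated as a forced draw, no game can visit any
# position more than three times, so every game (and hence the depth of the
# brute-force game tree) is finite.
max_plies_per_game = 3 * positions_upper_bound

print(f"positions <= {float(positions_upper_bound):.2e}")    # ~1.96e+71
print(f"plies per game <= {float(max_plies_per_game):.2e}")  # ~5.88e+71
```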

Replies from: wedrifid
comment by wedrifid · 2010-10-06T12:49:00.833Z · LW(p) · GW(p)

But the number of possible board positions is finite, and there is a rule that forces a draw if the same position comes up three times. (Here)

That rule will do it (given that the draw is forced).

comment by wedrifid · 2010-10-06T11:42:56.264Z · LW(p) · GW(p)

(Pardon the below tangent...)

The example was chess, where brute-forcing leads to perfect play given sufficient resources

I'm somewhat curious as to whether perfect play leads to a draw or a win (probably for white, although if it turned out black should win, that'd be an awesome finding!). I know tic-tac-toe and checkers are both a draw, and I'm guessing chess will be a draw too, but I don't know for sure whether we'll ever be able to prove that one way or the other.

Discussion of chess AI a few weeks ago also got me thinking: the current trend is for the best AIs to beat the best human grandmasters even with progressively greater disadvantages, up to 'two moves and a pawn' or some such thing. My prediction:

As chess-playing humans and AIs develop, the AIs will be able to beat the humans with greater probability and with progressively more significant handicaps. But given sufficient time, this difference would peak and then actually decrease. Not because of anything to do with humans 'catching up', but because if perfect play at a given handicap results in a draw or a loss, then even an exponentially increasing difference in ability will not be sufficient to prevent the weaker player from becoming better at forcing the expected 'perfect' result.

comment by timtyler · 2011-06-22T14:05:43.624Z · LW(p) · GW(p)

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s).

Sure there is - see:

The only assumption about the environment is that Occam's razor applies to it.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2011-06-22T14:24:54.771Z · LW(p) · GW(p)

Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed"; this was already discussed in other comments.

Replies from: timtyler
comment by timtyler · 2011-06-22T14:33:14.286Z · LW(p) · GW(p)

IMO, it is best to think of power and breadth as two orthogonal dimensions - like this.

  • narrow <-> broad;
  • weak <-> powerful.

The idea that general intelligence is not practical for resource-limited agents apparently mixes up these two dimensions, whereas it is best to see them as orthogonal. Or maybe the idea is that if you are broad, you can't also be very deep and quick to compute. I don't think that idea is correct.

I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can.

I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.

Replies from: None, SimonF
comment by [deleted] · 2012-04-18T16:16:21.395Z · LW(p) · GW(p)

That is a very good point, with wideness orthogonal to power.

Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.

comment by Simon Fischer (SimonF) · 2011-06-22T18:09:27.832Z · LW(p) · GW(p)

I don't care about that specific formulation of the idea; maybe Robin Hanson's formulation that there exists no "grand unified theory of intelligence" is clearer? (link)

Replies from: timtyler
comment by timtyler · 2011-06-22T19:29:54.316Z · LW(p) · GW(p)

Clear - but also clearly wrong. Robin Hanson says:

After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?

...but the answer seems simple. A big part of "betterness" is the ability to perform inductive inference, which is not a human-specific concept. We do already have a powerful theory about that, which we discovered in the last 50 years. It doesn't immediately suggest an implementation strategy - which is what we need. So: more discoveries relating to this seem likely.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2011-06-23T10:31:15.287Z · LW(p) · GW(p)

Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.

Replies from: timtyler
comment by timtyler · 2011-06-23T20:08:08.811Z · LW(p) · GW(p)

To me it seems a lot like the question of whether general, computationally tractable methods of compression exist.

Provided you are allowed to assume that the expected inputs obey some vaguely-sensible version of Occam's razor, I would say that the answer is just "yes, they do".

comment by whpearson · 2010-10-05T18:50:52.324Z · LW(p) · GW(p)

Can you unpack "algorithm" and explain why you think an intelligence is one?

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-05T19:22:33.627Z · LW(p) · GW(p)

I'm not sure what your point is, I don't think I use the term "algorithm" in a non-standard way.

Wikipedia says: "Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system."

When talking about "intelligence" I assume we are talking about a goal-oriented agent, controlled by an algorithm as defined above.

Replies from: whpearson
comment by whpearson · 2010-10-05T19:51:09.932Z · LW(p) · GW(p)

Does it make sense to describe the computer system in front of you as being controlled by a single algorithm? If so, that would have to be the fetch-execute cycle, which may not halt or be a finite sequence. This form of system is sometimes called an interaction machine or persistent Turing machine, so some may say it is not an algorithm.

The fetch-execute cycle is very poor at giving you information about what problems your computer might be able to solve, as it can download code from all over the place. Similarly if you think of an intelligence as this sort of system, you cannot bound what problems it might be able to solve. At any given time it won't have the programming to solve all problems well, but it can modify the programming it does have.

comment by ata · 2010-10-05T17:08:18.378Z · LW(p) · GW(p)

Do you behave intelligently in domains you were not specifically designed(/selected) for?

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2010-10-05T17:33:03.719Z · LW(p) · GW(p)

No, I don't think I would be capable if the domain is sufficiently different from the EEA.

comment by pengvado · 2010-10-13T18:07:16.271Z · LW(p) · GW(p)

Do you antipredict an AI specialized in AI design, which can't do anything it's not specifically designed to do, but can specifically design itself as needed?

comment by James_Miller · 2010-10-04T03:34:46.207Z · LW(p) · GW(p)

Within five years the Chinese government will have embarked on a major eugenics program designed to mass produce super-geniuses. (40%)

Replies from: Pavitra, JoshuaZ, Jack, gwern, wedrifid, magfrump
comment by Pavitra · 2010-10-04T07:11:03.085Z · LW(p) · GW(p)

I think 40% is about right for China to do something that sounds about that unlikely in the next five years. The specificity of it being that particular thing is burdensome, though; the probability is much lower than the plausibility. Upvoted.

comment by JoshuaZ · 2010-10-04T03:38:03.205Z · LW(p) · GW(p)

Upvoting. If you had said 10 years or 15 years I'd find this much more plausible. But I'm very curious to hear your explanation.

Replies from: James_Miller
comment by James_Miller · 2010-10-04T03:58:55.842Z · LW(p) · GW(p)

I wrote about it here:

http://www.ideasinactiontv.com/tcs_daily/2007/10/a-thousand-chinese-einsteins-every-year.html

Once we have identified genes that play a key role in intelligence, eugenics through massive embryo selection has a good chance of producing lots of super-geniuses, especially if you are willing to tolerate a high "error rate." The Chinese are actively looking for the genetic keys to intelligence. (See http://vladtepesblog.com/?p=24064) The Chinese have a long pro-eugenics history (see Imperfect Conceptions by Frank Dikötter), and I suspect they have a plan to implement a serious eugenics program as soon as it becomes practical, which will likely be within the next five years.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-10-04T04:12:07.497Z · LW(p) · GW(p)

I think the main point of disagreement is the estimate that such a program would be practical in five years (hence my longer-term estimate). My impression is that actual studies of the genetic roots of intelligence are progressing but at a fairly slow pace. I'd give a much lower than 40% chance that we'll have that good an understanding in five years.

Replies from: James_Miller
comment by James_Miller · 2010-10-04T04:18:51.328Z · LW(p) · GW(p)

If the following is correct we are already close to finding lots of IQ boosting genes:

"SCIENTISTS have identified more than 200 genes potentially associated with academic performance in schoolchildren.

Those schoolchildren possessing the 'right' combinations achieved significantly better results in numeracy, literacy and science."

http://www.theaustralian.com.au/news/nation/found-genes-that-make-kids-smart/story-e6frg6nf-1225926421510

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-10-05T18:12:25.403Z · LW(p) · GW(p)

If the following is correct we are already close to finding lots of IQ boosting genes:

"SCIENTISTS have identified more than 200 genes potentially associated with academic performance in schoolchildren.

The article is correct, but we are not close to finding lots of IQ boosting genes.

But the relevant question is whether the Chinese government is fooled by this too.

comment by Jack · 2010-10-31T08:26:24.629Z · LW(p) · GW(p)

Can you specify what "major" means? I would be shocked if the government wasn't already pairing high-IQ individuals like they do with very tall people to breed basketball players.

comment by wedrifid · 2010-10-05T10:18:44.291Z · LW(p) · GW(p)

Hat tip to China.

comment by magfrump · 2010-10-05T08:28:18.356Z · LW(p) · GW(p)

Tentatively downvoted; I think over a longer time period it's highly likely, but I would be unsurprised to later discover that it started that soon. I might put my (uninformed) guess closer to 10-20% but it feels qualitatively similar.

comment by WrongBot · 2010-10-04T08:55:41.484Z · LW(p) · GW(p)

There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)

Replies from: JenniferRM, jimrandomh, Will_Newsome, saturn, RobinZ, nick012000
comment by JenniferRM · 2010-10-04T22:10:42.501Z · LW(p) · GW(p)

If I'm interpreting the terms charitably, I think I put this more like 70%... which seems like a big enough numerical spread to count as disagreement -- so upvoted!

My argument here grows out of expectations about evolution, watching chickens interact with each other, rent seeking vs gains from trade (and game theory generally), Hobbes's Leviathan, and personal musings about Fukuyama's End of History extrapolated into transhuman contexts, and more ideas in this vein.

It is quite likely that experiments to determine the contents of morality would themselves be unethical to carry out... but given arbitrary computing resources and no ethical constraints, I can imagine designing experiments about objective morality that would either shed light on its contents or else give evidence that no true theory exists which meets generally accepted criteria for a "theory of morality".

But even then, being able to generate evidence about the absence of an objective object-level "theory of morality" would itself seem to offer a strategy for taking a universally acceptable position on the general subject... which still seems to make this an area where objective and universal methods can provide moral insights. This dodge is friendly towards ideas in Nagel's "Last Word": "If we think at all, we must think of ourselves, individually and collectively, as submitting to the order of reasons rather than creating it."

Replies from: magfrump
comment by magfrump · 2010-10-05T08:23:17.530Z · LW(p) · GW(p)

I almost agree with this due to fictional evidence from Three Worlds Collide, except that a manufactured intelligence such as an AI could be constructed without evolutionary constraints, and saying that every possible descendant of a being that survived evolution MUST have a moral similarity to every other being seems like a much more complicated and less likely hypothesis.

comment by jimrandomh · 2010-10-04T13:02:38.560Z · LW(p) · GW(p)

This probably isn't what you had in mind, but any single complete human brain is a (or contains a) morality, and it's objectively real.

Replies from: WrongBot
comment by WrongBot · 2010-10-04T16:29:36.583Z · LW(p) · GW(p)

Indeed, that was not at all what I meant.

comment by Will_Newsome · 2010-10-04T22:20:02.644Z · LW(p) · GW(p)

Does the morality apply to paperclippers? Babyeaters?

Replies from: WrongBot
comment by WrongBot · 2010-10-05T00:36:29.783Z · LW(p) · GW(p)

I'd say that it's about as likely to apply to paperclippers or babyeaters as it is to us. While I think there's a non-trivial chance that such a morality exists, I can't even begin to speculate about what it might be or how it exists. There's just a lot of uncertainty and very little evidence either way.

The reason I think there's a chance at all, for what it's worth, is the existence of information theory. If information is a fundamental mathematical concept, I don't think it's inconceivable that there are all kinds of mathematical laws specifically about engines of cognition. Some of which may look like things we call morality.

But most likely not.

Replies from: Perplexed
comment by Perplexed · 2010-10-06T05:08:08.024Z · LW(p) · GW(p)

Information theory is the wrong place to look for objective morality. Information is purely epistemic - i.e. about knowing. You need to look at game theory. That deals with wanting and doing. As far as I know, no one has had any moral issues with simply knowing since we got kicked out of the Garden of Eden. It is what we want and what we do that get us into moral trouble these days.

Here is a sketch of a game-theoretic golden rule: Form coalitions that are as large as possible. Act so as to yield the Nash bargaining solution in all games with coalition members - pretending that they have perfect information about your past actions, even though they may not actually have perfect information. Do your share to punish defectors and members of hostile coalitions, but forgive after fair punishment has been meted out. Treat neutral parties with indifference - if they have no power over you, you have no reason to apply your power over them in either direction.

This "objective morality" is strikingly different from the "inter-subjective morality" that evolution presumably installed in our human natures. But this may be an objective advantage if we have to make moral decisions regarding Baby Eaters who presumably received a different endowment from their own evolutionary history.
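
For readers who haven't met the Nash bargaining solution mentioned above, here is a toy sketch with made-up payoffs (my illustration, not part of Perplexed's proposal): over a finite set of candidate agreements, it picks the one maximizing the product of each player's gain over their disagreement payoff.

```python
# Toy Nash bargaining: among feasible agreements (those at least as good as
# no deal for both players), pick the one maximizing the product of gains
# over the disagreement payoffs. All numbers are made up for illustration.
def nash_bargain(candidates, disagreement):
    d1, d2 = disagreement
    feasible = [(u1, u2) for (u1, u2) in candidates if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda u: (u[0] - d1) * (u[1] - d2))

print(nash_bargain([(3, 1), (2, 2), (1, 3)], disagreement=(0, 0)))  # -> (2, 2)
```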

Replies from: AdeleneDawner, timtyler
comment by AdeleneDawner · 2010-10-06T15:18:47.558Z · LW(p) · GW(p)

Do your share to punish defectors and members of hostile coalitions, but forgive after fair punishment has been meted out.

This does help bring clarity to the babyeaters' actions: The babies are, by existing, defecting against the goal of having a decent standard of living for all adults. The eating is the 'fair punishment' that brings the situation back to equilibrium.

I suspect that we'd be better served by a less emotionally charged word than 'punishment' for that phenomenon in general, though.

Replies from: Perplexed
comment by Perplexed · 2010-10-06T16:33:29.251Z · LW(p) · GW(p)

Oh, I think "punishment" is just fine as a word to describe the proper treatment of defectors, and it is actually used routinely in the game-theory literature for that purpose. However, I'm not so sure I would agree that the babies in the story are being "punished".

I would suggest that, as powerless agents not yet admitted to the coalition, they ought to be treated with indifference, perhaps to be destroyed like weeds, were no other issues involved. But there is something else involved - the babies are made into pariahs, something similar to a virgin sacrifice to the volcano god. Participation in the baby harvesting is transformed to a ritual social duty. Now that I think about it, it does seem more like voodoo than rational-agent game theory.

However, the game theory literature does contain examples where mutual self-punishment is required for an optimal solution, and a rule requiring one to eat one's own babies does at least provide some incentive to minimize the number of excess babies produced.

comment by timtyler · 2010-10-07T02:16:39.461Z · LW(p) · GW(p)

Does that "game-theoretic golden rule" even tell you how to behave?

comment by saturn · 2010-10-04T19:18:51.399Z · LW(p) · GW(p)

Do you also think there is a means or mechanism for humans to discover and verify the objectively real morality? If so, what could it be?

Replies from: WrongBot
comment by WrongBot · 2010-10-05T00:25:08.130Z · LW(p) · GW(p)

I would assume any objectively real morality would be in some way entailed by the physical universe, and therefore in theory discoverable.

I wouldn't say that a thing existed if it could not interact in any causal way with our universe.

comment by RobinZ · 2010-10-04T12:39:52.050Z · LW(p) · GW(p)

I expect a plurality may vote as you expect, but 10% seems reasonable based on my current state of knowledge.

comment by nick012000 · 2010-10-11T15:17:41.846Z · LW(p) · GW(p)

Voted up for under-confidence. God exists, and he defined morality the same way he defined the laws of physics.

comment by Tenek · 2010-10-04T15:29:46.927Z · LW(p) · GW(p)

The pinnacle of cryonics technology will be a time machine that can, at the very least, take a snapshot of someone before they died and reconstitute them in the future. I have three living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out. (50%)

Replies from: Will_Newsome, Tiiba
comment by Will_Newsome · 2010-10-06T06:58:59.613Z · LW(p) · GW(p)

This seems reasonable with the help of FAI, though I doubt CEV would do it; or are you thinking of possible non-FAI technologies?

comment by Tiiba · 2010-10-04T19:11:06.663Z · LW(p) · GW(p)

So you intend to acquire an extra grandparent somewhere along the line?

Replies from: Tenek
comment by Tenek · 2010-10-04T19:36:01.904Z · LW(p) · GW(p)

No. I intend to revive one. Possibly all four, if necessary. Consider it thawing technology so advanced it can revive even the pyronics crowd.

Replies from: JenniferRM, Tiiba
comment by JenniferRM · 2010-10-04T20:55:42.120Z · LW(p) · GW(p)

Did you coin the term "pyronics"?

Replies from: Tenek
comment by Tenek · 2010-10-05T04:26:35.946Z · LW(p) · GW(p)

I would imagine not (99%) , although it doesn't appear to be in common usage.

comment by Tiiba · 2010-10-05T13:22:34.425Z · LW(p) · GW(p)

Sorry, I missed the time machine part.

comment by erratio · 2010-10-03T23:06:28.743Z · LW(p) · GW(p)

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

Replies from: LucasSloan, wedrifid, NihilCredo, davidad, drc500free
comment by LucasSloan · 2010-10-03T23:52:08.093Z · LW(p) · GW(p)

What we call consciousness/self-awareness is just a meaningless side-effect of brain processes (55%)

What does this mean? What is the difference between saying "What we call consciousness/self-awareness is just a side-effect of brain processes", which is pretty obviously true, and saying that they're meaningless side effects?

Replies from: erratio
comment by erratio · 2010-10-04T00:07:51.398Z · LW(p) · GW(p)

Sorry, I was letting my own uncertainty get in the way of clarity there. A stronger version of what I was trying to say would be that consciousness gives us the illusion of being in control of our actions when in fact we have no such control. Or to put it another way: we're all P-zombies with delusions of grandeur (yes, this doesn't actually make logical sense, but it works for me)

Replies from: LucasSloan, Eugine_Nier
comment by LucasSloan · 2010-10-04T00:19:02.191Z · LW(p) · GW(p)

So I agree with the science you cite, right? But what you said really doesn't follow. Just because our phonological loop doesn't actually have the control it thinks it does, it doesn't follow that sensory modalities are "meaningless." You might want to re-read Joy in the Merely Real with this thought of yours in mind.

Replies from: erratio
comment by erratio · 2010-10-04T01:00:21.802Z · LW(p) · GW(p)

Well, sure, you can find meaning wherever you want. I'm currently listening to some music that I find beautiful and meaningful. But that beauty and meaning isn't an inherent trait of the music, it's just something that I read into it. Similarly when I say that consciousness is meaningless I don't mean that we should all become nihilists, only that consciousness doesn't pay rent and so any meaning or usefulness it has is what you invent for it.

comment by Eugine_Nier · 2010-10-04T00:10:44.446Z · LW(p) · GW(p)

I don't know about you, but I'm not a P-zombie. :)

Replies from: PeterS
comment by PeterS · 2010-10-04T01:09:26.445Z · LW(p) · GW(p)

That emoticon isn't fooling anyone.

comment by wedrifid · 2010-10-04T04:43:33.050Z · LW(p) · GW(p)

Upvoted for 'not even being wrong'.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-10-08T17:47:16.293Z · LW(p) · GW(p)

I'm not sure whether "not even wrong" calls for an upvote, does it?

comment by NihilCredo · 2010-10-03T23:39:56.890Z · LW(p) · GW(p)

Could you expand a little on this?

Replies from: erratio
comment by erratio · 2010-10-03T23:59:14.834Z · LW(p) · GW(p)

Sure. Here's a version of the analogy that first got me thinking about it:

If I turn on a lamp at night, it sheds both heat and light. But I wouldn't say that the point of a lamp is to produce heat, nor that the amount of heat it does or doesn't produce is relevant to its useful light-shedding properties. In the same way, consciousness is not the point of the brain and doesn't do much for us. There's a fair amount of cogsci literature that suggests that we have little if any conscious control over our actions and reinforces this opinion. But I like feeling responsible for my actions, even if it is just an illusion, hence the low probability assignment even though it feels intuitively correct to me.

Replies from: Perplexed, NihilCredo
comment by Perplexed · 2010-10-05T00:48:52.269Z · LW(p) · GW(p)

(I'm not sure why I pushed the button to reply, but here I am so I guess I'll just make something up to cover my confusion.)

Do you also believe that we use language - speaking, writing, listening, reading, reasoning, doing arithmetic calculations, etc. - without using our consciousness?

Replies from: erratio
comment by erratio · 2010-10-05T01:07:45.884Z · LW(p) · GW(p)

Hah! I found it amusing at least.

I'm.. honestly not sure. I think that the vast majority of the time we don't consciously choose whether to speak or what exact words to say when we do speak. Listening and reading are definitely unconscious processes, otherwise it would be possible to turn them off (also, cocktail party effect is a huge indication of listening being largely unconscious). Arithmetic calculations - that's a matter of learning an algorithm which usually involves mnemonics for the numbers..

On balance I have to go with yes, I don't think those processes require consciousness

Replies from: AdeleneDawner, Perplexed
comment by AdeleneDawner · 2010-10-05T01:45:27.879Z · LW(p) · GW(p)

Some autistic people, particularly those in the middle and middle-to-severe part of the spectrum, report that during overload, some kinds of processing - most often understanding or being able to produce speech, but also other sensory processing - turn off. Some report that turned-off processing skills can be consciously turned back on, often at the expense of a different skill, or that the relevant skill can be consciously emulated even when the normal mode of producing the intended result is offline. I've personally experienced this.

Also, in my experience, a fair portion (20-30%) of adults of average intelligence aren't fluent in reading, and do have to consciously parse each word.

comment by Perplexed · 2010-10-05T01:45:49.386Z · LW(p) · GW(p)

I have to go with yes, I don't think those [symbolic, linguistic] processes require consciousness.

You pretty much have to go with "yes" if you want to claim that "consciousness/self-awareness is just a meaningless side-effect of brain processes." I've got to disagree. What my introspection calls my "consciousness" is mostly listening to myself talk to myself. And then after I have practiced saying it to myself, I may go on to say it out loud.

Not all of my speech works this way, but some does. And almost all of my writing, including this note. So I have to disagree that consciousness has no causal role in my behavior. Sometimes I act with "malice aforethought". Or at least I sometimes speak that way.

For these reasons, I prefer "spotlight" consciousness theories, like "global workspace" or "integrated information theory". Theories that capture the fact that we observe some things consciously and do some things consciously.

Replies from: Blueberry
comment by Blueberry · 2011-01-22T08:35:16.282Z · LW(p) · GW(p)

I've got to disagree. What my introspection calls my "consciousness" is mostly listening to myself talk to myself. And then after I have practiced saying it to myself, I may go on to say it out loud.

Agreed, but that tells you consciousness requires language. That doesn't tell you language requires consciousness. Drugs such as alcohol or Ambien can cause people to have conversations and engage in other activities while unconscious.

comment by NihilCredo · 2010-10-04T00:07:19.271Z · LW(p) · GW(p)

Thanks; +1 for the explanation.

No mod to the original comment; I would downmod the "consciousness was not a positive factor in the evolution of brains" part and upmod the "we do not actually rely much if at all on conscious thought" one.

comment by davidad · 2010-10-13T21:47:24.623Z · LW(p) · GW(p)

Upvoted for underconfidence.

comment by drc500free · 2010-10-08T15:55:06.553Z · LW(p) · GW(p)

Having just stumbled across LW yesterday, I've been gorging myself on rationality and discovering that I have a lot of cruft in my thought process, but I have to disagree with you on this.

“Meaning” and “mysterious” don’t apply to reality; they only apply to maps of the terrain, reality. Self-awareness itself is what allows a pattern/agent/model to preserve itself in the face of entropy and competitors, making it “meaningful” to an observer of the agent/model that is trying to understand how it will operate. Being self-aware of the self-awareness (i.e. mapping the map, or recursively refining the super-model to understand itself better) can also impact our ability to preserve ourselves, making it “meaningful” to the agent/model itself. Being aware of others’ self-awareness (i.e. mapping a different agent/map and realizing that it will act to preserve itself) is probably one of the most critical developments in the evolution of humans. “I am” a super-agent. It is a stack of component agents.

At each layer, a shared belief by a system of agents (that each agent is working towards the common utility of all the agents) results in a super-agent with more complex goals that does not have a belief that it is composed of distinct sub-agents. Like the 7-layer network model or the transistor-gate-chip-computer model, each layer is just an emergent property of its components. But each layer has meaning because it provides us a predictive model to understand the system’s behavior, in a way that we don’t understand by just looking at a complex version of the layer below it.

My super-agent has a super-model of reality, similarly composed. Some parts of that super-model are tagged, weakly or strongly, with an attribute. The collection of cells that makes up a fatty lump on my head is weakly marked with that attribute. The parts of reality where my super-agent/-model exist are very strongly tagged.

My super-agent survives because it has marked the area on its model corresponding to where it exists, and it has a goal of continually remarking this area. If it has an accurate model, but marks a different region of reality (or marks the correct region but doesn’t protect it), it will eventually be destroyed by entropy. If it has an inaccurate model, it won’t be able to effectively interact with reality to protect the region where it resides. If it has an accurate model, and marks only where it originally is, it won’t be able to adapt to face environmental changes and challenges while still maintaining its reality.

comment by Kevin · 2010-10-03T07:43:11.093Z · LW(p) · GW(p)

It does not all add up to normality. We are living in a weird universe. (75%)

Replies from: Interpolate, Eugine_Nier, Risto_Saarelma, Will_Newsome, Clippy
comment by Interpolate · 2010-10-03T11:20:26.451Z · LW(p) · GW(p)

It does not all add up to normality. We are living in a weird universe. (75%)

My initial reaction was that this is not a statement of belief but one of opinion, and to think like reality.

We are living in a Fun Theory universe where we find ourselves as individual or aggregate fun theoretic agents, or something else really bizarre that is not explained by naive Less Wrong rationality, such as multiversal agents playing with lots of humanity's measure.

I'm still not entirely sure what you mean (further elaboration would be very welcome), but going by a naive understanding I upvoted your comment based on the principle of Occam's Razor - whatever your reasons for believing this (presumably perceived inconsistencies, paradoxes etc. in the observable world, physics etc.), I doubt your conceived "weird" universe would be the simplest explanation. Additionally, that conceived weird universe, in addition to lacking epistemic/empirical grounding, begs for more explanation than the understanding (or lack thereof) of the universe/reality that's more or less shared by current scientific consensus.

If I'm understanding correctly, your argument for the existence of a "weird universe" is analogous to an argument for the existence of God (or the supernatural, for that matter): by introducing some cosmic force beyond reason and empiricism, we eliminate the problem of there being phenomena which can't be explained by it.

comment by Eugine_Nier · 2010-10-03T07:53:59.941Z · LW(p) · GW(p)

Please specify what you mean by a weird universe.

Replies from: Kevin
comment by Kevin · 2010-10-03T08:13:53.252Z · LW(p) · GW(p)

We are living in a Fun Theory universe where we find ourselves as individual or aggregate fun theoretic agents, or something else really bizarre that is not explained by naive Less Wrong rationality, such as multiversal agents playing with lots of humanity's measure.

Replies from: None
comment by [deleted] · 2010-10-08T04:54:18.800Z · LW(p) · GW(p)

The more I hear about this the more intrigued I get. Could someone with a strong belief in this hypothesis write a post about it? Or at the very least throw out hints about how you updated in this direction?

comment by Risto_Saarelma · 2010-10-03T10:10:40.012Z · LW(p) · GW(p)

Would "Fortean phenomena really do occur, and some type of anthropic effect keeps them from being verifiable by scientific observers" fit under this statement?

Replies from: Kevin
comment by Kevin · 2010-10-03T10:13:36.140Z · LW(p) · GW(p)

That sounds weird to me.

comment by Will_Newsome · 2010-10-03T07:50:01.945Z · LW(p) · GW(p)

Downvoted in agreement (I happen to know generally what Kevin's talking about here, but it's really hard to concisely explain the intuition).

comment by Clippy · 2010-10-04T16:25:49.521Z · LW(p) · GW(p)

Why do you think so?

Replies from: Kevin
comment by Kevin · 2010-10-09T22:49:26.965Z · LW(p) · GW(p)

For some definitions of weird, our deal (assuming it continues to completion) is enough to land this universe in the block of weird universes.

comment by [deleted] · 2010-10-03T03:00:00.471Z · LW(p) · GW(p)

I think that there are better-than-placebo methods for causing significant fat loss. (60%)

ETA: apparently I need to clarify.

It is way more likely than 60% that gastric bypass surgery, liposuction, starvation, and meth will cause fat loss. I am not talking about that. I am talking about healthy diet and exercise. Can most people who want to lose weight do that deliberately, through diet and exercise? I think it's likely but not certain.

Replies from: magfrump, None, Will_Newsome, Normal_Anomaly, khafra, Larks, JoshuaZ, lmnop, datadataeverywhere
comment by magfrump · 2010-10-03T04:30:19.869Z · LW(p) · GW(p)

voted up because 60% seems WAAAAAYYYY underconfident to me.

Replies from: Eugine_Nier, Zvi, wedrifid, datadataeverywhere
comment by Eugine_Nier · 2010-10-03T04:40:39.275Z · LW(p) · GW(p)

Now that we're up-voting underconfidence I changed my vote.

Replies from: magfrump
comment by magfrump · 2010-10-03T04:56:43.674Z · LW(p) · GW(p)

From the OP:

Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.

comment by Zvi · 2010-10-06T19:06:43.315Z · LW(p) · GW(p)

I almost want this reworded the opposite way for this reason, as a 40% chance that there are not better-than-placebo methods for causing significant fat loss. Even if I didn't have first and second hand examples to fall back on I don't see why there is real doubt on this question. Another more interesting variation is, does such a method exist that is practical for a large percentage of people?

comment by wedrifid · 2010-10-03T05:05:51.107Z · LW(p) · GW(p)

Likewise. My p: 99.5%

comment by datadataeverywhere · 2010-10-03T23:47:30.463Z · LW(p) · GW(p)

likewise

comment by [deleted] · 2010-10-03T05:57:46.355Z · LW(p) · GW(p)

shoot... I'm just scared to bet, is all. You can tell I'm no fun at Casino Night.

Replies from: Will_Newsome, Relsqui
comment by Will_Newsome · 2010-10-03T06:07:49.997Z · LW(p) · GW(p)

Ah, but betting for a proposition is equivalent to betting against its opposite. Why are you so certain that there are no better-than-placebo methods for causing significant fat loss?

But if you do change your mind, please don't change the original, as then everyone's comments would be irrelevant.

Replies from: Jonathan_Graehl, None
comment by Jonathan_Graehl · 2010-10-03T07:43:14.469Z · LW(p) · GW(p)

Absolutely right. This is an important point that many people miss. If you're uncertain about your estimated probability, or even merely risk averse, then you may want to take neither side of the implied bet. Fine, but at least figure out some odds where you feel like you should have an indifferent expectation.

comment by [deleted] · 2010-10-03T06:12:34.850Z · LW(p) · GW(p)

I think, with some confidence, that there are better-than-placebo methods for causing significant fat loss. The low confidence estimate has more to do with my reluctance to be wrong than anything else.

If I were wrong, it would be because overweight is mostly genetic and irreversible (something I have seen argued and supported with clinical studies.)

comment by Relsqui · 2010-10-03T07:13:14.755Z · LW(p) · GW(p)

I sympathize with this. But I also upvoted the original comment because of it (i.e. I also think you're underconfident).

comment by Will_Newsome · 2010-10-03T03:03:50.872Z · LW(p) · GW(p)

Voted down for agreement! (Liposuction... do you mean dietary methods? I'd still agree with you though.)

Edit: On reflection, 60% does seem too low. Changed to upvote.

Replies from: None
comment by [deleted] · 2010-10-03T03:05:58.806Z · LW(p) · GW(p)

I meant diet, exercise, and perhaps supplements; liposuction is trivially true.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T03:20:08.075Z · LW(p) · GW(p)

Generally speaking, most diets and moderate exercise work very well for a year or two. But the shangri-la diet tends to work for as long as you do it (for many/most? people). Also, certain supplements work, but I forgot which. So I gotta agree with you.

Replies from: wedrifid
comment by wedrifid · 2010-10-03T05:07:54.342Z · LW(p) · GW(p)

Also, certain supplements work, but I forgot which. So I gotta agree with you.

For example... just about any stimulant you can get your hands on.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:41:04.253Z · LW(p) · GW(p)

But there were others, I think? User:taw talked about one that you take with caffeine. It might have been a stimulant, though.

Replies from: Douglas_Knight, wedrifid
comment by Douglas_Knight · 2010-10-03T06:57:41.877Z · LW(p) · GW(p)

User:taw talked about one that you take with caffeine.

Ephedrine. The combination is called ECA, which also includes aspirin, but the aspirin wasn't used in the studies.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T22:43:22.820Z · LW(p) · GW(p)

Thanks! :D

comment by wedrifid · 2010-10-03T05:58:33.906Z · LW(p) · GW(p)

But there were others, I think?

For sure. Laxatives. E. coli. But yes, there are others with better side-effect profiles too. :)

User:taw talked about one that you take with caffeine. It might have been a stimulant, though.

Take with caffeine? More caffeine. That'll do the trick. :P

comment by Normal_Anomaly · 2010-12-14T02:29:44.437Z · LW(p) · GW(p)

Upvoted, because I say diet and exercise work at 85% (for a significant fraction of people; there may be some with unlucky genes who can't lose weight that way).

comment by khafra · 2010-10-04T14:53:16.156Z · LW(p) · GW(p)

Does "method" include "exercise and healthy eating"?

Replies from: None
comment by [deleted] · 2010-10-04T14:58:47.210Z · LW(p) · GW(p)

This post has generated so much more controversy than I expected.

I meant exactly exercise and healthy eating! I thought people would assume I meant that. Not gastric bypass surgery, not liposuction, not starvation, not amputating limbs.

Replies from: DilGreen, Richard_Kennaway
comment by DilGreen · 2010-10-05T22:51:41.312Z · LW(p) · GW(p)

Whenever I see someone with one of those badges that says "Lose weight now, ask me how!", I check that they have all their limbs.

comment by Richard_Kennaway · 2010-10-04T15:18:35.341Z · LW(p) · GW(p)

That's ok. Just put an ETA in the top-level comment to clarify that. There's a lot of wiggle room around "healthy eating" though. Where are you drawing the line between calorie restriction and starvation?

comment by Larks · 2010-10-04T11:00:05.288Z · LW(p) · GW(p)

Becoming seriously ill? Better in the sense of losing more weight.

comment by JoshuaZ · 2010-10-04T03:52:48.602Z · LW(p) · GW(p)

Voting down for trivial agreement. Both stomach stapling and gastric lap bands easily meet this. Do you mean maybe non-surgical methods? That seems more questionable.

comment by lmnop · 2010-10-03T03:11:40.163Z · LW(p) · GW(p)

Short term or long term? If long, how long?

comment by datadataeverywhere · 2010-10-03T23:52:12.839Z · LW(p) · GW(p)

I assign p=1 to the proposition that not eating causes significant fat loss. I can't justify subtracting any particular epsilon, which means to me that p=1-e, where e is too small for me to conceive and apply a number to.

EDIT: I am particularly referring to indefinite periods of perfect fasting.

Replies from: None, Richard_Kennaway
comment by [deleted] · 2010-10-03T23:53:34.351Z · LW(p) · GW(p)

The reason it's questionable: how long can one not eat? Can most people not eat for long enough?

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-10-04T00:03:56.566Z · LW(p) · GW(p)

Then take involuntary starvation. Perhaps you meant "better" in an ethical sense, but I thought you meant in a sense of strict effectiveness.

This proposition is patently false (by indicating that there is a 40% chance that nothing causes better weight loss than placebo), as you admitted with regard to liposuction elsewhere in this thread.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T00:27:10.448Z · LW(p) · GW(p)

I think you're nitpicking; if what she's saying sounds completely obviously unreasonable then it's probably not what she meant. She means something like "There's a 60% chance that diets, legal supplements, fasting, and/or exercise, in amounts that Western culture would count as memetically reasonable, and in amounts that can be reasonably expected to be undertaken by members of Western culture, can cause significant weight loss." To which everyone says, "No, more like 95%", not "Haha obviously liposuction works, and so does starvation, you imprecise person: next time write a paragraph's worth of disclaimers and don't count on the ability of your audience to make charitable interpretations."

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-10-04T00:47:38.341Z · LW(p) · GW(p)

Maybe I have a different idea than you of memetically reasonable, but I'm perfectly happy saying "No, more like 1-10^-30" to your statement as well as hers. Maybe I need to make a top level post here, but I think that it's a very small minority of humans that are unable to lose weight through diet and exercise, even if the degree of effort required is one not frequently undertaken. I don't think that the degree of effort required is considered widely unreasonable in Western culture.

My p value is so high because this thread asks us to discount matters of opinion, so the probability that the effort required is beyond what is considered reasonable seems outside the scope. Same for "reasonably expected". I feel like it's enough to say that the methods don't require super-human willpower or vast resources. I think the methods themselves are unquestionable.

comment by Richard_Kennaway · 2010-10-04T11:49:32.075Z · LW(p) · GW(p)

It has been remarked in support of that proposition that no fat people came out of Auschwitz (or Singapore, or similar episodes). But is that because they got thin, or did they die before getting thin? Has any research been done on how people of different body types respond to starvation? The full report on this experiment might address that, but the Wiki article doesn't. However, the volunteers for that experiment were "young, healthy men" volunteering as an alternative to military service, so it's unlikely that any of them were obese going in.

comment by nazgulnarsil · 2010-10-04T07:09:13.372Z · LW(p) · GW(p)

The joint-stock corporation is the best* system of peacefully organizing humans to achieve goals. The closer a governmental structure conforms to a joint-stock system, the more peaceful and prosperous it will become (barring getting nuked by a jealous democracy). (99%)

*that humans have invented so far

Replies from: Mass_Driver, blogospheroid, knb, Scott78704
comment by Mass_Driver · 2010-10-06T05:55:18.930Z · LW(p) · GW(p)

The proposition strikes me as either circular or wrong, depending on your definitions of "peaceful" and "prosperous."

If by "peaceful" you mean "devoid of violence," and by "violence" you essentially mean "transfers of wealth that are contrary to just laws," and by "just laws" you mean "laws that honor private property rights above all else," then you should not be surprised if joint stock corporations are the most peaceful entities the world has seen so far, because joint stock corporations are dependent on private property rights for their creation and legitimacy.

If by "prosperous" you mean "full of the kind of wealth that can be reported on an objective balance sheet," and if by "objective balance sheet" you mean "an accounting that will satisfy a plurality of diverse, decentralized and marginally involved investors," then you should likewise not be surprised if joint stock corporations increase prosperity, because joint stock corporations are designed so as to maximize just this sort of prosperity.

Unfortunately, they do it by offloading negative externalities in the form of pollution, alienation, lower wages, censored speech, and cyclical instability of investments onto individual people.

When your 'goals' are the lowest common denominator of materialistic consumption, joint stock corporations might be unbeatable. If your goals include providing a social safety net, education, immunizations, a free marketplace of ideas, biodiversity, and clean air, you might want to consider using a liberal democracy.

Using the most charitable definitions I can think of for your proposition, my estimate for the probability that a joint-stock system would best achieve a fair and honest mix of humanity's crasser and nobler goals is somewhere around 15%, and so I'm upvoting you for overconfidence.

Replies from: blogospheroid
comment by blogospheroid · 2010-10-06T11:18:49.412Z · LW(p) · GW(p)

Coming from the angle of competition in governance, I think you might be mixing up a lot of stuff. A joint-stock corporation which is sovereign is trying to compete in the wider world for customers, i.e. willing taxpayers.

If the people desire the values you have mentioned then the joint-stock government will try to provide those cost effectively.

Clean air and immunizations will almost certainly be on the agenda of a city government.

Biodiversity will be important to a government which includes forests in its assets and wants to sustainably maintain the same.

A free marketplace of ideas, free education and social safety nets would purely be determined by the market for people. Is it an important enough value that people would not come to your country and would go to another? If it is, then the joint-stock government would try to provide the same. If not, then they wouldn't.

Replies from: wedrifid, Mass_Driver
comment by wedrifid · 2010-10-06T11:30:56.012Z · LW(p) · GW(p)

All of this makes sense in principle.

(I'm assuming you're not thinking that any of it would actually work in practice with either humans or ideal rational agents, right?)

comment by Mass_Driver · 2010-10-06T13:40:28.677Z · LW(p) · GW(p)

Good response, but I have to agree with wedrifid here: you can't compete for "willing taxpayers" at all if you're dealing with hard public goods, and elsewhere competition is dulled by (a) the irrational political loyalties of citizens, (b) the legitimate emotional and economic costs of immigration, (c) the varying ability of different kinds of citizens to move, and (d) protectionist controls on the movement of labor in whatever non-libertopian governments remain, which might provide them with an unfair advantage in real life, the theoretical axioms of competitive advantage theory be damned.

I'd be all for introducing some features of the joint stock corporation into some forms of government, but that doesn't sound very much like what you were proposing would lead to peace and prosperity -- you said the jsc was better than other forms, not a good thing to have a nice dose of.

comment by blogospheroid · 2010-10-05T04:57:41.328Z · LW(p) · GW(p)

Or, as I would call it, no representation without taxation. Those who contribute equity to society rule it. Everyone else contracts with the corporation in some way or another.

comment by knb · 2010-10-04T22:25:11.816Z · LW(p) · GW(p)

What is the term for this mode of governance? Corporate Monarchy? Seems like a good idea to me.

Replies from: gwern, Emile
comment by gwern · 2010-10-07T01:48:54.772Z · LW(p) · GW(p)

England had a property-rights based monarchy. It's basically gone now. So pace Mencius Moldbug, it can't be an especially good system - else it would not have died.

Replies from: knb
comment by knb · 2010-10-07T02:49:10.054Z · LW(p) · GW(p)

So pace Mencius Moldbug, it can't be an especially good system - else it would not have died.

I don't understand this. England never was a corporate monarchy, though.

Replies from: gwern
comment by gwern · 2010-10-07T03:07:31.125Z · LW(p) · GW(p)

England was never a 'corporate' monarchy in the sense of a limited-liability joint-stock company with numeric shares, voting rights, etc. I never said it was, though, but that it was 'property-rights based', which it was - the whole country and all legal privileges were property which the king could and did rent and sell away.

This is one of the major topics of Nick Szabo's blog Unenumerated. If you have the time, I strongly recommend reading it all. It's up there with Overcoming Bias in my books.

comment by Emile · 2010-10-17T13:20:01.663Z · LW(p) · GW(p)

Moldbug calls this a joint-stock republic, though he mixes it with some of his more fringe ideas.

I'll second gwern's recommendation on Nick Szabo's blog - he has a post on Government for Profit, which I think was written as a rebuttal to some of Moldbug's ideas (see the comments in this post)

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-10-17T19:40:08.830Z · LW(p) · GW(p)

Another recommendation for Nick Szabo's blog. The only online writings I know of about governance and political economy that come close are the blogs of economist Arnold Kling and the eccentric and hyperbolic Mencius Moldbug. (Hanson's blog is extremely strong on several subjects, but governance is not IMHO one of them.)

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-17T21:03:33.557Z · LW(p) · GW(p)

rhollerith_dot_com:

Another recommendation for Nick Szabo's blog. The only online writings I know of about governance and political economy that come close are the blogs of economist Arnold Kling and the eccentric and hyperbolic Mencius Moldbug.

I agree with all these recommendations, and I'd add that these three authors have written some of their best stuff in the course of debating each other. In particular, a good way to get the most out of Moldbug is to read him alongside Nick Szabo's criticisms that can be found both in UR comments and on Szabo's own blog. As another gem, the 2008 Moldbug-Kling debate on finance (parts (1), (2), (3), (4), and (5)) was one of the best and most insightful discussions of economics I've ever read.

Hanson's blog is extremely strong on several subjects, but governance is not IMHO one of them.

I agree. In addition, I must say I'm disappointed with the shallowness of the occasional discussions of governance on LW. Whenever such topics are opened, I see people who otherwise display tremendous smarts and critical skills making not-even-wrong assertions based on a completely naive view of the present system of governance, barely more realistic than the descriptions from civics textbooks.

comment by Scott78704 · 2010-10-06T14:55:50.092Z · LW(p) · GW(p)

Open source.

comment by Vladimir_M · 2010-10-03T10:45:08.934Z · LW(p) · GW(p)

Although lots of people here consider it a hallmark of "rationality," assigning numerical probabilities to common-sense conclusions and beliefs is meaningless, except perhaps as a vague figure of speech. (Absolutely certain.)

Replies from: Alicorn, novalis, Perplexed, komponisto, prase, xv15, torekp, orthonormal, None, None
comment by Alicorn · 2010-10-03T14:23:43.312Z · LW(p) · GW(p)

(Absolutely certain.)

I'm not sure whether to chide you or giggle at the self-reference. I suspect, though, that "absolutely certain" is not a confidence level.

comment by novalis · 2010-10-03T22:42:43.724Z · LW(p) · GW(p)

I want to vote you down in agreement, but I don't have enough karma.

comment by Perplexed · 2010-10-04T14:27:15.824Z · LW(p) · GW(p)

assigning numerical probabilities to common-sense conclusions and beliefs is meaningless

It is risky to deprecate something as "meaningless" - a ritual, a practice, a word, an idiom. Risky because the actual meaning may be something very different than you imagine. That seems to be the case here with attaching numbers to subjective probabilities.

The meaning of attaching a number to something lies in how that number may be used to generate a second number that can then be attached to something else. There is no point in providing a number to associate with the variable 'm' (i.e. that number is meaningless) unless you simultaneously provide a number to associate with the variable 'f' and then plug both into "f=ma" to generate a third number to associate with the variable 'a', a number which you can test empirically.

Similarly, a single isolated subjective probability estimate may seem somewhat meaningless in isolation, but if you place it into a context with enough related subjective probability estimates and empirically measured frequencies, then all those probabilities and frequencies can be combined and compared using the standard formulas of Bayesian probability:

  • P(~A) = 1 - P(A)
  • P(B|A)*P(A)=P(A&B)=P(A|B)*P(B)

So, if you want to deprecate as "meaningless" my estimate that the Democrats have a 40% chance to maintain their House majority in the next election, go ahead. But you cannot then also deprecate my estimate that the Republicans have a 70% chance of reaching a House majority. Because the conjunction of those two probability estimates is not meaningless. It is quite respectably false.
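
A minimal sketch of that kind of consistency check, using the two illustrative House-majority estimates above: the events are mutually exclusive, so coherent probabilities must sum to at most 1, and 0.4 + 0.7 does not.

```python
# The two subjective estimates quoted above. The events are mutually
# exclusive (both parties cannot hold the majority), so the probability
# axioms require p_dem + p_rep <= 1.
p_dem_keep_majority = 0.40
p_rep_gain_majority = 0.70

def coherent(p_a: float, p_b: float) -> bool:
    """Check the axioms for two mutually exclusive events."""
    return 0.0 <= p_a <= 1.0 and 0.0 <= p_b <= 1.0 and p_a + p_b <= 1.0

print(coherent(p_dem_keep_majority, p_rep_gain_majority))  # False: jointly untenable
```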

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T20:15:43.540Z · LW(p) · GW(p)

I think you're not drawing a clear enough distinction between two different things, namely the mathematical relationships between numbers, and the correspondence between numbers and reality.

If you ask an astronomer what the mass of some asteroid is, he will presumably give you a number with a few significant digits and an uncertainty interval. If you ask him to justify this number, he will be able to point to some observations that are incompatible with the assumption that the mass is outside this interval, which follows from a mathematical argument based on our best knowledge of physics. If you ask for more significant digits, he will say that we don't know (and that beyond a certain accuracy, the question doesn't even make sense, since it's constantly losing and gathering small bits of mass). That's what it means for a number to be rigorously justified.

But now imagine that I make an uneducated guess of how heavy this asteroid might be, based on no actual astronomical observation. I do of course know that it must be heavier than a few tons or otherwise it wouldn't be noticeable from Earth as an identifiable object, and that it must be lighter than 10^20 or so tons since that's roughly the range where smaller planets are, but it's clearly nonsensical for me to express that guess with even one digit of precision. Yet I could insist on a precise guess, and claim that it's "meaningful" in a way analogous to your above justification of subjective probability estimates, by deriving various mathematical and physical implications of this fact. If you deprecate my claim that its mass is 4.5237 x 10^15kg, then you cannot also deprecate my claim that it is a sphere of radius 1km and average density 1000kg/m^3, since the conjunction of these claims is by the sheer force of mathematics false.

Therefore, I don't see how you can argue that a number is meaningful by merely noting its relationships with other numbers that follow from pure mathematics. Or am I missing something with this analogy?
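
For what it's worth, the arithmetic behind that last example can be checked directly from the quoted numbers: a sphere of radius 1 km and density 1000 kg/m^3 has a mass of roughly 4.2 x 10^12 kg, about a thousand times less than the claimed 4.5237 x 10^15 kg, so the conjunction is indeed mathematically false.

```python
import math

# Figures quoted in the comment above (a deliberately made-up asteroid).
claimed_mass_kg = 4.5237e15
radius_m = 1000.0        # sphere of radius 1 km
density_kg_m3 = 1000.0   # average density 1000 kg/m^3

implied_mass_kg = (4.0 / 3.0) * math.pi * radius_m**3 * density_kg_m3
print(f"{implied_mass_kg:.3e}")  # ~4.189e+12 kg, not 4.5237e+15 kg
```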

Replies from: Perplexed
comment by Perplexed · 2010-10-04T20:33:45.384Z · LW(p) · GW(p)

I don't see how you can argue that a number is meaningful by merely noting its relationships with other numbers that follow from pure mathematics. Or am I missing something with this analogy?

The only thing you are missing is the first paragraph of my reply. Just because something doesn't have the kind of meaning you think it ought to have (by virtue of being a number, for example) that doesn't justify your claim that it is meaningless.

Subjective probabilities of isolated propositions don't have the kind of meaning you want numbers to have. But they have exactly the kind of meaning I want them to have - specifically they can be used in computations that produce consistent results.

Do you think that the digits of pi beyond the first half dozen are also meaningless?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T20:52:48.205Z · LW(p) · GW(p)

Perplexed:

Subjective probabilities of isolated propositions don't have the kind of meaning you want numbers to have. But they have exactly the kind of meaning I want them to have - specifically they can be used in computations that produce consistent results.

Fair enough, but I still don't see how this solves the problem of the correspondence between numbers and reality. Any number can be used in computations that produce consistent results if you just start plugging it into formulas derived from some consistent mathematical theory. It is when the numbers are used as basis for claims about the real, physical world that I insist on an explanation of how exactly they are derived and how their claimed correspondence with reality is justified.

Do you think that the digits of pi beyond the first half dozen are also meaningless?

The digits of pi are an artifact of pure mathematics, so I don't think it's a good analogy for what we're talking about. Once you've built up enough mathematics to define lengths of curves in Euclidean geometry, the ratio between the circumference and diameter of a circle follows by pure logic. Any suitable analogy for what we're talking about must encompass empirical knowledge, and claims which can be falsified by empirical observations.

Replies from: Perplexed
comment by Perplexed · 2010-10-04T21:25:15.892Z · LW(p) · GW(p)

Subjective probabilities of isolated propositions don't have the kind of meaning you want numbers to have. But they have exactly the kind of meaning I want them to have - specifically they can be used in computations that produce consistent results.

Fair enough, but I still don't see how this solves the problem of the correspondence between numbers and reality.

It doesn't have to. That is a problem you made up. Other people don't have to buy in to your view on the proper relationship between numbers and physical reality.

My viewpoint on numbers is somewhere between platonism and formalism. I think that the meaning of a number is a particular structure in my mind. If I have an axiom system that is categorical (and, of course, usually I don't) then that picture in my mind can be made inter-subjective in that someone who also accepts those axioms can build an isomorphic structure in their own mind. The real world has absolutely nothing to do with Tarski's semantics - which is where I look to find out what the "meaning" of a number is.

Your complaint that subjective probabilities have no meaning is very much like the complaint of a new convert to atheism who laments that without God, life has no meaning. My advice: stop telling other people what the word "meaning" should mean.

However, if you really need some kind of affirmation, then I will provide some. I agree with you that the numbers used in subjective probabilities are less, ... what is the right word, ... less empirical than are the numbers you usually find in science classes. Does that make you feel better?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T22:27:41.312Z · LW(p) · GW(p)

Perplexed:

It doesn't have to. That is a problem you made up. Other people don't have to buy in to your view on the proper relationship between numbers and physical reality.

You probably wouldn't buy that same argument if it came from a numerologist, though. I don't think I hold any unusual and exotic views on this relationship, and in fact, I don't think I have made any philosophical assumptions in this discussion beyond the basic common-sense observation that if you want to use numbers to talk about the real world, they make sense only if they have a clear connection with something that can be measured or counted. I don't see any relevance of these (otherwise highly interesting) deep questions of the philosophy of math for any of my arguments.

Replies from: Perplexed, mattnewport
comment by Perplexed · 2010-10-04T23:30:21.882Z · LW(p) · GW(p)

There is nothing philosophically wrong with your position except your choice of the word "meaningless" as an epithet for the use of numbers which cannot be empirically justified. Your choice of that word is pretty much the only reason I am disagreeing with you.

comment by mattnewport · 2010-10-04T22:50:08.066Z · LW(p) · GW(p)

Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds if offered a bet? If you're not in the habit of accepting bets, how do you think someone who does this for a living (a bookie for example) should go about deciding on what odds to assign to a given bet?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T23:49:54.411Z · LW(p) · GW(p)

mattnewport:

Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds if offered a bet?

In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense. Otherwise, betting is just gambling (which of course can be worthwhile for fun or signaling value). Look at the stock market: it's pure gambling, unless you have insider knowledge or vastly higher expertise than the average investor.

This is the basic reason why I consider the emphasis on subjective Bayesian probabilities that is so popular here misguided. In technical problems where probability calculations can be helpful, the experts in the field already know how to use them. On the other hand, for the great majority of the relevant beliefs and conclusions you'll form in life, they offer nothing useful beyond what your vague common sense is already telling you. If you start taking them too seriously, it's easy to start fooling yourself that your thinking is more accurate and precise than it really is, and if you start actually betting on them, you'll be just gambling.

If you're not in the habit of accepting bets, how do you think someone who does this for a living (a bookie for example) should go about deciding on what odds to assign to a given bet?

I'm not familiar with the details of this business, but from what I understand, bookmakers work in such a way that they're guaranteed to make a profit no matter what happens. Effectively, they exploit the inconsistencies between different people's estimates of what the favorable odds are. (If there are bookmakers who stake their profit on some particular outcome and still manage to stay profitable, then I'm sure they have insider knowledge.) Now of course, the trick is to come up with a book that is both profitable and offers odds that will sell well, but here we get into the fuzzy art of exploiting people's biases for profit.
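As a rough illustration of the balanced-book idea, here is a sketch with made-up odds and stakes; it is not a claim about how any real bookmaker operates, only the arithmetic of the margin:

```python
# Two-horse race. The bookmaker quotes decimal odds whose implied
# probabilities sum to more than 1 -- the "overround", i.e. the margin.
odds = {"horse_a": 1.8, "horse_b": 2.1}        # made-up decimal odds
implied = {h: 1 / o for h, o in odds.items()}  # implied probabilities
overround = sum(implied.values())              # ~1.03, roughly a 3% margin
print("overround:", round(overround, 3))

# If the money staked on each horse is proportional to the implied
# probabilities, the bookmaker's profit is the same whichever horse wins.
total_stakes = 10_000
stakes = {h: total_stakes * implied[h] / overround for h in odds}
for winner in odds:
    payout = stakes[winner] * odds[winner]
    print(winner, "wins -> bookmaker profit:", round(total_stakes - payout, 2))
```

The profit comes entirely from the margin; the skill lies in setting and adjusting the odds so that the incoming bets actually stay balanced.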

Replies from: mattnewport, jimrandomh
comment by mattnewport · 2010-10-05T00:06:34.376Z · LW(p) · GW(p)

In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense.

You still have to be able to translate your superior relevant knowledge into odds in order to set the terms of the bet however. Do you not believe that this is an ability that people have varying degrees of aptitude for?
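For concreteness, the translation itself is only the standard expected-value bookkeeping; here is a small sketch with a made-up degree of belief and made-up offered odds, nothing specific to any particular market:

```python
def fair_decimal_odds(p):
    """Decimal odds at which a bet on an event of probability p breaks even."""
    return 1.0 / p

def expected_profit(p, offered_odds, stake=1.0):
    """Expected profit of staking `stake` at `offered_odds` on an event
    you believe happens with probability p."""
    return p * stake * (offered_odds - 1) - (1 - p) * stake

p_belief = 0.40                            # your degree of belief in the outcome
print(fair_decimal_odds(p_belief))         # 2.5 -- anything longer is value, to you
print(expected_profit(p_belief, 3.0))      # +0.20 per unit staked: take the bet
print(expected_profit(p_belief, 2.2))      # -0.12 per unit staked: decline
```

The hard part, of course, is exactly the question under discussion: whether the 0.40 can be anything better than a dressed-up hunch.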

Look at the stock market: it's pure gambling, unless you have insider knowledge or vastly higher expertise than the average investor.

Vastly higher expertise than the average investor would appear to include something like the ability in question - translating your beliefs about the future into a probability such that you can judge whether investments have positive expected value. If you accept that true alpha exists (and the evidence suggests that, though rare, a small percentage of the best investors do appear to have positive alpha) then what process do you believe those who possess it use to decide which investments are good and which bad?

What's your opinion on prediction markets? They seem to produce fairly good probability estimates so presumably the participants must be using some better-than-random process for arriving at numerical probability estimates for their predictions.

I'm not familiar with the details of this business, but from what I understand, bookmakers work in such a way that they're guaranteed to make a profit no matter what happens.

They certainly aim for a balanced book, but they wouldn't be very profitable if they were not reasonably competent at setting initial odds (and updating them in the light of new information). If the initial odds are wildly out of line with their customers' estimates, they won't be able to make a balanced book.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-05T07:18:42.760Z · LW(p) · GW(p)

mattnewport:

You still have to be able to translate your superior relevant knowledge into odds in order to set the terms of the bet however. Do you not believe that this is an ability that people have varying degrees of aptitude for?

They sure do, but in all the examples I can think of, people either just follow their intuition directly when faced with a concrete situation, or employ rigorous science to attack the problem. (It doesn't have to be the official accredited science, of course; the Venn diagram of official science and valid science features only a partial overlap.) I just don't see any practical examples of people successfully betting by doing calculations with probability numbers derived from their intuitive feelings of confidence that would go beyond what a mere verbal expression of these feelings would convey. Can you think of any?

If you accept that true alpha exists (and the evidence suggests that though rare a small percentage of the best investors do appear to have positive alpha) then what process do you believe those who possess it use to decide which investments are good and which bad?

Well, if I knew, I would be doing it myself -- and I sure wouldn't be talking about it publicly!

The problem with discussing investment strategies is that any non-trivial public information about this topic necessarily has to be bullshit, or at least drowned in bullshit to the point of being irrecoverable, since exclusive possession of correct information is a sure path to getting rich, but its effectiveness critically depends on exclusivity. Still, I would be surprised to find out that the success of some alpha-achieving investors is based on taking numerical expressions of common-sense confidence seriously.

In a sense, a similar problem faces anyone who aspires to be more "rational" than the average folk in any meaningful sense. Either your "rationality" manifests itself only in irrelevant matters, or you have to ask yourself what is so special and exclusive about you that you're reaping practical success that eludes so many other people, and in such a way that they can't just copy your approach.

What's your opinion on prediction markets? They seem to produce fairly good probability estimates so presumably the participants must be using some better-than-random process for arriving at numerical probability estimates for their predictions.

I agree with this assessment, but the accuracy of information aggregated by a prediction market implies nothing about your own individual certainty. Prediction markets work by cancelling out random errors and enabling specialists who wield esoteric expertise to take advantage of amateurs' systematic biases. Where your own individual judgment falls within this picture, you cannot know, unless you're one of these people with esoteric expertise.

Replies from: mattnewport
comment by mattnewport · 2010-10-05T20:22:31.202Z · LW(p) · GW(p)

I just don't see any practical examples of people successfully betting by doing calculations with probability numbers derived from their intuitive feelings of confidence that would go beyond what a mere verbal expression of these feelings would convey. Can you think of any?

I'd speculate that bookies and professional sports bettors are doing something like this. By bookies here I mean primarily the kind of individuals who stand with a chalkboard at race tracks rather than the large companies. They probably use some semi-rigorous / scientific techniques to analyze past form and then mix it with a lot of intuition / expertise together with lots of detailed domain-specific knowledge and 'insider' info (a particular horse or jockey has recently recovered from an illness or injury and so may perform worse than expected, etc.). They'll then integrate all of this information together using some non-mathematically-rigorous, opaque mental process and derive a probability estimate which will determine what odds they are willing to offer or accept.

I've read a fair bit of material by professional investors and macro hedge fund managers describing their thinking and how they make investment decisions. I think they are often doing something similar. Integrating information derived from rigorous analysis with more fuzzy / intuitive reasoning based on expertise, knowledge and experience and using it to derive probabilities for particular outcomes. They then seek out investments that currently appear to be mis-priced relative to the probabilities they've estimated, ideally with a fairly large margin of safety to allow for the imprecise and uncertain nature of their estimates.

It's entirely possible that this is not what's going on at all but it appears to me that something like this is a factor in the success of anyone who consistently profits from dealing with risk and uncertainty.

The problem with discussing investment strategies is that any non-trivial public information about this topic necessarily has to be bullshit, or at least drowned in bullshit to the point of being irrecoverable, since exclusive possession of correct information is a sure path to getting rich, but its effectiveness critically depends on exclusivity.

My experience leads me to believe that this is not entirely accurate. Investors are understandably reluctant to share very specific time critical investment ideas for free but they frequently share their thought processes for free and talk in general terms about their approaches and my impression is that they are no more obfuscatory or deliberately misleading than anyone else who talks about their success in any field.

In addition, hedge fund investor letters often share quite specific details of reasoning after the fact once profitable trades have been closed and these kinds of details are commonly elaborated in books and interviews once time-sensitive information has lost most of its value.

Either your "rationality" manifests itself only in irrelevant matters, or you have to ask yourself what is so special and exclusive about you that you're reaping practical success that eludes so many other people, and in such a way that they can't just copy your approach.

This seems to be taking the ethos of the EMH a little far. I comfortably attribute a significant portion of my academic and career success to being more intelligent and a clearer thinker than most people. Anyone here who through a sense of false modesty believes otherwise is probably deluding themselves.

Where your own individual judgment falls within this picture, you cannot know, unless you're one of these people with esoteric expertise.

This seems to be the main point of ongoing calibration exercises. If you have a track record of well calibrated predictions then you can gain some confidence that your own individual judgement is sound.

Overall I don't think we have a massive disagreement here. I agree with most of your reservations, and I'm by no means certain that improving one's own calibration is feasible, but I suspect that it might be, and it seems sufficiently instrumentally useful that I'm interested in trying to improve my own.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-06T06:22:43.139Z · LW(p) · GW(p)

mattnewport:

I'd speculate that bookies and professional sports bettors are doing something like this. [...] I've read a fair bit of material by professional investors and macro hedge fund managers describing their thinking and how they make investment decisions. I think they are often doing something similar.

Your knowledge about these trades seems to be much greater than mine, so I'll accept these examples. In the meantime, I have expounded my whole view of the topic in a reply to an excellent systematic list of questions posed by prase, and in those terms, this would indicate the existence of what I called the third type of exceptions under point (3). I still maintain that these are rare exceptions in the overall range of human judgments, though, and that my basic point holds for the overwhelming majority of human common-sense thinking.

Investors are understandably reluctant to share very specific time critical investment ideas for free but they frequently share their thought processes for free and talk in general terms about their approaches and my impression is that they are no more obfuscatory or deliberately misleading than anyone else who talks about their success in any field.

I don't think they're being deliberately misleading. I just think that the whole mechanism by which the public discourse on these topics comes into being inherently generates a nearly impenetrable confusion, which you can dispel to extract useful information only if you are already an expert in the first place. There are many specific reasons for this, but it all ultimately comes down to the stability of the weak EMH equilibrium.

This seems to be taking the ethos of the EMH a little far. I comfortably attribute a significant portion of my academic and career success to being more intelligent and a clearer thinker than most people. Anyone here who through a sense of false modesty believes otherwise is probably deluding themselves.

Oh, absolutely! But you're presumably estimating the rank of your abilities based on some significant accomplishments that most people would indeed find impossible to achieve. What I meant to say (even though I expressed it poorly) is that there is no easy and readily available way to excel at "rationality" in any really relevant matters. This is in contrast to the attitude, sometimes seen among the people here, that you can learn about Bayesianism or whatever else and just by virtue of that set yourself apart from the masses in accuracy of thought. The EMH ethos is, in my opinion, a good intellectual antidote against such temptations of hubris.

comment by jimrandomh · 2010-10-05T00:04:33.718Z · LW(p) · GW(p)

Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds if offered a bet?

In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense

You're dodging the question. What if the odds arose from a natural process, so that there isn't a person on the other side of the bet to compare your state of knowledge against?

Replies from: None, Vladimir_M
comment by [deleted] · 2010-10-05T03:27:32.400Z · LW(p) · GW(p)

I think this is right. The idea that you would be betting against another person is inessential, an unfortunate distraction arising from the choice of thought experiment. Admittedly it's a natural way to understand the thought experiment, but it's inessential. The experiment could be revised to exclude it. In fact every moment we make decisions whose outcomes depend on things we don't know, and in making those decisions we are therefore in effect gambling. We are surrounded by risks, and our decisions reveal our assessment of those risks.

comment by Vladimir_M · 2010-10-05T02:22:08.742Z · LW(p) · GW(p)

jimrandomh:

You're dodging the question. What if the odds arose from a natural process, so that there isn't a person on the other side of the bet to compare your state of knowledge against?

Maybe it's my failure of English comprehension (I'm not a native speaker, as you might guess from my frequent grammatical errors), but when I read the phrase "being offered good odds if offered a bet," I understood it as asking about a bet with opponents who stand to lose if my guess is right. So, honestly, I wasn't dodging the question.

But to answer your question, it depends on the concrete case. Some natural processes can be approximated with models that yield useful probability estimates, and faced with some such process, I would of course try to use the best scientific knowledge available to calculate the odds if the stakes are high enough to justify the effort. When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet. And I honestly cannot think of a situation where translating this intuitive feeling of certainty into numbers would increase the clarity and accuracy of my thinking, or provide for any useful practical guidelines.

For example, if I come across a ditch and decide to jump over to save the effort of walking around to cross over a bridge, I'm effectively betting that it's narrow enough to jump over safely. In reality, I'll feel intuitively either that it's safe to jump or not, and I'll act on that feeling, produced by some opaque module for physics calculations in my brain. Of course, my conclusion might be wrong, and as a kid I would occasionally injure myself by judging wrongly in such situations, but how can I possibly quantify this feeling of certainty numerically in a meaningful way? It simply makes no sense. The overwhelming majority of real-life cases where I have to produce some judgment, and perhaps even bet on it, are of this sort.

It would be cool to have a brain that produces confidence estimates for its conclusions with greater precision, but mine simply isn't like that, and it's useless to pretend that it is.

Replies from: None
comment by [deleted] · 2010-10-05T10:54:40.909Z · LW(p) · GW(p)

When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet.

Applying the view of probability as willingness to bet, you can't refuse to reveal your probability assignments. Life continually throws at us risky choices. You can perform risky action X with high-value success Y and high-cost failure Z or you can refuse to perform it, but both actions reveal something about your probability assignments. If you perform the risky action X, it reveals that you assign sufficiently high probability to Y (i.e. low to Z) given the values that you place on Y and Z. If you refuse to perform risky action X, it reveals that you assign sufficiently low probability to Y given the values you place on Y and Z. This is nothing other than your willingness to bet.

In an actual case, your simple yes/no response to a given choice is not enough to reveal your probability assignment and only reveals some information about it (that it is below or above a certain value). But counterfactually, we can imagine infinite variations on the choice you are presented with, and for each of these choices, there is a response which (counterfactually) you would have given. This set of responses manifests your probability assignment (and also reveals its degree of precision). Of course, in real life, we can't usually conduct an experiment that reveals a substantial portion of this set of counterfactuals, so in real life, we remain in the dark about your probability assignment (unless we find some clever way to elicit it other than the direct, brute-force test-all-variations approach I have just described). But the counterfactuals are still there, and still define a probability assignment, even if we don't know what it is.

And I honestly cannot think of a situation where translating this intuitive feeling of certainty into numbers would increase the clarity and accuracy of my thinking, or provide for any useful practical guidelines.

But this revealed probability assignment is parallel to revealed preference. The point of revealed preference is not to help the consumer make better choices. It is a conceptual and sometimes practical tool of economics. The economist studying people discovers their preferences by observing their purchases. And similarly, we can discover a person's probability assignments by observing his choices. The purpose need not be to help that person to increase the clarity or accuracy of his own thinking, any more than the purpose of revealed preference is to help the consumer shop.

A person interested in self-knowledge, for whatever reason, might want to observe his own behavior in order to discover his own preferences. I think that people like Roissy in DC may be able to teach women about themselves if they choose to read him, teach them about what they really want in a man by pointing out what their behavior is, pointing out that they pursue certain kinds of men and shun others. Women - along with everybody else - are apparently suffering from many delusions about what they want, thinking they want one thing, but actually wanting another - as revealed by their behavior. This self-knowledge may or may not be helpful, but surely at least some women would be interested in it.

For example, if I come across a ditch and decide to jump over to save the effort of walking around to cross over a bridge, I'm effectively betting that it's narrow enough to jump over safely.

But as a matter of fact your choice is influenced by several factors, including the reward of successfully jumping over the ditch (i.e. the reduction in walking time) and the cost of attempting the jump and failing, along with the width of the gap. As these factors are (counterfactually) varied, a possibly precise picture of your probability assignment may emerge. That is, it may turn out that you are willing to risk the jump if failure would only sprain an ankle, but unwilling to risk the jump if failure is certain death. This would narrow down the probability of success that you have assigned to the jump - it would be probable enough to be worth risking the sprained ankle, but not probable enough to be worth risking certain death. This probability assignment is not necessarily anything that you have immediately available to your conscious awareness, but in principle it can be elicited through experimentation with variations on the scenario.
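To make the elicitation explicit, here is a minimal sketch with invented utilities for the ditch example; the point is only the bounding logic, not the particular numbers:

```python
def break_even_probability(reward, failure_cost):
    """The p(success) at which the gamble's expected value is exactly zero:
    p * reward - (1 - p) * failure_cost = 0."""
    return failure_cost / (reward + failure_cost)

reward = 1.0   # utility of saving the walk to the bridge (invented unit)

# The jumper accepts the gamble when failure only means a sprained ankle,
# so their implicit p(success) must exceed this threshold:
print("p >", break_even_probability(reward, failure_cost=20.0))        # ~0.952

# The jumper refuses when failure means certain death,
# so their implicit p(success) must fall below this threshold:
print("p <", break_even_probability(reward, failure_cost=100_000.0))   # ~0.99999
```

The two counterfactual choices together bracket the probability assignment, even though the jumper never consciously entertained a number.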

Replies from: wnoise
comment by wnoise · 2010-10-05T15:26:19.745Z · LW(p) · GW(p)

But the counterfactuals are still there,

That's a startling statement (especially out of context).

Replies from: None
comment by [deleted] · 2010-10-05T16:51:49.449Z · LW(p) · GW(p)

Are you asking for a defense of the statement, or do you agree with it and are merely commenting on the way I expressed it?

I'll give a defense by means of an example. At Wikipedia they give the following example of a counterfactual:

If Oswald had not shot Kennedy, then someone else would have.

Now consider the equation F=ma. This is translated at Wikipedia into the English:

A body of mass m subject to a force F undergoes an acceleration a that has the same direction as the force and a magnitude that is directly proportional to the force and inversely proportional to the mass, i.e., F = ma.

Now suppose that there is a body of mass m floating in space, and that it has not been subject to nor is it currently subject to any force. I believe that the following is a true counterfactual statement about the body:

Had this body (of mass m) been subject to a force F then it would have undergone an acceleration a that would have had the same direction as the force and a magnitude that would have been directly proportional to the force and inversely proportional to the mass.

That is a counterfactual statement following the model of the wikipedia example, and I believe it is true, and I believe that the contradiction of the counterfactual (which is also a counterfactual, i.e., the claim that the body would not have undergone the stated acceleration) is false.

I believe that this point can be extended to all the laws of physics, either Newton's laws or, if they have been replaced, modern laws. And I believe, furthermore, that the point can be extended to higher-level statements about bodies which are not mere masses moving in space, but, say, thinking creatures making decisions.

Is there any part of this with which you disagree?

A point about the insertion of "I believe". The phrase "I believe" is sometimes used by people to assert their religious beliefs. I don't consider the point I am making to be a personal religious belief, but the plain truth. I only insert "I believe" because the very fact that you brought up the issue tells me that I may be in mixed company that includes someone whose philosophical education has instilled certain views.

Replies from: wnoise
comment by wnoise · 2010-10-08T05:40:45.685Z · LW(p) · GW(p)

I am merely commenting. Counterfactuals are counterfactual, and so don't "exist" and can't be "there" by their very nature.

Yes, of course, they're part of how we do our analyses.

comment by komponisto · 2010-10-03T14:56:45.538Z · LW(p) · GW(p)

Upvoted. Definitely can't back you on this one.

Are you sure you're not just worried about poor calibration?

Replies from: wedrifid, Vladimir_M
comment by wedrifid · 2010-10-03T15:02:16.450Z · LW(p) · GW(p)

Another upvote. That's crazy talk.

comment by Vladimir_M · 2010-10-03T19:45:28.647Z · LW(p) · GW(p)

komponisto:

Are you sure you're not just worried about poor calibration?

No, my objection is fundamental. I gave a brief explanation in the comment I linked to, but I'll restate it here.

The problem is that the algorithms that your brain uses to perform common-sense reasoning are not transparent to your conscious mind, which has access only to their final output. This output does not provide a numerical probability estimate, but only a rough and vague feeling of certainty. Yet in most situations, the output of your common sense is all you have. There are very few interesting things you can reason about by performing mathematically rigorous probability calculations (and even when you can, you still have to use common sense to establish the correspondence between the mathematical model and reality).

Therefore, there are only two ways in which you can arrive at a numerical probability estimate for a common-sense belief:

  1. Translate your vague feeling of certainty into a number in some arbitrary manner. This however makes the number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.

  2. Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Honestly, all this seems entirely obvious to me. I would be curious to see which points in the above reasoning are supposed to be even controversial, let alone outright false.

Replies from: komponisto, mattnewport
comment by komponisto · 2010-10-03T22:33:09.194Z · LW(p) · GW(p)

Translate your vague feeling of certainty into a number in some arbitrary manner. This however makes this number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.

Disagree here. Numbers get people to convey more information about their beliefs. It doesn't matter whether you actually use numbers, or do something similar (and equivalent) like systematize the use of vague expressions. I'd be just as happy if people used a "five-star" system, or even in many cases if they just compared the belief in question to other beliefs used as reference-points.

Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Disagree here also. The probability calculation you present should represent your brain's reasoning, as revealed by introspection. This is not a perfect process, and may be subject to later refinement. But it is definitely meaningful.

For example, consider my current probability estimate of 10^(-3) that Amanda Knox killed her roommate. On my current analysis, this is obtained as follows: I start with a prior of 10^(-4) (from a general homicide rate of about 10^(-3), plus reasoning that Knox is demographically an order of magnitude less likely to kill than the typical person; the figure happens to match my intuitive sense that I'd have to meet about 10,000 similar people before I'd have any fear for my life). Then all the evidence in the case raises the probability by about an order of magnitude at most, yielding 10^(-3).

Now, this is just a rough order-of-magnitude argument. But it's already much more meaningful and useful than my just saying "I don't think she did it". It provides a way of breaking down the reasoning, so that points of disagreement can be precisely identified in an efficient manner. (If you happened to disagree, the next step would be to say something like "but surely evidence X alone raises the odds by more than a factor of ten", and then we'd iterate the process specifically on X rather than the original proposition.)

It's a very useful technique for keeping debates informative, and preventing them from turning into (pure) status signaling contests.
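In odds form, the back-of-the-envelope arithmetic above is just a prior times a Bayes factor. A minimal sketch, using only the illustrative figures already given and claiming nothing beyond the order of magnitude:

```python
prior = 1e-4        # demographic prior that someone like this would kill
bayes_factor = 10   # claimed total strength of the case evidence (~1 order of magnitude at most)

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)    # ~1e-3
```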

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T00:11:00.611Z · LW(p) · GW(p)

komponisto:

Numbers get people to convey more information about their beliefs. It doesn't matter whether you actually use numbers, or do something similar (and equivalent) like systematize the use of vague expressions. I'd be just as happy if people used a "five-star" system, or even in many cases if they just compared the belief in question to other beliefs used as reference-points.

If I understand correctly, you're saying that talking about numbers rather than the usual verbal expressions of certainty prompts people to be more careful and re-examine their reasoning more strictly. This may be true sometimes, but on the other hand, numbers also tend to give a false feeling of accuracy and rigor where there is none. One of the usual symptoms (and, in turn, catalysts) of pseudoscience is the use of numbers with spurious precision and without rigorous justification.

In any case, you seem to concede that these numbers ultimately don't convey any more information than various vague verbal expressions of confidence. If you want to make the latter more systematic and clear, I have no problem with that, but I see no way to turn them into actual numbers without introducing spurious precision.

The probability calculation you present should represent your brain's reasoning, as revealed by introspection. This is not a perfect process, and may be subject to later refinement. But it is definitely meaningful.

Trouble is, this is often not possible. Most of what happens in your brain is not amenable to introspection, and you cannot devise a probability calculation that will capture all the important things that happen there. Take your own example:

For example, consider my current probability estimate of 10^(-3) that Amanda Knox killed her roommate. On my current analysis, this is obtained as follows: I start with a prior of 10^(-4) (from a general homicide rate of about 10^(-3), plus reasoning that Knox is demographically an order of magnitude less likely to kill than the typical person; the figure happens to match my intuitive sense that I'd have to meet about 10,000 similar people before I'd have any fear for my life). Then all the evidence in the case raises the probability by about an order of magnitude at most, yielding 10^(-3).

See, this is where, in my opinion, you're introducing spurious numerical claims that are at best unnecessary and at worst outright misleading.

First you note that murderers are extremely rare, and that AK is a sort of person especially unlikely to be one. OK, say you can justify these numbers by looking at crime statistics. Then you perform a complex common-sense evaluation of the evidence, and your brain tells you that on the whole it's weak, so it's highly unlikely that AK killed the victim. So far, so good. But then you insist on turning this feeling of near-certainty about AK's innocence into a number, and you end up making a quantitative claim that has no justification at all. You say:

Now, this is just a rough order-of-magnitude argument. But it's already much more meaningful and useful than my just saying "I don't think she did it".

I strongly disagree. Neither is this number you came up with any more meaningful than the simple plain statement "I think it's highly unlikely she did it," nor does it offer any additional practical benefit. On the contrary, it suggests that you can actually make a mathematically rigorous case that the number is within some well-defined limits. (Which you do disclaim, but which is easy to forget.)

Even worse, your claims suggest that while your numerical estimates may be off by an order of magnitude or so, the model you're applying to the problem captures reality well enough that it's only necessary to plug in accurate probability estimates. But how do you know that the model is correct in the first place? Your numbers are ultimately based on an entirely non-mathematical application of common sense in constructing this model -- and the uncertainty introduced there is altogether impossible for you to quantify meaningfully.

Replies from: komponisto
comment by komponisto · 2010-10-04T18:09:18.601Z · LW(p) · GW(p)

Let's see if we can try to hug the query here. What exactly is the mistake I'm making when I say that I believe such-and-such is true with probability 0.001?

Is it that I'm not likely to actually be right 999 times out of 1000 occasions when I say this? If so, then you're (merely) worried about my calibration, not about the fundamental correspondence between beliefs and probabilities.

Or is it, as you seem now to be suggesting, a question of attire: no one has any business speaking "numerically" unless they're (metaphorically speaking) "wearing a lab coat"? That is, using numbers is a privilege reserved for scientists who've done specific kinds of calculations?

It seems to me that the contrast you are positing between "numerical" statements and other indications of degree is illusory. The only difference is that numbers permit an arbitrarily high level of precision; their use doesn't automatically imply a particular level. Even in the context of scientific calculations, the numbers involved are subject to some particular level of uncertainty. When a scientist makes a calculation to 15 decimal places, they shouldn't be interpreted as distinguishing between different 20-decimal-digit numbers.

Likewise, when I make the claim that the probability of Amanda Knox's guilt is 10^(-3), that should not be interpreted as distinguishing (say) between 0.001 and 0.002. It's meant to be distinguished from 10^(-2) and (perhaps) 10^(-4). I was explicit about this when I said it was an order-of-magnitude estimate. You may worry that such disclaimers are easily forgotten -- but this is to disregard the fact that similar disclaimers always apply whenever numbers are used in any context!

In any case, you seem to concede that these numbers ultimately don't convey any more information than various vague verbal expressions of confidence. If you want to make the latter more systematic and clear, I have no problem with that, but I see no way to turn them into actual numbers without introducing spurious precision.

Here's the way I do it: I think approximately in terms of the following "scale" of improbabilities:

(1) 10% to 50% (mundane surprise)

(2) 1% to 10% (rare)

(3) 0.1% (=10^(-3)) to 1% (once-in-a-lifetime level surprise on an important question)

(4) 10^(-6) to 10^(-3) (dying in a plane crash or similar)

(5) 10^(-10) to 10^(-6) (winning the lottery; having an experience unique among humankind)

(6) 10^(-100) to 10^(-10) (religions are true)

(7) below 10^(-100) (theoretical level of improbability reached in thought experiments).
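To be clear about how little machinery this involves, the scale amounts to a coarse lookup table. A sketch, with the bands and labels simply transcribed from the list above:

```python
BANDS = [  # (lower bound of band, qualitative label); intended for improbabilities up to ~50%
    (1e-1,   "mundane surprise"),
    (1e-2,   "rare"),
    (1e-3,   "once-in-a-lifetime surprise on an important question"),
    (1e-6,   "dying in a plane crash or similar"),
    (1e-10,  "winning the lottery; unique-among-humankind experience"),
    (1e-100, "religions are true"),
    (0.0,    "thought-experiment territory"),
]

def band(p):
    """Return the qualitative label for the band containing improbability p."""
    for lower_bound, label in BANDS:
        if p >= lower_bound:
            return label

print(band(0.03))   # rare
print(band(1e-5))   # dying in a plane crash or similar
```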

Replies from: Mass_Driver, soreff, Vladimir_M
comment by Mass_Driver · 2010-10-05T01:53:44.133Z · LW(p) · GW(p)

Love the logic and the scale, although I think Vladimir_M pokes some important holes specifically at the 10^(-2) to 10^(-3) level.

May I suggest "unplanned-for errors"? In my experience, it is not useful to plan for contingencies with about a 1/300 chance of happening per trial. For example, on any given day of the year, my favorite cafe might be closed due to the owner's illness, but I do not call the cafe first to confirm that it is open each time I go there. At any given time, one of my 300-ish acquaintances is probably nursing a grudge against me, but I do not bother to open each conversation with "Hi, do you still like me today?"

When, as inevitably happens, I run into a closed cafe or a hostile friend, I usually stop short for a bit; my planning mechanism reports a bug; there is no 'action string' cached for that situation, for the simple reason that I was not expecting the situation, because I did not plan for the situation, because that is how rare it is. Nevertheless, I am not 'surprised' -- I know at some level that things that happen about 1/300 times are prone to happening once in a while. On the other hand, I would be 'surprised' if my favorite cafe had been burned to the ground or if my erstwhile buddy had taken a permanent vow of silence. I expect that these things will never happen to me, and so if they do happen I go and double-check my calculations and assumptions, because it seems equally likely that I am wrong about my assumptions and that the 1/30,000 event would actually occur. Anyway, the point is that a category 3 event is an event that makes you shut up for a moment but doesn't make you reexamine any core beliefs.

If you hold most of your core beliefs with probability > .993, then you are almost certainly overconfident in your core beliefs. I'm not talking about stuff like "my senses offer moderately reliable evidence" or "F(g) = GMm/(r^2)"; I'm talking about stuff like "Solomonoff induction predicts that hyperintelligent AIs will employ a timeless decision theory."

comment by soreff · 2010-10-04T19:10:19.864Z · LW(p) · GW(p)

(3) 0.1% (=10^(-3)) to 1% (once-in-a-lifetime level surprise on an important question)

10^-3 is roughly the probability that I try to start my car and it won't start because the battery has gone bad. Is the scale intended only for questions one asks once per lifetime? There are lots of questions that one asks once a day, hence my car example.

Replies from: komponisto
comment by komponisto · 2010-10-04T19:40:49.466Z · LW(p) · GW(p)

That is precisely why I added the phrase "on an important question". It was intended to rule out exactly those sorts of things.

The intended reference class (for me) consists of matters like the Amanda Knox case. But if I got into the habit of judging similar cases every day, that wouldn't work either.

Think "questions I might write a LW post about".

comment by Vladimir_M · 2010-10-04T21:59:04.047Z · LW(p) · GW(p)

komponisto:

What exactly is the mistake I'm making when I say that I believe such-and-such is true with probability 0.001? Is it that I'm not likely to actually be right 999 times out of 1000 occasions when I say this? If so, then you're (merely) worried about my calibration, not about the fundamental correspondence between beliefs and probabilities.

It's not that I'm worried about your poor calibration in some particular instance, but that I believe that accurate calibration in this sense is impossible in practice, except in some very special cases.

(To give some sense of the problem, if such calibration were possible, then why not calibrate yourself to generate accurate probabilities about the stock market movements and bet on them? It would be an easy and foolproof way to get rich. But of course that there is no way you can make your numbers match reality, not in this problem, nor in most other ones.)

Or is it, as you seem now to be suggesting, a question of attire: no one has any business speaking "numerically" unless they're (metaphorically speaking) "wearing a lab coat"? That is, using numbers is a privilege reserved for scientists who've done specific kinds of calculations?

The way you put it, "scientists" sounds too exclusive. Carpenters, accountants, cashiers, etc. also use numbers and numerical calculations in valid ways. However, their use of numbers can ultimately be scrutinized and justified in similar ways as the scientific use of numbers (even if they themselves wouldn't be up to that task), so with that qualification, my answer would be yes.

(And unfortunately, in practice it's not at all rare to see people using numbers in ways that are fundamentally unsound, which sometimes gives rise to whole edifices of pseudoscience. I discussed one such example from economics in this thread.)

Now, you say:

It seems to me that the contrast you are positing between "numerical" statements and other indications of degree is illusory. The only difference is that numbers permit an arbitrarily high level of precision; their use doesn't automatically imply a particular level. Even in the context of scientific calculations, the numbers involved are subject to some particular level of uncertainty. When a scientist makes a calculation to 15 decimal places, they shouldn't be interpreted as distinguishing between different 20-decimal-digit numbers.

However, when a scientist makes a calculation with 15 digits of precision, or even just one, he must be able to rigorously justify this degree of precision by pointing to observations that are incompatible with the hypothesis that any of these digits, except the last one, is different. (Or in the case of mathematical constants such as pi and e, to proofs of the formulas used to calculate them.) This disclaimer is implicit in any scientific use of numbers. (Assuming valid science is being done, of course.)

And this is where, in my opinion, you construct an invalid analogy:

Likewise, when I make the claim that the probability of Amanda Knox's guilt is 10^(-3), that should not be interpreted as distinguishing (say) between 0.001 and 0.002. It's meant to be distinguished from 10^(-2) and (perhaps) 10^(-4). I was explicit about this when I said it was an order-of-magnitude estimate. You may worry that such disclaimers are easily forgotten -- but this is to disregard the fact that similar disclaimers always apply whenever numbers are used in any context!

But these disclaimers are not at all the same! The scientist's -- or the carpenter's, for that matter -- implicit disclaimer is: "This number is subject to this uncertainty interval, but there is a rigorous argument why it cannot be outside that range." On the other hand, your disclaimer is: "This number was devised using an intuitive and arbitrary procedure that doesn't provide any rigorous argument about the range it must be in."

And if I may be permitted such a comment, I do see lots of instances here where people seem to forget about this disclaimer, and write as if they believed that they could actually become Bayesian inferers, rather than creatures who depend on capricious black-box circuits inside their heads to make any interesting judgment about anything, and who are (with the present level of technology) largely unable to examine the internal functioning of these boxes and improve them.

Here's the way I do it: I think approximately in terms of the following "scale" of improbabilities:

I don't think such usage is unreasonable, but I think it falls under what I call using numbers as vague figures of speech.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-10-05T02:06:20.868Z · LW(p) · GW(p)

To give some sense of the problem, if such calibration were possible, then why not calibrate yourself to generate accurate probabilities about the stock market movements and bet on them? It would be an easy and foolproof way to get rich.

I think this statement reflects either an ignorance of finance or the Dark Arts.

First, the stock market is the single worst place to try to test out ideas about probabilities, because so many other people are already trying to predict the market, and so much wealth is at stake. Other people's predictions will remove most of the potential for arbitrage (reducing 'signal'), and the insider trading and other forms of cheating generated by the potential for quick wealth will further distort any scientifically detectable trends in the market (increasing 'noise'). Because investments in the stock market must be made in relatively large quantities to avoid losing your money through trading commissions, a causal theory tester is likely to run out of money long before hitting a good payoff even if he or she is already well-calibrated.

Of course, in real life, people might be moderately-calibrated. The fact that one is capable of making some predictions with some accuracy and precision is not a guarantee that one will be able to reliably and detectably beat even a thin market like a political prediction clearinghouse. Nevertheless, some information is often better than none: I am (rationally) much more concerned about automobile accidents than fires, despite the fact that I know two people who have died in fires and none who have died in automobile accidents. I know this based on my inferences from published statistics, the reliability of which I make further inferences about. I am quite confident (p ~ .95) that it is sensible to drive defensively (at great cost in effort and time) while essentially ignoring fire safety (even though checking a fire extinguisher or smoke detector might take minimal effort.)

I don't play the stock market, though. I'm not that well calibrated, and probably nobody is without access to inside info of one kind or another.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-05T05:33:28.409Z · LW(p) · GW(p)

Mass_Driver:

I think this statement reflects either an ignorance of finance or the Dark Arts.

I'm not an expert on finance, but I am aware of everything you wrote about it in your comment. So I guess this leaves us with the second option. The Dark Arts hypothesis is probably that I'm using the extreme example of the stock market to suggest a general sweeping conclusion that in fact doesn't hold in less extreme cases.

To which I reply: yes, the stock market is an extreme example, but I honestly can't think of any other examples that would show otherwise. There are many examples of scientific models that provide more or less accurate probability estimates for all kinds of things, to be sure, but I have yet to hear about people achieving practical success in anything relevant by translating their common-sense feelings of confidence in various beliefs into numerical probabilities.

In my view, calibration of probability estimates can succeed only if (1) you come up with a valid scientific model which you can then use in a shut-up-and-calculate way instead of applying common sense (though you still need it to determine whether the model is applicable in the first place), or (2) you make an essentially identical judgment many times, and from your past performance you extrapolate how frequently the black box inside your head tends to be right.

Now, you try to provide some counterexamples:

I am (rationally) much more concerned about automobile accidents than fires, despite the fact that I know two people who have died in fires and none who have died in automobile accidents. I know this based on my inferences from published statistics, the reliability of which I make further inferences about. I am quite confident (p ~ .95) that it is sensible to drive defensively (at great cost in effort and time) while essentially ignoring fire safety (even though checking a fire extinguisher or smoke detector might take minimal effort.)

Frankly, the only subjective probability estimate I see here is the p~0.95 for your belief about driving. In this case, I'm not getting any more information from this number than if you just described your level of certainty in words, nor do I see any practical application to which you can put this number. I have no objection to your other conclusions, but I see nothing among them that would be controversial to even the most extreme frequentist.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-10-06T03:04:23.723Z · LW(p) · GW(p)

Not sure who voted down your reply; it looks polite and well-reasoned to me.

I believe you when you say that the stock market was honestly intended as representative, although, of course, I continue to disagree about whether it actually is representative.

Here are some more counterexamples:

*When deciding whether to invest in an online bank that pays 1% interest or a local community bank that pays 0.1% interest, I must calculate the odds that each bank will fail before I take my money out; I cannot possibly have a scientific model that generates replicable results for these two banks while also holding down a day job, but numbers will nevertheless help me make a decision that is not driven by an emotional urge to stay with (or leave) an old bank based on customer service considerations that I rationally value as far less than the value of my principal.

*When deciding whether to donate time, money, or neither to a local election campaign, it will help to know which of my donations will have a 10^-6 chance, a 10^-4 chance, and a 10^-2 chance of swinging the election (a rough sketch of this kind of expected-value arithmetic follows below). Numbers are important here because irrational friends and colleagues will urge me to do what 'feels right' or to 'do my part' without pausing to consider whether this serves any of our goals. If I can generate a replicable scientific model that says whether an extra $500 will win an election, I should stop electioneering and sign up for a job as a tenured political science faculty member, but I nevertheless want to know what the odds are, approximately, in each case, if only so that I can pick which campaign to work on.
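Here is the kind of arithmetic I have in mind for both examples. The failure probabilities, values, and costs below are placeholders rather than estimates I'm defending, and the bank case ignores deposit insurance and other real-world details:

```python
# Bank choice: expected balance after one year, given a small chance of
# the bank failing and the (uninsured) principal being lost.
principal = 10_000.0

def expected_balance(interest_rate, p_failure):
    return (1 - p_failure) * principal * (1 + interest_rate)

print(expected_balance(0.010, p_failure=0.002))    # online bank, placeholder p_failure
print(expected_balance(0.001, p_failure=0.0005))   # local bank, placeholder p_failure

# Election choice: expected value of a donation is the chance it swings
# the election times how much I value the win, minus what it costs me.
value_of_winning = 50_000.0   # placeholder, in the same (arbitrary) units as the cost

def donation_value(p_swing, cost):
    return p_swing * value_of_winning - cost

for p_swing in (1e-6, 1e-4, 1e-2):
    print(p_swing, donation_value(p_swing, cost=500.0))
```

The decisions turn entirely on which order of magnitude the probabilities sit in, which is why I want the numbers even though I can't defend them rigorously.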

As for your objection that:

the only subjective probability estimate I see here is the p~0.95 for your belief about driving. In this case, I'm not getting any more information from this number than if you just described your level of certainty in words,

I suppose I have left a few steps out of my analysis, which I am spelling out in full now:

*Published statistics say that the risk of dying in a fire is 10^-7/person-year and the risk of dying in a car crash is 10^-4/person-year (a report of what is no doubt someone else's subjective but relatively evidence-based estimate).

*The odds that these statistics are off by more than a factor of 10 relative to each other are less than 10^-1 (a subjective estimate).

*My cost in effort to protect against car crashes is more than 10 times higher than my cost in effort to protect against fires.

*I value the disutility of death-by-fire and death-by-car-crash roughly equally.

*Therefore, there exists a coherent utility function with respect to the relevant states of the world and my relevant strategies such that it is rational for me to protect against car crashes but not fires.

*Therefore, one technique that could be used to show that my behavior is internally incoherent has failed to reject the null hypothesis.

*Therefore, I have some Bayesian evidence that my behavior is rational.
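The same comparison in code, using the rates above, stand-in effort costs, and the crude simplifying assumption that a precaution eliminates the corresponding risk entirely:

```python
p_car_death  = 1e-4     # per person-year, from published statistics
p_fire_death = 1e-7     # per person-year, from published statistics
death_disutility = 1.0  # valued roughly equally for both causes

cost_defensive_driving = 1e-5   # stand-in effort cost; >10x the fire-safety cost
cost_fire_safety       = 1e-6   # stand-in effort cost

# A precaution is worth taking when the risk it removes outweighs its cost.
print(p_car_death * death_disutility > cost_defensive_driving)   # True
print(p_fire_death * death_disutility > cost_fire_safety)        # False

# With these stand-ins there exists a coherent assignment under which guarding
# against car crashes is worth the effort while guarding against fires is not,
# which is all the coherence check above requires.
```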

Please let me know if you still think I'm just putting fancy arithmetic labels on what is essentially 'frequentist' reasoning, and, if so, exactly what you mean by 'frequentist.' I can Wikipedia the standard definition, but it doesn't quite seem to fit here, imho.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-07T00:08:34.361Z · LW(p) · GW(p)

Regarding your examples with banks and donations, when I imagine myself in such situations, I still don't see how numbers derived from my own common-sense reasoning can be useful. I can see myself making a decision based on a simple common-sense impression that one bank looks less shady, or that it's bigger and thus more likely to be bailed out, etc. Similarly, I could act on a vague impression that one political candidacy I'd favor is far more hopeless than another, and so on. On the other hand, I could also judge from the results of calculations based on numbers from real expert input, like actuarial tables for failures of banks of various types, or the poll numbers for elections, etc.

What I cannot imagine, however, is doing anything sensible and useful with probabilities dreamed up from vague common-sense impressions. For example, looking at a bank, getting the impression that it's reputable and solid, and then saying, "What's the probability it will fail before time T? Um.. seems really unlikely... let's say 0.1%.", and then using this number to calculate my expected returns.

Now, regarding your example with driving vs. fires, suppose I simply say: "Looking at the statistical tables, it is far more likely to be killed by a car accident than a fire. I don't see any way in which I'm exceptional in my exposure to either, so if I want to make myself safer, it would be stupid to invest more effort in reducing the chance of fire than in more careful driving." What precisely have you gained with your calculation relative to this plain and clear English statement?

In particular, what is the significance of these subjectively estimated probabilities like p=10^-1 in step 2? What more does this number tell us than a simple statement like "I don't think it's likely"? Also, notice that my earlier comment specifically questioned the meaningfulness and practical usefulness of the numerical claim that p~0.95 for this conclusion, and I don't see how it comes out of your calculation. These seem to be exactly the sorts of dreamed-up probability numbers whose meaningfulness I'm denying.

comment by mattnewport · 2010-10-03T20:00:14.892Z · LW(p) · GW(p)

It seems plausible to me that routinely assigning numerical probabilities to predictions/beliefs that can be tested and tracking these over time to see how accurate your probabilities are (calibration) can lead to a better ability to reliably translate vague feelings of certainty into numerical probabilities.

There are practical benefits to developing this ability. I would speculate that successful bookies and professional sports bettors are better at this than average for example and that this is an ability they have developed through practice and experience. Anyone who has to make decisions under uncertainty seems like they could benefit from a well developed ability to assign well calibrated numerical probability estimates to vague feelings of certainty. Investors, managers, engineers and others who must deal with uncertainty on a regular basis would surely find this ability useful.

I think a certain degree of skepticism is justified regarding the utility of various specific methods for developing this ability (things like predictionbook.com don't yet have hard evidence for their effectiveness) but it certainly seems like it is a useful ability to have and so there are good reasons to experiment with various methods that promise to improve calibration.
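One minimal way to do that kind of tracking, sketched in a few lines; this is just an illustration of the bookkeeping, not how predictionbook.com or any other tool actually works:

```python
from collections import defaultdict

# Each record: (stated probability, whether the prediction came true).
predictions = [(0.9, True), (0.9, True), (0.9, False), (0.7, True),
               (0.7, False), (0.3, False), (0.3, True), (0.1, False)]

# Calibration check: within each bucket of stated probability, the observed
# frequency of "came true" should be close to the stated probability.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)
for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"said {p:.0%}: came true {sum(outcomes) / len(outcomes):.0%} of {len(outcomes)}")

# A single summary number: the Brier score (lower is better; always
# answering 50% scores 0.25).
brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print("Brier score:", round(brier, 3))
```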

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-03T20:28:13.892Z · LW(p) · GW(p)

I addressed this point in another comment in this thread:

http://lesswrong.com/lw/2sl/the_irrationality_game/2qgm

Replies from: mattnewport
comment by mattnewport · 2010-10-03T20:44:50.107Z · LW(p) · GW(p)

I agree with most of what you're saying (in that comment and this one) but I still think that the ability to give well calibrated probability estimates for a particular prediction is instrumentally useful and that it is fairly likely that this is an ability that can be improved with practice. I don't take this to imply anything about humans performing actual Bayesian calculations either implicitly or explicitly.

comment by prase · 2010-10-05T14:31:33.453Z · LW(p) · GW(p)

I have read most of the responses and still am not sure whether to upvote or not. I am torn among several (possibly overlapping) interpretations of your statement. Could you say to what extent the following interpretations really reflect what you think?

  1. Confession of frequentism. The only sensible numerical probabilities are those related to frequencies, i.e. either frequencies of outcomes of repeated experiments, or probabilities derived from those. (Creative drawing of reference-class boundaries may be permitted.) In particular, prior probabilities are meaningless.
  2. Any sensible numbers must be produced using procedures that ultimately don't include any numerical parameters (except maybe small integers like 2, 3, 4). Any number which isn't a result of such a procedure is labeled arbitrary, and therefore meaningless. (Observation and measurement, of course, do count as permitted procedures. Admittedly arbitrary steps, like choosing units of measurement, are also permitted.)
  3. Degrees of confidence shall be expressed without reflexive thinking about them. Trying to establish a fixed scale of confidence levels (like impossible - very unlikely - unlikely - possible - likely - very likely - almost certain - certain), or actively trying to compare degrees of confidence in different beliefs, is cheating, since such scales can then be converted into numbers using a non-numerical procedure.
  4. The question of whether somebody is well calibrated is confused for some reason. Calibrating people makes no sense. Although we may take the "almost certain" statements of a person and look at how often they are true, the resulting frequency is meaningless for some reason.
  5. Unlike #3, beliefs can be ordered or classified on some scale (possibly imprecisely), but assigning numerical values brings confusing connotations and should be avoided. Put differently, the meaning of subjective probabilities is preserved under monotonic rescaling.
  6. Although, strictly speaking, human reasoning can be modelled as a Bayesian network where beliefs have numerical strengths, human introspection is poor at assessing their values. Declared values more likely depend on anchoring than on the real strength of the belief. Speaking about numbers actually introduces noise into reasoning.
  7. Human reasoning cannot be modelled by Bayesian inference, not even in approximation.
Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-05T22:42:20.259Z · LW(p) · GW(p)

That’s an excellent list of questions! It will help me greatly to systematize my thinking on the topic.

Before replying to the specific items you list, perhaps I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudoscience is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight. Therefore, I believe that whenever one encounters people talking about numbers of any sort that look even slightly suspicious, they should be considered guilty until proven otherwise -- and this entire business with subjective probability estimates for common-sense beliefs doesn’t even come close to clearing that bar for me.

Now to reply to your list.


(1) Confession of frequentism. The only sensible numerical probabilities are those related to frequencies, i.e. either frequencies of outcomes of repeated experiments, or probabilities derived from those. (Creative drawing of reference-class boundaries may be permitted.) In particular, prior probabilities are meaningless.

(2) Any sensible numbers must be produced using procedures that ultimately don't include any numerical parameters (except maybe small integers like 2, 3, 4). Any number which isn't a result of such a procedure is labeled arbitrary, and therefore meaningless. (Observation and measurement, of course, do count as permitted procedures. Admittedly arbitrary steps, like choosing units of measurement, are also permitted.)

My answer to (1) follows from my opinion about (2).

In my view, a number that gives any information about the real world must ultimately refer, either directly or via some calculation, to something that can be measured or counted (at least in principle, perhaps using a thought-experiment). This doesn’t mean that all sensible numbers have to be derived from concrete empirical measurements; they can also follow from common-sense insight and generalization. For example, reading about Newton’s theory leads to the common-sense insight that it’s a very close approximation of reality under certain assumptions. Now, if we look at the gravity formula F=m1*m2/r^2 (in units set so that G=1), the number 2 in the denominator is not a product of any concrete measurement, but a generalization from common sense. Yet what makes it sensible is that it ultimately refers to measurable reality via a well-defined formula: measure the force between two bodies of known masses at distance r, and you’ll get log(m1*m2/F)/log(r) = 2.
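
As a small check of that last claim, a sketch with made-up masses and distances (units chosen so that G=1):

    import math

    # With units chosen so that G = 1, Newton's law says F = m1 * m2 / r**2.
    # Recovering the exponent from the "measured" force should give 2.
    for m1, m2, r in [(3.0, 5.0, 2.0), (10.0, 7.0, 4.0), (1.5, 8.0, 10.0)]:
        F = m1 * m2 / r**2                      # simulated measurement
        exponent = math.log(m1 * m2 / F) / math.log(r)
        print(exponent)                         # 2.0, up to floating-point noise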

Now, what can we make out of probabilities from this viewpoint? I honestly can’t think of any sensible non-frequentist answer to this question. Subjectivist Bayesian phrases such as “the degree of belief” sound to me entirely ghostlike unless this “degree” is verifiable via some frequentist practical test, at least in principle. In this sense, I do confess frequentism. (Though I don’t wish to subscribe to all the related baggage from various controversies in statistics, much of which is frankly over my head.)

(3) Degrees of confidence shall be expressed without reflexive thinking about them. Trying to establish a fixed scale of confidence levels (like impossible - very unlikely - unlikely - possible - likely - very likely - almost certain - certain), or actively trying to compare degrees of confidence in different beliefs, is cheating, since such scales can then be converted into numbers using a non-numerical procedure.

That depends on the concrete problem under consideration, and on the thinker who is considering it. The thinker’s brain produces an answer alongside a more or less fuzzy feeling of confidence, and the human language has the capacity to express these feelings with about the same level of fuzziness as that signal. It can be sensible to compare intuitive confidence levels, if such comparison can be put to a practical (i.e. frequentist) test. Eight ordered intuitive levels of certainty might perhaps be too much, but with, say, four levels, I could produce four lists of predictions labeled “almost impossible,” “unlikely,” “likely,” and “almost certain,” such that common sense would tell us that, with near-certainty, those in each subsequent list would turn out to be true in ever greater proportion.

If I wish to express these probabilities as numbers, however, this is not a legitimate step unless the resulting numbers can be justified in the sense discussed above under (1) and (2). This requires justification both in the sense of defining what aspect of reality they refer to (where frequentism seems like the only answer), and guaranteeing that they will be accurate under empirical tests. If they can be so justified, then we say that the intuitive estimate is “well-calibrated.” However, calibration is usually not possible in practice, and there are only two major exceptions.

The first possible path towards accurate calibration is when the same person performs essentially the same judgment many times, and from the past performance we extract the frequency with which their brain tends to produce the right answer. If this level of accuracy remains roughly constant in time, then it makes sense to attach it as the probability to that person’s future judgments on the topic. This approach treats the relevant operations in the brain as a black box whose behavior, being roughly constant, can be subjected to such extrapolation.

The second possible path is reached when someone has a sufficient level of insight about some problem to cross the fuzzy limit between common-sense thinking and an actual scientific model. Increasingly subtle and accurate thinking about a problem can result in the construction of a mathematical model that approximates reality well enough that when applied in a shut-up-and-calculate way, it yields probability estimates that will be subsequently vindicated empirically.

(Still, deciding whether the model is applicable in some particular situation remains a common-sense problem, and the probabilities yielded by the model do not capture this uncertainty. If a well-established physical theory, applied by competent people, says that p=0.9999 for some event, common sense tells me that I should treat this event as near-certain -- and, if repeated many times, that it will come out the unlikely way very close to one in 10,000 times. On the other hand, if p=0.9999 is produced by some suspicious model that looks like it might be a product of data-dredging rather than real insight about reality, common sense tells me that the event is not at all certain. But there is no way to capture this intuitive uncertainty with a sensible number. The probabilities coming from calibration of repeated judgment are subject to analogous unquantifiable uncertainty.)

There is also a third logical possibility, namely that some people in some situations have precise enough intuitions of certainty that they can quantify them in an accurate way, just like some people can guess what time it is with remarkable precision without looking at the clock. But I see little evidence of this occurring in reality, and even if it does, these are very rare special cases.

(4) The question of whether somebody is well calibrated is confused for some reason. Calibrating people makes no sense. Although we may take the "almost certain" statements of a person and look at how often they are true, the resulting frequency is meaningless for some reason.

I disagree with this, as explained above. Calibration can be done successfully in the special cases I mentioned. However, in cases where it cannot be done, which includes the great majority of the actual beliefs and conclusions made by human brains, devising numerical probabilities makes no sense.

(5) Unlike #3, beliefs can be ordered or classified on some scale (possibly imprecisely), but assigning numerical values brings confusing connotations and should be avoided. Put differently, the meaning of subjective probabilities is preserved under monotonic rescaling.

This should be clear from the answer to (3).


[Continued in a separate comment below due to excessive length.]

Replies from: komponisto, Vladimir_M, prase
comment by komponisto · 2010-10-06T06:45:20.487Z · LW(p) · GW(p)

I should first state the general position I’m coming from, which motivates me to get into discussions of this sort. Namely, it is my firm belief that when we look at the present state of human knowledge, one of the principal sources of confusion, nonsense, and pseudoscience is physics envy, which leads people in all sorts of fields to construct nonsensical edifices of numerology and then pretend, consciously or not, that they’ve reached some sort of exact scientific insight.

I'll point out here that reversed stupidity is not intelligence, and that for every possible error, there is an opposite possible error.

In my view, if someone's numbers are wrong, that should be dealt with on the object level (e.g. "0.001 is too low", with arguments for why), rather than retreating to the meta level of "using numbers caused you to err". The perspective I come from is wanting to avoid the opposite problem, where being vague about one's beliefs allows one to get away without subjecting them to rigorous scrutiny. (This, too, by the way, is a major hallmark of pseudoscience.)

But I'll note that even as we continue to argue under opposing rhetorical banners, our disagreement on the practical issue seems to have mostly evaporated; see here for instance. You also do admit in the end that fear of poor calibration is what is underlying your discomfort with numerical probabilities:

If I wish to express these probabilities as numbers, however, this is not a legitimate step unless the resulting numbers can be justified... If they can be so justified, then we say that the intuitive estimate is “well-calibrated.” However, calibration is usually not possible in practice...

As a theoretical matter, I disagree completely with the notion that probabilities are not legitimate or meaningful unless they're well-calibrated. There is such a thing as a poorly-calibrated Bayesian; it's a perfectly coherent concept. The Bayesian view of probabilities is that they refer specifically to degrees of belief, and not anything else. We would of course like the beliefs so represented to be as accurate as possible; but they may not be in practice.

If my internal "Bayesian calculator" believes P(X) = 0.001, and X turns out to be true, I'm not made less wrong by having concealed the number, saying "I don't think X is true" instead. Less embarrassed, perhaps, but not less wrong.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-06T07:33:07.155Z · LW(p) · GW(p)

komponisto:

In my view, if someone's numbers are wrong, that should be dealt with on the object level (e.g. "0.001 is too low", with arguments for why), rather than retreating to the meta level of "using numbers caused you to err".

Trouble is, sometimes numbers can be not even wrong, with their very definition lacking logical consistency or any defensible link with reality. It is that category that I am most concerned with, and I believe that it sadly occurs very often in practice, with entire fields of inquiry sometimes degenerating into meaningless games with such numbers. My honest impression is that in our day and age, such numerological fallacies have been responsible for much greater intellectual sins than the opposite fallacy of avoiding scrutiny by excessive vagueness, although the latter phenomenon is not negligible either.

You also do admit in the end that fear of poor calibration is what is underlying your discomfort with numerical probabilities:

Here we seem to be clashing about terminology. I think that "poor calibration" is too much of a euphemism for the situations I have in mind, namely those where sensible calibration is altogether impossible. I would instead use some stronger expression clarifying that the supposed "calibration" is done without any valid basis, not that the result is poor because some unfortunate circumstance occurred in the course of an otherwise sensible procedure.

There is such a thing as a poorly-calibrated Bayesian; it's a perfectly coherent concept. The Bayesian view of probabilities is that they refer specifically to degrees of belief, and not anything else.

As I explained in the above lengthy comment, I simply don't find numbers that "refer specifically to degrees of belief, and not anything else" a coherent concept. We seem to be working with fundamentally different philosophical premises here.

Can these numerical "degrees of belief" somehow be linked to observable reality according to the criteria I defined in my reply to the points (1)-(2) above? If not, I don't see how admitting such concepts can be of any use.

If my internal "Bayesian calculator" believes P(X) = 0.001, and X turns out to be true, I'm not made less wrong by having concealed the number, saying "I don't think X is true" instead. Less embarrassed, perhaps, but not less wrong.

But if you do this 10,000 times, and the number of times X turns out to be true is small but nowhere close to 10, you are much more wrong than if you had just been saying "X is highly unlikely" all along.
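
To make "much more wrong" slightly more concrete, here is a sketch under a logarithmic scoring rule (the scoring rule and the 2% frequency are assumptions for illustration, not something either of us has specified):

    import math

    n = 10_000                  # number of separate statements about X-like events
    true_freq = 0.02            # X actually comes true about 200 times, not ~10
    stated_p = 0.001            # the number the internal "Bayesian calculator" gave

    # Average log score per statement (higher, i.e. closer to zero, is better).
    def avg_log_score(p, freq):
        return freq * math.log(p) + (1 - freq) * math.log(1 - p)

    print(n * avg_log_score(stated_p, true_freq))   # roughly -1391
    print(n * avg_log_score(true_freq, true_freq))  # roughly -980 for a calibrated 2%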

On the other hand, if we're observing X as a single event in isolation, I don't see how this tests your probability estimate in any way. But I suspect we have some additional philosophical differences here.

comment by Vladimir_M · 2010-10-05T22:43:07.849Z · LW(p) · GW(p)

[Continued from the parent comment.]

(6) Although, strictly speaking, human reasoning can be modelled as a Bayesian network where beliefs have numerical strengths, human introspection is poor at assessing their values. Declared values more likely depend on anchoring than on the real strength of the belief. Speaking about numbers actually introduces noise into reasoning.

I have revised my view about this somewhat thanks to a shrewd comment by xv15. The use of unjustified numerical probabilities can sometimes be a useful figure of speech that will convey an intuitive feeling of certainty to other people more faithfully than verbal expressions. But the important thing to note here is that the numbers in such situations are mere figures of speech, i.e. expressions that exploit various idiosyncrasies of human language and thinking to transmit hard-to-convey intuitive points via non-literal meanings. It is not legitimate to use these numbers for any other purpose.

Otherwise, I agree. Except in the above-discussed cases, subjective probabilities extracted from common-sense reasoning are at best an unnecessary addition to arguments that would be just as valid and rigorous without them. At worst, they can lead to muddled and incorrect thinking based on a false impression of accuracy, rigor, and insight where there is none, and ultimately to numerological pseudoscience.

Also, we still don’t know whether and to what extent various parts of our brains involved in common-sense reasoning approximate Bayesian networks. It may well be that some, or even all of them do, but the problem is that we cannot look at them and calculate the exact probabilities involved, and these are not available to introspection. The fallacy of radical Bayesianism that is often seen on LW is in the assumption that one can somehow work around this problem so as to meaningfully attach an explicit Bayesian procedure and a numerical probability to each judgment one makes.

Note also that even if my case turns out to be significantly weaker under scrutiny, it may still be a valid counterargument to the frequently voiced position that one can, and should, attach a numerical probability to every judgment one makes.


So, that would be a statement of my position; I’m looking forward to any comments.

Replies from: jimrandomh
comment by jimrandomh · 2010-10-05T23:58:53.305Z · LW(p) · GW(p)

Suppose you have two studies, each of which measures and gives a probability for the same thing. The first study has a small sample size, and a not terribly rigorous experimental procedure; the second study has a large sample size, and a more thorough procedure. When called on to make a decision, you would use the probability from the larger study. But if the large study hadn't been conducted, you wouldn't give up and act like you didn't have any probability at all; you'd use the one from the small study. You might have to do some extra sanity checks, and your results wouldn't be as reliable, but they'd still be better than if you didn't have a probability at all.

A probability assigned by common-sense reasoning is to a probability that came from a small study, as a probability from a small study is to a probability from a large study. The quality of probabilities varies continuously; you get better probabilities by conducting better studies. By saying that a probability based only on common-sense reasoning is meaningless, I think what you're really trying to do is set a minimum quality level. Since probabilities that are based on studies and calculation are generally better than probabilities that aren't, this is a useful heuristic. However, it is only that, a heuristic; probabilities based on common-sense reasoning can sometimes be quite good, and they are often the only information available anywhere (and they are, therefore, the best information). Not all common-sense-based probabilities are equal; if an expert thinks for an hour and then gives a probability, without doing any calculation, then that probability will be much better than if a layman thinks about it for thirty seconds. The best common-sense probabilities are better than the worst statistical-study probabilities; and besides, there usually aren't any relevant statistical calculations or studies to compare against.

I think what's confusing you is an intuition that if someone gives a probability, you should be able to take it as-is and start calculating with it. But suppose you had collected five large studies, and someone gave you the results of a sixth. You wouldn't take that probability as-is, you'd have to combine it with the other five studies somehow. You would only use the new probability as-is if it was significantly better (larger sample, more trustworthy procedure, etc) than the ones you already had, or you didn't have any before. Now if there are no good studies, and someone gives you a probability that came from their common-sense reasoning, you almost certainly have a comparably good probability already: your own common-sense reasoning. So you have to combine it. So in a sense, those sorts of probabilities are less meaningful - you discard them when they compete with better probabilities, or at least weight them less - but there's still a nonzero amount of meaning there.
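
As one illustration of what "combining" could look like, here is a sketch that pools estimates by weighted averaging in log-odds space (the pooling rule, weights, and numbers are all invented for illustration; nothing above commits to this particular scheme):

    import math

    def logit(p):
        return math.log(p / (1 - p))

    def inv_logit(x):
        return 1 / (1 + math.exp(-x))

    # Hypothetical estimates of the same event, with rough reliability weights.
    estimates = [
        (0.30, 1.0),   # my own common-sense estimate
        (0.45, 1.0),   # someone else's common-sense estimate
        (0.60, 4.0),   # a small study, weighted more heavily
    ]

    total_weight = sum(w for _, w in estimates)
    pooled = inv_logit(sum(w * logit(p) for p, w in estimates) / total_weight)
    print(round(pooled, 3))     # a single combined estimate, here about 0.52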

(Aside: I've been stuck for a while on an article I'm writing called "What Probability Requires", dealing with this same topic, and seeing you argue the other side has been extremely helpful. I think I'm unstuck now; thank you for that.)

Replies from: Vladimir_M, None
comment by Vladimir_M · 2010-10-06T23:24:23.619Z · LW(p) · GW(p)

After thinking about your comment, I think this observation comes close to the core of our disagreement:

By saying that a probability based only on common-sense reasoning is meaningless, I think what you're really trying to do is set a minimum quality level.

Basically, yes. More specifically, the quality level I wish to set is that the numbers must give more useful information than mere verbal expressions of confidence. Otherwise, their use at best simply adds nothing useful, and at worst leads to fallacious reasoning encouraged by a false feeling of accuracy.

Now, there are several possible ways to object to my position:

  • The first is to note that even if not meaningful mathematically, numbers can serve as communication-facilitating figures of speech. I have conceded this point.

  • The second way is to insist on an absolute principle that one should always attach numerical probabilities to one's beliefs. I haven't seen anything in this thread (or elsewhere) yet that would shake my belief in the fallaciousness of this position, or even provide any plausible-seeming argument in favor of it.

  • The third way is to agree that sometimes attaching numerical probabilities to common-sense judgments makes no sense, but on the other hand, in some cases common-sense reasoning can produce numerical probabilities that will give more useful information than just fuzzy words. After the discussion with mattnewport and others, I agree that there are such cases, but I still maintain that these are rare exceptions. (In my original statement, I took an overly restrictive notion of "common sense"; I admit that in some cases, thinking that could be reasonably called like that is indeed precise enough to produce meaningful numerical probabilities.)

So, to clarify, which exact position do you take in this regard? Or would your position require a fourth item to summarize fairly?

I think what's confusing you is an intuition that if someone gives a probability, you should be able to take it as-is and start calculating with it. [...] So in a sense, those sorts of probabilities are less meaningful - you discard them when they compete with better probabilities, or at least weight them less - but there's still a nonzero amount of meaning there.

I agree that there is a non-zero amount of meaning, but the question is whether it exceeds what a simple verbal statement of confidence would convey. If I can't take a number and start calculating with it, what good is it? (Except for the caveat about possible metaphorical meanings of numbers.)

Replies from: jimrandomh
comment by jimrandomh · 2010-10-11T22:52:41.972Z · LW(p) · GW(p)

My response to this ended up being a whole article, which is why it took so long. The short version of my position is, we should attach numbers to beliefs as often as possible, but for instrumental reasons rather than on principle.

comment by [deleted] · 2010-10-06T02:31:37.066Z · LW(p) · GW(p)

As a matter of fact I can think of one reason - a strong reason in my view - that the consciously felt feeling of certainty is liable to be systematically and significantly exaggerated with respect to the true probability assigned by the person's mental black box - the latter being something that we might in principle elicit through experimentation by putting the same subject through variants of a given scenario. (Think revealed probability assignment - similar to revealed preference as understood by the economists.)

The reason is that whole-hearted commitment is usually best whatever one chooses to do. Consider Buridan's ass, but with the following alterations. Instead of hay and water, to make it more symmetrical suppose the ass has two buckets of water, one on either side about equally distant. Suppose furthermore that his mental black box assigns a 51% probability to the proposition that the bucket on the right side is closer to him than the bucket on the left side.

The question, then, is what should the ass consciously feel about the probability that the bucket on the right is closest? I propose that given that his black box assigns a 51% probability to this, he should go to the bucket on the right. But given that he should go to the bucket on the right, he should go there without delay, without a hesitating step, because hesitation is merely a waste of time. But how can the ass go there without delay if he is consciously feeling that the probability is 51% that the bucket on the right is closest? That feeling will cause within him uncertainty and hesitation and will slow him down. Therefore it is best if the ass consciously is absolutely convinced that the bucket on the right is closest. This conscious feeling of certainty will speed his step and get him to the water quickly.

So it is best for Buridan's ass that his consciously felt degrees of certainty are great exaggerations of his mental black box's probability assignments. I think this generalizes. We should consciously feel much more certain of things than we really are, in order to get ourselves moving.

In fact, if Buridan's ass's mental black box assigns exactly 50% probability to the right bucket being the closer one, the mental black box should in effect flip a coin and then delude the conscious self to become entirely convinced that the right (or, depending on the coin flip, the left) bucket is the closest and act accordingly.

This can be applied to the reactions of prey to predators. It is so costly for a prey animal to be eaten, and comparatively not very costly for it merely to waste a bit of its time running, that a prey animal is most likely to survive to reproduce if it is in the habit of completely believing that there is a predator after it far more often than there really is a predator after it. Even if possible-predator-signals in the environment actually signify predators 10% of the time or less, since the prey animal never knows which of those signals is the predator, the prey needs to run for its very life every single time it senses the possible-predator-signal. For it to do this, it must be fully mentally committed to the proposition that there is in fact a predator after it. There is no reason for the prey animal to have any less than full belief that there is a predator after it, each and every time it senses a possible predator.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-10-06T08:55:55.299Z · LW(p) · GW(p)

I don't agree with this conflation of commitment and belief. I've never had to run from a predator, but when I run to catch a train, I am fully committed to catching the train, although I may be uncertain about whether I will succeed. In fact, the less time I have, the faster I must run, but the less likely I am to catch the train. That only affects my decision to run or not. On making the decision, belief and uncertainty are irrelevant, intention and action are everything.

Maybe some people have to make themselves believe in an outcome they know to be uncertain, in order to achieve it, but that is just a psychological exercise, not a necessary part of action.

Replies from: None
comment by [deleted] · 2010-10-06T11:27:26.239Z · LW(p) · GW(p)

The question is not whether there are some examples of commitment which do not involve belief. The question is whether there are (some, many) examples where really, absolutely full commitment does involve belief. I think there are many.

Consider what commitment is. If someone says, "you don't seem fully committed to this", what sort of thing might have prompted him to say this? It's something like, he thinks you aren't doing everything you could possibly do to help this along. He thinks you are holding back.

You might reply to this criticism, "I am not holding anything back. There is literally nothing more that I can do to further the probability of success, so there is no point in doing more - it would be an empty and possibly counterproductive gesture rather than being an action that truly furthers the chance of success."

So the important question is, what can a creature do to further the probability of success? Let's look at you running to catch the train. You claim that believing that you will succeed would not further the success of your effort. Well, of course not! I could have told you that! If you believe that you will succeed, you can become complacent, which runs the risk of slowing you down.

But if you believe that there is something chasing you, that is likely to speed you up.

Your argument is essentially, "my full commitment didn't involve belief X, therefore you're wrong". But belief X is a belief that would have slowed you down. It would have reduced, not furthered, your chance of success. So of course your full commitment didn't involve belief X.

My point is that it is often the case that a certain consciously felt belief would increase a person's chances of success, given their chosen course of action. And in light of what commitment is - it is commitment of one's self and one's resources to furthering the probability of success - then if a belief would further a chance of success, then full, really full commitment will include that belief.

So I am not conflating conscious belief with commitment. I am saying that conscious belief can be, and often is, involved in the furthering of success, and therefore can be and often is a part of really full commitment. That is no more conflating belief with commitment than saying that a strong fabric makes a good coat conflates fabric with coats.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-10-06T12:06:34.548Z · LW(p) · GW(p)

You're right that my analogy was inaccurate: what corresponds in the train-catching scenario to believing there is a predator is my belief that I need to catch this train.

My point is that it is often the case that a certain consciously felt belief would increase a person's chances of success, given their chosen course of action. And in light of what commitment is - it is commitment of one's self and one's resources to furthering the probability of success - then if a belief would further a chance of success, then full, really full commitment will include that belief.

A stronger belief may produce stronger commitment, but strong commitment does not require strong belief. The animal either flees or does not, because a half-hearted sprint will have no effect on the outcome whether a predator is there or not. Similarly, there's no point making a half-hearted jog for a train, regardless of how much or little one values catching it.

Belief and commitment to act on the belief are two different parts of the process.

Of course, a lot of the "success" literature urges people to have faith in themselves, to believe in their mission, to cast all doubt aside, etc., and if a tool works for someone I've no urge to tell them it shouldn't. But, personally, I take Yoda's attitude: "Do, or do not."

Replies from: None
comment by [deleted] · 2010-10-06T13:48:10.418Z · LW(p) · GW(p)

Yoda tutors Luke in Jedi philosophy and a practice, which it will take Luke a while to learn. In the meantime, however, Luke is merely an unpolished human. And I am not here recommending a particular philosophy and practice of thought and behavior, but making a prediction about how unpolished humans (and animals) are likely to act. My point is not to recommend that Buridan's ass should have an exaggerated confidence that the right bucket is closer, but to observe that we can expect him to have an exaggerated confidence, because, for reasons I described, exaggerated confidence is likely to have been selected for because it is likely to have improved the chances of survival of asses who did not have the benefit of Yoda's instruction.

So I don't recommend, rather I expect that humans will commonly have conscious feelings of confidence which are exaggerated, and which do not truly reflect the output of the human's mental black box, his mental machinery to which he does not have access.

Let me explain by the way what I mean here, because I'm saying that the black box can output a 51% probability for Proposition P while at the same time causing the person to be consciously absolutely convinced of the truth of P. This may be confusing, because I seem to be saying that the black box outputs two probabilities, a 51% probability for purposes of decisionmaking and a 100% probability for conscious consumption. So let me explain with an example what I mean.

Suppose you want to test Buridan's ass to see what probability he assigns to the proposition that the right bucket is closer. What you can do is take the scenario and alter as follows: introduce a mechanism which, with 4% probability, will move the right bucket further than the left bucket before Buridan's ass gets to it.

Now, if Buridan's ass assigns a 100% probability that the right bucket is (currently) closer than the left bucket, then taking into account the introduced mechanism, this yields a 96% probability that, by the time the ass gets to it, the right bucket will still be closer to the ass's starting position. But if Buridan's ass assigns a 51% probability that the right bucket is (currently) closer than the left bucket, then taking into account the mechanism, this yields approximately a 49% probability (assuming I did the numbers right) that by the time the ass gets to it, the right bucket will be closer.
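
A quick check of that arithmetic, taking the 4% mechanism to act independently of whichever bucket is currently closer:

    p_right_closer_now = 0.51    # the black box's assignment before the mechanism
    p_mechanism_fires = 0.04     # the right bucket gets moved further away

    # The right bucket is still closer at arrival only if it was closer to
    # begin with and the mechanism did not fire.
    print(p_right_closer_now * (1 - p_mechanism_fires))   # 0.4896, roughly 49%
    print(1.00 * (1 - p_mechanism_fires))                 # 0.96 under full conviction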

I am, of course, assuming that the ass is smart enough to understand and incorporate the mechanism into his calculations. Animals have eyes and ears and brains for a reason, so I don't think it's a stretch to suppose that there is some way to implement this scenario in a way that an ass really could understand.

So here's how the test works. You observe that the ass goes to the bucket on the right. You are not sure whether the ass has assigned a 51% probability or a 100% probability to the right bucket being nearer. So you redo the experiment with the added mechanism. If the ass (with the introduced mechanism) now goes to the bucket on the left, then you can infer that the ass now believes that the probability that the right bucket will be closer by the time he reaches it is less than 50%. But it only changed by a few percentage points as a result of the added mechanism. Therefore he must have assigned only slightly more than 50% probability to it to begin with.

And in this sort of way, you can elicit the ass's probability assignments.

The ass's conscious state of mind, however, is something completely separate from this. If we grant the ass the gift of speech, the ass may well say, each time, "there's not a shred of doubt in my mind that the right bucket is closer", or "I am entirely confident that the left bucket is closer".

My point being that we may well be like the ass, and introspective examination of our own conscious state of mind may fail to reveal the actual probabilities that our mental black boxes have assigned to events. It may instead reveal only overconfident delusions that the black box has instilled in the conscious mind for the purpose of encouraging quick action.

comment by prase · 2010-10-06T09:51:37.357Z · LW(p) · GW(p)

Thanks for the lengthy answer. Still, why is it impossible to calibrate people in general, looking at how often they get the answer right, and then using them as a device for measuring probabilities? If a person is right on approximately 80% of the issues he says he's "sure" about, then why not translate his next "sure" into an 80% probability? That doesn't seem arbitrary to me. There may be inconsistency between measurements using different people, but strictly speaking, thermometers and clocks also sometimes disagree.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-07T00:41:28.098Z · LW(p) · GW(p)

I do discuss this exact point in the above lengthy comment, and I allow for this possibility. Here is the relevant part:

The first possible path towards accurate calibration is when the same person performs essentially the same judgment many times, and from the past performance we extract the frequency with which their brain tends to produce the right answer. If this level of accuracy remains roughly constant in time, then it makes sense to attach it as the probability to that person’s future judgments on the topic. This approach treats the relevant operations in the brain as a black box whose behavior, being roughly constant, can be subjected to such extrapolation.

Now clearly, the critical part is to ensure that the future judgments are based on the same parts of the person's brain and that the relevant features of these parts, as well as the problem being solved, remain unchanged. In practice, these requirements can be satisfied by people who have reached the peak of ability achievable by learning from experience in solving some problem that repeatedly occurs in nearly identical form. Still, even in the best case, we're talking about a very limited number of questions and people here.

Replies from: prase
comment by prase · 2010-10-07T09:09:11.571Z · LW(p) · GW(p)

I know you have limited it to repeated judgments about essentially the same question. I was rather asking why, and I am still not sure whether I interpret it correctly. Is it that the judgments themselves are possibly produced by different parts of the brain, or that the person's self-evaluations of certainty are produced by different parts of the brain, or both? And if so, so what?

Imagine a test is done on a particular person. During five consecutive years he is asked a lot of questions (of all different types), and he has to give an answer and a subjective feeling of certainty. After that, we see that the answers which he labeled as "almost certain" were right in 83%, 78%, 81%, 84% and 85% of cases in the five years. Let's even say that the experimenters were careful enough to divide the questions into different topics, and to establish that his "almost certain" answers about medicine were right 94% of the time on average and his "almost certain" answers about politics were right 56% of the time on average. All other topics were near the overall average.

Do you 1) maintain that such stable results are very unlikely to happen, or 2) that even if most people can be calibrated in such a way, it still doesn't justify using them for measuring probabilities?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-08T04:32:49.799Z · LW(p) · GW(p)

prase:

I know you have limited it to repeated judgments about essentialy the same question. I was rather asking why, and I am still not sure whether I interpret it correctly. Is it that the judgments themselves are possibly produced by different parts of brain, or the person's self-evaluation of certainty are produced by different parts of brain, or both? And if so, so what?

We don't really know, but it could certainly be both, and also it may well be that the same parts of the brain are not equally reliable for all questions they are capable of processing. Therefore, while simple inductive reasoning tells us that consistent accuracy on the same problem can be extrapolated, there is no ground to generalize to other questions, since they may involve different parts of the brain, or the same part functioning in different modes that don't have the same accuracy.

Unless, of course, we cover all such various parts and modes and obtain some sort of weighted average over them, which I suppose is the point of your thought experiment, of which more below.

Do you 1) maintain that such stable results are very unlikely to happen, or that 2) even if most of people can be calibrated is such way, still it doesn't justify using them for measuring probabilities?

If the set of questions remains representative -- in the sense of querying the same brain processes with the same frequency -- the results could turn out to be fairly stable. This could conceivably be achieved by large and wide-ranging sets of questions. (I wonder if someone has actually done such experiments?)

However, the result could be replicated only if the same person is again asked similar large sets of questions that are representative with regard to the frequencies with which they query different brain processes. Relative to that reference class, it clearly makes sense to attach probabilities to answers, so, yes, here we would have another counterexample to my original claim, for another peculiar meaning of probabilities.

The trouble is that these probabilities would be useless for any purpose that doesn’t involve another similar representative set of questions. In particular, sets of questions about some particular topic that is not representative would presumably not replicate them, and thus they would be a very bad guide for betting that is limited to some particular topic (as it nearly always is). Thus, this seems like an interesting theoretical exercise, but not a way to obtain practically useful numbers.

(I should add that I never thought about this scenario before, so my reasoning here might be wrong.)

Replies from: prase
comment by prase · 2010-10-08T08:51:07.642Z · LW(p) · GW(p)

If there are any experimental psychologists reading this, maybe they can organise the experiment. I am curious whether people can indeed be calibrated on general questions.

comment by xv15 · 2010-10-04T03:56:14.810Z · LW(p) · GW(p)

I tell you I believe X with 54% certainty. Who knows, that number could have been generated in a completely bogus way. But however I got here, this is where I am. There are bets about X that I will and won't take, and guess what, that's my cutoff probability right there. And by the way, now I have communicated to you where I am, in a way that does not further compound the error.

Meaningless is a very strong word.

In the face of such uncertainty, it could feel natural to take shelter in the idea of "inherent vagueness"...but this is reality, and we place our bets with real dollars and cents, and all the uncertainty in the world collapses to a number in the face of the expectation operator.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T05:43:37.618Z · LW(p) · GW(p)

So why stop there? If you can justify 54%, then why not go further and calculate a dozen or two more significant digits, and stand behind them all with unshaken resolve?

Replies from: wnoise, xv15, wedrifid
comment by wnoise · 2010-10-04T10:12:59.199Z · LW(p) · GW(p)

You can, of course. For most situations, the effort is not worth the trade-off. But making a distinction between 1%, 25%, 50%, 75%, and 99% often is.

You can (at least formally) put error bars on the quantities that go into a Bayesian calculation. The problem, of course, is that error bars are short-hand for a distribution of possible values, and it's not obvious what a distribution of probabilities means or should mean. Everything operational about probability functions is fully captured by their full set of expectation values, so this is no different than just immediately taking the mean, right?

Well, no. The uncertainties are a higher level model that not only makes predictions, but also calibrates how much these predictions are likely to move given new data.
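
One way to sketch this point: represent uncertainty about the probability itself with a Beta distribution. Two distributions with the same mean imply the same bet today, but move by very different amounts after the same new observation (the particular parameters are invented for illustration):

    # Beta(a, b) has mean a / (a + b); observing one favorable outcome updates
    # it to Beta(a + 1, b), so how far the mean moves depends on a + b.
    def beta_mean(a, b):
        return a / (a + b)

    confident = (50.0, 50.0)    # mean 0.5, backed by many (pseudo-)observations
    vague = (1.0, 1.0)          # mean 0.5, backed by almost nothing

    for a, b in (confident, vague):
        print(beta_mean(a, b), beta_mean(a + 1, b))   # same start, different shift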

It seems to me that this is somewhat related to the problem of logical uncertainty.

comment by xv15 · 2010-10-04T15:09:50.568Z · LW(p) · GW(p)

Again, meaningless is a very strong word, and it does not make your case easy. You seem to be suggesting that NO number, however imprecise, has any place here, and so you do not get to refute me by saying that I have to embrace arbitrary precision.

In any case, if you offer me some bets with more significant digits in the odds, my choices will reveal the cutoff to more significant digits. Wherever it may be, there will still be some bets I will and won't take, and the number reflects that, which means it carries very real meaning.

Now, maybe I will hold the line at 54% exactly, not feeling any gain to thinking harder about the cutoff (as it gets harder AND less important to nail down further digits). Heck, maybe on some other issue I only care to go out to the nearest 10%. But so what? There are plenty of cases where I know my common sense belief probability to within 10%. That suggests such an estimate is not meaningless.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T23:08:11.674Z · LW(p) · GW(p)

xv15:

Again, meaningless is a very strong word, and it does not make your case easy.

To be precise, I wrote "meaningless, except perhaps as a vague figure of speech." I agree that the claim would be too strong without that qualification, but I do believe that "vague figure of speech" is a fair summary of the meaningfulness that is to be found there. (Note also that the claim specifically applies to "common-sense conclusions and beliefs," not things where there is a valid basis for employing mathematical models that yield numerical probabilities.)

In any case, if you offer me some bets with more significant digits in the odds, my choices will reveal the cutoff to more significant digits. Wherever it may be, there will still be some bets I will and won't take, and the number reflects that, which means it carries very real meaning.

You seem to be saying that since you perceive this number as meaningful, you will be willing to act on it, and this by itself renders it meaningful, since it serves as guide for your actions. If we define "meaningful" to cover this case, then I agree with you, and this qualification should be added to my above statement. But the sense in which I used the term originally doesn't cover this case.

Replies from: xv15
comment by xv15 · 2010-10-04T23:35:11.454Z · LW(p) · GW(p)

Fair. Let me be precise too. I read your original statement as saying that numbers will never add meaning beyond what a vague figure of speech would, i.e. if you say "I strongly believe this" you cannot make your position more clear by attaching a number. That I disagree with. To me it seems clear that:

i) "Common-sense conclusions and beliefs" are held with varying levels of precision. ii) Often even these beliefs are held with a level of precision that can be best described with a number. (Best=most succinctly, least misinterpretable, etc...indeed it seems to me that sometimes "best" could be replaced with "only." You will never get people to understand 60% by saying "I reasonably strongly believe"...and yet your belief may be demonstrably closer to 60 than 50 or 70).

I don't think your statement is defensible from a normal definition of "common sense conclusions," but you may have internally defined it in such a way as to make your statement true, with a (I think) relatively narrow sense of "meaningfulness" also in mind. For instance if you ignore the role of numbers in transmission of belief from one party to the next, you are a big step closer to being correct.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-05T00:53:29.427Z · LW(p) · GW(p)

xv15:

I don't think your statement is defensible from a normal definition of "common sense conclusions," but you may have internally defined it in such a way as to make your statement true, with a (I think) relatively narrow sense of "meaningfulness" also in mind. For instance if you ignore the role of numbers in transmission of belief from one party to the next, you are a big step closer to being correct.

You have a very good point here. For example, a dialog like this could result in a real exchange of useful information:

A: "I think this project will probably fail."
B: "So, you mean you're, like, 90% sure it will fail?"
A: "Um... not really, more like 80%."

I can imagine a genuine meeting of minds here, where B now has a very good idea of how confident A feels about his prediction. The numbers are still used as mere figures of speech, but "vague" is not a correct way to describe them, since the information has been transmitted in a more precise way than if A had just used verbal qualifiers.

So, I agree that "vague" should probably be removed from my original claim.

Replies from: HughRistik, xv15
comment by HughRistik · 2010-10-06T00:39:40.844Z · LW(p) · GW(p)

Therefore, there are only two ways in which you can arrive at a numerical probability estimate for a common-sense belief:

  1. Translate your vague feeling of certainty into a number in some arbitrary manner. This however makes the number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.
  2. Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

On point #2, I agree with you. On point #1, I had the same reaction as xv15. Your example conversation is exactly how I would defend the use of numerical probabilities in conversation. I think you may have confused people with the phrase "vague figure of speech," which was itself vague.

Vague relative to what? "No idea / kinda sure / pretty sure / very sure?", the ways that people generally communicate about probability, are much worse. You can throw in other terms like "I suspect" and "absolutely certain" and "very very sure", but it's not even clear how these expressions of belief match up with others. In common speech, we really only have about 3-5 degrees of probability. That's just not enough gradations.

In contrast, when expressing a percentage probability, people only tend to use multiples of 10, certain multiples of 5, 0.01%, 1%, 2%, 98%, 99% and 99.99%. If people use figures like 87%, or any decimal places other than the ones previously mentioned, it's usually because they are deliberately being ridiculous. (And it's no coincidence that your example uses multiples of 10.)

I agree with you that feelings of uncertainty are fuzzy, but they aren't so fuzzy that we can get by with merely 3-5 gradations in all sorts of conversations. On some subjects, our communication becomes more precise when we have 10-20 gradations. Yet there are diminishing returns on more degrees of communicable certainty (due to reasons you correctly describe), so going any higher resolution than 10-20 degrees isn't useful for anything except jokes.

The numbers are still used as mere figures of speech, but "vague" is not a correct way to describe them, since the information has been transmitted in a more precise way than if A had just used verbal qualifiers.

Yes. Gaining the 10-20 gradations that numbers allow when they are typically used does make conversations relatively more precise than just by tacking on "very very" to your statement of certainty.

It's similar to the infamous 1-10 rating system for people's attractiveness. Despite various reasons that rating people with numbers is distasteful, this ranking system persists because, in my view, people find it useful for communicating subjective assessments of attractiveness. Ugly-cute-hot is a 3-point scale. You could add in "gorgeous," "beautiful," or modifiers like "smoking hot," but it's unclear how these terms rank against each other (and they may express different types of attraction, rather than different degrees). Again, it's hard to get more than 3-5 degrees using plain English. The 1-10 scale (with half-points, and 9.9) gives you about 20 gradations (though 1-3, and any half-point values below 5 are rarely used).

I think we have a generalized phenomenon where people resort to using numbers to describe their subjective feelings when common language doesn't grant high enough resolution. 3-5 is good enough for some feelings (3 gives you negative, neutral, and positive for instance), but for some feelings we need more. Somewhere around 20 is the upper-bound of useful gradations.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-06T18:38:58.655Z · LW(p) · GW(p)

I mostly agree with this assessment. However, the key point is that such uses of numbers should be seen as metaphorical. The literal meaning of a metaphor is typically nonsensical, but it works by somehow hacking the human understanding of language to successfully convey a point with greater precision than the most precise literal statement would allow, at least in as many words. (There are other functions of metaphors too, of course, but this one is relevant here.) And just like it is fallacious to understand a metaphor literally, it is similarly fallacious to interpret these numerical metaphors as useful for mathematical purposes. When it comes to subjective probabilities, however, I often see what looks like confusion on this point.

Replies from: jimrandomh
comment by jimrandomh · 2010-10-06T19:00:26.135Z · LW(p) · GW(p)

It is wrong to use a subjective probability that you got from someone else for mathematical purposes directly, for reasons I expand on in my comment here. But I don't think that makes them metaphorical, unless you're using a definition of metaphor that's very different than the one I am. And you can use a subjective probability which you generated yourself, or combined with your own subjective probability, in calculations. Doing so just comes with the same caveats as using a probability from a study whose sample was too small, or which had some other bad but not entirely fatal flaw.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-06T20:51:32.029Z · LW(p) · GW(p)

I will write a reply to that earlier comment of yours a bit later today when I'll have more time. (I didn't forget about it, it's just that I usually answer lengthy comments that deserve a greater time investment later than those where I can fire off replies rapidly during short breaks.)

But in addition to the theme of that comment, I think you're missing my point about the possible metaphorical quality of numbers. Human verbal expressions have their literal information content, but one can often exploit the idiosyncrasies of the human language interpretation circuits to effectively convey information altogether different from the literal meaning of one's words. This gives rise to various metaphors and other figures of speech, which humans use in their communication frequently and effectively. (The process is more complex than this simple picture, since frequently used metaphors can eventually come to be understood as literal expressions of their common metaphorical meaning, and this process is gradual. There are also other important considerations about metaphors, but this simple observation is enough to support my point.)

Now, I propose that certain practical uses of numbers in communication should be seen that way too. A literal meaning of a number is that something can ultimately be counted, measured, or calculated to arrive at that number. A metaphorical use of a number, however, doesn't convey any such meaning, but merely expects to elicit similar intuitive impressions, which would be difficult or even impossible to communicate precisely using ordinary words. And just like a verbal metaphor is nonsensical except for the non-literal intuitive point it conveys, and its literal meaning should be discarded, at least some practical uses of numbers in human conversations serve only to communicate intuitive points, and the actual values are otherwise nonsensical and should not be used for any other purposes -- and even if they perhaps are, their metaphorical value should be clearly seen apart from their literal mathematical value.

Therefore, regardless of our disagreement about subjective probabilities (of which more in my planned reply), this is a separate important point I wanted to make.

comment by xv15 · 2010-10-05T03:27:37.571Z · LW(p) · GW(p)

okay. I still suspect I disagree with whatever you mean by mere "figures of speech," but this rational truthseeker does not have infinite time or energy.

in any case, thank you for a productive and civil exchange.

comment by wedrifid · 2010-10-04T06:34:01.415Z · LW(p) · GW(p)

Or, you could slide up your arbitrary and fallacious slippery slope and end up with Shultz.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T06:55:00.555Z · LW(p) · GW(p)

Even if you believe that my position is fallacious, I am surely not the one to be accused of arbitrariness here. Arbitrariness is exactly what I object to, in the sense of insisting on the validity of numbers that lack both a logically correct justification and the clear error bars that would follow from one. And I'm asking the above question in full seriousness: a Bayesian probability calculation will give you as many significant digits as you want, so if you believe that it makes sense to extract a Bayesian probability with two significant digits from your common-sense reasoning, why not more than that?
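
To make that concrete, here is a toy calculation (every input is an invented one-significant-figure guess; nothing hangs on the particular numbers). The arithmetic will happily hand back a posterior to a dozen decimal places, none of which the inputs can justify:

```python
# Toy Bayes update; every input is an invented one-significant-figure guess.
prior = 0.3              # P(hypothesis)
p_e_given_h = 0.8        # P(evidence | hypothesis)
p_e_given_not_h = 0.2    # P(evidence | not hypothesis)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)

# The arithmetic dutifully returns far more digits than the guesses can justify.
print(f"{posterior:.12f}")  # 0.631578947368
```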

In any case, I have explained my position at length, and it would be nice if you addressed the substance of what I wrote instead of trying to come up with witty one-liner jabs. For those who want the latter, there are other places on the web full of people whose talent for such things is considerably greater than yours.

Replies from: wedrifid
comment by wedrifid · 2010-10-04T14:03:08.796Z · LW(p) · GW(p)

For those who want the latter, there are other places on the web full of people whose talent for such things is considerably greater than yours.

I specifically object to your implied argument in the grandparent. I will continue to reject comments that make that mistake regardless of how many times you insult me.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T18:14:48.823Z · LW(p) · GW(p)

Look, in this thread, you have clearly been making jabs for rhetorical effect, without any attempt to argue in a clear and constructive manner. I am calling you out on that, and if you perceive that as insulting, then so be it.

Everything I wrote here has been perfectly honest and upfront, and written with the goal of eliciting rational counter-arguments from which I might perhaps change my opinion. I have neither the time nor the inclination for the sort of one-upmanship and showing off that you seem to be after, and even if I were, I would pursue it in some more suitable venue. (Where, among other things, one would indeed expect to see the sort of performance you're striving for done in a much more skilled and entertaining way.)

Replies from: wedrifid
comment by wedrifid · 2010-10-05T04:45:20.306Z · LW(p) · GW(p)

Your map is not the territory. If you look a little closer you may find that my points are directed at the topic, and not your ego. In particular, take a second glance at this comment. The very example of betting illustrates the core problem with your position.

I am calling you out on that, and if you perceive that as insulting, then so be it.

The insult would be that you are telling me I'm bad at entertaining one-upmanship. I happen to believe I would be quite good at making such performances were I of a mind and in a context where it suited my goals (dealing with AMOGs, for example).

When dealing with intelligent agents, if you notice that what they are doing does not seem to be effective at achieving their goals it is time to notice your confusion. It is most likely that your model of their motives is inaccurate. Mind reading is hard.

Shultz does know nuthink. Slippery slopes do (arbitrarily) slide in both directions (to either Shultz or Omega in this case). Most importantly, if you cannot assign numbers to confidence levels you will lose money when you try to bet.

comment by torekp · 2010-10-03T16:14:00.513Z · LW(p) · GW(p)

Upvoted, because I think you're only probably right. And you not only stole my thunder, you made it more thunderous :(

Replies from: None, groupuscule
comment by [deleted] · 2010-10-03T16:50:56.050Z · LW(p) · GW(p)

Downvote if you agree with something, upvote if you disagree.

EDIT: I missed the word only. I just read "I think you're probably right." My mistake.

Replies from: magfrump
comment by magfrump · 2010-10-03T17:53:47.840Z · LW(p) · GW(p)

Upvote for disagreements of overconfidence OR underconfidence.

comment by groupuscule · 2010-10-05T06:17:48.682Z · LW(p) · GW(p)

Same here. A "pretty sure" confidence level would probably have done it for me.

comment by orthonormal · 2010-10-04T02:39:47.325Z · LW(p) · GW(p)

Um, so when Nate Silver tells us he's calculated odds of 2 in 3 that Republicans will control the House after the election, this number should be discarded as noise because it's a common-sense belief that the Republicans will gain that many seats?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-04T05:29:34.487Z · LW(p) · GW(p)

Boy did I hit a hornets' nest with this one!

No, of course I didn't mean anything like that. Here is how I see this situation. Silver has a model, which is ultimately a piece of mathematics telling us that some p=0.667, and for reasons of common sense, Silver believes (assuming he's being upfront with all this) that this model closely approximates reality in such a way that p can be interpreted, with reasonable accuracy, as the probability of Republicans winning a House majority this November.

Now, when you ask someone which party is likely to win this election, this person's brain will activate some algorithm that will produce an answer along with some rough level of confidence. Someone completely ignorant about politics might answer that he has no idea, and cannot say anything with any certainty. Other people will predict different results with varying (informally expressed) confidence. Silver himself, or someone else who agrees with his model, might reply that the best answer is whatever the model says (i.e. Republicans win with p=0.667), since it is completely superior to the opaque common-sense algorithms used by the brains of non-mathy political analysts. Others will have greater or lesser confidence in the accuracy of the model, and might take its results into account, with varying weight, alongside other common-sense considerations.
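
For concreteness, if one were willing to force a number onto the common-sense side - exactly the move under dispute in this thread - "taking its results into account, with varying weight" could be made mechanical with a log-odds mixture. Every number below other than Silver's 0.667 is invented for illustration, and nothing like this appears in Silver's model:

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

model_p = 0.667   # the model's stated probability
gut_p = 0.55      # invented stand-in for a vague common-sense lean
trust = 0.7       # invented weight on the model relative to the gut feeling

combined = inv_logit(trust * logit(model_p) + (1 - trust) * logit(gut_p))
print(round(combined, 3))  # ~0.633, between the two inputs, closer to the model
```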

Ultimately, the status of this number depends on the relation between Silver's model and reality. If you believe that the model is a vast improvement over any informal common-sense considerations in predicting election results, just like Newton's theory is a vast improvement over any common-sense considerations in predicting the motions of planets, then we're not talking about a common-sense conclusion any more. On the other hand, if you believe that the model is completely out of touch with reality, then you would discard its result as noise. Finally, if you believe that it's somewhat accurate, but still not reliably superior to common sense, you might revise its conclusion using common sense.

What you believe about Silver's model, however, is still ultimately a matter of common-sense judgment, and unless you think that you have a model so good that it should be used in a shut-up-and-calculate way, your ultimate best prediction of the election results won't come with any numerical probabilities, merely a vague feeling of how confident you are.

Replies from: wedrifid
comment by wedrifid · 2010-10-04T06:36:45.780Z · LW(p) · GW(p)

What you believe about Silver's model, however, is still ultimately a matter of common-sense judgment, and unless you think that you have a model so good that it should be used in a shut-up-and-calculate way, your ultimate best prediction of the election results won't come with any numerical probabilities, merely a vague feeling of how confident you are.

Want to make a bet on that?

comment by [deleted] · 2010-10-03T19:26:06.204Z · LW(p) · GW(p)

In your linked comment you write:

For just about any interesting question you may ask, the algorithm that your brain uses to find the answer is not transparent to your consciousness -- and its output doesn't include a numerical probability estimate, merely a vague and coarsely graded feeling of certainty.

Do you not think that this feeling response can be trained through calibration exercises and by making and checking predictions? I have not done this myself yet, but this is how I've thought others became able to assign numerical probabilities with confidence.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-03T20:09:57.984Z · LW(p) · GW(p)

Luke_Grecki:

Do you not think that this feeling response can be trained through calibration exercises and by making and checking predictions?

Well, sometimes frequentism can come to the rescue, in a sense. If you are repeatedly faced with an identical situation where it's necessary to make some common-sense judgment, like e.g. on an assembly line, you can look at your past performance to predict how often you'll be correct in the future. (This assumes you're not getting better or worse with time, of course.) However, what you're doing in that case is treating a part of your own brain as a black box whose behavior you're testing empirically to extrapolate a frequentist rule -- you are not performing the judgment itself as a rigorous Bayesian procedure that would give you the probability for the conclusion.

That said, it's clear that smarter and more knowledgeable people think with greater accuracy and subtlety, so that their intuitive feelings of (un)certainty are also subtler and more accurate. But there is still no magic step that will translate these feelings output by black-box circuits in their brains into numbers that could lay claim to mathematical rigor and accuracy.

Replies from: None
comment by [deleted] · 2010-10-03T20:34:24.889Z · LW(p) · GW(p)

you are not performing the judgment itself as a rigorous Bayesian procedure that would give you the probability for the conclusion.

No, but do you think it is meaningless to think of the messy brain procedure (that produces these intuitive feelings) as approximating this rigorous Bayesian procedure? This could probably be quantified using various tests. I don't dispute that one couldn't lay claim to mathematical rigor, but I'm not sure that means that any human assignment of numerical probabilities is meaningless.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-03T21:03:15.790Z · LW(p) · GW(p)

Yes, with good enough calibration, it does make sense. If you have an assembly line worker whose job is to notice and remove defective items, and he's been doing it with a steady (say) 99.7% accuracy for a long time, it makes sense to assign p=0.997 to each single judgment he makes about an individual item, and this number can be of practical value in managing production. However, this doesn't mean that you could improve the worker's performance by teaching him about Bayesianism; his brain remains a black box. The important point is that the same typically holds for highbrow intellectual tasks too.
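
As a rough sketch of what that calibration amounts to (the counts below are invented), you treat the worker's record as draws from a black box and read the number, with its error bar, off the tally:

```python
# Invented track record: 10,000 past judgments, 30 of them wrong.
trials, errors = 10_000, 30
accuracy = (trials - errors) / trials                      # 0.997

# Crude standard error for a proportion -- the point is only that the error bar
# shrinks with more trials, which is what licenses quoting the number at all.
std_err = (accuracy * (1 - accuracy) / trials) ** 0.5
print(f"p = {accuracy:.3f} +/- {2 * std_err:.3f}")         # p = 0.997 +/- 0.001
```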

Moreover, for the great majority of interesting questions about the world, we don't have the luxury of a large reference class of trials on which to calibrate. Take for example the recent discussion about the AD-36 virus controversy. If you look at the literature, you'll presumably form an opinion about this question with a higher or lower certainty, depending on how much confidence you have in your own ability to judge about such matters. But how to calibrate this judgment in order to arrive at a probability estimate? There is no way.

comment by [deleted] · 2010-10-05T02:59:38.610Z · LW(p) · GW(p)

To try to understand your point, I will try to clarify it.

We have very limited access to our mental processes. In fact, in some cases our access to our mental processes is indirect - that is, we only discover what we believe once we have observed how we act. We observe our own act, and from this we can infer that we must have believed such-and-such. We can attempt to reconstruct our own process of thinking, but the process we are modeling is essentially a black box whose internals we are modeling, and the outputs of the black box at any given time are meager. We are of course always using the black box, which gives us a lot of data to go on in an absolute sense, but since the topic is constantly changing and since our beliefs are also in flux, the relevance of most of that data to the correct understanding of a particular act of thinking is unclear. In modeling our own mental processes we are rationalizing, with all the potential pitfalls associated with rationalization.

Nevertheless, this does not stop us from using the familiar gambling method for eliciting probability assessments, understood as willingness to wager. The gambling method, even if it is artificial, is at least reasonable, because every behavior we exhibit involves a kind of wager. However the black box operates, it will produce a certain response for each offered betting odds, from which its probability assignments can be derived. Of course this won't work if the black box produces inconsistent (i.e. Dutch bookable) responses to the betting odds, but whether and to what degree it does or not is an empirical question. As a matter of fact, you've been talking about precision, and I think here's how we can define the precision of your probability assignment. I'm sure that the black box's responses to betting odds will be somewhat inconsistent. We can measure how inconsistent they are. There will be a certain gap of a certain size which can be Dutch booked - the bigger the gap the quicker you can be milked. And this will be the measure of the precision of your probability assignment.
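
A bare-bones sketch of that measurement, with invented dollar figures standing in for a person's betting dispositions:

```python
# Invented, inconsistent dispositions standing in for the person's black box:
# they will pay up to $0.70 for a ticket that pays $1 if the proposition is
# true, yet will also sell the same ticket for as little as $0.64.
buy_up_to = 0.70      # acting as if p >= 0.70
sell_down_to = 0.64   # acting as if p <= 0.64

gap = buy_up_to - sell_down_to
print(f"Dutch-bookable gap: {gap:.2f}")  # 0.06

# A bookie sells them a ticket at $0.70 and buys one back at $0.64; the two
# tickets cancel and the person is down $0.06 every round. The wider the gap,
# the faster they are milked -- and the less precise the implied probability.
```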

But suppose that a person always in effect bets for something given certain odds or above, in whatever manner the bet is put to him, and always bets against if given odds anywhere below, and suppose the cutoff between his betting for and against is some very precise number such as pi to twelve digits. Then that seems to say that the odds his black box assigns is precisely those odds.

You write:

The problem is that the algorithms that your brain uses to perform common-sense reasoning are not transparent to your conscious mind, which has access only to their final output. This output does not provide a numerical probability estimate, but only a rough and vague feeling of certainty.

But I don't think we should be looking at introspectable "output". The purpose of the brain isn't to produce rough and vague feelings which we can then appreciate through inner contemplation. The purpose of the brain is to produce action, to decide on a course of action and then move the muscles accordingly. Our introspective power is limited at best. Over a lifetime of knowing ourselves we can probably get pretty good at knowing our own beliefs, but I don't think we should treat introspection as the gold standard for measuring a person's belief. Like preference, belief is revealed in action. And action is what the gambling method of eliciting probability assignments looks at. While the brain produces only rough and vague feelings of certainty for the purposes of one's own navel-gazing, at the same time it produces very definite behavior, very definite decisions, from which can be derived, at least in principle, probability assignments - and also, as I mention above, the precision of those probability assignments.

I grant, by implication, that one's own probability assignments are not necessarily introspectable. That goes without saying.

You write:

Therefore, there are only two ways in which you can arrive at a numerical probability estimate for a common-sense belief:

  • Translate your vague feeling of certainty into a number in some arbitrary manner. This however makes the number a mere figure of speech, which adds absolutely nothing over the usual human vague expressions for different levels of certainty.

  • Perform some probability calculation, which however has nothing to do with how your brain actually arrived at your common-sense conclusion, and then assign the probability number produced by the former to the latter. This is clearly fallacious.

Your first described way takes the vague feeling for the output of the black box. But the purpose of the black box is action, decision, and that is the output that we should be looking at, and it's the output that the gambling method looks at. And that is a third way of arriving at a numerical probability which you didn't cover.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-05T17:44:55.125Z · LW(p) · GW(p)

Aside from some quibbles that aren't really worth getting into, I have no significant disagreement with your comments. There is nothing wrong with looking at people's acts in practice and observing that they behave as if they operated with subjective probability estimates in some range. However, your statement that "one's own probability assignments are not necessarily introspectable" basically restates my main point, which was exactly about the meaninglessness of analyzing one's own common-sense judgments to arrive at a numerical probability estimate, which many people here, in contrast, consider to be the right way to increase the accuracy of one's thinking. (Though I admit that it should probably be worded more precisely to make sure it's interpreted that way.)

Replies from: None
comment by [deleted] · 2010-10-05T20:58:28.879Z · LW(p) · GW(p)

However, your statement that "one's own probability assignments are not necessarily introspectable" basically restates my main point, which was exactly about the meaninglessness of analyzing one's own common-sense judgments to arrive at a numerical probability estimate, which many people here, in contrast, consider to be the right way to increase the accuracy of one's thinking.

As it happens, early on I voted your initial comment down (following the topsy-turvy rules of the main post) because based on my first impression I thought I agreed with you. Reconsideration of your comment in light of the ensuing discussion brought to my mind this seeming objection. But you have disarmed the objection, so I am back to agreement.

comment by nwthomas · 2011-07-04T21:47:22.040Z · LW(p) · GW(p)

I have met multiple people who are capable of telepathically transmitting mystical experiences to people who are capable of receiving them. 90%.

Replies from: None
comment by [deleted] · 2012-04-13T11:25:18.552Z · LW(p) · GW(p)

Wow, telepathy is a pretty big thing to discuss. Sure there isn't a simpler hypothesis? Upvoted.

Replies from: nwthomas
comment by nwthomas · 2012-04-26T06:25:21.161Z · LW(p) · GW(p)

The data I'm working from is that contact with certain people sometimes causes me to have mystical experiences. This has happened somewhere between 20 and 100 times, with less than a dozen people. Sometimes but not always, it happens in both directions; i.e., they also have a mystical experience as a result of the contact.

The simpler hypothesis, from a materialist point of view, is that seeing these people just tripped some switch in my brain, without any direct mind-to-mind interaction being involved. Then we can say that I also tripped such a switch in their brains in the cases where it was reciprocal. We are left with the question of why this weird psychological phenomenon happens.

The religious explanation is in many ways easier and more natural. We can say that my soul brushed up against these people's. It makes sense from within the religious frame of mind that this sort of thing would happen. But obviously we run into the issues with religious views in general.

Replies from: ArisKatsaris, Richard_Kennaway, None
comment by ArisKatsaris · 2012-04-26T09:42:47.205Z · LW(p) · GW(p)

If we replaced "mystical experiences" with something of less religious connotations like "raging hard-ons", you wouldn't think that 'souls brushing up against each other' is the most natural explanation -- you'd instead conclude that some aspect of psychology/biochemistry/pheromones is causing you to have a more intense reaction towards certain people and vice-versa.

From a physicalist perspective the brain is as much an organ as the penis, and "mystical experiences" as much a physical event in the brain as erections are a physical event in the penis.

Replies from: None
comment by [deleted] · 2012-04-26T10:09:05.271Z · LW(p) · GW(p)

So true, so funny.

EDIT: Why was this downvoted? I intended to convey that I thought ArisKatsaris was right in saying that brains are just as physical as genitals, and also that I thought his similie was funny.

comment by Richard_Kennaway · 2012-04-26T07:25:28.247Z · LW(p) · GW(p)

Neither of these is an explanation.

comment by [deleted] · 2012-04-26T09:07:40.976Z · LW(p) · GW(p)

You're giving a mysterious answer and proposing ontologically basic mental substances.

I still say that it is a rather extraordinary claim, and thus requires extraordinary evidence. So far you have presented close to none, and what you have could easily and more sensibly be explained with psychological kinks. See cold readings.

comment by Eugine_Nier · 2010-10-03T22:30:58.978Z · LW(p) · GW(p)

The many worlds interpretation of Quantum Mechanics is false in the strong sense that the correct theory of everything will incorporate wave-function collapse as a natural part of itself. ~40%

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T22:55:02.206Z · LW(p) · GW(p)

(For those who think in terms of ensemble universes: 40% of this universe computation's measure comes from computations that don't 'bother' to leave in a real and thus computationally expensive wave function.) This is tricky, but I think I agree. Downvoted.

Replies from: LucasSloan
comment by LucasSloan · 2010-10-03T23:08:33.171Z · LW(p) · GW(p)

I would expect that most simulators who worried about computational capacity wouldn't bother simulating to the depth of quantum physics anyway. However, I'm not entirely sure that I should use this sort of argument when talking about the local laws of "physics". There is some sense, I think, in which the laws of physics around here are "supposed to be" MWI-like and that we should take them at face value.

comment by Eugine_Nier · 2010-10-03T03:27:31.764Z · LW(p) · GW(p)

Religion is a net positive force in society. Or to put it another way, religious memes (particularly ones that have survived for a long time) are more symbiotic than parasitic. Probably true (70%).

Replies from: orthonormal, Perplexed, Interpolate, Jayson_Virissimo, wedrifid
comment by orthonormal · 2010-10-04T02:26:32.605Z · LW(p) · GW(p)

If you changed "is" to "has been", I'd downvote you for agreement. But as stated, I'm upvoting you because I put it at about 10%.

Replies from: Eugine_Nier, wedrifid
comment by Eugine_Nier · 2010-10-04T02:35:45.650Z · LW(p) · GW(p)

I'd be curious to know when you think the crossover point was.

Replies from: orthonormal
comment by orthonormal · 2010-10-04T02:55:55.393Z · LW(p) · GW(p)

Around the time of J. S. Mill, I think. The Industrial Revolution helped crystallize an elite political and academic movement which had the germs of scientific and quantitative thinking; but this movement has been far too busy fighting for its life each time it conflicts with religious mores, instead of being able to examine and improve itself. It should have developed far more productively by now if atheism had really caught on in Victorian England.

Anyway, I'm not as confident of the above as I am that we've passed the crossover point now. (Aside from the obvious political effects, the persistence of religion creates mental antibodies in atheists that make them extremely wary of anything reminiscent of some aspect of religion; this too is a source of bias that wouldn't exist were it not for religion's ubiquity.)

comment by wedrifid · 2010-10-04T04:40:33.020Z · LW(p) · GW(p)

probably false (10%).

That's probable in your nomenclature?

Replies from: orthonormal
comment by orthonormal · 2010-10-04T19:35:59.344Z · LW(p) · GW(p)

Oops, I see the ambiguity. Edited.

comment by Perplexed · 2010-10-03T05:00:01.057Z · LW(p) · GW(p)

I think this is ambiguous. It might be interpreted as

  • Christianity is good for its believers - they are better off to believe than to be atheist.
  • Christianity is good for Christendom - it is a positive force for majority Christian societies, as compared to if those societies were mostly atheist.
  • Christianity makes the world a better place, as compared to if all those people were non-believers in any religion.

Which of these do you mean?

Replies from: Jayson_Virissimo, Eugine_Nier
comment by Jayson_Virissimo · 2010-10-03T18:23:50.910Z · LW(p) · GW(p)

Christianity makes the world a better place, as compared to if all those people were non-believers in any religion.

I think a better question is "would the world be a better place if people who are currently Christian adopted their next most likely alternative belief system?". I'm going to go out on a limb here and speculate that if the median Christian lost his faith he wouldn't become a rational-empiricist.

comment by Eugine_Nier · 2010-10-03T05:15:53.475Z · LW(p) · GW(p)

Christianity is good for its believers - they are better off to believe than to be atheist.

I'd change this one to:

  • Christianity is good for most of its believers - they are better off to believe than to be atheist.

~62%

Christianity is good for Christendom - it is a positive force for majority Christian societies, as compared to if those societies were mostly atheist.

~69%

Christianity makes the world a better place, as compared to if all those people were non-believers in any religion.

~58%

Edit: In case it wasn't clear, the 70% refers to the disjunction of the above 3.

comment by Interpolate · 2010-10-03T11:29:20.775Z · LW(p) · GW(p)

I downvoted this, and consider the artistic and cultural contributions of religion to society alone to warrant this assertion.

Replies from: JoshuaZ, Swimmy, Will_Newsome
comment by JoshuaZ · 2010-10-03T22:06:55.895Z · LW(p) · GW(p)

Note that it is in general very hard to tell if the artistic and cultural contributions associated with religion are actually due to religion. In highly religious cultures that's often the only form of expression that one is able to get funding for. Dan Barker wrote an essay about this showing how a lot of classical composers were agnostics, atheists or deists who wrote music with religious overtones mainly because that was their only option.

comment by Swimmy · 2010-10-04T16:56:14.918Z · LW(p) · GW(p)

Funny, I upvoted this because of the artistic and cultural contributions of religion. For most of history, until the Industrial Revolution or a little before, human economies were Malthusian. You could not increase incomes without decreasing average lifespans. The implication is that the money spent on cathedrals and gargoyles and all the rest came directly at the expense of people's lives. (A recent Steven Landsburg debate with Dinesh D'Souza explored this line of thinking more; I wouldn't recommend watching much more than the opening statements, though.)

I think the positive externalities of having more of those people's descendants alive today would be of higher value than the current benefits of past art--especially since most of that past art has been destroyed.

comment by Will_Newsome · 2010-10-03T20:58:42.661Z · LW(p) · GW(p)

You sound more confident than Eugine, in which case you should upvote. Or does 70% roughly match your belief?

comment by Jayson_Virissimo · 2010-10-03T18:28:44.105Z · LW(p) · GW(p)

My personal degree of belief is extremely sensitive to the definition of religion you are using here. I would appreciate some elaboration.

comment by wedrifid · 2010-10-03T05:12:43.824Z · LW(p) · GW(p)

The above is at -5. By the rules of the post that indicates that people overwhelmingly agree with the comment. This surprises me. (I didn't vote.)

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:53:06.649Z · LW(p) · GW(p)

It could be that people are browsing the recent comments section and impulse-downvoting. :/

It's a tough question, and involves reasoning heavily about counterfactuals. What would a humanity without religion look like? I tend to think it'd look a lot better, even though I admit there's a lot of confusion in the counterfactual surgery. So I upvoted.

Replies from: Relsqui
comment by Relsqui · 2010-10-03T07:18:03.183Z · LW(p) · GW(p)

What would a humanity without religion look like?

This gave me pause as well. Without religion, Mendel might have been too busy in another occupation to muck around with pea plants. We'd probably still learn what he learned, but who's to say how?

Replies from: whpearson
comment by whpearson · 2010-10-03T12:47:50.999Z · LW(p) · GW(p)

I have this memory that monks transcribed Aristotle, Plato and Pythagoras and kept them alive, when most of the world was illiterate.

I'm not sure if this is accurate or not.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-03T17:53:17.109Z · LW(p) · GW(p)

I have this memory that monks transcribed Aristotle, Plato and Pythagoras and kept them alive, when most of the world was illiterate.

Right idea, wrong philosophers. Keep in mind that Greek was a forgotten language in western Europe throughout the middle ages. They had translated copies of Aristotle but not any other Greek writer.

As for Pythagoras, well, nothing of his survived. All we know about him comes from second- and third-hand accounts.

comment by MattMahoney · 2011-04-26T16:29:04.749Z · LW(p) · GW(p)

There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).

Replies from: wedrifid
comment by wedrifid · 2011-04-26T17:57:08.573Z · LW(p) · GW(p)

How do the votes work in this game again? "Upvote for insane", right?

comment by MrShaggy · 2010-10-08T05:02:41.533Z · LW(p) · GW(p)

Eating lots of bacon fat and sour cream can reverse heart disease. Very confident (>95%).

Replies from: JGWeissman, RomanDavis
comment by JGWeissman · 2010-10-08T05:13:28.498Z · LW(p) · GW(p)

You have to actually think your degree of belief is rational.

I doubt you are following this rule.

Replies from: MrShaggy
comment by MrShaggy · 2010-10-09T06:09:36.981Z · LW(p) · GW(p)

I was worried people would think that, but if I posted links to present evidence, I ran the risk of convincing them so they wouldn't vote it up! All I've eaten in the past three weeks is: pork belly, butter, egg yolks (and a few whites), cheese, sour cream (like a tub every three days), ground beef, bacon fat (saved from cooking bacon) and such. Now, that's no proof about the medical claim, but I hope it's an indication that I'm not just bullshitting. But for a few links:

  • http://www.ncbi.nlm.nih.gov/pubmed/19179058 -- on prevention of heart disease in humans (the K2 in question is found virtually only in animal fats and meats, see http://www.westonaprice.org/abcs-of-nutrition/175-x-factor-is-vitamin-k2.html#fig4)
  • http://wholehealthsource.blogspot.com/2008/11/can-vitamin-k2-reverse-arterial.html -- shows reversal in rat studies from K2
  • http://trackyourplaque.com/ -- a clinic that uses K2 among other things to reverse heart disease

Note that I am not trying to construct a rational argument but to convince people that I do hold this belief. I do think a rational argument can be constructed, but this is not it.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2011-09-14T18:44:46.508Z · LW(p) · GW(p)

This was about a year ago: do you still hold this belief? Has eating like you described worked out?

Replies from: MrShaggy
comment by MrShaggy · 2011-10-11T14:08:55.961Z · LW(p) · GW(p)

Not just hold the belief but eat that way even more consistently (more butter and less sour cream just because tastes change, but same basic principles). I'm young and didn't have any obvious signs of heart disease personally so can't say it "worked out" for me personally in that literal, narrow sense but I feel better, more mentally clear, etc. (I know that's kinda whatever of evidence, just saying since you asked).

Someone else recently posted their success with butter lowering their measurement of arterial plaque: "the second score was better (lower) than the first score. The woman in charge of the testing center said this was very rare — about 1 time in 100. The usual annual increase is about 20 percent." (http://blog.sethroberts.net/2011/08/04/how-rare-my-heart-scan-improvement/) (Note: I disagree with the poster's reasoning methods in general, just noting his score change.)

There was a recent health symposium that discussed this idea and related ones: http://vimeo.com/ancestralhealthsymposium/videos/page:1/sort:newest.

For those specifically related to heart health, these are most of them: http://vimeo.com/ancestralhealthsymposium/videos/search:heart/sort:newest

comment by RomanDavis · 2010-12-17T22:58:45.839Z · LW(p) · GW(p)

Downvoted. I've seen the evidence, too.

Replies from: MrShaggy, Desrtopa
comment by MrShaggy · 2010-12-24T03:43:52.168Z · LW(p) · GW(p)

Downvoted means you agree (on this thread), correct? If so, I've wanted to see a post on rationality and nutrition for a while (on the benefits of high-animal fat diet for health and the rationality lessons behind why so many demonize that and so few know it).

comment by Desrtopa · 2010-12-17T23:11:14.263Z · LW(p) · GW(p)

What evidence?

If you're referring to the Atkins diet, I think that's a rather different matter from simply eating lots of bacon fat and sour cream, which doesn't preclude also eating plenty of carbohydrates.

Or worse, it might entail eating nothing else. The post isn't very precise.

Replies from: RomanDavis
comment by RomanDavis · 2010-12-17T23:19:24.561Z · LW(p) · GW(p)

Eating some is better than none, because certain nutrients in animal fat are helpful for CDC. The point that vegetarianism is over rated for the health benefits is contrarian enough here and in the wider world to make a good post.

But yes, losing other vital nutrients would be bad.

And Atkins is silly and unhealthy. Why bring it up?

Replies from: Desrtopa
comment by Desrtopa · 2010-12-17T23:40:41.596Z · LW(p) · GW(p)

Because I thought that might be what you were referring to.

My mother lost about 90 pounds on it, and her health is definitely better than it was when she was overweight, but it did have some rather unpleasant side effects (although she generally refuses to acknowledge them, since they're lost in the halo effect.)

comment by Perplexed · 2010-10-03T04:49:29.779Z · LW(p) · GW(p)

Unless you are familiar with the work of a German patent attorney named Gunter Wachtershauser, just about everything you have read about the origin of life on earth is wrong. More specifically, there was no "prebiotic soup" providing organic nutrient molecules to the first cells or proto-cells, there was no RNA world in which self-replicating molecules evolved into cells, the Miller experiment is a red herring and the chemical processes it deals with never happened on earth until Miller came along. Life didn't invent proteins for a long time after life first originated. 500 million years or so. About as long as the time from the "Cambrian explosion" to us.

I'm not saying Wachtershauser got it all right. But I am saying that everyone else except people inspired by Wachtershauser definitely got it all wrong. (70%)

Replies from: khafra, JohannesDahlstrom, wedrifid, timtyler, Jonathan_Graehl, Mass_Driver, Will_Newsome
comment by khafra · 2010-10-04T14:50:13.787Z · LW(p) · GW(p)

Meh. What are the chances of some Germanic guy sitting around looking at patents all day coming up with a theory that revolutionizes some field of science?

Replies from: BillyOblivion
comment by BillyOblivion · 2010-10-05T05:14:54.956Z · LW(p) · GW(p)

Brilliant.

comment by JohannesDahlstrom · 2010-10-03T22:29:52.427Z · LW(p) · GW(p)

You make the "metabolism first" school of thought sound like a minority contrarian position to the mainstream "genes first" hypothesis. I was under the impression that they were simply competing hypotheses with the jury being still out on the big question. That's how they presented the issue in my astrobiology class, anyway.

Replies from: Perplexed
comment by Perplexed · 2010-10-03T22:52:08.898Z · LW(p) · GW(p)

It was a minority, contrarian position just a decade ago. But Wachtershauser's position is not just "metabolism first". It is also "strictly autotrophic" and "lipid first". So I think it is still fair to call it a minority opinion.

comment by wedrifid · 2010-10-03T05:00:57.385Z · LW(p) · GW(p)

Downvoted because it approximately matches what I (literally) covered in Biology 101 a month ago. (70% seems right because to be perfectly honest I didn't pay that much attention and the Gunter guy may or may not have been relevant.)

Replies from: Perplexed
comment by Perplexed · 2010-10-03T23:44:28.964Z · LW(p) · GW(p)

Interesting. They are actually teaching this stuff now! Was the Origins material from the textbook, or from lectures? If textbook, could you name the book?

Replies from: wedrifid
comment by wedrifid · 2010-10-04T04:26:15.675Z · LW(p) · GW(p)

Was the Origins material from the textbook, or from lectures? If textbook, could you name the book?

Lectures. And the lecturer noted that the lecture notes from last year would be obsolete, since the science had changed.

comment by timtyler · 2010-10-03T13:38:19.786Z · LW(p) · GW(p)

To clarify, you do think there was an "RNA world" - but it just post-dated cell walls.

An RNA world before cell walls is really a completely ridiculous idea.

...and of course, I am obviously not going to agree with the last line. IMO, Wachtershauser came along rather late, long after the guts of the problem were sorted out.

Replies from: Perplexed
comment by Perplexed · 2010-10-03T14:11:53.465Z · LW(p) · GW(p)

To clarify, you do think there was an "RNA world" - but it just post-dated cell walls.

Yes, except that what it post-dated was cell membranes, not cell walls. The distinction is important. I do think that there was an "RNA world" stage in life's evolution when living cells could be modeled as "bags full of RNA". But I believe that there was an earlier stage when they could be modeled as simply "bags full of water and minerals" and an even earlier stage when life consisted of "patches of living bag material adhering to the surface of minerals".

there was no "prebiotic soup"

...sounds questionable, or at least very speculative: the first cells probably derived some nutrient value from at least one organic compound - not least because their cell [membranes] were probably composed of organic compounds.

Nope. No organic nutrients whatsoever. Autotrophic. This is the key idea that distinguishes Wachtershauser from almost everyone else. Yes, membrane materials are organic, but they were made (on site and just-in-time) by the first living membranes (on mineral surfaces).

...and of course, I am obviously not going to agree with the last line. IMO, Wachtershauser came along rather late, long after the guts of the problem were sorted out.

Of course. It amused me as I wrote my piece that you could write a strictly parallel contrarian position on the origin-of-life question. "Unless you are familiar with the work of Glasgow chemist Graham Cairns-Smith, everything you have read about the origin of life on earth is wrong".

Replies from: timtyler
comment by timtyler · 2010-10-03T14:50:10.028Z · LW(p) · GW(p)

I shouldn't argue the "No organic nutrients whatsoever." point too much - and indeed, I thought I deleted it from my comment pretty quickly. Yes, maybe everything organic was made from inorganic CO2 at the time of the first cells - but do we really know that with 70% confidence? No organic nutrients seems like quite a strong claim.

Replies from: Perplexed
comment by Perplexed · 2010-10-03T15:13:00.135Z · LW(p) · GW(p)

... maybe everything organic was made from inorganic CO2 at the time of the first cells - but do we really know that with 70% confidence?

Well, actually I think the carbon sources were more likely inorganic CO and inorganic HCN, with H2CO (formaldehyde) a possibility. "Organic", in this claim, means having a C-C bond. And yes, I believe it with 70% confidence. Autotrophy came first. Heterotrophy came later.

No organic nutrients seems like quite a strong claim.

It is. It is the claim which forces a kind of intellectual honesty on the rest of your origin theory. You can't just postulate that some needed chemical arrived on a handy comet or something. If you need a molecule, you must figure out the chemistry of how to make it from the materials already at hand. Wachtershauser didn't suggest vents as the site of the origin simply because vents were new and "hot" at the time of his proposal. He did so because his chemical training told him that forming carbon-carbon bonds in high yields without enzymes requires a high-pressure, high-temperature, metal-catalyzed process like Fischer-Tropsch. And then he realized that vents provided an environment where this kind of chemistry could take place naturally.

Replies from: timtyler
comment by timtyler · 2010-10-03T15:27:57.003Z · LW(p) · GW(p)

I won't argue with the "Autotrophy came first. Heterotrophy came later." However, you were talking about the origin of cells here - and they "came later" too. Before there were cells there were very likely simpler "naked" replicators - including ones on mineral surfaces.

Surely though, this radically transforms your original claim:

"Organic", in this claim, means having a C-C bond.

Surely that is not what "organic" normally means in this context! E.g. see:

http://en.wiktionary.org/wiki/organic_compound

Formaldehyde is an organic compound.

If you say "organic nutrient molecules" and it actually turns out you mean only those with C-C bonds, your audience is very likely to get the wrong end of the stick.

Replies from: Perplexed
comment by Perplexed · 2010-10-03T15:51:43.255Z · LW(p) · GW(p)

Before there were cells there were likely simpler "naked" replicators.

I believe you are wrong if you are talking about replicating information-bearing molecules or crystals. 70% confidence.

Surely though, this radically transforms your claim ...

Not really. My original claim didn't even mention autotrophy. I added it as explanation of why Wachtershauser is so completely divergent from other ideas regarding the origin.

Contrary to your reference, HCN is also considered inorganic, along with CO and CO2 and their hydrates. If you want to consider formaldehyde as an organic compound, and hence as a nutrient for a heterotroph, go ahead - I strongly doubt that it was the original carbon source in any case. 70% confidence.

Replies from: timtyler
comment by timtyler · 2010-10-03T16:16:02.720Z · LW(p) · GW(p)

Before there were cells there were likely simpler "naked" replicators.

I believe you are wrong if you are talking about replicating information-bearing molecules or crystals. 70% confidence.

Replication happens naturally, in crystal growth processes. Of course, that doesn't prove that early mineral copying processes ultimately led to modern organisms, but it makes me pretty confident of my specific statement above - maybe 90% confidence - and most of the remaining probability mass comes from panspermia and cosmic evolution scenarios - where the origin of life takes place far, far away.

Replies from: Perplexed
comment by Perplexed · 2010-10-03T17:04:03.065Z · LW(p) · GW(p)

Replication happens naturally, in crystal growth processes. Of course, that doesn't prove that early mineral copying processes ultimately led to modern organisms, but it makes me pretty confident of my specific statement above.

Ok, it is possible that there were information-bearing replicating crystals. Before organic forms of life. Totally irrelevant to LAWKI, but first. The only thing that makes me doubt that suggestion is that no one - including the abstract of the reference you provide - has given an example of an information-bearing replicating crystal. Good arguments for why that kind of thing might be possible, yes. But actual evidence of it happening somewhat naturally, no.

I've seen examples of information bearing crystals that repeat the same information layer-after-layer. And I've seen non-information-bearing crystals that actually do something comparable to reproduction (splitting, growth, splitting again). I've just never seen a paper where both were happening at the same time.

The clay theory is just not going to be taken seriously until someone has a population of clay "organisms" replicating away in a lab and then starts running long-term evolution experiments on them like Lenski is doing with bacteria.

Replies from: timtyler
comment by timtyler · 2010-10-03T17:18:33.961Z · LW(p) · GW(p)

I am puzzled by your terminology. Replication implies high-fidelity copying of information. That is what some crystals (e.g. barium ferrites) can do. It is an "information bearing replicating crystal". So, what exactly are you asking for? and why are the polytypic layer structures in barium ferrites not it?

You ask for splitting. However, one of the key insights in this area is that you can have evolution without splitting - via "vegetative reproduction":

http://originoflife.net/vegetative_reproduction/

For some plant evolution, you don't need splitting, only growth. Much the same is true for some "2D" crystals too.

Not that splitting is terribly demanding. Make anything big enough and it will break up - if only under its own weight. The real issue is whether the split introduces mutations that lead to a meltdown. That is a potential problem for 1D crystals - but 2D ones don't depend on splitting - and if there are splits there are still likely to be operational viable growth fronts after the split.

The clay theory is just not going to be taken seriously until someone has a population of clay "organisms" replicating away in a lab and then starts running long-term evolution experiments on them like Lenski is doing with bacteria.

No-one else can make life from primitive materials yet either - this requirement strikes against every OOL theory equally.

To recap, the main reason for thinking Crystalline Ancestry is true is because clay mineral crystals actually replicate patterns of reasonable size with high fidelity under plausible pre-biotic conditions (and this is the #1 requirement for any evolving system) - whereas no other pre-biotically plausible structure has been demonstrated to do so.

However, it's a reasonable request to want to see evolution based on the theory in the lab. Growing many clays in the lab is terribly difficult - and often takes forever - but success there would be interesting. However, much of the existing work has been done with "found" natural clays. They seem to be a more obvious focus - in some respects.

comment by Jonathan_Graehl · 2010-10-03T07:10:45.682Z · LW(p) · GW(p)

I agree that this is plausible. I haven't investigated, so I don't know if 70% is reasonable or not.

comment by Mass_Driver · 2010-10-03T05:12:24.435Z · LW(p) · GW(p)

Downvoted for the sheer number of reversals of what used to be my background assumptions about biology without an obvious identification of a single lever that could be used to push on all of those variables.

I am now interested in Wachtershauser, but it takes more than a good LW post to make me think that everything I know is wrong and that it was all disproved by the same person.

You have raised my belief in your proposition from near-zero to about 30%, but that's still way short of 70%.

Replies from: Perplexed, Perplexed, Will_Newsome
comment by Perplexed · 2010-10-03T05:20:16.629Z · LW(p) · GW(p)

Errh. If you are disagreeing with me, doesn't that mean you should upvote?

Replies from: Mass_Driver
comment by Mass_Driver · 2010-10-03T16:06:06.713Z · LW(p) · GW(p)

Sorry, I got confused. Duly changed.

comment by Perplexed · 2010-10-03T14:46:57.748Z · LW(p) · GW(p)

Downvoted for the sheer number of reversals of what used to be my background assumptions about biology without an obvious identification of a single lever that could be used to push on all of those variables.

I am now interested in Wachtershauser, but it takes more than a good LW post to make me think that everything I know is wrong and that it was all disproved by the same person.

Well, he hasn't disproved anything, merely offered an alternative hypothesis. A convincing one, IMHO.

But there is a "single lever". Wachtershauser believes that the origin of life was "autotrophic". Everyone else - Miller, Orgel, Deamer, Dyson, even Morowitz on his bad days, thinks that the first living things were "heterotrophic". And since defining those two terms and explaining their significance would take more work than I want to expend right now, I'll leave the explaining to wikipedia and Google. I'll be happy to answer follow-up questions, though.

Replies from: timtyler
comment by timtyler · 2010-10-03T15:00:11.293Z · LW(p) · GW(p)

Everyone else - Miller, Orgel, Deamer, Dyson, even Morowitz on his bad days, thinks that the first living things were "heterotrophic".

Er, that is certainly not true of A. G. Cairns-Smith! He had the first organisms made of inorganic compounds and getting energy from supersaturated solutions way back in the 1960s - long before Wachtershauser weighed in on the topic.

Replies from: Perplexed
comment by Perplexed · 2010-10-03T15:23:54.688Z · LW(p) · GW(p)

Cairns-Smith thinks that the first living things were clay - completely inorganic, yes. So, to include him in my listing of the deluded heterotrophic theorists, I would have to point out that he believes that the first organism incorporating organic carbon got that organic carbon from the environment (soup) rather than making it itself.

back in the 1960s - long before Wachtershauser weighed in

We are only talking about 15 years or so. And it doesn't mean much to be first. Nor to be clever. You also need to be right. Wachtershauser got the important stuff right.

Replies from: timtyler
comment by timtyler · 2010-10-03T15:50:49.604Z · LW(p) · GW(p)

You would have to point that out, yes, and it would be nicest if you could supply references. I don't remember Cairns-Smith expressing strong views on that topic.

He tended to address the entry of carbon along the lines of:

  • look, the entry of carbon came later; natural selection did it; all it needed was some possible paths, and so - since the details of what happened are lost in the mists of time - here is an example of one...

Wachtershauser got the important stuff right.

Possibly - but only if you are talking about the origin of cells. In Crystalline Ancestry, cells are seen as high tech developments that came along well after the origin of living and evolving systems - and the story of the origin of evolution and natural selection is quite different from Wachtershauser's story. From that perspective Wachtershauser was not really wrong - he just wasn't describing the actual origin, but rather some events that happened much later on.

Replies from: Perplexed, Perplexed
comment by Perplexed · 2010-10-03T16:28:19.365Z · LW(p) · GW(p)

You would have to point that out, yes, and it would be nicest if you could supply references. I don't remember Cairns-Smith expressing strong views on that topic.

He tended to address the entry of carbon along the lines of:

  • look, the entry of carbon came later; natural selection did it; all it needed was some possible paths, and so here is an example of one...

Ok, I reread Chapter 8 ("Entry of Carbon") in "Genetic Takeover". You are right that he mostly remains agnostic on the question of autotrophic vs heterotrophic. That, in itself, is remarkable and admirable. But, in his discussion of the origin of organic chirality (pp. 307-308) he seems to be pretty clearly assuming heterotrophy - he talks of selecting molecules of the desired handedness from racemic mixtures, rather than simply pointing out that the chiral crystal (flaw) structure will naturally lead to chiral organic synthesis.

Replies from: timtyler
comment by timtyler · 2010-10-03T16:41:35.222Z · LW(p) · GW(p)

Heterotrophy is kind-of allowed after you have an ecosystem of creatures that are messing about with organic chemistry as part of their living processes. At that stage there might well be an organic soup created by their waste products, decayed carcases, etc.

This autotrophic vs heterotrophic scene is your area of interest - and efforts to paint Cairns-Smith as a heterotrophic theorist strike me as a bit of a misguided smear campaign. His proposed earliest creatures are made of clay! They "eat" supersaturated mineral solutions. You can't get much less "organic" than that.

comment by Perplexed · 2010-10-03T16:03:37.534Z · LW(p) · GW(p)

Yes, from your (Cairns-Smith) viewpoint that may be what you think Wachtershauser was saying. However, what he actually said is that Cairns-Smith is wrong. Full stop.

Please, Tim, we've been through this many times. Your favorite theory and my favorite theory are completely different.

If you want to provide links to your clay origin web pages, please do so. Don't demand that I provide them with free advertising. But if I am putting your words into Cairns-Smith's mouth, then I apologize.

Replies from: timtyler
comment by timtyler · 2010-10-03T16:27:41.178Z · LW(p) · GW(p)

That is a bit of a strange response, IMO. I don't know if you can be bothered with continuing our OOL discussion here - but, as you probably know, I don't think there's any good evidence that Cairns-Smith was incorrect - from Wachtershauser - or anyone else - and if you know differently, I would be delighted to hear about it!

Maybe that's not what you are saying. Maybe you are just saying that you think Wachtershauser provided a complete story that you find parsimonious - and which doesn't require earlier stages. That would not be so newsworthy for me, I already know all that.

comment by Will_Newsome · 2010-10-03T05:31:08.172Z · LW(p) · GW(p)

Then you should upvote, not downvote!

comment by Will_Newsome · 2010-10-03T05:10:23.100Z · LW(p) · GW(p)

I have no idea whether to disagree with this or not (the Wiki god barely has any info on the guy!) but I'm tempted to downvote this anyway for being so provocative! ;)

Replies from: Perplexed
comment by Perplexed · 2010-10-03T14:35:08.217Z · LW(p) · GW(p)

Unfortunately, most of Wachtershauser's papers are behind paywalls. This paper (one of his first publications) is an exception. Ignore everything beyond the first 15 pages or so.

This New York Times article is surprisingly good for pop-science journalism.

comment by Multiheaded · 2012-04-08T08:55:21.448Z · LW(p) · GW(p)

Bioware made the companion character Anders in Dragon Age 2 specifically to encourage Anders Breivik to commit his massacre, as part of a Manchurian Candidate plot by an unknown faction that attempts to control world affairs. That faction might be somehow involved with the Simulation that we live in, or attempting to subvert it with something that looks like traditional sympathetic magic. See for yourself. (I'm not joking, I'm stunned by the deep and incredibly uncanny resemblance.)

Replies from: VAuroch, ArisKatsaris, None
comment by VAuroch · 2014-01-12T07:40:53.862Z · LW(p) · GW(p)

The resemblance is shallow at best.

comment by ArisKatsaris · 2012-04-11T13:31:02.144Z · LW(p) · GW(p)

You didn't assign a probability estimate.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-11T13:34:33.754Z · LW(p) · GW(p)

Oh. Umm... 33%!

comment by [deleted] · 2012-04-11T06:44:59.398Z · LW(p) · GW(p)

Don't joke posts ruin the point of the Irrationality Game?

In any case you are taking the wrong approach. Clearly it is ultimately the fault of the Jews because they run everything, no further thought required.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-11T12:31:15.916Z · LW(p) · GW(p)

I'm truly not joking!!! You know perfectly well that I don't share much of what's commonly known as "sanity". So to me it's worthy of totally non-ironic consideration.

Replies from: None
comment by [deleted] · 2012-04-11T14:26:52.521Z · LW(p) · GW(p)

I'm sorry for the misunderstanding. I think my brain misfired because the theory involved a video game.

Can you elaborate on it? Also, this probably isn't the only such incident you think is plausible; can you name others?

comment by dilaudid · 2010-10-13T07:40:08.086Z · LW(p) · GW(p)

There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)

Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.

Replies from: Relsqui, Richard_Kennaway
comment by Relsqui · 2010-10-13T07:54:40.221Z · LW(p) · GW(p)

I have eight computers here with 200 MHz processors and 256MB of RAM each. Thus, it would not benefit me to acquire a computer with a 1.6GHz processor and 2GB of RAM.

(I agree with your premise, but not your conclusion.)

Replies from: dilaudid
comment by dilaudid · 2010-10-13T08:11:34.905Z · LW(p) · GW(p)

To directly address your point - what I mean is that if you have 1 computer that you never use, with a 200MHz processor, I'd think twice about buying a 1.6GHz computer, especially if the 200MHz machine is suffering from depression due to its feeling of low status and worthlessness.

I probably stole this from The Economist too.

Replies from: Relsqui
comment by Relsqui · 2010-10-13T08:35:43.833Z · LW(p) · GW(p)

That depends on what you're trying to accomplish. If you're not using your 200MHz machine because the things you want to work on require at least a gig of processing power, buying the new one might be very productive indeed. This doesn't mean you can't find a good purpose for your existing one, but if your needs are beyond its abilities, it's reasonable to pursue additional resources.

Replies from: dilaudid
comment by dilaudid · 2010-10-13T11:14:02.777Z · LW(p) · GW(p)

Yeah I can see that applies much better to intelligence than to processing speed - one might think that a super-genius intelligence could achieve things that a human intelligence could not. Gladwell's Outliers (embarrassing source) seems to refute this - his analysis seemed to show that IQ in excess of 130 did not contribute to success. Geoffrey Miller hypothesised that intelligence is actually an evolutionary signal of biological fitness - in this case, intellect is simply a sexual display. So my view is that a basic level of intelligence is useful, but excess intelligence is usually wasted.

Replies from: Relsqui
comment by Relsqui · 2010-10-13T19:26:29.414Z · LW(p) · GW(p)

I'm sure that's true. The difference is that all that extra intelligence is tied up in a fallible meatsack; an AI, by definition, would not be. That was the flaw in my analogy--comparing apples to apples was not appropriate. It would have been more apt to compare a trowel to a backhoe. We can't easily parallelize among the excess intelligence in all those human brains. An AI (of the type I presume singulatarians predict) could know more information and process it more quickly than any human or group of humans, regardless of how intelligent those humans were. So, yes, I don't doubt that there's tons of wasted human intelligence, but I find that unrelated to the question of AI.

I'm working from the assumption that folks who want FAI expect it to calculate, discover, and reason things which humans alone wouldn't be able to accomplish for hundreds or thousands of years, and which benefit humanity. If that's not the case I'll have to rethink this. :)

Replies from: dilaudid
comment by dilaudid · 2010-10-14T12:00:09.796Z · LW(p) · GW(p)

I agree FAI should certainly be able to outclass human scientists in the creation of scientific theories and new technologies. This in itself has great value (at the very least we could spend happy years trying to follow the proofs).

I think my issue is that I think it will be insanely difficult to produce an AI and I do not believe it will produce a utopian "singularity" - where people would actually be happy. The same could be said of the industrial revolution. Regardless, my original post is borked. I concede the point.

comment by Richard_Kennaway · 2010-10-13T07:43:36.392Z · LW(p) · GW(p)

Did you have this in mind? Cognitive Surplus.

Replies from: dilaudid
comment by dilaudid · 2010-10-13T07:52:53.213Z · LW(p) · GW(p)

Yes - thank you for the cite.

comment by knb · 2010-10-04T22:39:24.302Z · LW(p) · GW(p)

Life on earth was seeded, accidentally or on purpose, from outer space.

Replies from: magfrump
comment by magfrump · 2010-10-05T08:14:51.065Z · LW(p) · GW(p)

No probability estimate. I assign this hypothesis some probability, but unless you list yours I can only guess as to whether it is similar to mine.

Mine is quite low, however, so upvoted.

comment by dfranke · 2010-10-13T12:55:03.501Z · LW(p) · GW(p)

Nothing that modern scientists are trained to regard as acceptable scientific evidence can ever provide convincing support for any theory which accurately and satisfactorily explains the nature of consciousness.

Replies from: RobinZ, None, MichaelVassar, dfranke
comment by RobinZ · 2010-10-13T13:02:35.680Z · LW(p) · GW(p)

Confidence level?

Replies from: dfranke
comment by dfranke · 2010-10-13T13:48:54.845Z · LW(p) · GW(p)

Let's say 65%.

comment by [deleted] · 2012-04-13T11:30:14.696Z · LW(p) · GW(p)

Might be belief hysteresis, but I am inclined towards a similar confidence level in that proposition.

comment by MichaelVassar · 2010-10-16T15:44:10.619Z · LW(p) · GW(p)

I disagree but I think that might be considered a reasonable probability by most people here.

comment by dfranke · 2010-10-13T12:59:48.711Z · LW(p) · GW(p)

Furthermore: if the above is false, it will be proven such within thirty years. If the above is true, it will become the majority position among both natural scientists and academic philosophers within thirty years. Barring AI singularity in both cases. Confidence level 70%.

comment by Eugine_Nier · 2010-10-03T22:34:12.068Z · LW(p) · GW(p)

Conditional on this universe being a simulation, the universe doing the simulating has laws vastly different from our own. For example, it might contain more than 3 extended spatial dimensions, or bear a similar relation to our universe as our universe does to Second Life. 99.999%

Replies from: wedrifid, Snowyowl, army1987, Mass_Driver, Will_Newsome
comment by wedrifid · 2010-10-04T04:45:21.158Z · LW(p) · GW(p)

Upvoted for excessive use of nines. :)

(i.e. gross overconfidence.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-04T05:14:50.530Z · LW(p) · GW(p)

(i.e. gross overconfidence.)

I was originally going to include an additional 9, but decided I should compensate for overconfidence bias. :)

But, seriously, I don't understand why people are so reluctant to quote large probabilities. For some statements, e.g., "the sun will rise tomorrow", 99.999% seems way underconfident.

Replies from: wedrifid
comment by wedrifid · 2010-10-04T06:29:06.566Z · LW(p) · GW(p)

I wouldn't have said the number of nines indicated overconfidence if you were talking about the sun rising. I do not believe you have enough evidence to reach that level of certainty on this subject. I would include multiple nines in my declaration of confidence in that claim.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-06T07:04:27.291Z · LW(p) · GW(p)

You think there's a 999,999/100,000 chance the sun will rise tomorrow? I think you may be overconfident here...

comment by Snowyowl · 2010-10-04T21:23:31.381Z · LW(p) · GW(p)

Upvoted for disagreement. The most detailed simulations our current technology is used to create (namely, large networks of computers operating in parallel) are created for research purposes, to understand our own universe better. Galaxy/star formation, protein folding, etc. are fields where we understand enough to make a simulation but not enough that such a simulation is without value. A lot of our video games have three spatial dimensions, one temporal one, and roughly Newtonian physics. Even Second Life (which you named in your post) is designed to resemble our universe in certain aspects.

Basically, I fail to see why anyone would create such a detailed simulation if it bore absolutely no resemblance to reality. Some small differences, yes (I bet quantum mechanics works differently), but I would give a ~50% chance that, conditional on our universe being a simulation, the parent universe has 3 spatial dimensions, one temporal dimension, matter and antimatter, and something that approximates to General Relativity.

Replies from: NancyLebovitz, bogdanb
comment by NancyLebovitz · 2010-10-05T16:45:38.982Z · LW(p) · GW(p)

This is much less than obvious-- if the parent universe has sufficient resources, it's entirely plausible that it would include detailed simulations for fun-- art or gaming or some costly motivation that we don't have.

Replies from: Snowyowl
comment by Snowyowl · 2010-10-05T17:11:56.218Z · LW(p) · GW(p)

True. I would estimate that our universe resembles the parent universe with probability ~50%.

Replies from: sharpneli
comment by sharpneli · 2010-10-07T13:33:58.538Z · LW(p) · GW(p)

Considering how much stuff like Conway's Game of Life, which bears no resemblance to our universe, gets played, I'd put the probability much lower.

Whenever you run anything that simulates anything Turing-complete (OK, a finite state machine is actually enough, due to the finite amount of information storage even in our universe), there is a chance for practically anything to happen.

comment by bogdanb · 2011-02-19T22:17:17.398Z · LW(p) · GW(p)

Basically, I fail to see why anyone would create such a detailed simulation if it bore absolutely no resemblance to reality.

I have seen simulators of Conway’s Game of Life (or similar) that contain very complex things, including an actual Turing machine.

I could see someone creating a simulator for CGL that simulates a Turing machine that simulates a universe like ours, at least as a proof of concept. With ridiculous amounts of computation available I’m quite sure they’d run the inner universe for a few billion years.

If by accident a civilization arose in the bottom universe and found some way of “looking above”, it would find a CGL universe before finding the one similar to theirs.
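
For concreteness, the entire rule set under discussion fits in a few lines. This is only a minimal sketch (my own toy code, not anything from an existing CGL package):

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life; live_cells is a set of (x, y) pairs."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbours,
    # or if it is currently alive and has exactly 2.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider -- one of the simple patterns out of which arbitrarily complex
# machinery, including Turing machines, has been assembled.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

That such a trivial rule can host a Turing machine is the whole point: the inner universe's physics needn't look anything like the rule that runs it.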

comment by A1987dM (army1987) · 2012-04-11T13:22:41.195Z · LW(p) · GW(p)

I'm supposed to downvote if I think the probability of that is >= 99.999% and upvote otherwise? I'm upvoting, but I still think the probability of that is > 90%.

Replies from: Salivanth
comment by Salivanth · 2012-04-13T09:32:49.559Z · LW(p) · GW(p)

Army1987: Not sure what the rules are for comments replying to the original, but hell. Voted down for agreement.

Replies from: wedrifid
comment by wedrifid · 2012-04-13T10:37:51.798Z · LW(p) · GW(p)

(I think we just vote normally in these replies. I agree with army too.)

Replies from: thomblake
comment by thomblake · 2012-04-13T15:02:29.649Z · LW(p) · GW(p)

Why in the world would the parent be downvoted? I'm having difficulty unraveling the paradox.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T15:35:53.789Z · LW(p) · GW(p)

Well, someone might agree with wedrifid (that second-order comments are to be voted on normally) but still disapprove of his comment for reasons other than disagreement (for example, thinking it clarifies what would otherwise have been a valuable point of confusion), and downvote (normally) on that basis.

Replies from: Salivanth
comment by Salivanth · 2012-04-13T16:00:08.343Z · LW(p) · GW(p)

Okay, given the confusion, I've retracted my downvote. I've also seen a comment get about 27 karma on this thread replying to another post, and that comment was certainly not massively irrational, so I assume we vote normally if it's not a first-order comment.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T16:19:13.046Z · LW(p) · GW(p)

I'm not actually sure why there was ever confusion. From the OP: "comment voting works normally for comment replies to other comments."

comment by Mass_Driver · 2010-10-06T05:34:06.725Z · LW(p) · GW(p)

I'd be with you with that much confidence if the proposition were "the top layer of reality has laws vastly different from our own."

One level up, there's surely at least an 0.1% chance that Snowyowl is right.

comment by Will_Newsome · 2010-10-03T22:52:04.446Z · LW(p) · GW(p)

I disagree with this one more than any other comment by far. Have you looked into Tegmark level 4 cosmology? It's really important to take into account concepts like measure and the utility functions of likely simulating agents when reasoning about this kind of thing. Upvoted.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-03T22:59:33.542Z · LW(p) · GW(p)

My reasoning is that it would take more than a universe's worth of computronium to completely simulate a comparable universe.

One could argue that they're taking shortcuts with, e.g., the statistics of bulk matter, but I think we'd notice the edge cases caused by something like that.

Replies from: sfb
comment by sfb · 2010-10-04T23:18:20.908Z · LW(p) · GW(p)

My reasoning is that it would take more than a universe's worth of computronium to completely simulate a comparable universe.

In realtime, maybe, but what if we're running at one simulated planck time per many time units of calculation?

comment by [deleted] · 2012-04-13T11:48:23.912Z · LW(p) · GW(p)

I believe that the universe exists tautologically as a mathematical entity and that from the complete mathematical description of the universe every physical law can be derived, essentially erasing the distinction of map and territory. Roughly akin to the Tegmark 4 hypothesis, and I have some very intuitively obvious arguments for this which I will post as a top-level article at some point. Virtual certainty (99.9%).

Replies from: Zetetic
comment by Zetetic · 2012-04-17T00:39:44.576Z · LW(p) · GW(p)

essentially erasing the distinction of map and territory

This idea has been implied before and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.

In more detail:

Firstly, even if you take some sort of Platonic view where we have access to all the math, you still have to properly calibrate your map to figure out what part of the territory you're in. In this case you could think of calibrating your map as applying an appropriate automorphism, so the map/territory distinction is not dissolved.

Second, the first view is wrong, because human brains do not contain or have access to anything approaching a complete mathematical description of the level 4 multiverse. At best a brain will contain a mapping of a very small part of the territory in pretty good detail, and also a relatively vague mapping that is much broader. Brains are not logically omniscient; even given a complete mathematical description of the universe, the derivations are not all going to be accessible to us.

So the map territory distinction is not dissolved, and in particular you don't somehow overcome the mind projection fallacy, which is a practical (rather than philosophical) issue that cannot be explained away by adopting a shiny new ontological perspective.

Replies from: None
comment by [deleted] · 2012-04-17T05:49:21.097Z · LW(p) · GW(p)

It is true that a "shiny" new ontological perspective changes little. Practical intelligences are still Bayesians, for information-theoretic reasons. What my rather odd idea addresses is specifically what one might call the laws of physics and the mystery of the first cause.

And if one did know the math behind the universe, the only thing one might get is a complete theory of QM.

comment by andrewbreese · 2010-10-14T20:22:54.595Z · LW(p) · GW(p)

Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.

Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.

There are world-changing status-move tricks seen in recent history that no one of consequence uses today, and not because they wouldn't work. (88%) Top-of-the-First-World moderns should unearth, update & reapply lost status moves for managing much of the world. (74%) Wealthy, powerful rationalists should WIN! Just as other First Worlders should not retard FAI, so the developing world should not fester, struggle, agitate in ways that seriously increase existential risks.

Replies from: Multiheaded, NancyLebovitz, None
comment by Multiheaded · 2012-04-15T08:52:03.821Z · LW(p) · GW(p)

I don't understand. By what plausible mechanism could such a disastrous loss of knowledge happen specifically NOW?

comment by NancyLebovitz · 2012-04-11T15:19:03.165Z · LW(p) · GW(p)

Much of this know-how was even widely applied during the lifetimes of some now living. Our simple loss of such important knowledge flies in the face of deep assumptions in the water we all grew up in: progressivism, that knowledge is always increasing, that at least the best First World cultures since the Renaissance have always moved forward.

The good news is that some version of this knowledge keeps getting rediscovered.

The bad news is that the knowledge seems to be mostly tacit and (so far) unteachable.

comment by [deleted] · 2012-04-11T14:48:41.730Z · LW(p) · GW(p)

Valuable -- likely vital -- cooperative know-how for hugely changing the world has been LOST to the sands of time. (94%) Likely examples include the Manhattan Project, the Apollo program, genuinely uplifting colonialism, building the pyramids without epic hardships or complaints.

Downvoted because I think this is very plausible.

comment by Eneasz · 2010-10-06T21:14:06.583Z · LW(p) · GW(p)

Predicated on MWI being correct, and Quantum Immortality being true:

It is most advantageous for any individual (although not necessarily for society) to take as many high-risk high-reward opportunities as possible as long as the result of failure is likely to be death. 90%

Replies from: magfrump, Risto_Saarelma, wedrifid
comment by magfrump · 2010-10-06T23:24:27.546Z · LW(p) · GW(p)

Phrased more precisely: it is most advantageous for the quantum immortalist to attempt highly unlikely, high-reward activities, after making a stern precommitment to commit suicide in a fast and decisive way (decapitation?) if they don't work out.

This seems like a great reason not to trust quantum immortality.

comment by Risto_Saarelma · 2010-10-07T13:10:50.395Z · LW(p) · GW(p)

Not sure how I should vote this. Predicated on quantum immortality being true, the assertion seems almost tautological, so that'd be a downvote. The main question to me is whether quantum immortality should be taken seriously to begin with.

However, a different assertion that says that in case MWI is correct, you should assume quantum immortality works and try to give yourself anthropic superpowers by pointing a gun to your head would make for an interesting rationality game point.

Replies from: Eneasz
comment by Eneasz · 2010-10-07T16:03:36.883Z · LW(p) · GW(p)

The main question to me is whether quantum immortality should be taken seriously to begin with.

Perhaps a separate vote on that then?

comment by wedrifid · 2010-10-07T17:25:12.361Z · LW(p) · GW(p)

Quantum Immortality being true:

Which way do I vote things that aren't so much wrong as they are fundamentally confused?

Thinking about QI as something about which to ask 'true or false?' implies not having fully grasped the implications of (MWI) quantum mechanics on preference functions. At the very least the question would need to be changed to 'desired or undesired'.

Replies from: Nisan, Eneasz
comment by Nisan · 2010-10-10T20:53:50.072Z · LW(p) · GW(p)

So, the question to ask is whether quantum immortality ought to be reflected in our preferences, right?

It's clear that evolution would not have given humans a set of preferences that anticipates quantum immortality. The only sense in which I can imagine it to be "true" is if it turns out that there's an argument that can convince a sufficiently rational person that they ought to anticipate quantum immortality when making decisions.

(Note: I have endorsed the related idea of quantum suicide in the past, but now I am highly skeptical.)

Replies from: jimrandomh
comment by jimrandomh · 2010-10-10T21:01:35.719Z · LW(p) · GW(p)

My strategy is to behave as though quantum immortality is false until I'm reasonably sure I've lost at least 1-1e-4 of my measure due to factors beyond my control, then switch to acting as though quantum immortality works.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-10T21:28:09.044Z · LW(p) · GW(p)

If you lose measure with time, you'll lose any given amount given enough time. It's better to follow a two-outcome lottery: for one outcome, of probability 1-1e-4, you continue business as usual; otherwise you act as if quantum suicide preserves value.
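
To spell out the first sentence (a minimal sketch, assuming a constant per-period loss rate ε, which is my notation rather than anything stated above):

$$ m(n) = m_0\,(1-\varepsilon)^n \to 0 \quad (n \to \infty), $$

$$ m(n) < \delta\, m_0 \quad \text{once} \quad n > \frac{\ln \delta}{\ln(1-\varepsilon)}, $$

so any fixed threshold, including the 1e-4 above, is eventually crossed no matter how small ε is.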

comment by Eneasz · 2010-10-08T14:38:01.345Z · LW(p) · GW(p)

I can't think of any purely self-interested reason why any individual should care about their measure (I grant there are altruistic reasons).

Replies from: wedrifid
comment by wedrifid · 2010-10-09T06:48:02.035Z · LW(p) · GW(p)

Do you think there is a difference between what you would care about before you jumped in the box to play with Schrodinger's cat and what you would care about after?

Replies from: Eneasz
comment by Eneasz · 2010-10-10T14:24:50.104Z · LW(p) · GW(p)

Yes, but it's unclear why I should.

comment by vvineeth4u · 2010-10-04T18:01:02.699Z · LW(p) · GW(p)

Talent is mostly a result of hard work, passion and sheer dumb luck. It's more nurture than nature (genes). People who are called born-geniuses more often than not had better access to facilities at the right age while their neural connections were still forming. (~90%)

Update: OK. It seems I have to substantiate. Take the case of Barack Obama. Nobody would've expected a black guy to become the US President 50 years ago. Or take the case of Bill Gates, Bill Joy or Steve Jobs. They just happened to have the right kind of technological exposure at an early age and were ready when the technology boom arrived. Or take the case of mathematicians like Fibonacci, Cardano, the Bernoulli brothers. They were smart. But there were other smart mathematicians as well. What separates them is the passion and the hard work and the time when they lived and did the work. A century earlier, they would've died in obscurity after being tried and tortured for blasphemy. Take Mozart. He didn't start making beautiful original music until he was twenty-one, by which time he had enough musical exposure that there was no one to match him. Take Darwin and think what he would have become if he hadn't boarded the Beagle. He would have been some pastor studying bugs and would've died in obscurity.

In short a genius is made not born. I'm not denying that good genes would help you with memory and learning, but it takes more than genes to be a genius.

Replies from: erratio, Will_Sawin, Risto_Saarelma, Perplexed, Scott78704
comment by erratio · 2010-10-04T19:23:52.180Z · LW(p) · GW(p)

I was with you right up until that second sentence. And then I thought about my sister who was speaking in full sentences by 1 and had taught herself to read by 3.

comment by Will_Sawin · 2010-10-04T18:23:31.489Z · LW(p) · GW(p)

The level of genius of geniuses, especially the non-hardworking ones, is too high & rare to be explained entirely by this.

Replies from: magfrump
comment by magfrump · 2010-10-05T08:19:53.267Z · LW(p) · GW(p)

Though I should talk to others about this as it is testable, I have seen evidence of affective intelligence spirals. Faith in oneself and hard work lead to success and a work ethic, making it easier to have faith and keep working.

I would expect this hypothesis (conditional on affective genius cycles which are more readily testable) to predict MORE "geniuses of geniuses," not fewer.

comment by Risto_Saarelma · 2010-10-05T16:10:42.400Z · LW(p) · GW(p)

Could this be more precisely rephrased as, "for a majority of people, say 80%, there would have been a detailed sequence of life experiences that are not extraordinarily improbable or greatly unlike what you would expect to have in a 20th century first world country, which would have resulted in their becoming what is regarded as a genius by adulthood"?

Replies from: magfrump
comment by magfrump · 2010-10-06T23:56:42.582Z · LW(p) · GW(p)

I would interpret in the other direction;

"For people generally regarded as geniuses, likely 100% of them, there is a set of life experiences which is not extraordinarily improbable or greatly unlikely which would have resulted in them not being regarded as geniuses by at least 99% of those who regard them as geniuses."

Those figures might need to be adjusted for people who, for example, are regarded as geniuses by less than 100 people or more than ten million people.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-07T09:53:29.675Z · LW(p) · GW(p)

I don't see how anyone would disagree with that formulation, since there are vastly more ways to fail than to succeed.

The debated idea is that most people, due to the genetic lottery, couldn't aspire to genius-level achievement no matter what their (reasonable) circumstances. Yours seems to be directed at a stance which completely dismisses the "dumb luck" part, after conception, of people ending up being considered a genius. I haven't seen anyone who thinks somewhat unusual genetics are probably a necessary precondition for genius for humans as they are today taking that stance.

Replies from: magfrump
comment by magfrump · 2010-10-07T17:04:59.996Z · LW(p) · GW(p)

I agree, thinking about my post again it is much weaker and not really useful to the discussion.

Although I did have the purpose of conflicting with some fictional evidence; for example, vampires always turn out to be rich, and in "Deepness in the Sky" Pham Nuwen is said to have built up a trillion-dollar fortune out of nothing after being abandoned on a planet. These sorts of things tend to imply that, regardless of a person's circumstances, if they are smart enough they can work their way out.

It's somewhat distinct in that the fictional characters have a basis to build upon whereas a newborn does not, but if anyone is updating on fictional evidence they should stop.

comment by Perplexed · 2010-10-04T18:58:34.638Z · LW(p) · GW(p)

Upvoting, even though I agree with the first sentence. But I disagree with the rest because I'm pretty sure that hard work and passion have a strong genetic component as well.

comment by Scott78704 · 2010-10-06T14:53:00.727Z · LW(p) · GW(p)

What does 'sheer dumb luck' mean?

comment by [deleted] · 2012-12-29T23:04:38.085Z · LW(p) · GW(p)

Before the universe, there had to have been something else (i.e. there couldn't have been nothing and then something). 95% That something was conscious. 90%

comment by homunq · 2010-10-14T02:09:03.097Z · LW(p) · GW(p)

The most advanced computer that it is possible to build with the matter and energy budget of Earth, would not be capable of simulating a billion humans and their environment, such that they would be unable to distinguish their life from reality (20%). It would not be capable of adding any significant measure to their experience, given MWI.(80%, which is obscenely high for an assertion of impossibility about which we have only speculation). Any superintelligent AIs which the future holds will spend a small fraction of their cycles on non-heuristic (self-conscious) simulation of intelligent life.(Almost meaningless without a lot of defining the measure, but ignoring that, I'll go with 60%)

NOT FOR SCORING: I have similarly weakly-skeptical views about cryonics, the imminence and speed of development/self-development of AI, how much longer Moore's law will continue, and other topics in the vaguely "singularitarian" cluster. Most of these views are probably not as out of the LW mainstream as it would appear, so I doubt I'd get more than a dozen or so karma out of any of them.

I also think that there are people cheating here, getting loads of karma for saying plausibly silly things on purpose. I didn't use this as my contrarian belief, because I suspect most LWers would agree that there are at least some cheaters among the top comments here.

Replies from: MattMahoney, MichaelVassar
comment by MattMahoney · 2011-04-26T16:01:43.220Z · LW(p) · GW(p)

I disagree because a simulation could program you to believe the world was real and believe it was more complex than it actually was. Upvoted for under confidence.

comment by MichaelVassar · 2010-10-16T15:42:49.881Z · LW(p) · GW(p)

Do you mean unable with any scientific instrumentation that they could build, unable with careful attention, or unlikely to casually?

Are you only interested in branches from 'this' world in terms of measure rather than this class of simulation?

What's your take on Moore's Law, in detail?

comment by Apprentice · 2010-10-05T19:44:25.949Z · LW(p) · GW(p)

The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)

Replies from: Mass_Driver, Vladimir_M, Ledfox
comment by Mass_Driver · 2010-10-06T05:30:14.655Z · LW(p) · GW(p)

Far too confident.

The typical Congressperson is decent rather than cruel, honest rather than corrupt, smart rather than dumb, and dutiful rather than selfish, but the conjunction of all four positive traits probably only occurs in about 60% of Congresspeople -- most politicians have some kind of major character flaw.

I'd put the odds that "the vast majority" of Congresspeople pass all four tests, operationalized as, say, 88% of Congresspeople, at less than 10%.

Replies from: Apprentice
comment by Apprentice · 2010-10-06T13:50:01.778Z · LW(p) · GW(p)

All right, I'll try to mount a defence.

I would be modestly surprised if any member of Congress has an IQ below 100. You just need to have a bit of smarts to get elected. Even if the seat you want is safe, i.e. repeatedly won by the same party, you likely have to win a competitive primary. To win elections you need to make speeches, answer questions, participate in debates and so on. It's hard. And you'll have opponents that are ready to pounce on every mistake you make and try to make a big deal out of it. Even smart people make lots of mistakes and say stupid things when put on the spot. I doubt a person of below average intelligence even has a chance.

Even George W. Bush, who's said and done a lot of stupid things and is often considered dim for a politician, likely has an IQ above 120.

As for decency and honesty, a useful rule of thumb is that most people are good. Crooked people are certainly a significant minority but most of them don't hide their crookedness very well. And you can't be visibly crooked and still win elections. Your opponents are motivated to dig up the dirt on you.

As for honestly trying to serve their country, I admit that this is a bit tricky. Congresspeople certainly have a structural incentive to put the interests of their district above those of their country. But they are not completely short-sighted and neither are their constituents. Conditions in congressional district X are very dependent on conditions in the US as a whole. So I do think congresspeople try to honestly serve both their district and their country.

Non-corruption is again a bit tricky but here I side with Matt Yglesias and Paul Waldman:

The truth, however, is that Congress is probably less corrupt than at any point in our history. Real old-fashioned corruption, of the briefcase-full-of-cash kind, is extremely rare (though it still happens, as with William Jefferson, he of the $90,000 stuffed in the freezer).

Real old-school corruption like you have in third world countries and like you used to have more of in Congress is now very rare. There's still a real debate to be had about the role of lobbyists, campaign finance law, structural incentives and so on but that's not what I'm talking about here.

Are there still some bad apples? Definitely. But I stand by my view that the vast majority are not.

Replies from: Scott78704, magfrump
comment by Scott78704 · 2010-10-06T14:50:37.665Z · LW(p) · GW(p)

Conflating people with politicians is an egregious category error.

comment by magfrump · 2010-10-06T23:36:59.212Z · LW(p) · GW(p)

If by not-corrupt you meant "would consciously and earnestly object to being offered money for the explicit purpose of pursuing a policy goal that they perceived as not in the favor of their electorate or the country" and by "above-average intelligence" you meant "IQ at least 101" then I would downvote for agreement.

But if you meant "tries to assure that their actions are in the favor of their constituents and country, and monitors their information diet to this end" and "IQ above 110 and conscientiousness above average" then I maintain my upvote.

When I think of not-corrupt I think of someone who takes care not to betray people, rather than someone who does not explicitly betray them. When I think "above average intelligence" I think of someone who regularly behaves more intelligently than most, not someone who happens to be just to the right of the bell curve.

Replies from: Apprentice, bogdanb
comment by Apprentice · 2010-10-07T09:19:46.467Z · LW(p) · GW(p)

Point taken. And I concede that there are probably some congressmen with 100<IQ<110. But my larger point, which Vladimir made a bit more explicit, is that contrary to popular belief the problems of the USA are not caused by politicians being unusually stupid or unusually venal. I think a very good case can be made that politicians are less stupid and less venal than typical people - the problems are caused by something else.

Replies from: magfrump
comment by magfrump · 2010-10-07T16:55:30.171Z · LW(p) · GW(p)

I would certainly agree that politicians are unlikely to be below the mean level of competence, since they must necessarily run a campaign, be liked by a group of people, etc. I would be surprised if most politicians were very far from the median, although in the bell curve of politician intelligence there is probably a significant tail to the high-IQ side and a very small tail to the low-IQ side.

I would also agree that blaming politicians' stupidity for problems is, at the very least, a poor way of dealing with problems, which would be much better addressed with reform of our political systems; by, say, abolishing the senate or some kind of regulation of party primaries.

At the very least I'm not willing to give up on thinking that there are a lot of dumb and venal politicians, but I am willing to cede that that's not really a huge problem most of the time.

Replies from: wnoise, wedrifid
comment by wnoise · 2010-10-08T05:34:30.579Z · LW(p) · GW(p)

(Assuming US here). Abolishing the senate seems to be an overreaction at this point, though some reforms of how it does business certainly should be in order.

I think one of the biggest useful changes would be to reform voting so that the public gets more bits of input, by switching to approval or Condorcet style voting.

Replies from: magfrump, wedrifid
comment by magfrump · 2010-10-08T06:55:56.165Z · LW(p) · GW(p)

Yes, US.

You say that abolishing the senate seems to be an overreaction. Can you point to specific cases where having a second legislative house, wherein representatives of 14% of the population (the 20 least populous states) can stop any action whatsoever from being taken has actually had a use?

I'm sure that you can, but I'm also fairly sure that it's a poorly designed system and its best defense is status quo bias rather than effective governance.

Maybe I'm also biased, coming from California, given that people from Wyoming have literally 68 times as much representation in the Senate as I do.

You're probably right in suggesting a change of voting system. Basically anything that's not "first past the post" would be vastly better. But that doesn't make our senate worthwhile.

I'm going to precommit to not making any further posts on this topic because politics will kill my mind.

Replies from: wnoise
comment by wnoise · 2010-11-15T19:39:50.294Z · LW(p) · GW(p)

Can you point to specific cases where having a second legislative house, wherein representatives of 14% of the population (the 20 least populous states) can stop any action whatsoever from being taken has actually had a use?

It's rather difficult to find good examples. News coverage of bills that don't pass is much harder to find. There's an additional complication in that any given case where I think it was a fantastic thing that a bill didn't pass is as likely to be interpreted by someone else as a damn shame.

I'm also fairly sure that it's a poorly designed system and its best defense is status quo bias rather than effective governance.

I agree it's a poorly designed system. There absolutely are better ways of doing things. But I don't know entirely which they are, and there are far more ways of making the system worse than better. I'm just not convinced that abolishing is necessarily an improvement.

It's hard to design well-functioning political systems. Just as it's hard to design any complicated interacting system with many parts. Note too that the system is not just the formal rules, either, but includes the traditions about what is acceptable. These evolved in tandem. As an example of what can happen when they don't, many Latin American countries borrowed heavily from the formal structure of the U.S. and then promptly slid into dictatorships.

There's a computer programming adage that any complex working system was created by evolving a less-complex working system, rather than writing from scratch. I'd rather see incremental reform than large changes, barring an absolute necessity. Most of what the U.S. Congress does is not terribly time sensitive. It just doesn't matter if most legislative tweaks get passed this month, or even this year. The budget is admittedly a very important exception.

(I too am from California, though I don't currently live there. And yeah, the overrepresentation of "flyover country" is annoying. I would prefer the second chamber to be allocated differently than it currently is, but I still think two chambers is better than one, if for nothing else than slightly reducing groupthink.)

comment by wedrifid · 2010-10-08T06:12:56.812Z · LW(p) · GW(p)

I think one of the biggest useful changes would be to reform voting so that the public gets more bits of input, by switching to approval or Condorcet style voting.

What do you use currently? Something worse than approval? Tell me it isn't "First Past the Post"!

Condorcet voting systems seem like a good option. We've been using Instant Runoff Voting here (Australia) since before we federated but it seems like Condorcet would be a straightforward upgrade. The principle ('preference voting') is the same but Condorcet looks like it would better handle the situation where your first preference is (for example) the 2nd most popular candidate.

Replies from: wnoise
comment by wnoise · 2010-10-08T06:35:43.008Z · LW(p) · GW(p)

What do you use currently? Something worse than approval? Tell me it isn't "First Past the Post"!

Why are you asking me to lie?

A proportional-representation system just won't fly in (most of) the U.S. I certainly don't like the enhanced party-discipline it tends to reinforce.

Although in theory approval is subject to most of the same strategic voting problems as FPTP, STV/IRV, and Borda count, in practice, approval works quite well. It's simpler to explain and count compared to Condorcet, and for n candidates requires only n counts instead of the n(n-1)/2 counts that Condorcet would.

(I do regularly run votes for my smallish, intelligent gaming group, and there we do use Condorcet to e.g. pick the next game and who's running it -- though usually as a nice summary for establishing consensus).
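
To illustrate the counting difference, here is a minimal sketch (hypothetical helper functions of my own, not code from any real election system): approval needs one tally per candidate, Condorcet one tally per pair.

```python
from itertools import combinations

def approval_counts(ballots, candidates):
    """Approval: each ballot is a set of approved candidates -> n counts."""
    counts = {c: 0 for c in candidates}
    for ballot in ballots:
        for c in ballot:
            counts[c] += 1
    return counts

def condorcet_winner(ballots, candidates):
    """Condorcet: each ballot is a full ranking (best first) -> n*(n-1)/2 pairwise races."""
    pairwise_wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
        if a_over_b * 2 > len(ballots):
            pairwise_wins[a] += 1
        elif a_over_b * 2 < len(ballots):
            pairwise_wins[b] += 1
    # A Condorcet winner beats every other candidate head to head; a cycle means there is none.
    winners = [c for c in candidates if pairwise_wins[c] == len(candidates) - 1]
    return winners[0] if winners else None
```

(With three candidates and the classic cyclic ballots A>B>C, B>C>A, C>A>B, condorcet_winner returns None -- exactly the case where determining a winner needs extra rules.)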

Replies from: wedrifid
comment by wedrifid · 2010-10-08T08:27:21.700Z · LW(p) · GW(p)

Although in theory approval is subject to most of the same strategic voting problems as FPTP, STV/IRV, and Borda count, in practice, approval works quite well.

You're comparing approval favorably to IRV along dimensions related to strategic voting? That seems bizarre to me. Thinking of cases in which to vote strategically with IRV is relatively difficult - it very rarely matters and only changes the payoffs marginally. With approval voting strategic voting is more or less necessary to vote effectively. You need to know where to draw the line on what could have otherwise been a preference ordering in order to minimise the loss of your preference information due to the system.

I probably wouldn't bother with Condorcet if not for the ability to use computers to do the counting. IRV is much simpler to count by hand. "OK guys. This candidate is out. Let's take this box, cross off the top name and sort them again."

Replies from: wnoise
comment by wnoise · 2010-10-09T04:29:40.863Z · LW(p) · GW(p)

You're comparing approval favorably to IRV along dimensions related to strategic voting?

Yep. Strategic voting for IRV becomes relevant as soon as the third-ranked candidate becomes competitive, and essentially gives you first-past-the-post behavior. It's less likely to encourage strategic voting than FPTP, and this is definitely important in practice, but it still falls under the Gibbard-Satterthwaite theorem. See, for example, http://minguo.info/election_methods/irv/

It's true that optimally setting a cut-off in approval is part of the strategy. But there is never an incentive to lie and approve a lesser-favored candidate over a more-favored one. The second is far more informationally damaging. (And I think it is sometimes easier to just measure each candidate against a cut-off rather than doing a full ranking.)

I probably wouldn't bother with Condorcet if not for the ability to use computers to do the counting. IRV is much simpler to count by hand.

I'd describe that slightly differently -- Condorcet is easier to count by hand -- it's just the pairwise races that matter. Determining the winner from the counts involves a bit of skull sweat. With IRV, the counting proper needs a separate bucket for each permutation, but it is easier to analyze by hand and determine the winner. YMMV on whether this is a useful distinction.
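
For comparison, the IRV count itself is short to state (again only a hedged sketch of my own, with ties broken arbitrarily):

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: each ballot is a ranking (best first); eliminate the weakest
    first-preference candidate until someone has a majority of the remaining ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        firsts = Counter(b[0] for b in ballots if b)
        total = sum(firsts.values())
        leader, votes = firsts.most_common(1)[0]
        if votes * 2 > total:
            return leader
        loser = min(firsts, key=firsts.get)   # tie-breaking is arbitrary in this sketch
        ballots = [[c for c in b if c != loser] for b in ballots]
```

(This mirrors the "cross off the top name and sort again" hand procedure rather than the permutation-bucket bookkeeping, but the result is the same.)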

comment by wedrifid · 2010-10-07T17:09:19.995Z · LW(p) · GW(p)

by, say, abolishing the senate

I don't care what you do over there, so long as you don't try that over here.

comment by bogdanb · 2011-02-19T22:08:55.189Z · LW(p) · GW(p)

About the first paragraph: does your definition include in “corrupt” people who do not object in that situation because they believe that the benefit to the country of receiving the money (because they’d be able to use it for good things) exceeds the damage done to the country by whatever they’re asked to do?

I ask because I suspect many people in high positions have an honest but incorrectly high opinion about their worth to whatever cause they’re nominally supporting. (E.g., “without this money I’ll lose the election and the country would be much worse off because the other guy is evil”.)

Replies from: magfrump
comment by magfrump · 2011-02-20T21:23:56.250Z · LW(p) · GW(p)

I think that having damagingly uninformed opinions about the values of your actions (e.g. "I'll lose the election and the other guy is evil") counts as either corrupt (in terms of not monitoring information diet to take care not to betray people) or stupid (in terms of being unable to do so.)

If someone were to accept significant bribes, and then, say, donate all of the money to a highly efficient charity such as SIAI, NFP, or VillageReach, after doing a half-hour or longer calculation involving spreadsheets, then I might not count them as corrupt. However I think the odds that this has actually EVER occurred are practically insignificant.

comment by Vladimir_M · 2010-10-06T21:32:30.089Z · LW(p) · GW(p)

Apprentice:

The vast majority of members of both houses of the US congress are decent, non-corrupt people of above average intelligence honestly trying to do good by their country. (90%)

Downvoted for agreement.

However, I must add that it would be extremely fallacious to conclude from this fact that the country is being run competently and not declining or even headed for disaster. This fallacy would be based on the false assumption that the country is actually run by the politicians in practice. (I am not arguing for these pessimistic conclusions, at least not in this context, but merely that given the present structure of the political system, optimistic conclusions from the above fact are generally unwarranted.)

Replies from: Apprentice
comment by Apprentice · 2010-10-06T21:44:45.777Z · LW(p) · GW(p)

I absolutely agree with you.

comment by Ledfox · 2010-10-10T19:57:48.616Z · LW(p) · GW(p)

The "Meno" demands a down-vote from me, but only in this game.

comment by Wrongnesslessness · 2012-04-13T17:02:12.548Z · LW(p) · GW(p)

All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

Replies from: Incorrect, TimS, ArisKatsaris
comment by Incorrect · 2012-04-13T19:01:32.750Z · LW(p) · GW(p)

All existence is intrinsically meaningless

I'm trying to figure out what this statement means. What would the universe look like if it were false?

Replies from: TheOtherDave, Locaha, thomblake
comment by TheOtherDave · 2012-04-13T20:12:54.521Z · LW(p) · GW(p)

In context, I took it to predict something like "Above a certain limit, as a system becomes more intelligent and thus more able to discern the true nature of existence, it will become less able to motivate itself to achieve goals."

comment by Locaha · 2014-01-21T17:09:27.628Z · LW(p) · GW(p)

I'm trying to figure out what this statement means.

You can't. We live in an intrinsically meaningless universe, where all statements are intrinsically meaningless. :-)

comment by thomblake · 2012-04-13T19:11:51.445Z · LW(p) · GW(p)

I'm not sure it's a bug if "all existence is meaningless" turns out to be meaningless.

comment by TimS · 2012-04-13T17:27:55.716Z · LW(p) · GW(p)

Aren't you supposed to separate distinct predictions? Edit: don't see it in the rules, so remainder of post changed to reflect.

I upvote the second prediction - the existence of self-aware humans seems evidence of overconfidence, at the very least.

Replies from: Wrongnesslessness
comment by Wrongnesslessness · 2012-04-13T18:24:03.066Z · LW(p) · GW(p)

But humans are crazy! Aren't they?

Replies from: TimS
comment by TimS · 2012-04-13T18:30:19.859Z · LW(p) · GW(p)

If we define crazy as "sufficiently mentally unusual as to be noticeably dysfunctional in society" then I estimate at least 50% of humanity is not crazy.

If we define crazy as "sufficiently mentally unusual that they cannot achieve ordinary goals more than 70% of the time," then I estimate that at least 75% of humanity is not crazy.

comment by ArisKatsaris · 2012-04-13T18:34:46.521Z · LW(p) · GW(p)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

This prediction isn't falsifiable -- the word "crazy" is not precise enough, and the word "sufficient" is a loophole you can drive the planet Jupiter through.

comment by simplicio · 2010-10-07T23:28:44.246Z · LW(p) · GW(p)

The distinction between "sentient" and "non-sentient" creatures is not very meaningful. What it's like for (say) a fish to be killed, is not much different from what it's like for a human to be killed. (70%)

Our (mainstream) belief to the contrary is a self-serving and self-aggrandizing rationalization.

Replies from: RobinZ, wedrifid, WhiskyJack, None
comment by RobinZ · 2010-10-08T14:44:41.957Z · LW(p) · GW(p)

Allow me to provide the obligatory complaint about (mainstream) conflation of sentience and sapience, said complaint of course being a display of the former but not the latter.

comment by wedrifid · 2010-10-07T23:49:26.844Z · LW(p) · GW(p)

Our belief to the contrary is a self-serving and self-aggrandizing rationalization.

Our? :)

Replies from: simplicio
comment by simplicio · 2010-10-07T23:53:15.380Z · LW(p) · GW(p)

Fixed.

Replies from: wedrifid
comment by wedrifid · 2010-10-08T00:15:59.259Z · LW(p) · GW(p)

But possibly introducing a new problem, inasmuch as the very term 'sentient' and some of the concept it represents aren't even present in the mainstream.

I recall back in my early high school years writing an essay that included a reference to sentience and being surprised when my teacher didn't know what it meant. She was actually an extremely good English teacher and quite well informed generally... just not in the same subculture. While I didn't have the term for it back then, it stuck in my mind as a significant lesson on the topic of inferential distance.

comment by WhiskyJack · 2012-04-13T13:05:59.532Z · LW(p) · GW(p)

I’m inclined to disagree. While I am far from a weapons-grade philosopher, it seems to me that if we can rationally assign suffering any negative value, then the suffering of a sentient being is a worse thing.

Say a goldfish is imprisoned in a fishbowl and allowed to starve to death. Say a human being endures the same thing. The goldfish will die in a poor fashion (are there good ones?) and will suffer greatly. The human, by virtue of intellect, can suffer in ways that the goldfish cannot. The human can rail against the injustice of their situation. The human may lament the mistake that led to their imprisonment. They can suffer in numerous unique ways because they can think. Their suffering is greater because it is deeper. The human will understand much more of what is happening to them.

To expand further: burying a loved one sucks. The gut-level emotional suffering is great. Knowing that they are dead and gone makes it so much worse. Comprehending what death is makes it all the more horrible, does it not?

If we allow that suffering is not bad and should not be ameliorated…. I don’t know how to even begin processing that world.

AMENDED: I fail. Lesson: read the article and don't just jump in. If you think someone couldn't possibly mean what they said, make sure you understand the rules of engagement. slaps self

comment by [deleted] · 2012-04-13T11:33:10.792Z · LW(p) · GW(p)

I disagree: we desperately need a continuous scale of personhood. Dolphins and chimps and Ara parrots are people too!

comment by prase · 2010-10-03T22:58:04.519Z · LW(p) · GW(p)

Many-world interpretation of quantum physics is wrong. Reasonably certain (80%).

I suppose the MWI is an artifact of our formulation of physics, where we suppose systems can be in specific states that are indexed by several sets of observables. I think there is no such thing as a state of the physical system.

Replies from: Vladimir_M, wnoise
comment by Vladimir_M · 2010-10-04T00:22:01.214Z · LW(p) · GW(p)

prase:

I think there is no such thing as a state of the physical system.

Could you elaborate by any chance? I can't really figure out what exactly you mean by this, but I suspect it is very interesting.

Replies from: prase, prase
comment by prase · 2010-10-04T20:44:34.427Z · LW(p) · GW(p)

Disclaimer: If I had something well thought through, consistent, not vague and well supported, I would be sending it to Phys.Rev. instead of using it for karma-mining in the Irrationality thread on LW. Also, I don't know your background in physics, so I will probably either unnecessarily spend some time explaining banalities, or leave something crucial unexplained, or both. And I am not sure how much of what I have written is relevant. But let me try.

The standard formulation of the quantum theory is based on the Hamiltonian formalism. In its classical variant, it relies on the phase space, which is coordinatised by dynamical variables (or observables; the latter term is more frequent in the quantum context). The observables are conventionally divided into pairs of canonical coordinates and momenta. The set of observables is called complete if their values determine the points in the phase space uniquely.

I will distinguish between two notions of state of a physical system. First, the instantaneous state corresponds to a point in the phase space. Such a state evolves, which means that as time passes, the point moves through the phase space along a trajectory. It makes sense to say "the system at time t is in instantaneous state s" or "the instantaneous state s corresponds to the set of observables q". In quantum mechanics, the instantaneous state is described by state vectors in the Schrödinger picture.

Second, the permanent state is fixed and corresponds to a parametrised curve s=s(t). It makes sense to say "the system in the state s corresponds to observable values q(t)". In quantum mechanics, this is described by the state vectors in the Heisenberg picture. The quantum observables are represented by operators, and either state vectors evolve and operators remain still (Schrödinger), or operators evolve and state vectors remain still (Heisenberg). The distinction may feel a bit more subtle on the classical level, where the observables aren't "reified", so to speak, but it can still be made.
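
For concreteness, in standard notation (with U(t) the unitary time-evolution operator):

$$ \text{Schrödinger picture:}\quad |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad A \text{ fixed}, $$

$$ \text{Heisenberg picture:}\quad A(t) = U^\dagger(t)\,A\,U(t), \qquad |\psi\rangle \text{ fixed}, $$

$$ \langle\psi(t)|\,A\,|\psi(t)\rangle = \langle\psi(0)|\,A(t)\,|\psi(0)\rangle, $$

so the two pictures make identical predictions; they differ only in where the time dependence is bookkept.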

Measuring all necessary observables one determines the instantaneous state of the system. To predict the values of observables in a different instant, one needs to calculate the evolution of the instantaneous state, or equivalently to find out the permanent state.

Now there's a problem already on the classical level: the time. We know that the microscopic laws are invariant with respect to the Lorentz transformations, which mix time and space, so it makes no sense to treat time and space so differently (the former as a parameter of evolution and the latter as an observable), unless one is dealing with statistical physics, where time really is special. Since the Hamiltonian formalism does treat space and time differently, the Lorentz invariance isn't manifest there and the relativistic theories look awkward. So to do relativistic physics efficiently, one either leaves the Hamiltonian formulation or turns from mechanics to field theory (where time and space are both parameters). However, the Hamiltonian formulation is needed for the standard formulation of quantum theory. The move to field theory does help in classical physics, but one has to resuscitate the crucial role of time at the moment of quantisation, and then the elegance and Lorentz invariance are lost again.

Another problem comes with general relativity. General relativity is formulated in such a way that neither time nor the spatial coordinates have any physical meaning: any coordinates can be used to address the spacetime points, and no set of coordinates is preferred by the laws of nature. This is called general covariance and has important consequences. Strictly speaking, there is no such thing as the time in general relativity. We can consider different times measured by particular clocks, but those are clearly no different from other observables.

Nevertheless, the Hamiltonian formalism can be salvaged. It's done by adding the time (and its associated momentum, which may or may not be interpreted as energy) to the phase space. (In field theory, one also adds the spatial coordinates, but I'll limit myself to mechanics here.) The phase space now has two more dimensions. The permanent (Heisenberg) states now correspond to trajectories q(τ), where the original time t is contained in q. The parameter τ has no physical meaning, and the trajectory q(τ) can be reparametrised while the state remains the same. For most realistic systems, one can choose a parametrisation in which t=τ, but there is no need to do so. This is the relativistic Hamiltonian formalism, whose field-theoretic version is used in attempts to quantise gravity (loop gravitists do that, string theorists do not).
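
(One common way to write the parametrised version down, as a sketch only: extend the phase space by t and its conjugate momentum p_t, and replace the preferred evolution parameter by a constraint. The multiplier N(τ) below is my own notation, not anything forced by the physics.)

    S[q, t, p, p_t, N] \;=\; \int d\tau \left[\, p\,\frac{dq}{d\tau} + p_t\,\frac{dt}{d\tau} - N(\tau)\,\big(p_t + H(q, p, t)\big) \right], \qquad \text{constraint:}\quad p_t + H(q, p, t) \approx 0

Choosing the gauge N = 1 (so that t = τ up to a constant) recovers the usual Hamilton equations, but nothing forces that choice, and any reparametrisation τ → τ'(τ) describes the same physical trajectory.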

The relativistic Hamiltonian formalism leads to a surprising simplification of the Hamilton equations (at least when written in a coordinate-independent form) and of the Hamilton-Jacobi equations (written in any form). The Lorentz invariance is manifest in this formalism, too. Those facts suggest that this version of the formalism is closer to the real structure of nature than the standard, time-chauvinistic Hamiltonian formalism. An important point is that the notion of an instantaneous state makes no sense in the relativistic Hamiltonian formalism. Time and coordinates are treated equally, and to ask "in what state the system was at moment t" has roughly as much sense as to ask "in what state the system was at point x".

(Notice that the usual talk about MWI is done using the Schrödinger picture. It looks a lot less intuitive and clear in the Heisenberg picture. To be fair, the collapse postulate in the Heisenberg picture is literally bizarre.)

Forfeiting the right to parametrise evolution by time, one has to be sort of careful when asking questions. The question "what was the particle's position x at time t" can be answered, but it is no longer the natural formulation of the question. The trajectories aren't parametrised by t; they are parametrised by τ. (But to ask "what's the position at τ" is even worse: τ is an unphysical, arbitrary, meaningless auxiliary parameter that should be eliminated from all questions of fact. Put this way it may seem trivial, but untrained people tend to ask meaningless questions in general relativity precisely because they intuitively feel that the spacetime coordinates have some meaning, and it is often difficult to resolve the paradoxes they obtain from such questions.)

The natural form of a question is rather "what doublets x,t can be measured in the (permanent) state s?" But if x and t form a complete set of observables, one measurement of that doublet does determine the state s. Therefore, we can formulate an alternative question: "is it possible to measure both x1,t1 and x2,t2 on a single system?" In this formulation, the mention of state has been omitted. In practice, however, states are indexed by measurement outcomes and those two formulations are isomorphic. It may not be so in quantum theory.

In the standard Hamiltonian quantum theory (the one with time as a parameter), one can measure only half of the observables compared to the classical theory - either the canonical coordinates or the canonical momenta. Furthermore, there is no one-to-one correspondence between the state and the observable values. Nevertheless, each observable has a probability distribution in any given instantaneous (Schrödinger) state. It's possible to speak about Heisenberg states, but then the probabilities, which sum to one, are given by the scalar products of the state vector with the eigenvectors of the observable operators taken at one specific time instant. Measurement, as it happens, is supposed to be instantaneous. This poses a problem for relativistic theories, and a consistent relativistic quantum mechanics is impossible (but see my remark at the bottom).
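
(Schematically, for an ideal, nondegenerate measurement of an observable A, the Heisenberg-picture probabilities mentioned here read as follows; U(t) is the usual evolution operator.)

    A_H(t)\,|a, t\rangle = a\,|a, t\rangle, \qquad |a, t\rangle = U^\dagger(t)\,|a\rangle
    P(a \text{ at time } t) \;=\; \big|\langle a, t\,|\,\psi_H\rangle\big|^2, \qquad \sum_a P(a \text{ at time } t) = 1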

In particular, let's ask what happens when two measurements are done. The orthodox interpretation says that during the first measurement the state collapses into the eigenstate of the measured observables which corresponds to the observed values. We then ask for the probability of the second set of values, which can be calculated from the new, collapsed wave function. The decoherence interpretations, and MWI in particular, tell us that (in the Schrödinger picture) during the measurement the observer's own state vector becomes correlated with the observed system's. In the Heisenberg picture, this translates into a statement about the observable operators. The role of time can easily be obscured in such a description, but in either interpretation, there have to be planes of simultaneous events defined in the space-time to normalise the state vector. Any such definition violates Lorentz invariance, of course. (See also the second remark.)

(Comment too long, continued in a subcomment.)

Replies from: prase
comment by prase · 2010-10-04T20:44:46.761Z · LW(p) · GW(p)

As in classical mechanics, one can resort to the relativistic Hamiltonian formalism. The formalism can be adapted for use in quantum theory, but now there are no observable operators q(t) with time-dependent eigenvectors: both q and t are (commuting) operators. There are indeed wave functions ψ(q,t), but their interpretation is not obvious. For details see here (the article partly overlaps with the one I link in remark 2, but goes deeper into the relativistic formalism). The space-time states discussed in the article are redundant - many distinct state vectors describe the same physical situation.

So what we have is either a violation of Lorentz symmetry or a non-transparent representation of states. Of course, all physical questions in quantum physics can be formulated as questions of the second type described above. One measures the observables twice (the first measurement is called preparation) and can then ask: "What's the probability of measuring q2, given that we have prepared the system in q1?" Which is equivalent to "what's the probability of measuring q1 and q2 on the same system?"

And of course, there is the path integral formulation of quantum theory, which doesn't even need to speak about state space, and is manifestly Lorentz-covariant. So it seems to me that the notion of a state of a system is redundant. The problem with collapse (which is really a problem - my original statement doesn't mean an endorsement of collapse, although some readers may perceive it as such) doesn't exist when we don't speak about the states. Of course, the state vectors are useful in some calculations. I just don't give them independent ontological status.
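
(Schematically, and glossing over normalisation: for a pair of position measurements, the question of the second type can be answered directly from the transition amplitude, with no state vector needed at intermediate times.)

    K(q_2, t_2;\, q_1, t_1) \;=\; \int_{q(t_1)=q_1}^{q(t_2)=q_2} \mathcal{D}q \; e^{\,i S[q]/\hbar}, \qquad P(q_2, t_2 \mid q_1, t_1) \;\propto\; \big| K(q_2, t_2;\, q_1, t_1) \big|^2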

Remarks:

  1. The fact that quantum mechanics and relativity don't fit together is often presented as a "feature, not a bug": it points to the necessity of field theory, which, as we know, is a more precise description of the world. In my opinion, such declarations miss the mark, as they implicitly suggest that quantumness somehow doesn't fit well with relativity and mechanics. But the problem here isn't quantumness; the problem is the standard Hamiltonian formalism, which singles out time as a special parameter. This can be concealed in classical mechanics, where, like time, the dynamical variables are simple numbers, but it is no longer true in the quantum setting. Using the relativistic Hamiltonian formalism instead of the standard one, a Lorentz-invariant quantum mechanics can be consistently formulated.

  2. In the decoherence interpretation, a measurement is thought of as an interaction between different parts of the world - the observer and the observed system - an interaction in principle no different from all other interactions. However, it is not so easy to describe such an interaction. In any sensible definition the observer must retain a memory of his observation. To do that, the interaction Hamiltonian has to be non-Hermitian or time-dependent; both are physically problematic properties. Non-Hermitian interactions are the better choice, as they can model dissipation, which is actually the reason for memory in real observers. Another problem with measurement comes when one needs to think about resolution, as no detector can measure the position of a particle with infinite precision. The finite precision of a position measurement is a trivial problem, but when it comes to time measurement, it can really be a mess. See this for a discussion of a realistic measurement (collapse, but easily translatable into decoherence).

Replies from: Perplexed
comment by Perplexed · 2010-10-05T04:42:30.643Z · LW(p) · GW(p)

An outstanding summary. It reminded me of stuff I once knew and taught me one or two things I had missed until now. And in two parts to make it easy to upvote it twice.

But the purpose was to cast doubt on MWI. If you are merely pointing out that MWI is a non-relativistic theory, and hence cannot be exactly right, well, ok. But that just means we need a Lorentz invariant version of MWI. But I thought we already have one. Feynman's sum-over-histories approach.

I guess my question is this: Are you just saying that MWI is wrong because it is not Lorentz invariant, or that it is wrong because it cannot be made Lorentz invariant, or that it is wrong because it cannot be made Lorentz invariant without giving up the interpretation that there are many worlds?

ETA: second question:

... there is the path integral formulation of quantum theory, which doesn't even need to speak about state space ...

I guess I don't understand the path integral formulation then. I thought the paths being integrated were paths (trajectories) through a kind of state space. How am I wrong?

Replies from: prase
comment by prase · 2010-10-05T11:33:21.910Z · LW(p) · GW(p)

Are you just saying that MWI is wrong because it is not Lorentz invariant, or that it is wrong because it cannot be made Lorentz invariant, or that it is wrong because it cannot be made Lorentz invariant without giving up the interpretation that there are many worlds?

This is a difficult question. I have written the disclaimer above the grandparent precisely because I am not able to demonstrate that MWI is wrong. I believe MWI can be made Lorentz invariant and retain its interpretation, at the price of losing its intuitive appeal and becoming awkward. One can postulate some kind of Lorentz-invariant measurement procedure (like the one suggested in the articles I've linked to) and do the interpretational stuff on the level of the observer. In the Schrödinger picture it looks nice - in the Heisenberg picture, not so much.

My attack isn't aimed at MWI specifically. I think objective collapse is an even greater problem. Including MWI in the statement was partly a dirty tactic to make the statement more prominent, since belief in MWI is accepted here as one of the rationality tests (hell, there is even a sequence about it). But I suspect that the very dispute between collapse and many worlds is an artifact of asking about the behaviour of objective states of the system, and if it is possible to avoid speaking about states, the problem disappears. I want to explain away what MWI proponents want to explain. To further justify my inclusion of MWI specifically in the formulation of my supposedly irrational belief, I can add that, unlike the MWI proponents, there are (and were since the very beginning of the quantum theory) Copenhagenists who accept that the collapse is only a mathematical tool useful within our imperfect understanding of nature and it has no independent ontological status. This is a position with which I sympathise.

But I thought we already have one. Feynman's sum-over-histories approach.

Could you explain in more detail?

I thought the paths being integrated were paths (trajectories) through a kind of state space.

When the path integral formulation is derived from the standard formulation, one integrates over paths in the phase space. However, the integrations over the momenta can be performed exactly, and one is left with an integration over paths in the configuration space only (which is half of the phase space). This is the preferred form, as the integrand is the exponential of the action, which is a functional of the classical trajectory or field configuration (we can call both a path). These paths needn't solve the equations of motion, so there isn't even a path-state correspondence.
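
(A sketch of that reduction, valid when H is quadratic in the momenta, as it is for the usual mechanical systems; the Gaussian integration over p then contributes only a constant factor, which is absorbed into the measure.)

    \int \mathcal{D}q\,\mathcal{D}p \; \exp\!\left[\frac{i}{\hbar}\int dt\,\big(p\,\dot q - H(q,p)\big)\right] \;\longrightarrow\; \int \mathcal{D}q \; \exp\!\left[\frac{i}{\hbar}\,S[q]\right], \qquad S[q] = \int dt\, L(q, \dot q)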

Replies from: Douglas_Knight, Perplexed, Vladimir_M
comment by Douglas_Knight · 2010-10-05T18:26:51.257Z · LW(p) · GW(p)

We experience a classical world. To explain this "away" would be bad. The broadest interpretation of the phrase "many worlds" is that there are many classical worlds equally real to the single world we experience. Surely you accept this. There are questions of how real this classical world is and where it comes from. The decoherence program tries to address this, though I understand it to be incomplete, or at least controversial.

What gets worse when you move to QFT? You seem concerned with what is ontologically fundamental. The classical states are not ontologically fundamental in ordinary QM. If that's what you mean by MWI...well, you already admitted to being a troll.

I'm not so concerned about fundamental ontology, so I'm happy to talk about QFT as a bunch of ordinary QM systems, one for each reference frame. The decomposition into classical states is not the same in each frame (ie, is not relativistically covariant). Is this a problem? Isn't the situation of ordinary QM already almost this bad? In ordinary QM, you can give states classical names, but they don't actually evolve classically. The macroscale classical worlds that do evolve classically are pretty fuzzy.

Replies from: prase
comment by prase · 2010-10-06T09:19:40.199Z · LW(p) · GW(p)

[T]here are many classical worlds equally real to the single world we experience. Surely you accept this.

Surely? I don't even know what it means. The words "real" and "experience" are close neighbours in my vocabulary; a real unexperienced world sounds a lot like an oxymoron, at least if it isn't based on a really strong argument.

What gets worse when you move to QFT?

Nothing. I have tried to explain (incompletely, of course) the relation between the conventional and relativistic Hamiltonian formalisms in the case of mechanics, where it is slightly simpler and more intuitive. If you are addressing my first remark, you have misinterpreted it. I don't say that the move to QFT isn't justified, but that one conventional argument used to support this move isn't good.

The classical states are not ontologically fundamental in ordinary QM. If that's what you mean by MWI...

It isn't. By MWI I probably mean the same thing as anybody else, nothing particularly related to classical states. I have discussed classical states in order to give some background to my intuitions. My statement was that quantum states are probably a redundant concept.

I'm happy to talk about QFT as a bunch of ordinary QM systems, one for each reference frame.

I would understand QFT as a bunch of QM systems, one for each spacetime point. I don't understand what reference frames have to do with it. In any fixed reference frame, QFT has an infinite number of degrees of freedom. Maybe you mean the momentum representation? I am confused.

comment by Perplexed · 2010-10-05T15:00:59.589Z · LW(p) · GW(p)

I suspect that the very dispute between collapse and many worlds is an artifact of asking about the behaviour of objective states of the system, and if it is possible to avoid speaking about states, the problem disappears. I want to explain away what MWI proponents want to explain.

Amen to that. Whenever we cease believing we are working with models and doing phenomenology, and start believing we are dealing with reality and doing ontology, at that point we have stopped doing science and entered the realm of metaphysics.

But I thought we already have one [relativistic MWI]. Feynman's sum-over-histories approach.

Could you explain in more detail?

Be forewarned that my physics is at the "QM and QFT for Dummies" level. But I thought that a slogan of "one Everett world = one Feynman diagram" had some validity. At least if you think of really big diagrams. (>5%)

comment by Vladimir_M · 2010-10-06T05:46:21.561Z · LW(p) · GW(p)

prase:

I can add that, unlike the MWI proponents, there are (and were since the very beginning of the quantum theory) Copenhagenists who accept that the collapse is only a mathematical tool useful within our imperfect understanding of nature and it has no independent ontological status.

How do these people interpret interaction-free measurements? Specifically, let's observe one of the possible outcomes of the Elitzur-Vaidman bomb-tester thought experiment, namely the one that identifies a working bomb without exploding it. To describe this experiment in Copenhagen terms, we could say that the interaction between the photon wave function and the bomb has, as a measurement, collapsed the photon wave function to the upper arm of the interferometer. Since we actually see this result in the detector, and obtain useful information about the bomb from it, I don't see how we can deny that the collapse has been observed as an actual process while still insisting on Copenhagen. (But I'm sure there is a way to do it, if there are actual physicists who hold this position.)
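
(For reference, the standard bookkeeping for the balanced Mach-Zehnder version of the thought experiment, assuming ideal 50/50 beam splitters and a bomb whose trigger absorbs the photon; C is the bright port, D the dark port.)

    \text{dud bomb:} \quad P(C) = 1, \quad P(D) = 0 \quad \text{(full interference)}
    \text{working bomb:} \quad P(\text{explosion}) = \tfrac{1}{2}, \quad P(C) = \tfrac{1}{4}, \quad P(D) = \tfrac{1}{4}

Under these idealisations a click at D can only happen with a working bomb in place, so it certifies the bomb without detonating it.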

Replies from: prase
comment by prase · 2010-10-06T09:02:03.681Z · LW(p) · GW(p)

This relates to the discussion in which you've apparently participated, and I am not sure whether I can say more. I am quite content with the predictions of the theory, and I don't put much trust in the feeling that further verbal explanation is needed here. If I were pressed to say something, I would say that the present formalism of quantum theory probably isn't particularly well suited to human intuition. After all, I believe we will get a better formalism in the future, whatever that means.

The feeling that the collapse is needed somehow to mediate the bomb's interaction with the detector falls into the same category as the belief that light must propagate in some medium, or the feeling that there must be some absolute time. Such intuitions are sometimes correct, more often wrong.

Based on my experience, most ordinary physicists don't think interpretations of QM are a big issue. It isn't discussed very often; people are content to do the calculations most of the time. Of course, this may be different among the first-rank researchers.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-07T00:32:32.048Z · LW(p) · GW(p)

Just to clarify: in that discussion, I claimed that the bomb tester thought experiment doesn't pose any principal difficulty for Copenhagen relative to the standard variations on the double-slit experiment, so that might seem to contradict what I write here. What I meant to say there is that the main feature of the bomb tester, namely the interaction-free measurement, is also featured in a less salient way in these classic thought experiments, so Copenhagen also makes sense for the bomb tester if you accept that it makes sense at all.

But if I may ask, how would you reply to the following statement? "Consider the case when we have a dud bomb, and a case when we have a working bomb that doesn't explode. There is an observable difference between what the detector shows in these outcomes, so replacing the dud bomb with a working one changed the system in a measurable way. We call this change -- whatever exactly it might be -- collapse."

Do you believe that this statement would be flawed, or that it is, after all, somehow compatible with the idea that "the collapse is only a mathematical tool"?

Replies from: prase
comment by prase · 2010-10-07T10:50:13.530Z · LW(p) · GW(p)

Comparing a system with a dud to a system with a working bomb is comparing two different systems, or the same system in two instances with different initial conditions, and thus doesn't relate to the collapse. I suppose you rather had in mind a statement: "Consider two experiments with a working bomb, and in one the bomb explodes, while in the second it doesn't. There is an observable difference..."

Well, it is undeniable that there is a difference. The two systems were the same in the beginning and are different in the end. There are three conventional explanations. 1) The systems were different all along, but in the beginning the difference was invisible to us (hidden parameters). 2) The difference emerged from a non-deterministic process before or during the measurement (collapse). 3) There is no difference, but we see only a portion of reality after the measurement, and a different one in each of the cases (many worlds).

I suggest a fourth point of view: Don't ask in what state the system is; this is meaningless. Ask only what measurement outcomes are possible, given the outcomes we had from the already performed measurements. If you do that, there is no paradox to solve.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-07T20:14:27.969Z · LW(p) · GW(p)

prase:

There are three conventional explanations. 1) The systems were different all along, but in the beginning the difference was invisible to us (hidden parameters). 2) The difference emerged from a non-deterministic process before or during the measurement (collapse). [...]

Actually, that's the distinction I missed! The notion of "collapse" specifically refers to a non-deterministic process, not to a deterministic process that would at some point reveal the previously existing hidden variables.

I suggest a fourth point of view: Don't ask in what state the system is; this is meaningless. Ask only what measurement outcomes are possible, given the outcomes we had from the already performed measurements. If you do that, there is no paradox to solve.

That would basically be the "ensemble interpretation," right? The theory tells you the probability distribution of outcomes, which you'll see if you repeat the experiment prepared the same way a bunch of times (frequentism!), and that's all there is to it. I do have a lot of sympathy for that view, as you might guess from the recent discussion of subjective probabilities, though I cannot say that my superficial understanding of QM gives me much confidence in any views I might hold about it.

Replies from: prase
comment by prase · 2010-10-07T22:12:18.287Z · LW(p) · GW(p)

The theory tells you the probability distribution of outcomes, which you'll see if you repeat the experiment prepared the same way a bunch of times (frequentism!), and that's all there is to it.

Well, the quantum probabilities are certainly frequentist. However, I don't suppose strict Bayesians deny that there are probabilities with a frequentist interpretation. I am also not sure about the label "ensemble interpretation". It seems that its proponents somehow deny the validity of QM for small, non-ensemblish systems, which is a position I don't subscribe to. After all, both the collapse and many-worlds interpretations are no more Bayesian and no less frequentist than the ensemble one. Hidden parameters are deterministic, but have their own well-known problems.

As for the frequentist-Bayesian controversy, although I am probably more sympathetic than you are to the Bayesian position, I have some sympathy for frequentism. I think both interpretations can coexist, with different sensible meanings of "probability".

comment by prase · 2010-10-04T15:41:11.556Z · LW(p) · GW(p)

I was writing a longer reply and accidentally deleted it halfway through. It's frustrating. I will write it once more, but I'm not sure whether it will be today.

comment by wnoise · 2010-10-04T18:57:45.161Z · LW(p) · GW(p)

Of course it is wrong, because standard quantum physics is an approximate model that only applies in certain conditions.

Wrong, of course, is not the same as "not useful", nor does "MWI is wrong" mean "there is an objective collapse".

Replies from: prase
comment by prase · 2010-10-04T20:14:34.501Z · LW(p) · GW(p)

I haven't said that there is an objective collapse.

comment by Academian · 2010-10-04T22:04:31.517Z · LW(p) · GW(p)

This comment currently (at the time of reading) has at least 10 net upvotes.

Confidence: 99%.

Replies from: Perplexed, magfrump
comment by Perplexed · 2010-10-05T04:10:20.403Z · LW(p) · GW(p)

You realize, of course, that your confidence level is too high. Eventually, the score should cycle between +9 and +10. Which means that the correct confidence level should be 50%.

Nonetheless, it is very cute. So, I'll upvote it for overconfidence, to say nothing of currently being wrong.

Replies from: JGWeissman, wedrifid
comment by JGWeissman · 2010-10-05T05:57:59.004Z · LW(p) · GW(p)

Once it gets to 10 points, it should be voted up for underconfidence.

Replies from: magfrump
comment by magfrump · 2010-10-05T08:10:45.472Z · LW(p) · GW(p)

Except that there's a chance it's been downvoted by someone else, a chance sufficient for 99% to warrant agreement rather than a statement of underconfidence (if and only if people decide that this is true!). That equilibrium would be easily broken if the score got up to 11, but it would be far more easily broken if the confidence were set at, say, 75%.

comment by wedrifid · 2010-10-05T05:19:16.820Z · LW(p) · GW(p)

You realize, of course, that your confidence level is too high. Eventually, the score should cycle between +9 and +10. Which means that the correct confidence level should be 50%.

It would actually be +8 to +11. (I don't think that changes the 50%.)

comment by magfrump · 2010-10-08T23:04:51.110Z · LW(p) · GW(p)

Cycle's broken! Now upvoted for underconfidence.

comment by [deleted] · 2010-10-03T07:23:30.993Z · LW(p) · GW(p)

The gaming industry is going to be a major source of funding* for AGI research projects in the next 20 years. (85%)

*By "major" I mean contributing enough to have good odds of causing actual progress. By gaming industry I include joint ventures, so long as the game company invested a nontrivial portion of the funding for the project.

EDIT: I am referring to video game companies, not casinos.

Replies from: Eugine_Nier, dfranke, lukstafi, Perplexed
comment by Eugine_Nier · 2010-10-03T07:33:02.746Z · LW(p) · GW(p)

I assume you mean designing better AI opponents, as this seems to be one type of very convenient problem for AI.

Needless to say having one of these go FOOM would be very, very bad.

Replies from: Risto_Saarelma, None, NancyLebovitz, magfrump
comment by Risto_Saarelma · 2010-10-03T09:57:40.608Z · LW(p) · GW(p)

Opponents can be done reasonably well with even the simple AI we have now. The killer app for gaming would be AI characters who can respond meaningfully to the player talking to them, at the level of actually generating new responses of prewritten-game-plot quality based on the stuff the player comes up with during the game.

This is quite different from chatbots and their ilk, I'm thinking of complex, multiagent player-instigated plots such as the player convincing AI NPC A to disguise itself as AI NPC B to fool AI NPC C who is expecting to interact with B, all without the game developer having anticipated that this can be done and without the player feeling like they have gone from playing a story game to hacking AI code.

So I do see a case here. The game industry has thus far been very conservative about weird AI techniques, but since cutting-edge visuals seem to be approaching diminishing returns, there could be room for a gamedev enterprise going for something very different. The big problem is that while sorta-there visuals can be pretty impressive, sorta-there general NPC AI will probably look quite weird and stupid in a game plot.

Replies from: Kaj_Sotala, magfrump
comment by Kaj_Sotala · 2010-10-03T17:46:17.030Z · LW(p) · GW(p)

Opponents can be done reasonably well with even the simple AI we have now.

Not for games like Civilization they can't. Especially not if they're also supposed to deal with mods that add entirely new features.

Some EURISKO-type engine that could play a lot of games against itself and then come up with good strategies (and which could be rerun after each rules change) would be a huge step forward.

comment by magfrump · 2010-10-03T18:24:58.508Z · LW(p) · GW(p)

This is what I was trying to say but much better.

comment by [deleted] · 2010-10-04T01:41:22.960Z · LW(p) · GW(p)

It would be very bad if an opponent AI went FOOM. Or even one which optimized for certain types of "fun", say, rescue scenarios.

But consider a game AI which optimized for features found in some games today (generalized):

  • The challenges of many games require you to learn to think faster as the game progresses.
  • They often require you to know more (and learn to transfer that knowledge, part of what I would call "thinking better").
  • Through roleplaying and story, some games lead you to act the part of a person more like who you wish you were.
  • Many social games encourage you to rapidly develop skills in cooperation and teamwork, to exchange trust and empathy in and out of the game. They want you to catch up to the players who already have an advantage: those who had grown up farther together.

There are more conditions to CEV as usually stated, and they are hard to correlate with goals that any existing game designers consciously implement. They might be a hard pitch: "social innovations" for a "revolutionary game".

If it was done consciously, it's conceivable that AI researchers could use game funding to implement Friendly AGI.

(Has there been a post or discussion yet on designing a Game AI that implements CEV? If so, I must read it. If not, I will write it.)

comment by NancyLebovitz · 2010-10-03T19:18:06.971Z · LW(p) · GW(p)

Needless to say having one of these go FOOM would be very, very bad.

Maybe, but the purpose of such an opponent isn't to crush humans, it's to give them as good a game as possible. The big risk might be an AI which is inveigling people into playing the game more than is good for them, leading to a world which is indistinguishable from a world in which humans are competing to invent better superstimulus games.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2010-10-04T06:51:58.120Z · LW(p) · GW(p)

eh, given the space of various possible futures I would regard this as one of the better ones.

comment by magfrump · 2010-10-03T09:22:27.640Z · LW(p) · GW(p)

AI opponents seem to have a relatively easy time defeating human players at many games already.

I think it's possible that the development of AI players that are more fun to play with or against will be a new direction for gaming AI to go in, one which would be far less tragic (compared with, say, Astonishing X-men volume 3, issues 8-15+).

Of course this is just a possibility, I don't mean to say that this is the most likely outcome.

Replies from: Pavitra
comment by Pavitra · 2010-10-04T07:13:39.565Z · LW(p) · GW(p)

I think that's been happening for a while already. I vaguely remember reading somewhere that the main difficulty in designing AI game opponents was making them stupid enough to beat.

comment by dfranke · 2010-10-13T17:28:37.262Z · LW(p) · GW(p)

Upvoted for overconfidence, but I'd downvote at 40%.

comment by lukstafi · 2010-10-06T12:35:54.808Z · LW(p) · GW(p)

Downvoted, but I don't think it will be bigger than other major sources.

comment by Perplexed · 2010-10-03T18:59:26.884Z · LW(p) · GW(p)

Actually, "the gaming industry" usually refers to casino operators. So, when you said they would provide funding, I initially thought you meant that the funds would be provided involuntarily as in The Eudaemonic Pie.

Replies from: None
comment by [deleted] · 2010-10-03T19:10:50.010Z · LW(p) · GW(p)

Sorry to be unclear, I meant the video game industry. Thanks though for the book reference, looks like a fun read :-)

comment by Angela · 2014-01-21T15:41:13.141Z · LW(p) · GW(p)

The hard problem of consciousness will be solved within the next decade (60%).

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2022-06-24T20:35:43.943Z · LW(p) · GW(p)

Clock's ticking...

comment by gwern · 2010-10-07T02:08:56.612Z · LW(p) · GW(p)

Julian Jaynes's theory of bicameralism presented in The Origin of Consciousness in the Breakdown of the Bicameral Mind is substantially correct, and explains many enigmas and religious belief in general. (25%)

comment by blogospheroid · 2010-10-05T05:15:42.035Z · LW(p) · GW(p)

There will be a net positive to society by measures of overall health, wealth and quality of life if the government capped reproduction at a sustainable level and distributed tradeable reproductive credits for that amount to all fertile young women. (~85% confident)

Replies from: Alicorn, wedrifid, mattnewport, MattMahoney, gwern
comment by Alicorn · 2010-10-05T13:15:56.322Z · LW(p) · GW(p)

How I evaluate this statement depends very heavily on how the policy is enforced, so I'm presently abstaining; can you elaborate on how people would be prohibited from reproducing without the auspices of one of these credits?

Replies from: blogospheroid
comment by blogospheroid · 2010-10-06T10:39:24.740Z · LW(p) · GW(p)

I do not expect that the human population is so far into overshoot that the sustainable level has gone below 1 child per woman, so the couple will have at least one child from the original credit allocation.

Almost any government order has the threat of force behind it. This is no different.

How it would be enforced would depend on the sustainability research and the gap it finds between the present birth rate and the sustainable level.

Depending on the gap, policy can vary from mild to draconian.

  • Public appeal on the internet seeking anyone else willing to trade in their credits
  • Giving incentives for sterilisation
  • Ceasing of school subsidies
  • Ceasing of welfare benefits
  • Allowing a born child time until the age of 40 to accumulate enough money to pay for their credit.
  • Fines equivalent to the extra load on the sustainability infrastructure
  • Ostracisation of couple
  • Sending away to a reservation where the couple have their share of the sustainable resources and can decide what to do with it.
  • Denial of legal recourse (making someone an outlaw, but not initiating any force against them)
  • Imprisonment in a work camp
  • Forcible sterilization of the offending adults
  • Forcible sterilization of the children born
  • Torture of parent
  • Forced Abortion
  • Fathers to be killed in exchange for the child to be born

I think we are presently at the level of time allowances and fines, and that is the level from which, I would say, my statement about the improved lot of people comes.

Replies from: wedrifid
comment by wedrifid · 2010-10-06T11:26:43.237Z · LW(p) · GW(p)

Fathers to be killed in exchange for the child to be born

Fathers? Crazy talk. It's the mother who has the ability to abort the child to avoid transgressing the law. Killing the father seems not just inappropriate but also extremely impractical. It means the father should kill any mother who doesn't abort the pregnancy at his request, in order to save his own life. Not a desirable payoff structure.

An even worse implication of that means of enforcement: practical, legally sanctioned assassination.

  • Paternity is far more difficult to trace than maternity. It is possible the father is not even aware that a child of his is gestating.
  • Consider either a woman with a grudge against a male enemy or a male willing to pay a willing baby-popping pseudo-assassin.
  • Said woman simply needs to acquire sperm from the male. This is a relatively simple task in many instances. Options include:
    • Seduce the intended victim yourself. Use faulty condoms and/or lie about your own birth control status.
    • Seduce intended victim yourself, intentionally take semen from the used condom or neglect certain practical guidelines of use.
    • Pay someone to seduce the intended victim and acquire a sample for you.
    • Invade the victim's privacy with stealth and acquire semen produced during the victim's private sex life, or even lack thereof. (Presumably just poisoning the guy while doing this would be too suspicious?...)
  • Identify a willing or clueless cuckold who can think they are the parent until it is too late to matter.
  • Sell your reproductive credit at the last minute.

If you create a system of rules they will be gamed. That rule is far too easy to game.

Replies from: blogospheroid
comment by blogospheroid · 2010-10-06T16:26:05.022Z · LW(p) · GW(p)

In all fairness, that rule does lie on the draconian end of things. I was thinking more on the mild end, because my confidence level is more appropriate at that level of punishment.

You can probably scratch out the last one or replace it with mothers.

Replies from: wedrifid
comment by wedrifid · 2010-10-06T16:28:06.092Z · LW(p) · GW(p)

In all fairness, that rule does lie on the draconian end of things.

Absolutely, I appreciate the whole 'scale of sanction' thing and with :s/father/mother/ it would fit just fine.

comment by wedrifid · 2010-10-05T05:21:27.974Z · LW(p) · GW(p)

The implications of that on mating payoffs are fascinating.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-10-06T05:32:14.661Z · LW(p) · GW(p)

Explain! Rot13 if necessary.

Replies from: wedrifid
comment by wedrifid · 2010-10-06T07:08:15.565Z · LW(p) · GW(p)

I was just 'following the money' to work out how market forces would likely play out with respect to mating credits. It looks at first glance like we would end up with reproductive payoffs surprisingly similar to those in the EEA. Guys: have as many children as you can afford, or cuckold them off on other people. Girls: seek out guys with abundant resources who can buy reproductive credits, but if possible get someone with better genes to do the actual impregnation.

I'm thinking that matter-of-course paternity testing would be a useful addition to blogospheroid's proposal.

comment by mattnewport · 2010-10-06T17:40:01.748Z · LW(p) · GW(p)

Historically, global population increase has correlated pretty well with increases in measures of overall health, wealth and quality of life. From what empirical evidence do you derive your theory that zero or negative population growth would be better for these measures?

Replies from: blogospheroid
comment by blogospheroid · 2010-10-07T05:02:52.603Z · LW(p) · GW(p)

The peak oil literature and global climate change are what have made me seriously reconsider the classic liberal viewpoint towards population control.

Also, there is the reflective consistency of the population control logic. Cultures that restrict their reproduction for altruistic reasons will die out, leaving the earth to selfish replicators who will, if left uncontrolled, take every person's living standards back to square one. Population control will be on the agenda of even a moral singleton.

I live in India and have seen China overtake India big time because of a lot of institutional improvement, but also because of the simple fact that they controlled their population. People talk about India's demographic dividend, but we are not even able to educate our children and provide them with basic hygiene and health care so as to take advantage of this dividend. I've seen the demographic transition in action everywhere in the world and it seems like a good thing to happen to societies.

Setting up an incentive system that rewards altruistic control of reproduction, careful creation of children and sustainability seems to be an overall plus to me.

My only concern is that this might start a level-2 status game in which more children become a status good and political pressure increases the quotas beyond sustainability.

comment by MattMahoney · 2011-04-26T16:16:06.756Z · LW(p) · GW(p)

It's a good idea, but upvoted because evolution will thwart your plans.

comment by gwern · 2010-10-07T01:05:53.980Z · LW(p) · GW(p)

Downvoted on the condition that you meant a global cap on reproduction, since it seems like a huge no-brainer to me that population pressures are seriously bad and the demographic transition is good for the nations which undergo it.

If you only meant the US or something... I'd need to think about it more.

comment by timujin · 2014-01-12T04:59:12.780Z · LW(p) · GW(p)

Eliezer Yudkowsky is evil. He trains rationalists and involves them in FAI and X-risk for some hidden egoistic goal other than saving the world and making people happy. Most people would not want him to reach that goal if they knew what it is. There is a grand masterplan. The money we're giving to CFAR and MIRI isn't going into AI research as much as into that masterplan. You should study rationality via means different from LW, OB and everything nearby, or not study it at all. You shouldn't donate money when EY wants you to. ~5%, maybe?

comment by thomblake · 2012-04-13T15:03:31.908Z · LW(p) · GW(p)

This comment will be massively upvoted. 100%.

EDIT: See here. Retracted.

Replies from: TheOtherDave, MarkusRamikin
comment by TheOtherDave · 2012-04-13T15:18:29.863Z · LW(p) · GW(p)

Were I a robot from 1960s SF movies, my head would now explode.

Replies from: thomblake
comment by thomblake · 2012-04-13T15:22:09.696Z · LW(p) · GW(p)

The stable solution is for everyone to notice that few people will read the comment and so it will only be moderately upvoted, and so upvote it.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-13T15:26:32.508Z · LW(p) · GW(p)

DO NOT MESS WITH KARMA

Replies from: thomblake, None
comment by thomblake · 2012-04-13T15:47:06.906Z · LW(p) · GW(p)

noted.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-13T17:15:38.694Z · LW(p) · GW(p)

Aw, didn't mean you to actually do that. :) Guess I'll upvote you here instead.

comment by [deleted] · 2012-04-13T15:43:38.658Z · LW(p) · GW(p)

Why... not?

Replies from: thomblake, MarkusRamikin
comment by thomblake · 2012-04-13T16:23:52.958Z · LW(p) · GW(p)

There isn't a reason - that just turned out to be another stable solution to the paradox.

Replies from: None
comment by [deleted] · 2012-04-13T18:02:23.758Z · LW(p) · GW(p)

What paradox? There wasn't even a paradox.

Replies from: TheOtherDave, thomblake
comment by TheOtherDave · 2012-04-13T18:15:32.544Z · LW(p) · GW(p)

As I understood it, the paradox was that by the rules of the thread, "This comment will be massively upvoted. 100%" is something I should upvote if I believe it's unlikely to be true. But if I upvote it on that basis, I should expect others to upvote it as well. But if I expect others to upvote it, then I should expect it to be upvoted, and therefore I should consider it likely to be true. But if I consider it likely to be true, then by the rules of the thread, I should downvote it. But if I downvote on that basis, I should expect others to downvote it as well, and therefore I should consider it unlikely to be true. But...

comment by thomblake · 2012-04-13T18:12:10.667Z · LW(p) · GW(p)

Naively:

Everyone should agree that 100% certainty of something is infinitely overconfident. Then, everyone should upvote. Knowing this, I'm completely certain that I'll get lots of upvotes, and so absurdly large amounts of certainty seem justified. And as a kicker, everyone said I was overconfident of something that turned out to be correct.

Obviously, there are other possibilities (like me retracting the comment before it can be massively upvoted), so (as usual) 100% certainty really isn't justified. And unforeseen consequences like that are exactly why you don't play with outcome pumps, as the time turner story reminds us.

comment by MarkusRamikin · 2012-04-13T15:46:14.999Z · LW(p) · GW(p)

The universe might end due to paradox.

Replies from: None
comment by [deleted] · 2012-04-13T15:47:41.175Z · LW(p) · GW(p)

I seriously doubt the universe's integrity depends on the state some bits stored on hardware that exists inside of it.

comment by MarkusRamikin · 2012-04-13T15:21:32.536Z · LW(p) · GW(p)

Nice.

comment by gwern · 2010-10-10T01:12:38.882Z · LW(p) · GW(p)

Previous survey on this topic: http://lesswrong.com/lw/2l/closet_survey_1/

comment by RobinZ · 2010-10-05T03:50:26.786Z · LW(p) · GW(p)

Between (edit:) 10% and 0.1% of college students understand any mathematics beyond elementary arithmetic above the level of rote calculation. ~95%

comment by ialdabaoth · 2014-01-11T18:38:43.204Z · LW(p) · GW(p)

I think that "personal identity" and "consciousness" are fundamentally incoherent concepts. Reasonably confident (~80%)

comment by Angela · 2014-01-11T18:16:23.396Z · LW(p) · GW(p)

The amount of consciousness that a neural network S has is given by phi=MI(A^H_max;B)+MI(A;B^H_max), where {A,B} is the bipartition of S which minimises the right hand side, A^H_max is what A would be if all its inputs were replaced with maximum-entropy noise generators and MI(A,B)=H(A)+H(B)-H(AB) is the mutual information between A and B and H(A) is the entropy of A. 99.9%
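
(The same definition restated with the minimisation made explicit; as above, X^{H_max} denotes X with all of its inputs replaced by maximum-entropy noise, and the minimum runs over bipartitions {A,B} of S.)

    \phi(S) \;=\; \min_{\{A,B\}} \Big[\, \mathrm{MI}\big(A^{H_{\max}}; B\big) + \mathrm{MI}\big(A; B^{H_{\max}}\big) \Big], \qquad \mathrm{MI}(A; B) = H(A) + H(B) - H(A, B)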

comment by Salivanth · 2012-04-09T07:37:44.837Z · LW(p) · GW(p)

The Big Bang is not how our universe was created. Our universe was created by a naturalistic event that we have not yet seriously theorised, due to a lack of scientific knowledge. (15%)

comment by gRR · 2012-04-08T13:13:44.580Z · LW(p) · GW(p)

Richard Dawkins' genocentric ("Selfish Gene") view is a bad metaphor for most of what happens with sufficiently advanced life forms. An organism-centered view is a much better metaphor. New body forms and behaviors first appear in the phenotype, in response to a changing environment. Later, they get "written" into the genotype if the new environment persists for long enough. The Baldwin effect is ubiquitous. (60%)

comment by 79zombies · 2011-03-25T00:58:46.765Z · LW(p) · GW(p)

You will upvote this comment. (Completely confident, 100%)

comment by avalot · 2010-10-08T17:49:35.889Z · LW(p) · GW(p)

"Self" is an illusion created by the verbal mind. The Buddhists are right about non-duality. The ego at the center of language alienates us to direct perception of gestalt, and by extension, from reality. (95%)

More bothersome: The illusion of "Self" might be an obstacle to superior intelligence. Enhanced intelligences may only work (or only work well) within a high-bandwidth network more akin to a Vulcan mind meld than to a salon conversation, one in which individuality is completely lost. (80%)

NOTE: This comment is a re-post. I initially posted it in the "Comments on Irrationality Game" thread because I'm a moron. Sorry about that.

comment by Strange7 · 2010-10-08T08:28:20.419Z · LW(p) · GW(p)

What's with all this 'infinite utility/disutility' nonsense? Utility is a measure of preference, and 'preference' itself is a theoretical construct used to predict future decisions and actions. No one could possibly gain infinite utility from anything, because for that to happen, they'd have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it, which (barring hyperinflation so cataclysmic that some government starts issuing banknotes with aleph numbers on them, and further market conditions so inconceivably bizarre that such notes are widely accepted at face value) isn't even remotely possible. Protestations of willingness in the absence of demonstrated ability don't count; talk is cheap, and if you really cared that much you'd be finding a way instead of whining.

I've had a funny feeling about this subject for a while, but the logic finally clicked just recently. Still, there could be some flaw I missed. ~98%

Replies from: wedrifid
comment by wedrifid · 2010-10-08T09:18:10.197Z · LW(p) · GW(p)

No one could possibly gain infinite utility from anything, because for that to happen, they'd have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it,

Just willing. If they want it infinitely much and someone else gives it to them then they have infinite utility. Their wishes may also be arbitrarily trivial to achieve. They could assign infinite utility to having a single paperclip and be willing to do anything they can to make sure they have a paperclip. Since they (probably) do have the ability to get and keep a paperclip they probably do have infinite utility.

Call her "Clippet", she's a Paperclip Satisficer. Mind you she will probably still take over the universe so that she can make sure nobody else takes her paperclip away from her but while she's doing that she'll already have infinite utility.

The problem with infinities in the utility function is that they're stupid, not that they're impossible.

Replies from: Strange7, khafra
comment by Strange7 · 2010-10-08T16:24:12.275Z · LW(p) · GW(p)

Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.

In short, the theory that a given agent is currently, or would under some specific circumstance, experience 'infinite utility,' makes no meaningful predictions.

Replies from: Larks, wedrifid
comment by Larks · 2010-12-15T17:04:53.922Z · LW(p) · GW(p)

Consider instead Kind Clippet; just like Clippet, she gets infinite utils from having a paperclip, but also gets 1 util if mankind survives the next century. She'll do exactly what Clippet would do, unless she was offered the chance to help mankind at no cost to the paperclip, in which case she will do so. Her behaviour is, however, different from any agent who assigns real values to the paperclip and mankind.

Replies from: cata, JoshuaZ
comment by cata · 2010-12-15T17:30:21.737Z · LW(p) · GW(p)

Does it even make sense to talk about "the chance to do X at no cost to Y?" Any action that an agent can perform, no matter how apparently unrelated, seems like it must have some miniscule influence on the probability of achieving every other goal that an agent might have (even if only by wasting time.) Normally, we can say it's a negligible influence, but if Y's utility is literally supposed to be infinite, it would dominate.

comment by JoshuaZ · 2010-12-15T17:34:11.876Z · LW(p) · GW(p)

No. This is one of the problems with trying to have infinite utility. Kind Clippet won't actually act differently than Clippet. Infinity +1 is, if at all defined in this sort of context, the same as infinity. You need to be using cardinal arithmetic. And if you try to use ordinal arithmetic then the addition won't be commutative which leads to other problems.

Replies from: JGWeissman, Larks
comment by JGWeissman · 2010-12-15T17:47:37.714Z · LW(p) · GW(p)

And if you try to use ordinal arithmetic then the addition won't be commutative which leads to other problems.

You can represent this sort of value by using lexicographically sorted n-tuples as the range of the utility function. Addition will be commutative. However, Cata is correct that all but the first elements in the n-tuple won't matter.
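
(A minimal sketch of that representation in code; the class name and the "Kind Clippet" tiers are illustrative only, not anything standard.)

    # Lexicographically ordered utility tuples: the first component dominates,
    # later components only break ties.
    from functools import total_ordering

    @total_ordering
    class LexUtility:
        def __init__(self, *tiers):
            self.tiers = tuple(tiers)
        def __eq__(self, other):
            return self.tiers == other.tiers
        def __lt__(self, other):
            # Python tuples already compare lexicographically.
            return self.tiers < other.tiers
        def __add__(self, other):
            # Componentwise addition is commutative, unlike a naive "infinity + 1".
            return LexUtility(*(a + b for a, b in zip(self.tiers, other.tiers)))

    # Kind Clippet: first tier tracks the paperclip, second tier mankind's survival.
    paperclip_only = LexUtility(1, 0)
    mankind_only = LexUtility(0, 1)
    assert paperclip_only + mankind_only == mankind_only + paperclip_only  # addition commutes
    assert paperclip_only > mankind_only  # the first tier always dominates

The asserts check the two properties in question: addition commutes, while comparison is dominated by the first component, so the later ones only matter between options that tie exactly on it.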

Replies from: JoshuaZ
comment by JoshuaZ · 2010-12-15T18:04:26.662Z · LW(p) · GW(p)

Yes, you're right. You can do this with sorted n-tuples.

comment by Larks · 2010-12-15T17:50:49.165Z · LW(p) · GW(p)

Just put Kind Clippet in a box with no paperclips.

Replies from: Strange7
comment by Strange7 · 2010-12-16T02:49:53.635Z · LW(p) · GW(p)

That would cause Kind Clippet to escape from the box and acquire a paperclip by any means necessary, and preserve humanity in the process if it was convenient to do so.

comment by wedrifid · 2010-10-09T06:08:01.671Z · LW(p) · GW(p)

Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.

Um... yes? That's how it works. It just doesn't particularly relate to your declaration that infinite utility is impossible (rather than my position - that it is lame).

In short, the theory that a given agent is currently, or would under some specific circumstance, experience 'infinite utility,' makes no meaningful predictions.

It is no better or worse than a theory that the utility function is '1' for having a paperclip and '0' for everything else. In fact, they are equivalent, and you can rescale one to the other trivially (everything that wasn't infinite obviously rescales to 'infinitely small'). You appear to be confused about how the 'not testable' concept applies here...

comment by khafra · 2010-10-08T12:56:43.375Z · LW(p) · GW(p)

I'd be interested in the train of thought that lead to "paperclip" being switched out in favor of "grapefruit."

Replies from: wedrifid
comment by wedrifid · 2010-10-08T13:35:41.682Z · LW(p) · GW(p)

I failed to switch out a 'grapefruit' for 'paperclip' when I was revising. (Clips seemed more appropriate.)

Replies from: khafra
comment by khafra · 2010-10-08T16:09:54.906Z · LW(p) · GW(p)

Thanks; I'm rather disappointed in myself for not guessing that. I'd imagined you having a lapse of thought while eating a grapefruit while typing it up, or thinking about doing so; but that now seems precluded to a rather ridiculous degree by Occam's Razor.

comment by DilGreen · 2010-10-05T19:30:41.350Z · LW(p) · GW(p)

As:

formal complexity [http://en.wikipedia.org/wiki/Complexity#Specific_meanings_of_complexity] is inherent in many real-world systems that are apparently significantly simpler than the human brain,

and the human brain is perhaps the third most complex phenomenon yet encountered by humans [brain is a subset of ecosystem is a subset of universe]

and a characteristic of complexity is that prediction of outcomes requires greater computational resource than is required to simply let the system provide its own answer,

any attempt to predict the outcome of a successful AI implementation is speculative. 80% confident

Replies from: magfrump
comment by magfrump · 2010-10-06T23:51:43.968Z · LW(p) · GW(p)

Either you're saying "we can't say anything about AI" which seems clearly false, or you're saying "an AI will surprise us" which seems clearly true.

Depending on what you mean by speculative, you're either overconfident or underconfident, but I can't imagine a proposition that is "in between" enough to be 80% likely.

Replies from: DilGreen
comment by DilGreen · 2010-10-07T14:41:46.518Z · LW(p) · GW(p)

I accept this analysis of what I wrote. In the attempt to be concise, I haven't really said what I meant very clearly.

I don't mean that "we can't say anything about AI", but what I am saying is that we are a very long way from being able to say anything particularly worth saying about AI.

By which I mean to say that we are in some situation analogous to that of a 19th century weather forecaster, trying to predict the following week's weather. It's worth pushing the quality of the tools and the analysis, but don't expect any useful, real-world applicable information for a few lifetimes. And my confidence goes up the more I think about it.

Which, in the context of the audience of LW, I hoped would be seen as more contrarian than it has been! Perhaps this clarification will help.

Replies from: magfrump
comment by magfrump · 2010-10-07T16:59:45.894Z · LW(p) · GW(p)

So when you say "speculative" you mean "generations-away speculation"?

I agree that I didn't really understand what your intent was from your post. If you were to say something along the lines of "AI is far enough away (on the tech-tree) that the predictions of current researchers shouldn't be taken into account by those who eventually design it" then I would disagree because it seems substantially overconfident. Is that about right?

Replies from: DilGreen
comment by DilGreen · 2010-10-09T21:47:49.207Z · LW(p) · GW(p)

Um. I've still failed to be clear.

The nature of AI is that it is inherently so complex that, although we may well get better at predicting the kinds of characteristics that might result from implementation, the actuality of implementation will likely not just surprise us but confound us.

I'm saying that any attempts to develop approaches that lead to Friendly AI, while surely interesting and as worthwhile as any other attempts to push understanding, cannot be relied on by implementers of AI as more than hopeful pointers.

It's the relationship between the inevitable surprise and the attitude of researchers that is at the core of what I was trying to say, but having started out attempting to be contrarian, I've ended up risking appearing mean. I'm going to stop here.

comment by mattnewport · 2010-10-03T20:24:34.766Z · LW(p) · GW(p)

Are we only supposed to upvote this post if we think it is irrational?

Replies from: wnoise
comment by wnoise · 2010-10-03T20:30:13.782Z · LW(p) · GW(p)

Is this post a top-level comment to this post?

Replies from: Perplexed, prase
comment by Perplexed · 2010-10-04T19:33:48.797Z · LW(p) · GW(p)

The probability of that is <25%.

comment by prase · 2010-10-05T12:36:12.823Z · LW(p) · GW(p)

I am looking at this comment for the second time and still can't parse the strange self-reference in it.

Replies from: wnoise
comment by wnoise · 2010-10-05T15:14:34.360Z · LW(p) · GW(p)

There is no self-reference in that comment. It's pointing out that the post is not self-referential: the post suggests different voting rules for top-level comments, not for the post itself.

Replies from: prase
comment by prase · 2010-10-05T16:16:58.387Z · LW(p) · GW(p)

Either I have some mental block or I am simply stupid; either way, I still don't know what the two instances of "this post" in the discussed comment refer to. Each could refer to [mattnewport 03 October 2010 08:24:34PM] or [wnoise 03 October 2010 08:30:13PM]; in any case, I am not able to make any sense of it.

Replies from: wnoise
comment by wnoise · 2010-10-05T19:45:43.759Z · LW(p) · GW(p)

I call both of those "comments". "This post" was what mattnewport was responding to -- the large essay outlining the game. In the context of Less Wrong (rather than Usenet) I restrict "post" to mean these top-level things.

comment by Ronny Fernandez (ronny-fernandez) · 2011-06-15T11:59:47.558Z · LW(p) · GW(p)

The natural world is only different from other mathematically describable worlds in content, not in type. Any universe that is described by some mathematical system has the same ontological status as the one that we experience directly. (about 90%)

Replies from: None
comment by [deleted] · 2012-04-18T16:06:29.173Z · LW(p) · GW(p)

I agree with this hypothesis.

comment by ata · 2010-10-21T22:11:22.410Z · LW(p) · GW(p)

Most vertebrates have at least some moral worth; even most of the ones that lack self-concepts sufficiently strong to have any real preference to exist (beyond any instinctive non-conceptualized self-preservation) nevertheless are capable of experiencing something enough like suffering that they impinge upon moral calculations at least a little bit. (85%)

Replies from: tenshiko, Vladimir_Nesov
comment by tenshiko · 2010-10-23T03:02:29.069Z · LW(p) · GW(p)

Objection: Why is the line drawn between vertebrates and invertebrates? True, the nature of spinal cords means vertebrates are generally capable of higher mental processing and therefore have a greater ability to formulate suffering, but you're counting "ones that lack self-concepts sufficiently strong to have any real preference to exist". Are you saying the presence of a notochord gives a fish higher moral worth than a crab?

Replies from: RobinZ
comment by RobinZ · 2010-10-23T20:01:20.060Z · LW(p) · GW(p)

That's a good point - there are almost certainly invertebrate species on the same side of the line. Squid, for example.

comment by Vladimir_Nesov · 2010-10-21T22:29:17.915Z · LW(p) · GW(p)

"At least a little bit" is too unclear. Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.

Replies from: ata
comment by ata · 2010-10-21T23:04:35.100Z · LW(p) · GW(p)

Even tiny changes in the positions of atoms are probably morally relevant (and certainly, some of them), albeit to a very small degree.

How so? You mean to the extent that any tiny change has some remote chance of affecting something that someone cares about, or anything more direct than that?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-21T23:32:02.493Z · LW(p) · GW(p)

Change, to the extent the notion makes sense (in the map, not the territory), already comes with all of its consequences (and causes).

Given any mapping Worlds->Utilities, you get a partition of Worlds into equivalence classes of equal utility. Presumably, exactly equal utility is not easy to arrange, so these classes will be small in some sense. But whatever the case, these classes have boundaries, so that an arbitrarily small change in one direction or the other (from a point on a boundary) determines higher or lower resulting utility. Just make it so that one atom is at a different location.
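A minimal formalization of this, purely as a sketch -- the symbols W and U, and the notion of "closeness" of worlds, are notation introduced here, not anything from the comment itself:

```latex
% W = set of worlds, U : W -> R the utility mapping (notation assumed for this sketch).
\begin{align*}
  w \sim w' \;&\iff\; U(w) = U(w')
    && \text{(equal-utility relation; an equivalence relation)} \\
  [w] \;&=\; \{\, w' \in W : w' \sim w \,\}
    && \text{(its classes partition } W \text{)}
\end{align*}
% On the boundary of a class, a world w' arbitrarily close to w_0
% (say, one atom moved, assuming some metric on W) satisfies
\[
  U(w') > U(w_0) \quad\text{or}\quad U(w') < U(w_0),
\]
% i.e. even a tiny physical change lands in a class of strictly higher or lower utility.
```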

Replies from: ata
comment by ata · 2010-10-21T23:58:16.648Z · LW(p) · GW(p)

Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters to moral considerations, in the same way that human suffering matters but the program "print 'I am experiencing pain'" doesn't.

(That's assuming that moral questions have correct answers, and are about something other than the mind of the person asking the question. I'm not too confident about that one way or the other, but my original post should be taken as conditional on that being true, because "My subjective emotivist intuition says that x is valuable, 85%" would not be an interesting claim.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-10-22T00:05:24.233Z · LW(p) · GW(p)

Okay. I thought that was pretty clearly not what I was talking about; I was claiming that most vertebrate animals have minds structured such that they are capable of experience that matters to moral considerations, in the same way that human suffering matters but the program "print 'I am experiencing pain'" doesn't.

If your claim is about the moral worth of animals, then you must accept any argument about the validity of that claim, and not demand a particular kind of proof (in this case, one involving "experience of pain", which is only one way to see the territory that simultaneously consists of atoms).

If your claim is about "experience of pain", then talking about resulting moral worth is either a detail of the narrative not adding to the argument (i.e. a property of "experience of pain" that naturally comes to mind and is nice to mention in context), or a lever that is dangerously positioned to be used for rationalizing some conclusion about that claim (e.g. moral worth is important, which by association suggests that "experience of pain" is real).

Now, the claim that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would put a lower bar on the amount of moral worth in question, so that presumably only the experience of pain or similar reasons would qualify as arguments about said moral worth.

Replies from: ata
comment by ata · 2010-10-22T00:35:09.628Z · LW(p) · GW(p)

I don't really understand this comment, and I don't think you were understanding me. Experience of pain in particular is not what I was talking about, nor was I assuming that it is inextricably linked to moral worth. "print 'I am experiencing pain'" was only an example of something that is clearly not a mind with morally-valuable preferences or experience; I used that as a stand-in for more complicated programs/entities that might engage people's moral intuitions but which, under reflection, will almost certainly not turn out to have any of their own moral worth (robot dogs, fictional characters, teddy bears, one-day-old human embryos, etc.), as distinguished from more complicated programs that may or may not engage people's moral intuitions but do have moral worth (biological human minds, human uploads, some subset of possible artificial minds, etc.).

If your claim is about moral worth of animals, then you must accept any argument about validity of that claim, and not demand a particular kind of proof

My claim is about the moral worth of animals, and I will accept any argument about the validity of that claim.

Now, that pain experienced by animals is at least as morally relevant as a speck in the eye would be one way to rectify things, as that would put a lower bar on the amount of moral worth in question, so that presumably only experience of pain or similar reasons would qualify as arguments about said moral worth.

I would accept that. I definitely think that a world in which a random person gets a dust speck in their eye is better than a world in which a random mammal gets tortured to death (all other things being equal, e.g. it's not part of any useful medical experiment). But I suspect I may have to set the bar a bit higher than that (a random person getting slapped in the face, maybe) in order for it to be disagreeable enough for the Irrationality Game while still being something I actually agree with.

comment by RobinZ · 2010-10-04T12:54:08.588Z · LW(p) · GW(p)

1 THz semiconductor-based computing will prove to be impossible. ~50%

(Note for the optimistic: I expect multiplying cores will continue to increase consumer computer performance for some years after length-scale limitations on clock rate are reached.)

Replies from: Tenek
comment by Tenek · 2010-10-04T19:48:13.949Z · LW(p) · GW(p)

At that speed, you have less than 0.3 mm per clock cycle for your signals to propagate. Seems like you'd either need to make ridiculously tiny gadgets, or devote a lot of resources to managing the delays. Seems reasonable enough.
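For what it's worth, a quick back-of-the-envelope check of that 0.3 mm figure (a sketch; the 0.5c on-chip signal velocity is an illustrative assumption, not a measured value):

```python
# Distance a signal can cover in one clock cycle at 1 THz.
C = 3.0e8            # speed of light in vacuum, m/s
FREQ = 1.0e12        # 1 THz clock, Hz

cycle_time = 1.0 / FREQ              # 1 picosecond
light_limit = C * cycle_time         # ~3e-4 m = 0.3 mm, the absolute upper bound
on_chip = 0.5 * light_limit          # real interconnects carry signals slower than c (assumed 0.5c)

print(f"light-speed limit per cycle: {light_limit * 1e3:.2f} mm")
print(f"at an assumed 0.5c on-chip:  {on_chip * 1e3:.2f} mm")
```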

Replies from: RobinZ
comment by RobinZ · 2010-10-04T21:07:57.470Z · LW(p) · GW(p)

You would agree with my stated confidence? I'm not sure what physical processes limit the size of elements at the bottom end - for all I know, they might already hit at 100 GHz.

comment by Will_Newsome · 2010-10-03T02:44:33.367Z · LW(p) · GW(p)

Metadiscussion: Reply to this comment to discuss the game itself, or anything else that's not a proposition for upvotes/downvotes.

Replies from: Risto_Saarelma, wedrifid, Alicorn, Douglas_Knight, Douglas_Knight, GreenRoot, None, Eugine_Nier, Zvi, Pavitra, timtyler, ata
comment by Risto_Saarelma · 2010-10-03T07:36:52.230Z · LW(p) · GW(p)

You might want to put a big bold "please read the post before voting on the comments, this is a game where voting works differently" right at the beginning of your post, just in case people dive in without reading very carefully.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T07:40:49.202Z · LW(p) · GW(p)

Good suggestion, thank you.

comment by wedrifid · 2010-10-03T05:04:51.318Z · LW(p) · GW(p)

This post makes the recent comments thread look seriously messed up!

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:06:55.702Z · LW(p) · GW(p)

Sorry! Couldn't think of any other way to provide good incentives for organized insanity.

Replies from: wedrifid
comment by wedrifid · 2010-10-03T05:17:37.521Z · LW(p) · GW(p)

It wasn't a complaint. :)

comment by Alicorn · 2010-10-03T04:57:44.997Z · LW(p) · GW(p)

I recommend adding, up in the italicized introduction, a remark to the effect that in order to participate in this game one should disable any viewing threshold for negatively voted comments.

Replies from: wedrifid, Will_Newsome
comment by wedrifid · 2010-10-03T05:23:17.799Z · LW(p) · GW(p)

Or just click on the "negative voted" comments to see what they are...

comment by Will_Newsome · 2010-10-03T05:04:57.528Z · LW(p) · GW(p)

Right, damn, I forgot about that since I deactivated it. Thanks!

comment by Douglas_Knight · 2010-10-03T07:09:54.963Z · LW(p) · GW(p)

If anyone wants to do this again or otherwise use voting weirdly, it is probably a good idea to have everyone put a disclaimer at the beginning of their comment warning that it's part of the experiment, for the sake of the recent comments thread.
(I don't trust any of the scores on this post. At the very least, I expect people to vote up anything at -3 or below that doesn't sound insulting in isolation.)

I've felt for a while that LW has a pretty serious problem of people voting from the recent comments page without considering the context.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T08:39:35.758Z · LW(p) · GW(p)

The karma scores seem to have gotten closer to what I would have expected. Agree with your point though.

comment by Douglas_Knight · 2010-10-04T03:06:06.403Z · LW(p) · GW(p)

Aggregating accusations of overconfidence with underconfidence seems absurd to me.

Thus people should (and, I think, did) phrase their predictions to be accused of overconfidence, so that if I propose that Antipope Christopher would have been a good leader at 30%, it's not because I expect most people put it at 90%.

comment by GreenRoot · 2010-10-03T22:18:36.066Z · LW(p) · GW(p)

Great idea for a post. I've really enjoyed reading the comments and discussion they generated.

comment by [deleted] · 2010-10-03T04:07:05.557Z · LW(p) · GW(p)

At first I didn't think this was a good idea, but now I think it is brilliant. Bravo!

comment by Eugine_Nier · 2010-10-03T03:43:21.428Z · LW(p) · GW(p)

How about replying to posts with what you think the probability should be?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T03:46:03.360Z · LW(p) · GW(p)

Good idea, I'll suggest people do so in the post. That way you can see if people are more or less confident in your belief than you are.

comment by Zvi · 2010-10-06T19:27:37.721Z · LW(p) · GW(p)

I believe that the fact that we upvote for disagreement in either direction means it will be very hard to interpret the results. I think this game would have been more useful if the person making the claim made it clear which direction he felt disagreement was in and we only upvoted for disagreement in that direction.

comment by Pavitra · 2010-10-05T02:39:07.075Z · LW(p) · GW(p)

I thought I'd taken into account the probabilistic burdensomeness of being contrarian with respect to highly intelligent people, but after seeing some of the obviously wrong things here and the corresponding gross overconfidences, I feel considerably less certain.

I don't know whether the fact that actually seeing evidence I should have expected to see changes my probability-feeling means something profound and important about aliefs vs. beliefs, or whether it just means I'm bad at assigning confidence levels.

comment by timtyler · 2010-10-03T19:03:58.048Z · LW(p) · GW(p)

This sub-thread needs the word "META" in it somewhere! Incidentally, interesting game!

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T21:16:21.136Z · LW(p) · GW(p)

Incidentally, interesting game!

Thanks! Are you going to add any comments? I always got the impression from your comments that you had odd/interesting/unpopular ideas that I'd like to hear explained in better context.

comment by ata · 2010-10-03T03:51:04.822Z · LW(p) · GW(p)

Should upvotes go to comments where my probability estimate is significantly lower or higher, or just when mine is lower?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T03:55:33.420Z · LW(p) · GW(p)

Different in either direction, I'll note that in the post.

Replies from: Salivanth
comment by Salivanth · 2012-04-09T07:40:35.357Z · LW(p) · GW(p)

Gah. Deleted it because I figured nobody would still be playing. Reposting:

The Big Bang was not the event that created our universe. The real cause was a naturalistic event, which we have not yet theorised, due to lack of scientific knowledge. (15%)

comment by VAuroch · 2014-01-12T06:54:50.508Z · LW(p) · GW(p)

Pope Francis will do more good than harm in the world. (80%)

comment by Salivanth · 2012-04-13T08:57:16.839Z · LW(p) · GW(p)

Nobody has ever come up with the correct solution to how Eliezer Yudkowsky won the AI-Box experiment in less than 15 minutes of effort. (This includes Eliezer himself). (75%)

Replies from: None
comment by [deleted] · 2012-04-18T16:05:37.262Z · LW(p) · GW(p)

Well, no. The solution is definitely non-obvious, and I am also quite certain it took Eliezer himself a while to come up with a good strategy.

comment by tenshiko · 2010-10-23T03:38:45.381Z · LW(p) · GW(p)

I believe, with 90% certainty, that virtually perfect gender egalitarianism will not be achieved in the United States within my lifetime.

This depends on the assumption that I will only live at most about eighty more years, i.e. that the transhumanist revolution will not occur within that time and that I am either not frozen or fail to thaw. My belief in that assumption is 75%.

Replies from: Alicorn, wedrifid
comment by Alicorn · 2010-10-23T03:43:02.527Z · LW(p) · GW(p)

Define "virtually perfect gender egalitarianism".

Replies from: tenshiko
comment by tenshiko · 2010-10-23T04:17:57.072Z · LW(p) · GW(p)

I have to admit that I knew in my heart I should define it but didn't, mostly because I know that the tenets are purely subjective and there's no way I can cover everything that would be involved. Here are a couple points:

  1. No personality traits are considered acceptable in males and unacceptable in females, or vice versa. E.g. aggressiveness, confinement to the domestic sphere, sexual conquest.
  2. Gender is absent from your evaluation of a person's potential utility, except in specific cases where reproduction is relevant (e.g., concern about maternity leave). Even if it is conclusively proven that average men cannot work in business companies without getting into some kind of scandal eventually or that average women cannot think about math as seriously, that shouldn't affect your preconceptions of Jane Doe or John Smith.
  3. For the love of ice, please let the notion of the man as the default human just die, like it should have SO LONG AGO. PLEASE.

I hope this doesn't fall into a semantics controversy.

Replies from: Alicorn
comment by Alicorn · 2010-10-23T04:26:30.945Z · LW(p) · GW(p)
  1. "Considered" by whom? Can I have, say, an aesthetic preference about these things (suppose I think that women look better in aprons than men do, can I prefer on this obviously trivial basis that women do more of the cooking?), or is any preference about the division of traits amongst sexes a problem for this criterion?

  2. "Potential utility" meaning the utility that the person under consideration might experience/get, or might produce? Also, does this lack of preconception thing seem to you to be compatible with Bayesianism? If I have no reason to suspect that John and Jane are anything other than average, on what epistemic basis do I not guess that he is likelier (by the hypothetical proofs you suppose) to be better at math and more likely to cause scandal?

  3. So what gender should the default human be, or should we somehow have two defaults, or should the default human be one with a set of sex/gender characteristics that rarely appear together in the species, or should there be no default at all (in which case what will serve the purposes currently served by having a default)?

I'm totally in favor of gender egalitarianism as I understand it, but it seems a little wooly the way you've written it up here. I'm sincerely trying to figure out what you mean and I'll back off if you want me to stop.

Replies from: tenshiko, Relsqui
comment by tenshiko · 2010-10-23T14:11:55.415Z · LW(p) · GW(p)
  1. Perhaps an aesthetic preference isn't a problem (obviously there are certain physical traits that are attractive in one sex and not another, which does lend itself to certain aesthetic preferences). Note that I said "personality traits" - some division of other traits is inevitable. What upsets me about the current state of affairs is when one boy fights with another and it is dismissed as boys being boys, while any other combination of genders would probably result in disciplinary action. Or how general social trends (in Western cultures, at least) treat women wearing suits as commendable and increasingly ordinary, while a man in a dress is practically lynched.

  2. Potential utility produced, for your company or project. I think I phrased this one a little wonkily earlier - you're right, under the proofs I laid out, if all you know about John and Jane are their genders, then of course the Bayesian thing to do is assume John will be better at math. What I mean is more that, if you do know more about John and Jane, having had an interview or read a resume, the assumption that they necessarily reflect the averages of their gender is like not considering whether a woman's positive mammogram could be false. For an extreme example, the majority of homicides in many countries are committed by men. Should the employer therefore assume that Jane is less likely than John to commit such a crime, even if she has a criminal record? (A toy calculation after this list illustrates the point.)

  3. I don't see why having an ungendered default is so difficult, besides the linguistic dance associated with it in our language (and several others, but far from all of them), which is probably not going to be a problem for many more generations due to the increasing use of "they" as a singular pronoun. For instance, having a raceless or creedless default has proven not to be that hard, even if members of different races or creeds would react differently in such a situation. If one of the things I'm talking about actually happens in a cishuman lifetime, my bet would go on this one. Now, in situations where you need a more specific everyman, who goes to church every Sunday and has two children and a dog, there might be more use in a gendered, race-bearing, creed-bearing individual.
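A toy Bayes calculation of the point in item 2, with every number invented purely for illustration (none of these rates are real statistics):

```python
# Hypothetical numbers, chosen only to illustrate base rates vs. individual evidence.
P_VIOLENT_GIVEN_MAN = 0.002       # assumed base rate for men
P_VIOLENT_GIVEN_WOMAN = 0.0005    # assumed base rate for women
P_RECORD_GIVEN_VIOLENT = 0.50     # assumed: chance a violent person has a criminal record
P_RECORD_GIVEN_NONVIOLENT = 0.01  # assumed: chance a non-violent person has one

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# Jane has a criminal record; John does not.
p_jane = posterior(P_VIOLENT_GIVEN_WOMAN, P_RECORD_GIVEN_VIOLENT, P_RECORD_GIVEN_NONVIOLENT)
p_john = posterior(P_VIOLENT_GIVEN_MAN,
                   1.0 - P_RECORD_GIVEN_VIOLENT,      # P(no record | violent)
                   1.0 - P_RECORD_GIVEN_NONVIOLENT)   # P(no record | non-violent)

print(f"P(violent | woman with a record) = {p_jane:.4f}")   # ~0.024
print(f"P(violent | man with no record)  = {p_john:.4f}")   # ~0.001
# The individual evidence swamps the gender base rate -- the mammogram point above.
```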

Maybe I should just go back and say "where virtually perfect acknowledges that there are some immutable differences between the sexes but that all others with detrimental effect have been eradicated".

This is why it surprises me so much that the levels of communication post had so little focus on the level of values or potential misunderstandings that can occur on the level of facts due to the ambiguity of language. The value that I am trying to express, and which I assume that you are as well or something close to it, is that men and women should be treated equally, but completely equal treatment would be impractical and not equal in the terms of benefit conferred. (For example, growth of breasts in men should be taken as a health concern, not a sign of attractiveness.) So we are forced to add specifics to our definitions that make them less clear.

Unless you still think something is wrong or missing in my definition to the point that we're talking about significantly different things, I would appreciate it if we moved on from this aspect of the issue.

Replies from: None
comment by [deleted] · 2010-12-12T12:11:28.511Z · LW(p) · GW(p)

(obviously there are certain physical traits that are attractive in one sex and not another, which does lend itself to certain aesthetic preferences).

Some personality traits are considered attractive in one sex and not another.

Replies from: tenshiko
comment by tenshiko · 2010-12-12T22:33:54.247Z · LW(p) · GW(p)

As I implicitly stated, I don't think that personality traits, for the most part, should be considered attractive in one sex and not another. There are some physical traits, like long hair, whose attractiveness dimorphism is arbitrary, but I'm talking about physical traits that distinctly vary between males and females in whether they would be healthy. Like having pronounced mammary glands: that's obviously not a fertility marker in both sexes.

Replies from: RomanDavis
comment by RomanDavis · 2010-12-17T22:46:07.379Z · LW(p) · GW(p)

Are you sure this doesn't apply for personality traits as well?

Going into evopsych is so tempting right now, but the "just so story" practically writes itself.

Here's an alternative:

Major personality traits are associated with hormones produced by parts of our body that are formed through embryogenesis, based on our genes and the traits of our mother's womb. Since our reproductive organs are formed the same way, it would be very surprising to find there was no correlation between personality traits and fertility/virility, and it would be a major blow against your argument if that correlation turned out to be both strong and positive.

comment by Relsqui · 2010-10-23T05:27:28.956Z · LW(p) · GW(p)

in which case what will serve the purposes currently served by having a default

What are those purposes, anyway?

Replies from: Alicorn
comment by Alicorn · 2010-10-23T12:56:42.935Z · LW(p) · GW(p)

Literary "everyman" types, not needing to awkwardly dance around the use of gendered personal pronouns when talking about a hypothetical person of no specific traits besides defaults, and probably something I'm not remembering.

Replies from: Relsqui
comment by Relsqui · 2010-10-23T17:02:55.023Z · LW(p) · GW(p)

not needing to awkwardly dance around the use of gendered personal pronouns when talking about a hypothetical person of no specific traits besides defaults

How do you do that in English as it is now?

Replies from: Alicorn
comment by Alicorn · 2010-10-23T17:41:20.904Z · LW(p) · GW(p)

People say things like "Take your average human. He's thus and such." If you want to start a paragraph with "Take your average human" and not use gendered language, you have to say things like "They're thus and such" (sometimes awkward, especially if you're also talking about plural people or objects in the same paragraph) or "Ey's thus and such", which many people don't understand and others don't like.

Replies from: Vladimir_M, NancyLebovitz, Relsqui, Mercy
comment by Vladimir_M · 2010-10-23T18:29:51.349Z · LW(p) · GW(p)

Alicorn:

"Ey's thus and such"

I find these invented pronouns awful, not only aesthetically, but also because they destroy the fluency of reading. When I read a text that uses them, it suddenly feels like I'm reading some language in which I'm not fully fluent so that every so often, I have to stop and think how to parse the sentence. It's the linguistic equivalent of bumps and potholes on the road.

Replies from: JGWeissman
comment by JGWeissman · 2010-10-23T18:39:38.298Z · LW(p) · GW(p)

After reading one story that used these pronouns, I was sufficiently used to them that they do not impact my reading fluency.

Replies from: Transfuturist
comment by Transfuturist · 2013-08-11T05:51:28.063Z · LW(p) · GW(p)

Link?

Replies from: JGWeissman
comment by JGWeissman · 2013-08-11T06:48:23.459Z · LW(p) · GW(p)

The story was Alicorn's Damage Report.

comment by NancyLebovitz · 2010-12-12T14:55:19.434Z · LW(p) · GW(p)

I don't have an average human, and I don't think the universe does either. I think there's a lot to be said for not having a mental image of an average human.

Furthermore, since there are nearly equal numbers of male and female humans, gender is a trait where the idea of an average human is especially inaccurate.

I think the best substitute is "Take typical humans. They're thus and such." Your average alert listener will be ready to check on just how typical (modal?) those humans are.

Replies from: shokwave
comment by shokwave · 2010-12-12T15:32:54.036Z · LW(p) · GW(p)

Exactly. People make a fuss about a lack of singular nongendered pronouns. The plural nongendered pronouns are right there.

comment by Relsqui · 2010-10-23T17:53:23.615Z · LW(p) · GW(p)

Hmm. It's true, people do, but I think it's getting less common already. Were you asking, then, which of those alternatives the original commenter preferred?

Replies from: Alicorn
comment by Alicorn · 2010-10-23T17:54:52.462Z · LW(p) · GW(p)

Not really, I'm just pointing out that gendered language isn't a one-sided policy debate. (I favor a combination of "they" and "ey", personally, or creating specific example imaginary people who have genders).

Replies from: Relsqui
comment by Relsqui · 2010-10-23T18:30:42.201Z · LW(p) · GW(p)

Not sure what you mean about policy, but I think we're pretty far removed from the main point now, and don't actually disagree, so I'm disinclined to argue further. :)

comment by Mercy · 2010-10-23T19:09:11.500Z · LW(p) · GW(p)

How is "they" any more ambiguous than "you"? Both can easily qualified with "all".

Replies from: Relsqui
comment by Relsqui · 2010-10-23T20:07:47.914Z · LW(p) · GW(p)

It's not always grammatically feasible or elegant to do so. Also, the singular "you" is much more common than the singular "they," so your readers are more likely to expect it and are prepared for the potential ambiguity.

Replies from: Cyan
comment by Cyan · 2010-10-24T00:29:47.299Z · LW(p) · GW(p)

I often use "one" if I can get away with it grammatically and if it's not unbearably pompous. (As a result, I often (in my own judgment) end up sounding bearably pompous.)

comment by wedrifid · 2010-12-12T14:00:49.070Z · LW(p) · GW(p)

Upvoted for drastic underconfidence.

comment by lukstafi · 2011-06-27T13:20:19.317Z · LW(p) · GW(p)

The Friendliness component of AGI adds relatively little Kolmogorov complexity -- much less than the Kolmogorov complexity of the brain of a specific adult human. Very confident. (See here for the opposite statement.)

comment by avalot · 2010-10-04T16:23:32.045Z · LW(p) · GW(p)

Surprised that nobody has posted this yet...

"Self" is an illusion created by the verbal mind. The Buddhists are right about non-duality. The ego at the center of language alienates us to direct perception of gestalt, and by extension, from reality. (95%)

More bothersome: The illusion of "Self" might be an obstacle to superior intelligence. Enhanced intelligences may only work (or only work well) within a high-bandwidth network more akin to a Vulcan mind meld than to a salon conversation, one in which individuality is completely lost. (80%)

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-05T05:56:44.994Z · LW(p) · GW(p)

I read somewhere about the basis for consciousness or "self" being basically about being able to commit to acting towards a specific goal for a longer duration, instead of just being swamped by moment-to-moment sensory input. For example, being able to carry a hot bowl of soup to the table without dropping it midway when it starts burning one's fingers.

So upvote on the verbal mind thing, as long as we're talking about human minds here.

Replies from: gwern
comment by gwern · 2010-10-07T00:51:06.814Z · LW(p) · GW(p)

Maybe you got that from http://www.rifters.com/crawl/?p=791 about the PRISM theory of consciousness?

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-07T04:09:30.799Z · LW(p) · GW(p)

That's it, exactly.

comment by Morendil · 2010-10-03T16:57:46.395Z · LW(p) · GW(p)

The corporation, as such entities are legally defined in most countries at the present time, is a major contributor to a kind of "astronomical waste". Alternate forms for organizing trade exist that would require only human-level intelligence to find and would yield much greater total prosperity than does having the corporation as the unit of organization.

(Strong hunch, >70%)

Replies from: Perplexed, Kaj_Sotala, gwern, magfrump
comment by Perplexed · 2010-10-03T19:20:24.601Z · LW(p) · GW(p)

Upvoted for disagreement. People are inventive and resourceful. They have explored "organization space" pretty thoroughly. Many alternatives to corporations already exist and are functioning successfully. Any corporation producing "astronomical waste" will quickly be destroyed by corporate or non-corporate competitors.

Replies from: Morendil
comment by Morendil · 2010-10-03T19:31:27.080Z · LW(p) · GW(p)

Many successful alternatives to corporations already exist and are functioning successfully.

Such as?

Replies from: Perplexed
comment by Perplexed · 2010-10-03T20:05:42.856Z · LW(p) · GW(p)

Partnerships, sole proprietorships, co-ops, socialism (i.e. state-run enterprises). Much of the construction industry involves small firms coordinating the work of subcontractors. Same with advertising, cleaning services, etc.

comment by Kaj_Sotala · 2010-10-03T17:41:06.819Z · LW(p) · GW(p)

Upvoted for disagreement, but this sounds interesting. Say more?

Replies from: Morendil
comment by Morendil · 2010-10-03T18:31:27.743Z · LW(p) · GW(p)

I'm not even sure where to start; this documentary is a deliberately provocative exposition of some of the issues.

This suspicion of mine is more heavily fueled by personal experience though - I've seen so many decent people turn into bastards or otherwise abdicate moral responsibility when they found themselves at the helm of a company, no matter how noble their initial intentions.

Replies from: mattnewport, wnoise
comment by mattnewport · 2010-10-03T18:36:52.339Z · LW(p) · GW(p)

I've seen so many decent people turn into bastards or otherwise abdicate moral responsibility when they found themselves at the helm of a company, no matter how noble their initial intentions.

Do you think this is different from the general 'power corrupts' tendency? The same thing seems to happen to politicians for example.

comment by wnoise · 2010-10-03T19:50:54.983Z · LW(p) · GW(p)

How do you know they were decent people? Were they actually tested, or was running a corporation their first test? It's easy to be "decent" when there's nothing really at stake.

Replies from: Morendil
comment by Morendil · 2010-10-04T06:49:58.346Z · LW(p) · GW(p)

Good point. What I mean is that I knew them first as employees, and I heard them speak about their employers and how employers should behave, and inferred from that some values of theirs. When they became employers in turn and I saw these values tested, they failed these tests miserably.

comment by gwern · 2010-10-07T00:31:17.519Z · LW(p) · GW(p)

Voted down. Note that my interpretation is that your 'human-level intelligence' clause allows for tweaked uploads which could be almost arbitrarily re-engineered without exceeding human-level intelligence (for example, an organization made of the same mind replicated many times would be able to almost eliminate any internal controls, checks, balances, loafing, cheating etc. which take up so much of a modern corporation's energy). I think there could be far more efficient structures and minds optimized for them rather than monkey packs.

This says nothing about whether such organizations will come to exist, outcompete existing corporations or organizations; they may just be known as a possibility before the Singularity happens and renders the question 'what is possible with merely human-level intelligence' entirely moot.

comment by magfrump · 2010-10-03T18:37:07.666Z · LW(p) · GW(p)

My liberal roots are telling me to hate corporations and agree with you, but I don't think that actually constitutes agreement. I'm also curious to hear more.

comment by Vladimir_M · 2010-10-03T10:59:08.675Z · LW(p) · GW(p)

Utilitarianism is impossible to even formulate precisely in a logically coherent way. (Almost certain.)

Even if some coherent formulation of utilitarianism can be found, applying it in practice requires belief in fictional metaphysical entities. (Absolutely certain.)

Finally, as a practical philosophy, utilitarianism is pernicious because it represents exactly the sort of quasi-rational thinking that is apt to mislead otherwise very smart people into terrible folly. (Absolutely certain.)

Replies from: NancyLebovitz, Perplexed, wedrifid, mattnewport
comment by NancyLebovitz · 2010-10-03T19:20:24.044Z · LW(p) · GW(p)

What are the fictional metaphysical entities?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-10-03T20:21:09.270Z · LW(p) · GW(p)

I have in mind primarily the way "utility" is reified, especially in arguments that assume that cross-personal utility comparisons are meaningful. The subsequent leap over the is-ought problem typically also qualifies.

comment by Perplexed · 2010-10-03T20:01:43.655Z · LW(p) · GW(p)

Downvoted for agreement. This might make a good topic for a top-level posting.

Adding or averaging utilities of different people seems like adding apples and oranges to me. But be aware that at least one top-flight economist might disagree. John Harsanyi in this classic pdf.pdf).

Replies from: wnoise, Vladimir_M
comment by wnoise · 2010-10-03T20:32:54.011Z · LW(p) · GW(p)

I think you mean: http://darp.lse.ac.uk/papersdb/Harsanyi_(JPolE_55).pdf.pdf)

The markdown eats parentheses in an URL -- you have to escape it with a backslash: \).

comment by Vladimir_M · 2010-10-03T20:34:44.174Z · LW(p) · GW(p)

The link is broken -- I assume you mean this paper? (URLs with parentheses get messed up due to the odd markup syntax here.)

comment by wedrifid · 2010-10-04T04:51:09.052Z · LW(p) · GW(p)

Finally, as a practical philosophy, utilitarianism is pernicious because it represents exactly the sort of quasi-rational thinking that is apt to mislead otherwise very smart people into terrible folly. (Absolutely certain.)

Downvoted for this. (I'll not nitpick on 'absolutely certain' and I may have voted on the other parts differently if I thought they were important.)

comment by mattnewport · 2010-10-03T20:02:50.611Z · LW(p) · GW(p)

Agree with 1 and 3, not sure exactly what you mean with 2.

comment by [deleted] · 2010-10-03T03:45:16.087Z · LW(p) · GW(p)

Cryonics does not maximize expected utility. (approx. 65%)

Edit: wording changed for clarity

Edit #2: Correct wording should be "Cryonics does not maximize your (the reader's) expected utility. (approx. 65%)"

Replies from: magfrump, gwern, Relsqui, Will_Newsome
comment by magfrump · 2010-10-03T04:39:53.516Z · LW(p) · GW(p)

This is still exceptionally unclear to me. Also the reference class of "Less Wrong posters" doesn't distinguish between, for example, Less Wrong posters over 60 (I'd think a pretty good chance that it's a good investment) and Less Wrong posters under 25 (At the very least we should wait a decade).

I don't know if there are many (any?) LWers over 60 but I'm sure there are a few over 40 and a few under 20 and their utility from:

  • signing up for cryonics
  • getting a life insurance policy that covers cryonics
  • being frozen
  • being frozen conditional on being successfully revived

are all different.

Replies from: Perplexed, None
comment by Perplexed · 2010-10-03T05:41:38.662Z · LW(p) · GW(p)

I don't know if there are many (any?) LWers over 60

63

Uh, I mean one. Me

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-03T07:14:06.768Z · LW(p) · GW(p)

Very surprised. Cool.

There are a lot of really young people (15-20) actively commenting, I think. I'm ready to believe that such people are at least as clever as me (I'm 32).

comment by [deleted] · 2010-10-03T04:44:59.157Z · LW(p) · GW(p)

Doesn't everyone sign up based on the idea of "being frozen conditional on being successfully revived"? That's where all of the positive utility comes in. The process of signing up, etc. is a means to that end - an instrumental value. That seems like it should clear up most discrepancies within the reference class.

Replies from: magfrump, Will_Newsome
comment by magfrump · 2010-10-03T04:54:52.467Z · LW(p) · GW(p)

If I, as a 22-year old in very good health, were to be frozen right now, I would be sacrificing a large portion of my initial life. If I were 77 in good health, I might be looking into methods to get myself frozen so as to avoid having my body fall apart.

That is, the expected utility of freezing and revival varies widely, distinctly from the wide variation of expectations about the possibility of success or the financial impact.

So my agreement or disagreement would hinge on the demographics of the reference class. (In addition to my beliefs about cryonics AND my beliefs about medicine vs. charity)

Replies from: None
comment by [deleted] · 2010-10-03T05:28:09.016Z · LW(p) · GW(p)

Oh, I understand. I changed the wording as per Will_Newsome's suggestion.

Replies from: magfrump
comment by magfrump · 2010-10-03T05:43:50.517Z · LW(p) · GW(p)

Okay, downvoted in agreement now :)

comment by Will_Newsome · 2010-10-03T05:13:46.211Z · LW(p) · GW(p)

The reference class problem could be avoided by saying "Signing up for cryonics does not maximize your (the reader's) expected utility." The reference class then emerges naturally.

Replies from: None
comment by [deleted] · 2010-10-03T05:30:31.389Z · LW(p) · GW(p)

Yes, that would seem to solve the problem. Fixed.

Also: that also means that I should change the probability I assigned, but not significantly. I'd have to think about some of your arguments from your Abnormal Cryonics post a bit more.

comment by gwern · 2010-10-07T00:39:07.470Z · LW(p) · GW(p)

I read the comments, but I'm still not sure what you mean. Do you mean 'diverting 250k USD of expected consumption to cryonics' isn't maximizing? Then I'd have to downvote. Or 'if you were offered free cryonics whenever you happen to be dying, it would not maximize utility'? Then I'd have to upvote. And so on.

Replies from: None
comment by [deleted] · 2010-10-07T03:07:34.633Z · LW(p) · GW(p)

When I first wrote it, I meant spending one's own money to pay for cryonics for oneself. But I realize the scenario could be expanded to include a wide variety of choices. Take your pick.

Replies from: gwern
comment by gwern · 2010-10-07T03:09:30.060Z · LW(p) · GW(p)

No! I demand you pick a specific scenario!

Replies from: None
comment by [deleted] · 2010-10-07T04:00:30.715Z · LW(p) · GW(p)

Ok, then go with "signing myself up for cryonics does not increase my expected utility."

comment by Relsqui · 2010-10-03T07:14:27.214Z · LW(p) · GW(p)

Upvoted because I'm more confident about it than you are. :)

comment by Will_Newsome · 2010-10-03T03:48:24.312Z · LW(p) · GW(p)

I take it you mean cryonics won't lead to successful revival? It's interesting, 'cuz I think a lot of cryonauts would be more confident of that than you are, but think the low probability of a very high utility justifies the expense. 65% is thus a sort of odd figure here. I expect most people would be at around 90-99.9%. Disagreed because I am significantly more doubtful of the general chance of cryonic revival: 95%.

Replies from: None
comment by [deleted] · 2010-10-03T03:50:48.172Z · LW(p) · GW(p)

Oh wow, that was poorly phrased. What I meant was really closer to "cryonics will not maximize expected utility." I will rephrase that.

(I really need more sleep...)

Replies from: Sniffnoy
comment by Sniffnoy · 2010-10-03T03:53:28.983Z · LW(p) · GW(p)

But that could just be a preference... perhaps add a statement of for who?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T03:54:49.656Z · LW(p) · GW(p)

I'd interpret the who to mean 'Less Wrong commenters', since that's the reference class we're generally working with here.

Replies from: None
comment by [deleted] · 2010-10-03T03:57:17.987Z · LW(p) · GW(p)

That was the reference class I was referring to, but it really doesn't matter much in this case--after all, who wouldn't want to live through a positive Singularity?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T03:59:20.114Z · LW(p) · GW(p)

True, but a positive Singularity doesn't necessarily raise the cryonic dead. I'd bet against it, for one. (Figuring out whether I agree or disagree with you is making me think pretty hard right now. At least my post is working for me! I probably agree, though almost assuredly for different reasons than yours.)

Replies from: None
comment by [deleted] · 2010-10-03T04:05:12.094Z · LW(p) · GW(p)

My reasons for disagreement are as follows:

  1. I am not sure that the current cryonics technology is sufficient to prevent information-theoretic death.
  2. I am skeptical of the idea of "hard takeoff" for a seed AI.
  3. I am pessimistic about existential risk.
  4. I do not believe that a good enough seed AI will be produced for at least a few more decades.
  5. I do not believe any versions of the Singularity except Eliezer's (i.e. Moore's Law will not swoop in to save the day).
  6. Even an FAI might not wake the "cryonic dead" (I like that term, I think I'll steal it, haha).
  7. Cryonically preserved bodies may be destroyed before we have the ability to revive them.

...and a few more minor reasons I can't remember at the moment.

I'm curious, what are yours?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T04:12:20.464Z · LW(p) · GW(p)

My thoughts have changed somewhat since writing this post, but that's the general idea. It would be personally irrational for me to sign up for cryonics at the moment. I'm not sure if this extends to most LW people; I'd have to think about it more.

But even your list of low probabilities might be totally outweighed by the Pascalian counterargument: FAI is a lot of utility if it works. Why don't you think so?

By the way, I think it's really cool to see another RWer here! LW's a different kind of fun than RW, but it's a neat place.

Replies from: None
comment by [deleted] · 2010-10-03T04:20:04.937Z · LW(p) · GW(p)

I remember that post--it got me to think about cryonics a lot more. I agree with most of your arguments, particularly bullet point #3.

I do struggle with Pascal's Mugging--it seems to me, intuitively, that Pascal's Mugging can't be true (that is, in the original scenario, Pascal should not give up his money), but I can't find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale with the amount of money the mugger offers him, but I don't see a reason why this is always the case. So, while I can't defuse Pascal's Mugging, I am skeptical about its conclusion.

I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.

Replies from: rwallace, Will_Newsome
comment by rwallace · 2010-10-03T19:03:04.939Z · LW(p) · GW(p)

There is a reason to expect that it will scale in general.

To see why, first note that the most watertight formulation of the problem uses lives as its currency (this avoids issues like utility failing to scale linearly with money in the limit of large quantities). So, suppose the mugger offers to save N lives or create N people who will have happy lives (or threatens to kill N people on failure to hand over the wallet, if the target is a shortsighted utilitarian who doesn't have a policy of no deals with terrorists), for some suitably large N that on the face of it seems to outweigh the small probability. So we are postulating the existence of N people who will be affected by this transaction, of whom I, the target of the mugging, am one.

Suppose N = e.g. a trillion. Intuitively, how plausible is it that I just happen to be the one guy who gets to make a decision that will affect a trillion lives? More formally, we can say that, given the absence of any prior reason I should be in such an unusual position, the prior probability of this is 1 in N -- an improbability which does scale with N to match the increase in claimed utility.

Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can't be regarded as equivalent to a thousand century-long lives chained together in sequence?
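A sketch of the scaling argument above, with made-up numbers (the 1/N prior is the assumption stated in the comment; this is not a precise decision theory):

```python
# If the prior that you really are the pivotal decision-maker scales as 1/N
# (the assumption argued for above), the expected payoff of believing the mugger
# does not grow with the size of the promise.  Numbers are illustrative only.
for n in (10**6, 10**9, 10**12):
    prior = 1.0 / n              # P(the offer is genuine and I am the one pivotal person)
    expected_lives = prior * n   # claimed payoff weighted by its probability
    print(f"N = {n:.0e}: expected lives affected = {expected_lives:.1f}")
# The expected impact stays at 1.0 no matter how large N gets, so inflating the
# promised payoff buys the mugger nothing.
```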

BTW, what does RW refer to?

Replies from: None
comment by [deleted] · 2010-10-03T19:22:53.307Z · LW(p) · GW(p)

Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can't be regarded as equivalent to a thousand century-long lives chained together in sequence?

I'm not sure if we can write this off as a technical detail because we are formulating our prior based on it. What if we assume that we are talking about money and the mugger offers to give us an amount of money that is equivalent in terms of utility to creating N happy lives (assuming he knows your utility function)? If your reasoning is correct, then the prior probability for that would have to be the same as your prior for the mugger creating N happy lives, but since totally different mechanisms would be involved in doing so, this may not be true. That, to me, seems like a problem because we want to be able to defuse Pascal's Mugging in any general case.

BTW, what does RW refer to?

RW = RationalWiki

Replies from: rwallace
comment by rwallace · 2010-10-03T20:44:27.371Z · LW(p) · GW(p)

Well, there is no necessary reason why all claimed mechanisms must be equally probable. The mugger could say "I'll heal the sick with my psychic powers" or "when I get to the bank on Monday, I'll donate $$$ to medical research"; even if the potential utilities were the same and both probabilities were small, we would not consider the probabilities equal.

Also, the utility of money doesn't scale indefinitely; if nothing else, it levels off once the amount starts being comparable to all the money in the world, so adding more just creates additional inflation.

Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramids and lotteries, even if we were ignorant of the mechanics involved.

Replies from: None
comment by [deleted] · 2010-10-03T20:53:29.573Z · LW(p) · GW(p)

Well, there is no necessary reason why all claimed mechanisms must be equally probable.

That's why I don't think we can defuse Pascal's Mugging, since we can potentially imagine a mechanism for which our probability that the mugger is honest doesn't scale with the amount of utility the mugger promises to give. That would imply that there is no fully general solution to Bostrom's formulation of Pascal's Mugging. And that worries me greatly.

However:

Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramids and lotteries, even if we were ignorant of the mechanics involved.

This gives me a little bit of hope, since we might be able to use it as a heuristic when dealing with situations like these. That's not as good as a proof, but it's not bad.

Also:

The mugger could say "I'll heal the sick with my psychic powers" or "when I get to the bank on Monday, I'll donate $$$ to medical research"

Only on LessWrong does that sentence make sense and not sound funny :)

comment by Will_Newsome · 2010-10-03T04:30:40.986Z · LW(p) · GW(p)

I do struggle with Pascal's Mugging--it seems to me, intuitively, that Pascal's Mugging can't be true (that is, in the original scenario, Pascal should not give up his money), but I can't find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale with the amount of money the mugger offers him, but I don't see a reason why this is always the case. So, while I can't defuse Pascal's Mugging, I am skeptical about its conclusion.

Ah, Pascal's mugging is easy, decision theoretically speaking: cultivate the disposition of not negotiating with terrorists. That way they have no incentive to try to terrorize you -- you won't give them what they want no matter what -- and you don't incentivize even more terrorists to show up and demand even bigger sums.

But other kinds of Pascalian reasoning are valid, like in the case of cryonics. I don't give Pascal's mugger any money, but I do acknowledge that in the case of cryonics, you need to actually do the calculation: no decision theoretic disposition is there to invalidate the argument.
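A toy payoff comparison of the "no negotiating" disposition above, with invented numbers (a sketch of the incentive argument only, not a formalization of any particular decision theory):

```python
# From the would-be mugger's side: is it worth making the threat at all?
# All numbers are invented for illustration.
COST_OF_ATTEMPT = 1.0    # effort spent approaching a target and making the threat
LOOT = 100.0             # what the mugger gains if the target hands over the money

def mugger_expected_profit(p_target_pays: float) -> float:
    """Expected profit of attempting the mugging, given the chance the target pays."""
    return p_target_pays * LOOT - COST_OF_ATTEMPT

# Target with no known policy: the mugger can hope for some chance of payment.
print(mugger_expected_profit(0.05))   #  4.0  -> worth attempting
# Target known to never negotiate: payment probability ~0 regardless of threat size.
print(mugger_expected_profit(0.0))    # -1.0  -> no incentive to try
```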

I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.

I'm almost never there anymore... I know this is a dick thing to say, but it's not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it's been effectively replaced.

Replies from: None
comment by [deleted] · 2010-10-03T04:40:36.311Z · LW(p) · GW(p)

Ah, Pascal's mugging is easy, decision theoretically speaking: cultivate the disposition of not negotiating with terrorists.

I understand this idea--in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.

But other kinds of Pascalian reasoning are valid, like in the case of cryonics. I don't give Pascal's mugger any money, but I do acknowledge that in the case of cryonics, you need to actually do the calculation: no decision theoretic disposition is there to invalidate the argument.

This is what I was afraid of: we can't do anything about Pascal's Mugging with respect to purely epistemic questions. (I'm still not entirely sure why, though--what prevents us from treating cryonics just like we would treat the mugger?)

I'm almost never there anymore... I know this is a dick thing to say, but it's not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it's been effectively replaced.

Ha, Trent's essay was what introduced me to Bayes as well! And unless I remember incorrectly RW introduced me to LW because someone linked to it somewhere on a talk page. I know what you mean, though--LW and RW have very different methods of evaluating ideas, and I'm suspicious of the heuristics RW uses sometimes. (I am sometimes suspicious here too, but I realize I am way out of my depth so I'm not quick to judge.) RW tends to use labels a bit too much--if an idea sounds like pseudoscience, then they automatically believe it is. Or, if they can find a "reliable" source claiming that someone is a fraud, then they assume he/she is.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:03:53.546Z · LW(p) · GW(p)

I understand this idea--in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.

Eliezer finally published TDT a few days ago, I think it's up at the singinst.org site by now. Perhaps we should announce it in a top level post... I think we will.

This is what I was afraid of: we can't do anything about Pascal's Mugging with respect to purely epistemic questions. (I'm still not entirely sure why, though--what prevents us from treating cryonics just like we would treat the mugger?)

Cryonics isn't an agent we have to deal with. Pascal's Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there's no one to argue with: either cryonics works, or it doesn't. We just have to figure it out.

The invalidity of paying Pascal's mugger doesn't have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, improbable or not, large or small.

And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page.

Might it have been here? That's where I was first introduced to LW and Eliezer.

(I am sometimes suspicious here too, but I realize I am way out of my depth so I'm not quick to judge.)

Any ideas/heuristics you're suspicious of specifically? If there was a Less Wrong and an SIAI belief dichotomy I'd definitely fall in the SIAI belief category, but generally I agree with Less Wrong. It's not exactly a fair dichotomy though; LW is a fun online social site whereas SIAI folk are paid to be professionally rational.

Replies from: wedrifid, None, None
comment by wedrifid · 2010-10-03T05:31:13.414Z · LW(p) · GW(p)

that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, negative sum or not, large or small.

The second 'negative sum' seems redundant...

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:34:03.070Z · LW(p) · GW(p)

Are you claiming that 100% of negative sum interactions are negative sum?! 1 is not a probability! ...just kidding. I meant 'improbable or not'.

Replies from: wedrifid
comment by wedrifid · 2010-10-03T05:47:48.801Z · LW(p) · GW(p)

Come to think of it, negative sum isn't quite the right phrase. Rational agents do all sorts of things in negative sum contexts. They do, for example, pay protection money to the thieves guild, even though robbing someone is negative sum. It isn't the sum that needs to be negative. The payoff to the other guy must be negative AND the payoff to yourself must be negative.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:56:54.544Z · LW(p) · GW(p)

That's true. Negative expected value is what I really mean. I'm too lazy to edit it though.

comment by [deleted] · 2010-10-03T05:22:46.119Z · LW(p) · GW(p)

If there was a Less Wrong and an SIAI belief dichotomy I'd definitely fall in the SIAI belief category, but generally I agree with Less Wrong.

I guess I'm not familiar enough with the positions of LW and SIAI--where do they differ?

comment by [deleted] · 2010-10-03T05:15:47.196Z · LW(p) · GW(p)

Eliezer finally published TDT a few days ago, I think it's up at the singinst.org site by now.

Excellent, that'll be a fun read.

Cryonics isn't an agent we have to deal with. Pascal's Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there's no one to argue with: either cryonics works, or it doesn't. We just have to figure it out. The invalidity of paying Pascal's mugger doesn't have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, negative sum or not, large or small.

I'm still not sure if I follow this--I'll have to do some more reading on it. I still don't see how the two situations are different--for example, if I was talking to someone selling cryonics, wouldn't that be qualitatively the same as Pascal's Mugging? I'm not sure.

Might it have been here? That's where I was first introduced to LW and Eliezer.

Unfortunately no, it was here. I didn't look at that article until recently.

Any ideas/heuristics you're suspicious of specifically?

That opens a whole new can of worms that it's far too late at night for me to address, but I'm thinking of writing a post on this soon, perhaps tomorrow.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:26:48.004Z · LW(p) · GW(p)

I still don't see how the two situations are different--for example, if I was talking to someone selling cryonics, wouldn't that be qualitatively the same as Pascal's Mugging?

Nah, the cryonics agent isn't trying to mug you! (Er, hopefully.) He's just giving you two options and letting you calculate.

In this case of Pascal's Mugging both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don't care. Unless they find joy in torturing people (then you're screwed) they have no incentive to actually use up the resources to go through with it. So they leave you alone, 'cuz you won't budge.

Cryonics is a lot simpler in its nature, but a lot harder to calculate. You have two options, and the options are given to you by reality, not an agent you can outwit. (Throwing in a cryonics agent doesn't change anything.) When you have to choose between the binary cryonics versus no cryonics, it's just a matter of seeing which decision is better (or worse). It could be that both are bad, like in the Pascal's mugger scenario, but in this case you're just screwed: reality likes to make you suffer, and you have to take the best possible world. Telling reality that it can go ahead and give you tons of disutility doesn't take away its incentive to give you tons of disutility. There's no way out of the problem.
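To make the asymmetry concrete, here is a minimal toy sketch; the payoff numbers and function names are invented purely for illustration, not anyone's formal decision theory:

```python
# Toy illustration of the asymmetry: the mugger's incentive depends on your policy,
# while the cryonics number is just an expected-value estimate you have to make.
# All figures below are arbitrary placeholders.

def mugger_bothers_you(your_policy_is_to_pay: bool) -> bool:
    # A blackmailer who doesn't enjoy carrying out threats only wastes resources
    # threatening people whose policy is to pay up.
    return your_policy_is_to_pay

def expected_loss_to_muggers(your_policy_is_to_pay: bool, demanded: float = 10.0) -> float:
    return demanded if mugger_bothers_you(your_policy_is_to_pay) else 0.0

def cryonics_expected_value(p_works: float, value_if_revived: float, cost: float) -> float:
    # No agent reacts to your disposition here; you just have to estimate the number.
    return p_works * value_if_revived - cost

print(expected_loss_to_muggers(True))                     # 10.0 -- paying invites the threat
print(expected_loss_to_muggers(False))                    # 0.0  -- refusing removes the incentive
print(cryonics_expected_value(0.05, 1_000_000, 100_000))  # -50000.0 with these toy numbers
```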

That opens a whole new can of worms that it's far too late at night for me to address, but I'm thinking of writing a post on this soon, perhaps tomorrow.

Cool! Be careful not to generalize too much, though: there might be bad general trends, but no one likes to be yelled at for things they didn't do. Try to frame it as humbly as possible, maybe. Sounding unsure of your position when arguing against LW norms gets you disproportionately large amounts of karma. Game the system!

Replies from: None
comment by [deleted] · 2010-10-03T05:38:51.218Z · LW(p) · GW(p)

In this case of Pascal's Mugging both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don't care.

That works for the LW version of the problem (and I understand why it does), but not for Bostrom's original formulation. In that version the mugger claims to have magic powers and will give Pascal quadrillions of utility if he hands over his wallet. This means that the mugger avoids the rule "ignore all threats of blackmail but accept positive-sum trades." That's why it looks so much like cryonics to me, and therein lies the problem.

Sounding unsure of your position when arguing against LW norms gets you disproportionately large amounts of karma. Game the system!

Will do! I obviously don't want to sound obnoxious; there's no reason to be rude about rationality.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:46:02.296Z · LW(p) · GW(p)

In that version the mugger claims to have magic powers and will give Pascal quadrillions of utility if he hands over his wallet.

Oh, sorry! In that case all my talk was egregious. That sounds like a much better problem whose answer isn't immediately obvious to me. I shall think about it.

Replies from: None
comment by [deleted] · 2010-10-03T05:51:20.597Z · LW(p) · GW(p)

That sounds like a much better problem whose answer isn't immediately obvious to me.

Yep, that's the problem I've been struggling with. Like I said, it would help if Pascal's disbelief in the mugger's powers scaled with the utility the mugger promises him, but there's not always a reason for that to be so. In any case, it might help to look at Bostrom's version. And do let me know if you come up with anything, since this one really bothers me.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T05:55:25.403Z · LW(p) · GW(p)

In any case, it might help to look at Bostrom's version. And do let me know if you come up with anything, since this one really bothers me.

Thanks for pointing this out, I'm shocked I hadn't heard of it. I'll let you know if I think up something. If I can't, I'll ask a decision theory veteran, they're sure to know.

Replies from: None
comment by [deleted] · 2010-10-03T05:57:13.135Z · LW(p) · GW(p)

If I can't, I'll ask a decision theory veteran, they're sure to know.

I'm not so sure, but I certainly hope someone knows.

comment by magfrump · 2010-10-03T04:43:38.446Z · LW(p) · GW(p)

When it is technologically feasible for our descendants to simulate our world, they will not because it will seem cruel (conditional on friendly descendants, such as FAI or successful uploads with gradual adjustments to architecture). I would be surprised if it were different, but not THAT surprised. (~70%)

Replies from: Relsqui, Will_Newsome, Eugine_Nier
comment by Relsqui · 2010-10-03T07:18:58.025Z · LW(p) · GW(p)

I agree with you up 'til the first comma.

ETA: ... the only comma, I guess.

comment by Will_Newsome · 2010-10-03T05:12:03.075Z · LW(p) · GW(p)

Upvoted for disagreement: postulating that most of my measure comes from simulations helps resolve a host of otherwise incredibly confusing anthropic questions.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-03T07:12:16.383Z · LW(p) · GW(p)

I'm sure there's more to it than came across in that sentence, but that sounds like shaky grounds for belief.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T07:23:09.811Z · LW(p) · GW(p)

Scientifically it's bunk but Bayesically it seems sound to me. A simple hypothesis that explains many otherwise unlikely pieces of evidence.

That said, I do have other reasons, but explaining the intuitions would not fit within the margins of my time.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-03T18:00:00.364Z · LW(p) · GW(p)

I like thinking about being in a simulation, and since it makes no practical difference (except if you go crazy and think it's a good idea to test every possible means of 'praying' to any possible interested and intervening simulator god), I don't think we need to agree on the odds that we are simulated.

However, I'd say that it seems impossible to me to defend any particular choice of prior probability for the simulation vs. non-simulation cases. So while it matters how well such a hypothesis explains the data, I have no idea if I should be raising p(simulation) by 1000db from -10db or from -10000000db. If you have 1000db worth of predictions following from a disjunction over possible simulations, then that's of course super interesting and amusing even if I can't decide what my prior belief is.
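For concreteness, a minimal sketch of the arithmetic behind those decibel figures, using the usual 10 * log10(odds) convention; the specific numbers are just the ones mentioned above:

```python
import math

def db_to_prob(db: float) -> float:
    """Convert decibels of evidence (10 * log10 of the odds) into a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior_a = -10          # prior odds of 1:10, i.e. p ~ 0.09
prior_b = -10_000_000
evidence = 1000        # 1000 db of evidence multiplies the odds by 10^100

print(db_to_prob(prior_a + evidence))  # ~1.0: the update overwhelms a -10 db prior
print(db_to_prob(prior_b + evidence))  # ~0.0: still negligible against a -10,000,000 db prior
```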

comment by Eugine_Nier · 2010-10-03T04:52:26.849Z · LW(p) · GW(p)

Upvoted because I disagree with your first statement.

Assuming reasonably complex values of "simulate", i.e., Second Life doesn't count.

comment by blogospheroid · 2010-10-05T05:25:05.385Z · LW(p) · GW(p)

In the pre-AI era, David Brin's transparent society (sousveillance) is our best solution to the "who watches the watchmen" problem. (~95% confident)

Replies from: wedrifid, Vladimir_M
comment by wedrifid · 2010-10-05T05:32:36.996Z · LW(p) · GW(p)

Sounds interesting and I would probably agree. Could you give an explanation on Brin's suggestion or is "transparent society" sufficient for me to get the gist?

Replies from: blogospheroid
comment by blogospheroid · 2010-10-05T05:46:18.714Z · LW(p) · GW(p)

I found the Wikipedia link pretty instructive.

http://en.wikipedia.org/wiki/The_Transparent_Society

You can read other reviews of his ideas as well. The idea is constant surveillance and sousveillance woven into the fabric of life. Where privacy is required, e.g. to guard secrets, the camera footage goes into an archive which gets read and released as per pre-decided guidelines: e.g., 10 years for a patent, or 2 years for a psychiatric consultation.

If someone watches too many videos of ordinary people in a creepy way, they are boycotted by others (since their watching is also watched). The societal equilibrium shifts to holding fewer secrets.

comment by Vladimir_M · 2010-10-05T05:50:56.077Z · LW(p) · GW(p)

There was a discussion of this issue in this thread from a few months ago, which you might be interested to check out if you haven't seen it already:

http://lesswrong.com/lw/1ay/is_cryonics_necessary_writing_yourself_into_the/26u5

Replies from: blogospheroid
comment by blogospheroid · 2010-10-05T06:39:48.761Z · LW(p) · GW(p)

Good discussion. As per my post, you already know which side of the discussion I fall on. I believe that the camera is here anyway, and it is wielded by the elites. Turning the camera around seems to be a much better solution than anything else. Expecting that it will not be used, or that it can be prohibited, is irrational.

comment by JenniferRM · 2010-10-04T21:04:23.132Z · LW(p) · GW(p)

People who proclaim that there is something very special about "human conscious experience" or "sentience", are mostly just revealing how bad they are at deploying value-attributing empathy when they try to understand a world brimming with differently configured optimization processes. (35%, but I think of this as a high estimate given how many burdensome details are in the theory and the base rate for confusion about "consciousness".)

As per the game (grr... which I just lost), vote up if you think I'm wrong, down otherwise.

comment by nazgulnarsil · 2010-10-04T07:11:19.571Z · LW(p) · GW(p)

Colonialism on average increased the standard of living of those living under colonialist regimes. (80%)

Replies from: LucasSloan
comment by LucasSloan · 2010-10-04T08:46:03.027Z · LW(p) · GW(p)

Over what time scale? Measured against what counterfactual? Relative to the option of "everything became frozen in time", colonialism is good (now), but it did really huge amounts of damage in the meantime.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2010-10-04T10:57:39.731Z · LW(p) · GW(p)

Measured against the same areas prior to colonization and post-independence.

Replies from: khafra
comment by khafra · 2010-10-04T14:08:34.737Z · LW(p) · GW(p)

Is this a naive comparison of, say, India an hour before Vasco da Gama landed in 1498 to an hour after the British ceded them independence in 1947? Because most people's standard of living increased between 1498 and 1947; I don't think you could say colonialism was responsible for that.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2010-10-08T17:39:35.668Z · LW(p) · GW(p)

The comparison of course adjusts for the background trend (the average increase in world SoL during the time period).

comment by 79zombies · 2011-03-25T01:03:25.900Z · LW(p) · GW(p)

You will downvote this comment (Not confident at all - 0%).

comment by Jordan · 2010-10-05T02:06:14.984Z · LW(p) · GW(p)

Humans are not utility maximizers; they don't have a utility function, nor is there an implicit utility function hidden somewhere.

What a human would want under self reflection and increased intelligence is inextricably linked to external stimulus. (80%)

Replies from: Dre
comment by Dre · 2010-10-05T03:31:38.907Z · LW(p) · GW(p)

In the sense that there are multiple equilibria, or that there is no equilibrium for reflection?

Replies from: Jordan
comment by Jordan · 2010-10-05T04:20:47.770Z · LW(p) · GW(p)

Either would qualify, although I put a higher chance on multiple equilibria.

comment by Risto_Saarelma · 2010-10-03T19:16:20.784Z · LW(p) · GW(p)

There will be no plausible, complete, ready-to-implement theory for friendly artificial intelligence good enough for making a safe singleton AI, regardless of the state of artificial intelligence research in general, by 2100. (90%)

Replies from: Snowyowl, wedrifid, Will_Newsome, Liron
comment by Snowyowl · 2010-10-16T21:04:20.971Z · LW(p) · GW(p)

Voted up for underconfidence. 90% seems low :)

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-16T22:11:52.272Z · LW(p) · GW(p)

90 years has room for a lot of compound weird.

comment by wedrifid · 2010-10-04T04:46:53.365Z · LW(p) · GW(p)

Downvoted for agreement. (But 90% makes trying to do the impossible well and truly worth it.)

Replies from: Jordan, Jordan
comment by Jordan · 2010-10-05T02:29:05.823Z · LW(p) · GW(p)

Agreed. I would add that the 90% also makes trying to look for alternative paths to a positive singularity truly worth it (whole brain emulation, containment protocols for unfriendly AI, intelligence enhancement, others?)

Replies from: wedrifid
comment by wedrifid · 2010-10-05T05:06:23.526Z · LW(p) · GW(p)

Agreed. I would add that the 90% also makes trying to look for alternative paths to a positive singularity truly worth it (whole brain emulation, containment protocols for unfriendly AI, intelligence enhancement, others?)

Worth investigating as a possibility. In some cases, I suggest, that may lead us to actively working to thwart searches that would create a negative singularity.

comment by Jordan · 2010-10-05T02:24:56.823Z · LW(p) · GW(p)

Your confidence in a simulation universe has shaded many of your responses in this thread. You've stated you're unwilling to expend the time to elaborate on your certainty, so instead I'll ask: does your certainty affect decisions in your actual life?

Replies from: wedrifid
comment by wedrifid · 2010-10-05T05:01:37.388Z · LW(p) · GW(p)

Your confidence in a simulation universe has shaded many of your responses in this thread. You've stated you're unwilling to expend the time to elaborate on your certainty,

I'm honestly confused. Are you mistaking me for someone else? I know Will and at least one other guy have mentioned such predictions. I don't have confidence in a simulation universe and most likely would expend time to discuss it.

so instead I'll ask: does your certainty affect decisions in your actual life?

I'll consider the question as a counterfactual and suppose that I would let it affect my decisions somewhat. I would obviously consider whether or not it was worth expending resources to hack the matrix, so to speak. Possibly including hacking the simulators if that is the most plausible vulnerability. But I suspect I would end up making similar decisions to the ones I make now.

The fact that there is something on the outside of the sim doesn't change what is inside it, so most of life goes on. Then, the possibility of influencing the external reality is one that is probably best exploited by creating an FAI to do it for me.

When it comes to toy problems, such as when dealing with superintelligences that say they can simulate me, I always act according to whatever action will most benefit the 'me' that I care about (usually the non-simmed me, if there is one). This gives some insight into my position.

Replies from: Jordan
comment by Jordan · 2010-10-05T22:55:30.917Z · LW(p) · GW(p)

Sorry! My comment was intended for Will_Newsome. Thank you for answering it anyway though, instead of just calling me an idiot =D

comment by Will_Newsome · 2010-10-03T21:04:46.740Z · LW(p) · GW(p)

Upvoted for disagreement; this universe computation is probably fun theoretic, and I think a tragic end would be cliche.

Replies from: Jordan
comment by Jordan · 2010-10-05T22:57:25.540Z · LW(p) · GW(p)

I accidentally asked this of wedrifid above, but it was intended for you:

Your confidence in a simulation universe has shaded many of your responses in this thread. You've stated you're unwilling to expend the time to elaborate on your certainty, so instead I'll ask: does your certainty affect decisions in your actual life?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-05T23:18:26.938Z · LW(p) · GW(p)

You've stated you're unwilling to expend the time to elaborate on your certainty, so instead I'll ask: does your certainty affect decisions in your actual life?

(About the unwillingness to expend time to elaborate: I really am sorry about that.)

Decisions? ...Kind of. In some cases, the answer is trivially yes, because I decide to spend a lot of time thinking about the implications of being in the computation of an agent whose utility function I'm not sure of. But that's not what you mean, I know.

It doesn't really change my decisions, but I think that's because I'm the kind of person who'd be put in a simulation. Or, in other words, if I weren't already doing incredibly interesting things, I wouldn't have heard of Tegmark or the simulation argument, and I would have significantly less anthropic evidence to make me really pay attention to it. (The anthropic evidence is in no way a good source of argument or belief, but it forces me to pay attention to hypotheses that explain it.) If by some weird counterfactual miracle I'd determined I was in a simulation before I was trying to do awesome things, then I'd switch to trying to do awesome things, as people who do awesome things probably have more measure, and more measure lets me better achieve my goals. But it's not really possible to do that, because you only have lots of measure (observer moments) in the first place if you're doing simulation-worthy things. (That's the part where anthropics comes in and mucks things up, and probably where most people would flat out disagree with me; nonetheless, it's not that important for establishing >95% certainty in non-negligible simulation measure.) This is the point where hypotheses of structural uncertainty or the outside view like "I'm a crazy narcissist" and "everything I know is wrong" are most convincing. (Though I still haven't presented the real arguments these are counterarguments against.)

So to answer your question: no, but for weird self-fulfilling reasons.

Replies from: Jordan
comment by Jordan · 2010-10-08T03:00:43.030Z · LW(p) · GW(p)

Interesting. Your reasoning in the counterfactual miracle is very reminiscent of UDT reasoning on Newcomb's problem.

Thanks for sharing. If you ever take the time to lay out all your reasons for having >95% certainty in a simulation universe, I'd love to read it.

comment by Liron · 2010-10-03T20:24:04.115Z · LW(p) · GW(p)

:(

comment by Eugine_Nier · 2010-10-03T21:20:35.424Z · LW(p) · GW(p)

The obligatory libertarian counterpart to this.

Governments, and government organizations/departments/bureaucracies, are a major contributor to a kind of "astronomical waste".

Reducing both the size of governments, i.e., unloading more government functions onto the private sector, and the size of countries, i.e., breaking up large countries into several smaller ones (if not completely eliminating governments, though I'm not quite sure what to replace them with), would yield greater individual prosperity.

One piece of evidence for the second is to notice how nations with small populations tend to cluster near the top of lists of countries by per-capita-GDP.

75%

Replies from: toto, Elithrion
comment by toto · 2010-10-06T11:54:36.834Z · LW(p) · GW(p)

One piece of evidence for the second is to notice how nations with small populations tend to cluster near the top of lists of countries by per-capita-GDP.

1) So do nations with very high taxes, e.g. the Nordic countries (or most of Western Europe, for that matter).

One of the outliers (Ireland) has probably been knocked down a few places recently, as a result of a worldwide crisis that might well be the result of excessive deregulation.

2) In very small countries, one single insanely rich individual will make a lot of difference to average wealth, even if the rest of the population is very poor. I think Brunei illustrates the point. So I'm not sure the supposedly high rank of small countries is indicative of anything (median GDP would be more useful).

3) There are many small-population countries at the bottom of the chart too.

Upvoted.

comment by Elithrion · 2013-01-22T03:07:47.550Z · LW(p) · GW(p)

One piece of evidence for the second is to notice how nations with small populations tend to cluster near the top of lists of countries by per-capita-GDP.

That's probably mostly a statistical artifact of small nations being more numerous and having greater variance in conditions rather than a significant governance difference. There are also many non-statistical confounding factors, such as nations which allow/promote immigration (slightly) tending toward more "average" per-capita GDPs and higher populations; small nations having more homogeneous populations, which increases social stability (and which may not be possible to replicate by a breakup of large nations); some small nations perhaps undergoing a sort of "gentrification", wherein poorer inhabitants choose to leave due to high prices, while wealthier ones move there (not as sure this one is valid).
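A toy simulation of that selection effect; the country counts, means, and variances below are invented solely to illustrate the statistical point, not to model real data:

```python
import random

random.seed(0)

# Same underlying mean "quality" for everyone; small nations are simply more
# numerous and have higher variance in per-capita GDP.
small = [random.gauss(mu=20_000, sigma=15_000) for _ in range(150)]
large = [random.gauss(mu=20_000, sigma=5_000) for _ in range(15)]

ranked = sorted([(g, "small") for g in small] + [(g, "large") for g in large], reverse=True)
top10 = [label for _, label in ranked[:10]]
print(top10.count("small"), "of the top 10 are small nations")  # usually 9 or 10
```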

Overall still downvoted for general agreement.

comment by Alicorn · 2010-10-03T03:09:44.546Z · LW(p) · GW(p)

I think there is no values-preserving representation of any human's approximation of a utility function according to which risk neutrality is unambiguously rational. (70%)

Replies from: torekp, Perplexed, Douglas_Knight, magfrump
comment by torekp · 2010-10-03T16:41:10.663Z · LW(p) · GW(p)

You imply that of the billions of varied human personalities, none have rational goal-seeking that can be described by such a utility function. Had you restricted it to most humans, I would agree. Upvoted.

Replies from: Alicorn
comment by Alicorn · 2010-10-03T17:15:46.538Z · LW(p) · GW(p)

That's my other major source of uncertainty.

comment by Perplexed · 2010-10-03T05:02:45.006Z · LW(p) · GW(p)

Is this the same as saying that everyone is either risk averse or risk seeking about something?

Replies from: Alicorn
comment by Alicorn · 2010-10-03T14:16:28.688Z · LW(p) · GW(p)

No; humans are dumb, and even if there were a risk-seeking or risk-neutral person running around, that wouldn't mean it would necessarily be rational for them to be so.

comment by Douglas_Knight · 2010-10-03T04:42:01.893Z · LW(p) · GW(p)

I think there is no values-preserving representation of any human's approximation of a utility function according to which risk neutrality is unambiguously rational.

Could you clarify this?
I think you are saying that human values are not well-described by a utility function (and stressing certain details of the failure), but you seem to explicitly assume a good approximation by a utility function, which makes me uncertain.

Risk neutrality is often used with respect to a resource. But if you just want to say that humans are not risk-neutral about money, there's no need to mention representations; you can just talk about preferences.
So I think you're talking about risk neutrality with respect to putative utiles. But to be a utility function, to satisfy the vNM axioms, is exactly risk neutrality about utiles. If one satisfies the axioms, the way one reconstructs the utility function is by risk-neutrality with respect to a reference utile.

I propose:

I think there is no numeric representation of any human's values according to which risk neutrality is unambiguously rational.

Am I missing the point?
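For what it's worth, a toy sketch of the reconstruction described above; the square-root utility and the dollar amounts are arbitrary stand-ins, chosen only to show the mechanics:

```python
import math

def money_utility(x: float) -> float:
    # An arbitrary concave "attitude toward money" -- this agent is risk averse in dollars.
    return math.sqrt(x)

worst, best = 0.0, 200.0

def vnm_utile(x: float) -> float:
    # The utile assigned to a sure $x is the probability p that makes the agent
    # indifferent between $x for sure and a lottery of p*best + (1-p)*worst.
    return (money_utility(x) - money_utility(worst)) / (money_utility(best) - money_utility(worst))

print(vnm_utile(100))  # ~0.707: a sure $100 is treated like a 70.7% shot at $200
# Lotteries are then ranked by linear expectation over these utiles -- "risk neutrality
# about utiles" -- even though the same agent is risk averse when stakes are in dollars.
```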

Replies from: Alicorn, magfrump
comment by Alicorn · 2010-10-03T04:52:03.152Z · LW(p) · GW(p)

I don't think that human values are well described by a utility function if, by "utility function", we mean "a function which an optimizing agent will behave risk-neutrally towards". If we mean something more general by "utility function", then I am less confident that human values don't fit into one.

Replies from: timtyler, Eugine_Nier
comment by timtyler · 2010-10-03T12:11:09.449Z · LW(p) · GW(p)

It seems challenging to understand you. What does it mean to behave risk-neutrally towards a function? To behave risk-neutrally, there has to be an environment with some potential risks in it.

Replies from: Alicorn
comment by Alicorn · 2010-10-03T14:18:02.049Z · LW(p) · GW(p)

...It seems challenging to understand you, too. Everything that optimizes for a function needs an environment to do it in. Indeed, any utility function extracted from a human's values would make sense only relative to an environment with risks in it, whether the agent trying to optimize that function is a human or not, risk-neutral or not. So what are you asking?

Replies from: timtyler
comment by timtyler · 2010-10-03T14:37:09.392Z · LW(p) · GW(p)

I was trying to get you to clarify what you meant.

As far as I can tell, your reply makes no attempt to clarify :-(

"Utility function" does not normally mean:

"a function which an optimizing agent will behave risk-neutrally towards".

It means the function which, when maximised, explains an agent's goal-directed actions.

Apart from the issue of "why-redefine", the proposed redefinition appears incomprehensible - at least to me.

Replies from: Alicorn
comment by Alicorn · 2010-10-03T14:52:25.306Z · LW(p) · GW(p)

I have concluded to my satisfaction that it would not be an efficient expenditure of our time to continue attempting to understand each other in this matter.

comment by Eugine_Nier · 2010-10-03T05:01:36.615Z · LW(p) · GW(p)

Can you give an example of a non-risk-neutral utility function that can't be converted into a standard utility function by rescaling?

Bonus points if it doesn't make you into a money pump.

Replies from: Alicorn
comment by Alicorn · 2010-10-03T14:15:31.177Z · LW(p) · GW(p)

No, because I don't have a good handle on what magic can and cannot be done with math; when I have tried to do this in the past, it looks like this.

Me: But thus and so and thresholds and ambivalence without indifference and stuff.

Mathemagician: POOF! Look, this thing you don't understand satisfies your every need.

comment by magfrump · 2010-10-03T18:32:49.224Z · LW(p) · GW(p)

My guess would be that she meant that there is no physical event corresponding to a utile that humans would want to behave risk-neutrally toward, and/or that if you abstracted human values enough to create such an abstract utile, it would be unrecognizable and unFriendly.

Replies from: Alicorn
comment by Alicorn · 2010-10-03T18:40:04.590Z · LW(p) · GW(p)

This is at least close, if I understand what you're saying.

comment by magfrump · 2010-10-03T04:35:21.874Z · LW(p) · GW(p)

voted up for underconfidence

Replies from: Alicorn
comment by Alicorn · 2010-10-03T04:55:15.437Z · LW(p) · GW(p)

It's as low as 70% because I'm Aumanning a little from people who are better at math than me assuring me very confidently that, with math, one can perform such magic as to make risk-neutrality sensible on a human-values-derived utility function. The fact that it looks like it would have to actually be magic prevents me from entertaining the proposition coherently enough simply to accept their authority on the matter.

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-10-03T19:51:01.075Z · LW(p) · GW(p)

There may be some confusion here. I don't think any serious economist has ever argued that risk neutrality is the only rational stance to take regarding risk. What they have argued is that they can draw up utility functions for people who prefer $100 to a 50:50 gamble for $200 or 0. And they can also draw functions for people who prefer the gamble and for people who are neutral. That is, risk (non)neutrality is a value that can be captured in the personal utility function just like (non)neutrality toward artificial sweeteners.

Now, one thing that these economists do assume is at least a little weird. Say you are completely neutral between a vacation on the beach and a vacation in the mountains. According to the economists, any rational person would then be neutral between the beach and a lottery ticket promising a vacation but making it 50:50 whether it will be beach or mountains. Risk aversion in that sense is indeed considered irrational. But, by their definitions, that 'weird' preference is not really "risk aversion".
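As a concrete, made-up instance of the first point: with a concave utility of money, the sure $100 beats the 50:50 gamble, and that preference is carried entirely by the curvature of the utility function:

```python
import math

def u(x: float) -> float:
    # An illustrative concave (risk-averse) utility of money.
    return 1 - math.exp(-x / 100)

sure_thing = u(100)                  # ~0.632
gamble = 0.5 * u(200) + 0.5 * u(0)   # ~0.432

print(sure_thing > gamble)  # True: preferring the sure $100 over the 50:50 gamble is
                            # captured by the shape of u, within expected-utility theory
```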

comment by timtyler · 2010-10-03T12:04:28.060Z · LW(p) · GW(p)

"Human-values-derived utility function" is a vague and wooly concept - too vague to be of much use, IMHO.

comment by DilGreen · 2010-10-05T19:15:34.248Z · LW(p) · GW(p)

Human activity is responsible for a significant proportion of observable climate change. 90% confidence

comment by CronoDAS · 2010-10-04T00:15:54.723Z · LW(p) · GW(p)

If a government were to implement strong libertarianism (roughly defined as "the only role of government is to enforce contracts and property rights"), both median income and GDP per capita would tend to decrease, not increase, over the twenty years that follow.

Replies from: knb
comment by knb · 2010-10-04T22:26:41.015Z · LW(p) · GW(p)

I thought there were a lot of libertarians on LW! I'm stunned by how unsuccessful this one was!

Incidentally, do you mean GDP per capita would decrease relative to more interventionist economies, or in absolute terms? Since there is an overall increasing trend (in both more and less libertarian economies), that would be very surprising to me.

A good example: in spite of the fact that Somalia has effectively no government services (not even private property protections or enforcement of contracts), its economy has generally grown year by year.

Replies from: mattnewport
comment by mattnewport · 2010-10-04T22:51:41.482Z · LW(p) · GW(p)

Incidentally, do you mean GDP per capita would decrease relative to more interventionist economies, or in absolute terms? Since there is an overall increasing trend (in both more and less libertarian economies), that would be very surprising to me.

I wondered about this as well. It seems an extremely strong and unlikely claim if it is intended to mean an absolute decrease in GDP per capita.

comment by novalis · 2010-10-03T16:45:47.135Z · LW(p) · GW(p)

[irrationality game comment, please read post before voting] Eliezer Yudkowsky will not create an AGI, friendly or otherwise: 99%.

My reasoning here is that knowledge representation is an impossible problem. It's only impossible in the Yudkowskyan sense of the word, but that appears to be enough. Yudkowsky is now doing that thing that people do when they can't figure something out: doing something else. There is no conceivable way that the rationality book has anything like the utility of a FAI. And then he's going to take a year to study "math". What math? Well, if he knew what he needed to learn to build a FAI, he would just learn it. Instead, he's so confused that he thinks just learning any math will be necessary before he becomes unconfused. Yeah, he's noticed his confusion, which is far better than 99% of AI researchers. But he's not fixing it. He's writing a book. This implies that he believes, ultimately, that he can't succeed. And he's much smarter than I am -- if he's given up, why should I keep up hope?

I should note that I hope to be wrong on this one.

Replies from: orthonormal, Perplexed, thomblake
comment by orthonormal · 2010-10-04T02:44:41.737Z · LW(p) · GW(p)

What would be your probability assessment if you replaced "Eliezer Yudkowsky" with "SIAI"?

Replies from: novalis
comment by novalis · 2010-10-06T05:29:39.300Z · LW(p) · GW(p)

About the same, but mostly because I don't follow it well enough to know whether they have any other smart enough people working there. Although I think thomblake may be right that I have set the probability too low.

comment by Perplexed · 2010-10-03T19:15:03.874Z · LW(p) · GW(p)

Upvoted for disagreement. I definitely disagree on whether writing the book is a rational step toward his goals. I also disagree on whether EY will build an AGI. I doubt that he will build the first one (unless he already has) at something like your 99% level.

comment by thomblake · 2010-10-04T19:23:31.627Z · LW(p) · GW(p)

Upvoted for underconfidence.

Replies from: Snowyowl
comment by Snowyowl · 2010-10-04T21:31:22.355Z · LW(p) · GW(p)

99% is underconfident? Downvoted for agreement.

Replies from: Salivanth
comment by Salivanth · 2012-04-13T09:39:19.944Z · LW(p) · GW(p)

Agree the chance is >50%, but upvoted for overconfidence.

comment by Relsqui · 2010-10-03T07:11:32.032Z · LW(p) · GW(p)

For a large majority of people who read this, learning a lot about how to interact with other human beings genuinely and in a way that inspires comfort and pleasure on both sides is of higher utility than learning a lot about either AI or IA. ~90%

Replies from: Will_Newsome, Relsqui, thomblake, cata
comment by Will_Newsome · 2010-10-03T21:10:10.520Z · LW(p) · GW(p)

At -20 it looks like you're winning the 'most rational belief, least rational time to say it' award!

Replies from: Relsqui
comment by Relsqui · 2010-10-03T22:40:32.856Z · LW(p) · GW(p)

Hahaha indeed. Oh well. I was afraid of that, but opted to anyway because I was worrying about the karma hit. It seems like a good habit not to take karma seriously.

I guess I'll have to go be insightful in some other thread now or something.

Replies from: wedrifid
comment by wedrifid · 2010-10-04T05:03:48.418Z · LW(p) · GW(p)

"Least rational time to say it" does not necessarily or even primarily refer to karma. By making your claim here you are asserting, via the rules of the post, that you believe you understand this better than lesswrong does. Apart from being potentially condescending it is also a suboptimal way of achieving a desirable influence. It is better to act as if people already know this and are working on enhancing their social skills, encouraging continued efforts as appropriate.

Replies from: Relsqui
comment by Relsqui · 2010-10-04T18:18:09.923Z · LW(p) · GW(p)

By making your claim here you are asserting, via the rules of the post, that you believe you understand this better than lesswrong does.

I was asserting that, and I'm delighted to be incorrect.

Apart from being potentially condescending

Granted, but that would be true regardless of the topic. (Every proposition commented to this post implies condescension about the topic in question.)

It is better to act as if people already know this and are working on enhancing their social skills

I'm not sure I agree with that in general. The people who DO know this and are trying to enhance their social skills will simply agree with me (no change); the ones who don't and aren't will either continue not trying (no change) or perhaps consider whether they're incorrect (positive effect, in my mind). Now, if I knew I were speaking to a particular individual who was already working on this, then yes, reminding them it was important would be rude. But I'm addressing a group of people, among whom that is true of some and not others; I'm trusting the ones of whom it's already true not to interpret it as if I were speaking to them alone.

Did I offend you?

Replies from: wedrifid
comment by wedrifid · 2010-10-05T04:19:08.802Z · LW(p) · GW(p)

Did I offend you?

Why would I be offended? No, I was responding to the implicit assumption that 'rational' applied to a Karma Maximiser. This misses most of the social nuance.

Granted, but that would be true regardless of the topic. (Every proposition commented to this post implies condescension about the topic in question.)

(This entire conversation is just tangential technicalities we are discussing, but) actually it doesn't. Disagreement is disrespect, but the act of condescension requires more specific social positioning. A comment here could demonstrate obstinacy or arrogance without condescension. (A lot of Tim's contrarian comments could be taken as examples.)

It is better to act as if people already know this and are working on enhancing their social skills

I'm not sure I agree with that in general. The people who DO know this and are trying to enhance their social skills will simply agree with me (no change); the ones who don't and aren't will either continue not trying (no change) or perhaps consider whether they're incorrect (positive effect, in my mind).

On this we disagree on a substantive matter of fact. This is actually one of the most critical lessons to be learned when doing that work on social skills that you consider so important. And, while most of us are well aware of the fact, it is just the social, instrumental rationality error most likely to be seen on LessWrong. One doesn't have to look too hard to find examples of people here achieving precisely the opposite of their intended result via direct challenge and accusation. (i.e., if I particularly cared about influencing your behaviour instead of discussing details I would not be making replies here.)

While this kind of subject is of interest to me, LessWrong isn't the place where I most enjoy (or, alternately, consider it instrumentally rational) discussing such things in depth. That being the case, I had best leave it at that.

comment by Relsqui · 2010-10-03T07:38:16.850Z · LW(p) · GW(p)

I just realized that either I get karma for this or I get warm fuzzies from people agreeing with me. Suddenly the magic of the game is clear.

ETA: ... although now I'm wondering how strongly I would have to word it before people stopped agreeing with it.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T07:59:10.910Z · LW(p) · GW(p)

I just realized that either I get karma for this or I get warm fuzzies from people agreeing with me. Suddenly the magic of the game is clear.

:)

This one's really hard for me... there's low hanging fruit in IA that gives the problem a Pascalian flavor. Generally, people would do much better with social skills, but if one person finds one really good IA technique and tells the future FAI team, that might be enough to tip the balance. So 90% seems possibly too high.

Replies from: wedrifid
comment by wedrifid · 2010-10-04T04:55:12.981Z · LW(p) · GW(p)

Generally, people would do much better with social skills, but if one person finds one really good IA technique and tells the future FAI team, that might be enough to tip the balance.

And don't forget the direct benefit that good IA techniques can have on the ability to develop social skills!

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T05:01:09.547Z · LW(p) · GW(p)

Adderall does wonders for my ability to interact with people. Now if only I could get an oxytocin nasal spray prescription... well, I'm definitely going to give it a shot.

Replies from: Relsqui
comment by Relsqui · 2010-10-04T18:19:04.004Z · LW(p) · GW(p)

Adderall does wonders for my ability to interact with people.

Interesting! I'd never heard that. How so?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T18:42:57.686Z · LW(p) · GW(p)

I was very energetic and chatty, and didn't really care about personal space. My friend gave it to me at a party because he wanted to see what it did to my chess ability. It was too hard to tell if it improved my chess, but it definitely led to me sitting very close to a girl that lives in my neighborhood and actually connecting with her. Normally I come across as laconic, which works pretty well for some reason, but it was nice to actually feel passionate about getting to know someone, and feel an emotional bond forming in real time. I ended up driving with her to Denny's, and I was so into the conversation that I kept on taking my eyes off the road and looking at her, and this led to slightly suboptimal driving safety. So, a warning: adderall helps you focus, but not necessarily on the right things.

Replies from: wedrifid, Relsqui
comment by wedrifid · 2010-10-05T05:28:22.704Z · LW(p) · GW(p)

I ended up driving with her to Denny's, and I was so into the conversation that I kept on taking my eyes off the road and looking at her, and this led to slightly suboptimal driving safety. So, a warning: adderall helps you focus, but not necessarily on the right things.

I've had the same experience (conversation vs driving attention focus based on stimulants). Watch out for that stuff!

comment by Relsqui · 2010-10-05T04:50:54.478Z · LW(p) · GW(p)

I was very energetic and chatty, and didn't really care about personal space.

It took me a couple reads to make sense of your description, because I parsed this as the "before" picture, and didn't see the difference. :)

and feel an emotional bond forming in real time

I love that feeling. I've only gotten it with a few friends--usually it's either too gradual to notice, or I realize suddenly that we've gotten closer without feeling it happen.

So, a warning: adderall helps you focus, but not necessarily on the right things.

Haha. Noted.

comment by thomblake · 2010-10-04T19:31:51.468Z · LW(p) · GW(p)

I was just thinking about this one the other day. I was musing about taking adderall and piracetam, and thinking "Is intelligence/cognition really a bottleneck I need to clear up? Shouldn't everyone else be taking this stuff?"

comment by cata · 2010-10-03T16:57:29.957Z · LW(p) · GW(p)

Roughly how big do you think a "large" majority is? Closer to 65%, or closer to 90%?

Replies from: Relsqui, Relsqui
comment by Relsqui · 2010-10-08T02:55:12.932Z · LW(p) · GW(p)

Wait, I was certain I had replied to this, but I just stopped into the thread again and it doesn't seem to be here. Sorry about that! I intended closer to 90%.

Replies from: cata
comment by cata · 2010-10-08T03:49:47.377Z · LW(p) · GW(p)

I'm certain you replied to it too -- I see it right there underneath this one:

http://lesswrong.com/lw/2sl/the_irrationality_game/2qil?c=1

Potential bug?

Replies from: Relsqui
comment by Relsqui · 2010-10-08T05:12:07.575Z · LW(p) · GW(p)

Well, NOW I see it. It must've just gotten buried under something before. Oh well. Better to've answered twice than never.

comment by Relsqui · 2010-10-03T22:48:17.933Z · LW(p) · GW(p)

Good question. I don't have a lot of information about who reads these (especially including the people who read but don't comment or vote), but 90% seems like the right ballpark.