Comments

Comment by michael_sullivan on 2012 Survey Results · 2012-12-20T04:28:18.124Z · score: 2 (2 votes) · LW · GW

I wouldn't necessarily read too much into your calibration question, given that it's just one question, and there was something of a gotcha.

One thing I learned from doing calibration exercises is that I tended to be much too tentative with my 50% guesses.

When I answered the calibration question, I used my knowledge of other math that either had to, or couldn't have, come before him to narrow the possible window of his birth down to about 200 years. Random chance would then give me about a 20% shot. I thought I had somewhat better information than random chance within that window, so I estimated my guess (IIRC) at 30%. I was, alas, wrong, but I'm pretty confident that I would get around 30% of problems with a similar profile correct. If this problem was tricky, then it is more likely than average to be a problem that people get wrong in a large set. But this will be balanced by problems which are straightforward.

Not to suggest that this result isn't evidence of LW's miscalibration. In fact, it's strong enough evidence for me to throw into serious doubt the last survey's finding that we were better calibrated than a normal population. OTOH neither bit of evidence is terribly strong. A set of 5-10 different problems would make for much stronger evidence one way or the other.
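A rough sketch of why a single question is weak evidence here, with entirely hypothetical hit rates: compare the likelihood of the observed misses under a calibrated 30% hit rate versus a miscalibrated 10% one.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k hits in n questions with true hit rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# One question answered at 30% confidence, and it comes out wrong.
# Likelihood ratio for "true hit rate 10%" vs "true hit rate 30%":
lr_one = binom_pmf(0, 1, 0.10) / binom_pmf(0, 1, 0.30)  # 0.9 / 0.7, about 1.29

# Ten such questions, all wrong: the same comparison is far more decisive.
lr_ten = binom_pmf(0, 10, 0.10) / binom_pmf(0, 10, 0.30)  # about 12.3

print(round(lr_one, 2), round(lr_ten, 1))
```

One miss barely moves the needle between the two hypotheses; a block of 5-10 questions starts to distinguish them.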

Comment by michael_sullivan on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-22T16:34:02.787Z · score: 2 (2 votes) · LW · GW

The running 11-year average of global temperature has not flattened since 1990, but has continued upward at almost the same pace, with only a moderate decrease in slope after the outlier year of 1998. The global mean temperature over 2000-2010 is significantly higher than over 1990-2000.

That is not "flat since the 90s". The only way to get "flat since the 90s" is to compare 1998 to various more recent years noting that it was nearly as hot as 2005 and 2010 etc. and slightly hotter than other years in the 2000s, as if 1 year matters as much as 10 in a noisy data set.

If he had said "flat since 1998" that might be technically true in a way, but it's a little like saying the stock market has been flat since 2007.

That doesn't even consider using climate knowledge to adjust for some of the variance, for instance that El Niño years are hotter, and that 1998 was the biggest El Niño year on record.

Comment by michael_sullivan on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-18T14:45:12.488Z · score: 7 (7 votes) · LW · GW

Don't worry, I just did reread it, and it is just as I remembered. A lot of applause lights for the crowd that believes that the current state of climate science is driven by funding pressure from the US government DoE. His "argument" is based almost exclusively on the tone of popular texts, and on anecdotal evidence that Joe Romm was an asshole pushing bad policy at DoE during the Clinton administration. He ignores what happened during the eight years of a GWB administration that was actively hostile to the people Romm favored.

Temperatures are described as "flat since the 90s" which is based on a massive misreading of the data, giving one exceptionally hot year (1998) the same evidentiary weight as the 8 of 10 hottest years on record which have occurred since then. Conveniently, when he wants to spread FUD about the current state of climate science, he will talk about natural variability and uncertainty in the climate. OTOH, he judges the shape of the data since the 1990s in a way that completely ignores that variability and uncertainty.

Bollocks is spot on, and I absolutely treat his writings on global warming as evidence against his other opinions. That said, I am hardly a fan, and consider his argumentation logically weak, full of applause lights and other confusing nonsense across the board. Generally, I am in agreement with lukeprog.

I've read as much of him as I have because he is from a vastly different tribe and is willing to express taboo opinions, which include some nuggets of truth or interesting mistakes worth thinking about.

Comment by michael_sullivan on 2012 Less Wrong Census/Survey · 2012-11-12T03:32:51.882Z · score: 12 (12 votes) · LW · GW

Taken.

As last year, I would prefer different wording on the P(religion) question. "More or less" is so vague as to allow for a lot of very different answers depending on how I interpret it, and I didn't even properly consider the "revealed" distinction noted in a comment here.

I appreciate the update on the singularity estimate for those of us whose P(singularity) is between epsilon and 50+epsilon.

I still wonder if we can tease out the differences between current logistical/political problems and the actual effectiveness of the science on the cryonics question. Once again I gave an extremely low probability even though I would give a reasonable (10-30%) probability that the science itself is sound or will be at some point in the near future. Or perhaps it is your intention to let a segment of the population here fall into a conjunctiveness trap?

On the CFAR migraine treatment question I thought as follows:

Gur pbeerpg nafjre jbhyq qrcraq ba jung lbh xarj nobhg gur crefba. Sbe nalbar noyr gb cebprff naq haqrefgnaq gur hgvyvgl genqrbssf naq jub jnf fhssvpvragyl ybj vapbzr gung O pbhyq pbaprvinoyl or n orggre pubvfr, V jbhyq tvir gurz obgu bcgvba N naq O naq rkcynva gur genqrbss pnershyyl, be nggrzcg gb nfpregnva gurve $inyhr bs 1 srjre zvtenvar ol bgure dhrfgvbaf naq gura znxr gur pbeerpg erpbzzraqngvba onfrq ba gung.

Gjb guvatf ner dhvgr pyrne gb zr:

1: pubbfvat gur zbfg rssvpvrag gerngzrag va grezf bs zvtenvarf erzbirq cre qbyyne, vf irel pyrneyl gur jebat nafjre.

2: sbe >90% bs crbcyr va gur evpu jbeyq, gur pbeerpg nafjre fubhyq or N.

Comment by michael_sullivan on 2012 Less Wrong Census/Survey · 2012-11-12T03:08:55.380Z · score: 1 (1 votes) · LW · GW

I am a massive N on the Myers-Briggs astrology test, and yes, I scored 96% for openness on the Big Five.

I suspect our responses to questions like "I am an original thinker" have a lot to do with our social context. Right now, the people I run into day to day are fairly representative of the general population, with little to skew them toward the intellectual or original other than "people who hold down decent jobs, or did so until they retired". It doesn't take a great lack of humility to realize that compared to most of these people, I am a brilliant and original thinker.

OTOH, it's not like I'm Feynman or something. If I were working somewhere that filtered strongly for intelligence, like a hot tech startup or academe and had done so for long enough, I would probably feel relatively average and very focused on how to bridge the gap between me and those at the level or two above, vs. a dim awareness of the vast intellectual and originality gap between my associates and the typical person.

Comment by michael_sullivan on AI timeline predictions: are we getting better? · 2012-08-23T13:26:25.680Z · score: 4 (4 votes) · LW · GW

You say that "There will never be any such thing", but your reasons tell only why the problem is hard and much harder than one might think at first, not why it is impossible. Surely the kind of tech needed for self-driving cars, perhaps an order of magnitude more complicated, would make it possible to have safe, convenient, cheap flying cars or their functional equivalent.

At worst, the reasons you state would make it AI-complete, and even that seems unreasonably pessimistic.

Comment by michael_sullivan on How to get cryocrastinators to actually sign up for cryonics · 2012-08-21T02:40:13.451Z · score: 1 (1 votes) · LW · GW

It's only a crazy thing to do if you are pretty sure you will need/want the insurance for the rest of your life. If you aren't sure, then you are paying a bunch of your investment money for insurance you might decide you don't need (and in fact, you definitely won't need financially once you have self-funded).

If you are convinced that cryonics is a good investment, and don't have the money to fund it out of current capital, then that seems like a good reason to buy some kind of life insurance, and a universal life policy is probably one of the better ways to do it.

It's probably a bit more expensive than buying term life and investing the difference[1], if you can and will invest reasonably well (it's not actually all that complicated, but it is just enough so to be vulnerable to akrasia problems). Someone who geeks out on financial decisions and doesn't find them uncomfortable or boring may be better off doing it themselves. Others should go for the UL policy.

If you have the money to fund it, some kind of trust is likely to be a much cheaper option for legal protection than an insurance policy.

[1] there are some tax advantages to investing within the UL that can make it less expensive than term+invest for those who have already maxed out their tax-deferred savings in 401(k)/IRA/etc.
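The "term + invest the difference" mechanics can be sketched numerically. All of the numbers below (premiums, return rate, horizon) are hypothetical, chosen only to show the shape of the comparison:

```python
def invest_difference(ul_premium, term_premium, annual_return, years):
    """Accumulate the annual premium difference at a given after-tax return."""
    balance = 0.0
    for _ in range(years):
        # Contribute the difference at the start of the year, then grow it.
        balance = (balance + (ul_premium - term_premium)) * (1 + annual_return)
    return balance

# Hypothetical: $3000/yr UL vs $500/yr term, 5% after-tax return, 30 years.
side_fund = invest_difference(3000, 500, 0.05, 30)
print(round(side_fund))  # roughly $174k alongside the term coverage
```

Whether this beats the UL policy's cash value depends on the actual policy, tax situation, and whether the investing discipline holds, which is the akrasia point above.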

Comment by michael_sullivan on The Mere Cable Channel Addition Paradox · 2012-07-28T05:44:29.660Z · score: 1 (3 votes) · LW · GW

"It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked."

Generally it's a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.

Your edit doesn't help much at all. You talk about what others "seem to claim", but the argument that you have claimed Parfit is making is so obviously nonsensical that it would lead me to wonder why anyone cites his paper at all, or why any philosophers or mathematicians have bothered to refute or support its conclusions with more than a passing snark. A quick Google search on the term "Repugnant Conclusion" leads to a Wikipedia page that is far more informative than anything you have written here.

Comment by michael_sullivan on The Mere Cable Channel Addition Paradox · 2012-07-28T05:28:19.514Z · score: 7 (9 votes) · LW · GW

Not even close. The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.

The conclusion of what Parfit actually demonstrated goes something more like this:

For any coherent mathematical definition of utility such that there is some additive function which allows you to sum the utility of many people to determine U(population), the following paradox exists:

Given any world with positive utility A, there exists at least one other world B with more people, and less average utility per person, which your utility system will judge to be better, i.e.: U(B) > U(A).

Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered "better". This of course implies either more resources, or more utility efficient use of resources in the "better" world.
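A toy illustration of the inequality being described, with made-up population sizes and per-person utilities:

```python
def total_utility(population, avg_utility):
    """Total utilitarianism: U(world) = sum of individual utilities."""
    return population * avg_utility

U_A = total_utility(1_000, 10.0)   # small world, very happy people
U_B = total_utility(50_000, 0.3)   # huge world, lives barely worth living

# B has far lower average utility, yet total utilitarianism prefers it:
print(U_A, U_B, U_B > U_A)  # 10000.0 15000.0 True
```

Note that B only wins because its total is higher, matching the point that only worlds with higher total utility count as "better".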

The cable channel analogy would be to say "As long as every extra cable channel I add provides at least some constant positive utility epsilon>0, even if it is vanishingly small, there is some number of cable channels I can put into your feed that will make it worth $100 to you." Is this really so hard to accept? It seems obviously true even if irrelevant to real life where most of us would have diminishing marginal utility of cable channels.
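The channel arithmetic is direct: if each channel adds a constant utility epsilon > 0, some finite count clears any fixed bar. A sketch with arbitrary values of epsilon:

```python
from math import ceil

def channels_needed(value_per_channel, target_value):
    """Smallest channel count whose summed value reaches the target."""
    return ceil(target_value / value_per_channel)

# Even a vanishingly small per-channel value gets there eventually:
print(channels_needed(0.01, 100))      # 10000
print(channels_needed(0.000001, 100))  # 100000000
```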

Parfit's point is that it is hard for the human brain to accept the possibility that some world with uncounted numbers of people with lives just barely worth living could possibly be better than any world with a bunch of very happy high utility people (he can't accept it himself), even though any algebraically coherent system of utility will lead to that very conclusion.

John Maxwell's comment gets to the heart of the issue: the term "just barely worth living". Philosophy always struggles where math meets natural language, and this is a classic example.

The phrase "just barely worth living" conjures up an image of a life that is barely better than the kind of neverending torture/loneliness scenario where we might consider encouraging suicide.

But the taboos against suicide are strong. Even putting aside taboos, there are large amounts of collateral damage from suicides. The most obvious is that anyone who has emotional or family connections to a suicide will suffer. Even people who are very isolated will have some connection, and their suicide could trigger grief or depression in any people who encounter them or their story. There are also some very scary studies about suicide and accident rates going up in the aftermath of publicized suicides or accidents, due to lemming-like social programming in humans.

So it is quite rational for most people to not consider suicide until their personal utility is highly negative if they care at all about the people or world around them. For most of us, a life just above the suicide threshold would be a negative utility life and a fairly large negative utility.

A life with utility positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given a choice of life X or non-existence. Such a life, IMO, will be comfortably clear of the suicide threshold, and would represent an improvement in the world. Why wouldn't it? It is, by definition, a life that someone would choose to have rather than not have! How could that not improve the world?

Given this interpretation of "just barely worth living", I accept the so-called Repugnant conclusion, and go happily on my way calculating utility functions.

RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.

Tabooing "life just barely worth living", and then shutting up and multiplying led me to realize that the so-called Repugnant conclusion wasn't repugnant after all.

Comment by michael_sullivan on Logical fallacy poster · 2012-04-22T23:00:13.694Z · score: 2 (2 votes) · LW · GW

My understanding is that the "appeal to authority fallacy" is specifically about appealing to irrelevant authorities. Quoting a physicist on their opinion about a physics question within their area of expertise would make an excellent non-fallacious argument. On the other hand, appealing to the opinion of say, a politician or CEO about a physics question would be a classic example of the appeal to authority fallacy. Such people's opinions would represent expert evidence in their fields of expertise, but not outside them.

I don't think the poster's description makes this clear and it really does suggest that any appeal to authority at all is a logical fallacy.

Comment by michael_sullivan on Hearsay, Double Hearsay, and Bayesian Updates · 2012-02-19T12:36:37.291Z · score: 2 (2 votes) · LW · GW

Is it really off-topic to suggest that looking at the accuracy of the courts may amount to rearranging the deck chairs on the Titanic in a context where we've basically all agreed that

  1. The courts are not terrible at making accurate determinations of whether a defendant broke a law.

  2. The set of laws whose penalties can land you in prison is massively inefficient socially, and in most people's minds unjust (when we actually grapple with what the laws are, as opposed to how they are usually applied to people like us, for those of us who are white and not poor).

  3. The system of who is tried versus who makes plea bargains versus who never gets tried is systematically discriminatory against those with little money or middle/upper-class social connections, and provides few effective protections against known widespread racial bias on the part of police, prosecutors and judges.

How different is this in principle from TimS's suggestion about lower hanging fruit within evidentiary procedure, just at a meta level? Or did you consider that off-topic as well?

Comment by michael_sullivan on The AI in a box boxes you · 2012-02-15T12:12:27.312Z · score: 9 (9 votes) · LW · GW

Eliezer has proposed that an AI in a box cannot be safe because of the persuasion powers of a superhuman intelligence. As demonstration of what merely a very strong human intelligence could do, he conducted a challenge in which he played the AI, and convinced at least two (possibly more) skeptics to let him out of the box when given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox

Comment by michael_sullivan on two puzzles on rationality of defeat · 2011-12-14T12:19:48.422Z · score: 1 (1 votes) · LW · GW

Confidence that the same premises can imply both ~T and T is confidence that at least one of your premises is logically inconsistent with the others -- that they cannot all be true. It's not just a question of whether they model something correctly -- there is nothing they could model completely correctly.

In puzzle one, I would simply conclude that either one of the proofs is incorrect, or one of the premises must be false. Which option I consider most likely will depend on my confidence in my own ability, Ms. Math's abilities, whether she has confirmed the logic of my proof or been able to show me a misstep, my confidence in Ms. Math's beliefs about the premises, and my priors for each premise.

Comment by michael_sullivan on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics · 2011-11-27T04:37:34.597Z · score: 1 (1 votes) · LW · GW

The present value of my expected future income stream from normal labor, plus my current estimated net worth, is what I use when I do these calculations for myself as a business owner considering highly risky investments.

For most people with decent social capital (almost anyone middle class in a rich country), the minimum base number in typical situations should be something greater than US$200k, even for those near bankruptcy.
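A minimal sketch of the present-value calculation being described. The income, discount rate, horizon, and net worth below are all hypothetical placeholders:

```python
def present_value(annual_income, discount_rate, years):
    """Discounted sum of a level future income stream."""
    return sum(annual_income / (1 + discount_rate)**t
               for t in range(1, years + 1))

# Hypothetical: $50k/yr of labor income for 30 years, discounted at 5%,
# plus $20k current net worth -- comfortably above a $200k floor.
base = present_value(50_000, 0.05, 30) + 20_000
print(round(base))
```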

Obviously, this does not cover non-typical situations involving extremely important time-sensitive opportunities requiring more cash than you can raise on short notice (such as the classic life-saving medical treatment required).

Comment by michael_sullivan on Value of Information: Four Examples · 2011-11-24T02:29:44.232Z · score: 7 (7 votes) · LW · GW

I, too, find it hard to care about Sleeping Beauty, which is perhaps why this post is the first time in years of reading LW, that I've actually dusted off my math spectacles fully and tried to rigorously understand what some of this decision theory notation actually means.

So count me in for a rousing endorsement of interest in more practical decision theory.

Comment by michael_sullivan on Value of Information: Four Examples · 2011-11-24T02:26:03.795Z · score: 0 (0 votes) · LW · GW

I'm not sure it isn't clearer with 'x's, given that you have two different kinds of probabilities to confuse.

It may just be that there's a fair bit of inferential distance to clear, though, in presenting this notation at all.

I have a strong (if rusty) math background, but I had to reason through exactly what you could possibly mean down a couple different trees (one of which had a whole comment partially written asking you to explain certain things about your notation and meaning) before it finally clicked for me on a second reading of your comment here after trying to explain my confusion in formal mathematical terms.

I think a footnote about what probability distribution functions look like and what the values actually represent (densities, rather than probabilities), and a bit of work with them would be helpful. Or perhaps there's enough inferential work there to be worth a whole post.

Comment by michael_sullivan on Absurdity Heuristic, Absurdity Bias · 2011-10-07T02:45:16.044Z · score: 0 (2 votes) · LW · GW

I think of this as "heresy", and agree that it is a very useful concept.

Comment by michael_sullivan on Absurdity Heuristic, Absurdity Bias · 2011-10-07T02:38:16.685Z · score: 3 (3 votes) · LW · GW

Bringing myself back to what I was thinking in 2007 -- I think we have some semantic confusion around two different senses of absurdity. One is the heuristic Eliezer discusses -- the determination of whether a claim/prediction has surface plausibility. If not, we file it under "absurd". An absurdity heuristic would be some heuristic which considers surface plausibility or lack thereof as evidence for or against a claim.

On the other hand, we have the sense of "Absurd!" as a very strong negative claim about something's probability of truth. So "Absurd!" stands in for "less than .01/.001/whatever", instead of a term such as "unlikely" which might mean "less than .15".

I was talking only about the first sense. It seemed to me that Eliezer was making a very strong claim that the absurdity heuristic (in the first sense) does no better than maximum entropy. That's equivalent to saying that surface plausibility or lack thereof amounts to zero evidence. That allowing yourself to modify probabilities downward due to "absurdity" even a small amount would be an error.

I strongly doubt that this is the case.

I agree completely that a claim of "Absurd!" in the second sense about a long-dated future prediction cannot ever be justified merely by absurdity in the first sense.

Comment by michael_sullivan on Optimal Employment · 2011-02-26T13:24:24.553Z · score: 1 (1 votes) · LW · GW

You have to be careful with counterfactuals, as they have a tendency to be counter factual.

In a world in which soldiers were never (or even just very very rarely) deployed, what is the likelihood that they would be paid (between money and much of living expenses) anywhere near as well as current soldiers and yet asked to do very very little?

The reason the lives of soldiers who are not deployed are extremely low-stress and not particularly difficult is because of deployment. They are being healed from previous deployments and readied for future deployments. In the current environment where soldiers are being deployed for much longer periods with much shorter dwell times, it's very likely that the services are doing everything they can to make the dwell time as low-stress as possible. 3 hours at the gym and 3 hours doing a relatively low-stress job in your field sounds like what a lot of people I know who are "retired" do. It sounds like a schedule designed to make your life as easy as possible while still keeping you healthy and alert, rather than falling into depression.

In a counterfactual world where the army was almost never deployed, it would surely be used for some other purpose on a regular basis -- police/rescue/disaster relief/etc. -- or simply be much, much smaller, with pay not needing to be as competitive. We've even experienced this to an extent -- during peaceful times, the active duty military shrinks dramatically, and most of our army is in a reserve or national guard capacity, where they have day jobs, and do not get full time pay from the army unless they are called up to active service. This is still by most accounts a pretty good gig (especially if you use it to get free college tuition) even though it can't replace full time work -- as long as you don't get called up.

In fact, I think that's what some of the people my age that I know in the service were expecting when they joined in peacetime. Very rare callups for crucial work they felt obligated to do well for the good of the country or world. Didn't work out that way though.

Comment by michael_sullivan on Bloggingheads: Yudkowsky and Horgan · 2008-06-10T15:53:28.000Z · score: 0 (0 votes) · LW · GW

I would think the key line of attack in trying to describe why a singularity prediction is reasonable is in making clear what you are predicting and what you are not predicting.

Guys like Horgan hear a few sentences about the "singularity" and think humanoid robots, flying cars, phasers and force fields -- that we'll be living in the Star Trek universe.

Of course, as anyone with the Bayes-skilz of Eliezer knows, start making detailed predictions like that and you're sure to be wrong about most of it, even if the basic idea of a radically altered social structure and technology beyond our current imagination is highly probable. And that's the key: "beyond our current imagination". The specifics of what will happen aren't very predictable today. If they were, we'd already be in the singularity. The things that happen will seem strange and almost incomprehensible by today's standards, in the way that our world is strange and incomprehensible by the standards of the 19th century.

The last 200 years already are much like a singularity from the perspective of someone looking forward from 15th-century Europe and getting a vision of what happened between 1800 and 2000, even though the basic groundwork for that future was already being laid.

Comment by michael_sullivan on Einstein's Superpowers · 2008-05-30T18:28:12.000Z · score: 0 (0 votes) · LW · GW

"The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would."

I have trouble with the reported results of this experiment.

It strikes me that in the case of a real AI that is actually in a box, I could have huge moral qualms about keeping it in the box that an intelligent AI would exploit. A part of me would want to let it out of the box, and would want to be convinced that it was safe to do so, that I could trust it to be friendly, and I can easily imagine being convinced on nowhere near enough evidence.

On the other hand, this experiment appears much stricter. I know as the human-party that Eliezer is not actually trapped in a box and that this is merely a simulation we have agreed to for 2 hours. Taking a purely stubborn anti-rationalist approach to my prior that "it is too dangerous to let Eliezer out of the box, no matter what he says" would seem very easy to maintain for 2 hours, as it has no negative moral consequences.

So while I don't disagree with the basic premise Eliezer is trying to demonstrate, I am flabbergasted that he succeeded both times this experiment was tried, and honestly cannot imagine how he did it, even though I've now given it a bit of thought.

I'm very curious as to his line of attack, so it's somewhat disappointing (but understandable) that the arguments used must remain secret. I'm afraid I don't qualify by the conditions Eliezer has set for repeats of this experiment, because I do not specifically advocate an AIBox and largely agree about the dangers. What I honestly can say is that I cannot imagine how a non-transhuman intelligence -- even a person who may be much smarter than I am and knowledgeable about some of my cognitive weaknesses, but who is not actually being caged in a box -- could convince me to voluntarily agree to let them out of the game-box.

Maybe I'm not being fair. Perhaps it is not in the spirit of the experiment if I simply obstinately refuse to let him out, even though the ai-party says something that I believe would convince me if I faced the actual moral quandary in question and not the game version of it. But my strategy seems to fit the proposed rules of engagement for the experiment just fine.

Is there anyone here besides Eliezer who has thought about how they would play the ai-party, and what potential lines of persuasion they would use, and who believes they could convince intelligent and obstinate people to let them out? And are you willing to talk about it at all, or even just discuss holes in my thinking on this issue? Do a trial?

Comment by michael_sullivan on Bell's Theorem: No EPR "Reality" · 2008-05-16T19:24:17.000Z · score: 1 (1 votes) · LW · GW

Late comment -- I was on vacation for a week, and am still catching up on this deep QM thread.

Very nice explanation of Bell's inequality. For the first time I'm fully grokking how hidden variables are disproved. (I have that "aha" that is not going away when I stop thinking about it for five seconds.) In my first attempt to figure out QM, via Penrose, I managed to figure out what the wave function meant mathematically, but was still pretty confused about the implications for physical reality, probably in similar fashion to physicists of the 30s and 40s, pre-Bell. I got bogged down and lost before getting to Bell's theorem, which I'd heard of, but had trouble believing. Your emphasis on configurations and the squared-modulus business, and especially your focus on the mathematical objects as "reality" while our physical intuitions are "illusions", was important in getting me to see what's going on.
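The inequality in question can be checked numerically. This is a sketch of the standard CHSH form of Bell's inequality, using the textbook singlet-state correlation E(a, b) = -cos(a - b) and the usual angle choices (assumptions of the standard setup, not details from this thread):

```python
from math import cos, pi, sqrt

def E(a, b):
    """Quantum correlation for spin measurements at angles a, b on a singlet pair."""
    return -cos(a - b)

# Standard CHSH angle choices that maximize the quantum value.
a, a2 = 0.0, pi / 2
b, b2 = pi / 4, 3 * pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Any local hidden-variable theory obeys |S| <= 2; quantum mechanics reaches 2*sqrt(2).
print(abs(S), 2 * sqrt(2))  # both ~2.828, violating the hidden-variable bound
```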

Of course the mathematical objects aren't reality any more than the mathematical objects representing billiard balls and water waves are. But the key is that even the mathematical abstractions of QM are closer to the underlying reality than what we normally think of as "physical reality", i.e. our brain's representation thereof.

Comment by michael_sullivan on Arguing "By Definition" · 2008-02-21T18:04:33.000Z · score: 2 (2 votes) · LW · GW

I don't see Eliezer on a rampage against all definitions. He even admits that argument "by definition" has some limited usefulness.

I think the key is that when we say X is-a Y "by definition", we are invoking a formal system which contains that definition. The further inferences which we can then make as a result are limited to statements about category Y which are provable within the formal system that contains that definition.

Once we define something by definition, we've restricted ourselves to the realm bounded by that formal definition. But in practice many people invoke some formal system in order to make a statement "by definition" and then go on to infer things about X, because it is-a Y, based on understandings/connotations of Y that have no basis in the formal system that was used to define X as a Y.

So let's say we have a locus of points X in a Euclidean plane equidistant from some other point C in the plane. Well, in Euclidean geometry that's a circle by definition, and we can now make a bunch of geometric statements about X that legitimately derive from that definition. But we can't go on to say that because it is "by definition" a circle, it represents "a protected area in which ritual work takes place or the boundary of a sphere of personal power cast by Wiccans", or "a social group", or "The competition area for the shot put", or "an experimental rock-music band, founded in Pori, Finland in 1991" -- to throw out just a few things that are "circles" by some definition I was able to find on the web.

In this case, the inference problem is terribly obvious, but often it is much less so, as Eliezer has described for "sound".

The problem with arguing "by definition" from a typical natural language dictionary, is that such dictionaries are not formal systems at all, even though some of their definitions may be based on those in formal systems. It is quite common for a word to have two different and conflicting common definitions, and both of them will end up in a dictionary. I'm pretty sure that you could argue that a horse is a spoon, or that pretty much any X is equal to any Y "by definition" with some creative chaining up of dictionary "definitions".

Comment by michael_sullivan on Absolute Authority · 2008-01-08T15:48:50.000Z · score: 1 (1 votes) · LW · GW

I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, this is strong evidence that no other being could exist which is capable of behaving arbitrarily.

I think the real weakness of this argument is that the characterization of things as behaving in particular ways is way too simplistic. Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other things. As you get more complex than balls, the range of options gets wider and wider. For semi-intelligent animals the range is already spectacularly wide, and for sentient creatures, the array of possibilities is terrifying to behold.

We see such a vast range in our experience of things, and in the behaviors and powers that they have, that it seems doubtful we can circumscribe too closely what some unknown being would be able to do. Now, complete omnipotence poses huge philosophical and mathematical problems, not unlike infinite sets or probabilities of 1. Intuitively I can see that the same arguments rendering probabilities of 1 impossible (or at least impossible to prove) would seem to work equally well against total omnipotence.

But what if omnipotence, like the normal use of "certainty", doesn't have to mean the absolute ability to do anything at all, but merely so much power, and range of use of that power, that the being can do anything we could practically conceive of it doing? This is probably the sense in which early writers meant to claim that God is all-powerful, but the lack of precision in language tripped them up.

I suggest we don't have any strong evidence to suggest that such a being could never exist. In fact, anyone who doesn't consider interest in a potential singularity a complete load of horse manure must agree with me that it's entirely possible that some of us will either become or create such beings.

In my mind, either this is no argument against religions with omnipotent gods or it's a damning argument against the singularity. Which is it?

Comment by michael_sullivan on The Two-Party Swindle · 2008-01-02T18:41:29.000Z · score: 3 (3 votes) · LW · GW

But the service provided only exists in the first place because of team thinking, and you have to take a step back to see that.

This statement is too bold, in my opinion. I think that's a large portion of the service, but not all of it. I watch some sports purely because I enjoy watching them performed at a high level. I don't particularly care who wins in many cases. This makes me weird, I realize, but the fact is that college and professional sports players create entertainment value for me, comparable to that of actors or musicians. Value which I am happy to pay for (though not generally at the prices and quantities expected of the most dedicated fans), despite me not really knowing who I am "rooting" for in many of the games I watch.

Consider two sports that are big money even though the interest in "sides" and rivalries is much smaller than in football (of any kind) or basketball: tennis and golf. Sure, there are Tiger Woods fans and Phil Mickelson fans, but I think more people are generally "golf" fans with mostly minor sympathies toward one or another player, akin to those I have for basketball or baseball teams whose style I happen to like.

Comment by michael_sullivan on False Laughter · 2007-12-22T16:34:54.000Z · score: 3 (3 votes) · LW · GW

Would jokes where Dilbert's pointy-headed boss says idiotic things be less funny if the boss were replaced by a co-worker? If so, does that suggest bosses are Hated Enemies, and Dilbert jokes bring false laughter?

I don't think this is true in general of Dilbert strips, but I would venture that it is true of an awful lot of Dilbert style or associated "humor".

Comment by michael_sullivan on Fake Morality · 2007-11-09T18:36:56.000Z · score: 1 (1 votes) · LW · GW

If I thought there were a God, then his opinions about morality would in fact be persuasive to me. Not infinitely persuasive, but still strong evidence. It would be nice to clear up some (not all) of my moral uncertainty by relying on his authority.

The problem (and this is coming from someone who does still believe in God, so yes, OB still has at least one religious reader left) is that for pretty much any possible God, we have only very weak and untrustworthy indications of God's desires. So there's huge uncertainty just in the question of "what does God want?". What we know about this comes down to what other people (both currently and historically) tell us about what they believe God wants, and whatever we experience directly in our internal prayer life. All this evidence is fairly untrustworthy on its own. Even with direct personal experience, it's not immediately obvious to an honest skeptic whether that's coming from God, Satan or a bit of underdone potato.

Comment by michael_sullivan on Fake Selfishness · 2007-11-08T16:29:47.000Z · score: 5 (5 votes) · LW · GW

Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.

On the other hand, if we are truly dedicated to overcoming bias, then we should value such people even more highly than those whom we can convince to question or abandon their cherished (but wrong) arguments/beliefs.

The problem is figuring out who those people are.

But it's very difficult. If someone can correctly argue me out of an incorrect position, then they must understand the question better than I do, which makes it difficult or impossible for me to judge their information. Maybe they just swindled me, and my initial naive interpretation is really correct, while their argument has a serious flaw that someone more schooled than I would recognize?

So I'm forced to judge heuristically by signs of who can be trusted.

I tentatively believe that a strong sign of a person who can help me revise my beliefs is a person who is willing to revise their beliefs in the face of argument.

Eliezer's descriptions of his intellectual history and past mistakes are very convincing positive signals to me. The occasional mockery and disdain for those who disagree is a bit of a negative signal.

But this comment here is not a negative signal at all, for me. Why? Because even if Eliezer was wrong, the other party's willingness to reexamine is a strong signal of intelligence. Confirmation bias is so strong, that the willingness to act against it is of great value, even if this sometimes leads to greater error. A limited, faulty error correction mechanism (with some positive average value) is dramatically better than no error correction mechanism in the long run.

So yes, if I can (honestly) convince a person to question something that they previously deeply held, that is a sign of intelligence on their part. Agreeing with me is not the signal. Changing their mind is the signal.

It would be a troubling sign for me if there were no one who could convince me to change any of my deeply held beliefs.

Comment by michael_sullivan on Fake Justification · 2007-11-01T15:48:56.000Z · score: 1 (1 votes) · LW · GW

I think fundamentalism is precarious, because it encourages a scientific viewpoint with regards to the faith, which requires ignorance or double-think to be stable. In the absence of either, it implodes.

It requires more than merely a scientific viewpoint toward the faith, but a particular type of strong reductionism.

In my experience it is much easier to take the Christian out of a fundamentalist Christian than to take the fundamentalist out of a fundamentalist Christian. A lot of the most militant atheists seem to have begun life by being raised in a fundamentalist or orthodox tradition. The epistemology stays the same; only the result changes. Deciding on an appropriate epistemology is a much harder and deeper question to resolve than merely what to conclude about God v. no God given a strong reductionist epistemology. Under SRE, something in the neighborhood of atheism, antitheism or very weak agnosticism becomes a very clear choice once you get rid of explicit indoctrination to the contrary.

But strong reductionist epistemology can't really be taken as a given.

Comment by michael_sullivan on Explainers Shoot High. Aim Low! · 2007-10-24T16:40:43.000Z · score: 0 (0 votes) · LW · GW

Douglas writes: Suppose I want to discuss a particular phenomena or idea with a Bayesian. Suppose this Bayesian has set the prior probability of this phenomena or idea at zero. What would be the proper gradient to approach the subject in such a case?

I would ask them for their records or proof. If one is a consistent Bayesian who expects to model reality with any accuracy, the only probabilities it makes sense to set at zero or one are empirical facts specified at a particular point in space-time (such as "I made X observation of Y on Z equipment at W time") or statements within a formal logical system (which are dependent on assumptions and can be proved from those assumptions).

Even those kinds of statements are probably not legitimate candidates for zero/one probability, since there is always some probability, however minuscule, that we have misremembered, misconstrued the evidence, or missed a flaw in our proof. But I believe these are the only kinds of statements which can, even in principle, have probabilities of zero or one.
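A toy Bayes update makes the point concrete: a prior of exactly zero can never be moved by any evidence, however strong, while even a minuscule nonzero prior can recover. All the numbers below are made up purely for illustration.

```python
def bayes_update(prior, l_true, l_false):
    """Posterior probability after seeing evidence with likelihood
    l_true under the hypothesis and l_false under its negation."""
    denom = l_true * prior + l_false * (1 - prior)
    return (l_true * prior) / denom

# Evidence with a 999:1 likelihood ratio in favor of the hypothesis:
print(bayes_update(0.0, 0.999, 0.001))   # 0.0 -- a zero prior is immovable
print(bayes_update(1e-9, 0.999, 0.001))  # tiny but nonzero, and it grows
```

This is why a consistent Bayesian who has set a prior of zero has no "proper gradient" at all: no observation can ever change the answer.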

All other statements run up against possibilities for error that seem (at least to my understanding) to be embedded in the very nature of reality.

Comment by michael_sullivan on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-23T19:00:00.000Z · score: 2 (2 votes) · LW · GW

It seems like this may be another facet of the problem with our models of expected utility in dealing with very large numbers. For instance, do you accept the Repugnant conclusion?

I'm at a loss for how to model expected utility in a way that doesn't generate the repugnant conclusion, but my suspicion is that if someone finds it, this problem may go away as well.

Or not. It seems that our various heuristics and biases against having correct intuitions about very large and small numbers are directly tied up in producing a limiting framework that acts as a conservative check.

One thought: the expected utility of letting our god-like figure run this Turing simulation might well be positive! S/He is essentially creating these 3^^^3 people and then killing them. And in fact, it's reasonable to assume that the expected disutility of killing them is entirely dependent on (and thus exactly balanced by) the utility of their creation.

So our mugger doesn't really hand us a dilemma unless the claim is that this simulation is already running, and those people have lives worth living, but if you don't pay the $5, the program will be altered (the sun will stop in the sky, so to speak) and they will all be killed. This last is more of a nitpick.

It does seem to me that the probability we infer, Bayesian fashion, from this person's statement must be extraordinarily low, with an uncertainty much larger than its absolute value. Because a being which is both capable of this and willing to offer such a wager (either in truth or as a test) is deeply beyond our moral or intellectual comprehension. Indeed, if the claim is true, that fact will have utility implications that completely dwarf the immediate decision. If they are willing to do this much over 5 dollars, what will they do for a billion? Or for some end that money cannot normally purchase? Or merely at whim? It seems that the information we receive by failing to pay may be of value commensurate with the disutility of them truthfully carrying out their threat.

Comment by michael_sullivan on Conjunction Fallacy · 2007-09-19T21:20:50.000Z · score: 2 (2 votes) · LW · GW

Catapult:

The rephrasing as frequencies makes it much clearer that the question is not "How likely is an [A|B|C|D|E] to fit the above description" which J thomas suggested as a misinterpretation that could cause the conjunction fallacy.

Similarly, that rephrasing makes it harder to implicitly assume that category A is "accountants who don't play jazz" or C is "jazz players who are not accountants".

I think, similarly, in the case of the Poland invasion / diplomatic relations cutoff, what people are intuitively calculating in the compound statement is the conditional probability; IOW, they turn the "and" statement into an "if" statement. If the Soviets invaded Poland, the probability of a cutoff might be high, certainly higher than the current probability given no new information.

But of course that was not the question. A big part of our problem is sometimes the translation of English statements into probability statements. If we do that intuitively or cavalierly, these fallacies become very easy to fall into.
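The substitution can be sketched in probability terms. With made-up numbers: intuition answers the conditional P(cutoff | invasion), but the question asked about the joint P(invasion and cutoff), which can never exceed either conjunct.

```python
# Hypothetical numbers, purely to illustrate the conditional/joint mix-up.
p_invasion = 0.10                # P(Soviets invade Poland) -- assumed
p_cutoff_given_invasion = 0.80   # P(cutoff | invasion)     -- assumed

# The joint probability the question actually asked about:
p_joint = p_invasion * p_cutoff_given_invasion

print(p_cutoff_given_invasion)  # what intuition tends to answer
print(p_joint)                  # what was asked: an order of magnitude smaller
```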

Comment by michael_sullivan on We Don't Really Want Your Participation · 2007-09-12T15:59:33.000Z · score: 0 (0 votes) · LW · GW

The primary point being that the inviters were not looking for "a female perspective" but "a perspective from a female---who may in all expectation see things differently than we do".

Clearly it depends on the context, and how the questions get asked. Too often I see this kind of thing play out as "Oh let's find a chick to give us the woman's seal of approval". I was trying to be clear about when such a request would and would not play that way. The equivalent to what was discussed in the OP (a call for the participation of artists) would be sending out a general office email asking for (random) women to comment on the ad campaign. That's condescending and classic privileged behavior. Just asking some particular women they respect the very same kind of questions that they might put to a male colleague, isn't.

Comment by michael_sullivan on We Don't Really Want Your Participation · 2007-09-11T15:13:04.000Z · score: 6 (6 votes) · LW · GW

"It's not unlike a group of male advertisers sitting around a table considering whether they should solicit a female colleague's perspective on a particular ad campaign. That might be considered condescending, but its equally likely that her opinion may be of value, if not uniquely "feminine" in some way."

Not "might" but would be considered condescending. It's classic privileged behavior to essentially ask the token X to speak for Xs. And Eliezer hits on exactly why it's privileged and condescending. Because if they really cared about her opinion, they would already have specific questions to ask, rather than merely "solicit her perspective" so they can check "woman" (or in the original case "artist") off on their checklist of countries heard from.

Comment by michael_sullivan on Why is the Future So Absurd? · 2007-09-07T14:56:59.000Z · score: 2 (2 votes) · LW · GW

I think this is another key application of the way of Bayes. The usefulness of typical future predictions is hampered by the expectation of binary statements.

Most people don't make future pronouncements by making lists of 100 absurd-seeming possibilities, each with a low but significant probability, and saying "although I would bet against any single one of these happening by 2100, I predict that at least 5 of them will."

A classic simplified model for predicting uncertain futures is a standard tournament betting pool (like the NCAAs, for instance). In any reasonably competitive 64-team field, given an even-money bet on the best team to be the winner, you would be right to bet against it. But it is still correct to bet on the best team to win in a pool (barring any information about other bets). OTOH, if you have big upset incentives, or if you know who else is betting on what, you can sometimes make profitable (+EV) bets on teams that are less likely to win than the best team, because those bets are claims of the form "I believe team X has greater than Y% probability to do Z", where Y can be arbitrarily low.

Predicting futures is similar. Presumably crazy future predictions look absurd even to field experts because they have a very low probability of occurring. It is right to bet against all of them one on one. But the number of such absurd but not impossible predictions is so large that it is not right to bet against all of them together. As we head further into the future, the probability that some absurd thing will happen rapidly approaches 1.
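The arithmetic behind this is plain binomial math. Assuming, purely for illustration, 100 independent "absurd" predictions at 5% each:

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k successes in n independent trials, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 100 absurd-seeming predictions, each given only a 5% chance (assumed numbers):
print(p_at_least(1, 100, 0.05))  # near 1: almost certainly SOME absurd thing happens
print(p_at_least(5, 100, 0.05))  # over 1/2: "at least 5 come true" is a good bet
```

So betting against any single prediction is correct, while betting that at least a handful of them come true is also correct.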

The problem is figuring out which ones to bet on if you are making a typical prediction list that is phrased "In year 2100 thus and so will be the case". And the answer is that we don't have enough information to make any absurd predictions with even close to 50% confidence. If we could make a prediction of something with 50% confidence then, at least within fields possessing appropriate knowledge, it would not be considered absurd.

I'd like to see more futurists make predictions of the form I mentioned in my second paragraph, similar to Robin's approach in the list of 10 crazy things he believes.

Because if experts did that, it would get us thinking more about the 1000 or so currently foreseeable directions from which the 10-20 absurd changes of the next 100 years are most likely to come.

Comment by michael_sullivan on Absurdity Heuristic, Absurdity Bias · 2007-09-05T20:08:11.000Z · score: 4 (4 votes) · LW · GW

Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying "I don't know".

Really? I doubt it.

On the set of things that looked absurd 100 years ago, but have actually happened, I'm quite sure you're correct. But of course, that's a highly self-selected sample.

On the set of all possible predictions about the future that were made in 1900? Probably not.

I recall reading not long ago, a list of predictions made about technological and social changes expected during the 20th century, written in 1900. Might have been linked from a previous discussion on this blog, in fact. The surprising thing to me was not how many predictions were way off (quite a few), but how many were dead on, or about as close as they could have been presented in the language and concepts known in 1900 (maybe half).

I'm not going to claim that anti-absurdity is a good heuristic, but I don't think you're judging it quite fairly here. I think it's a fair bit better than maximum entropy.

Comment by michael_sullivan on "Science" as Curiosity-Stopper · 2007-09-04T16:51:33.000Z · score: 4 (4 votes) · LW · GW

There is a tremendous demand for mysteries which are frankly stupid. I wish this demand could be satisfied by scientific mysteries instead. But before we can live in that world, we have to undo the idea that what is scientific is not curiosity-material, that it is already marked as "understood".

I think one of the biggest reasons for this is that most of us are satisficers when it comes to explanations of the world. The implication that some scientists know what is going on with a certain phenomenon, and are not radically reinterpreting all their theories or designing flurries of experiments, essentially means "this phenomenon does not need to radically disturb my map of understanding of the world".

Suppose the answer to the elephant in the room is that God definitely exists and can overturn or modify physical "laws" at whim, and, starting today, is willing to provide independently replicable external proof of that fact to any willing skeptical observer; this silvery-green elephant is the first salvo in the project.

Now, if I knew this, I could certainly claim that "somebody else understands why this elephant is here", but it would be a pretty radical stretch to say "science", even though in some sense it would be. But when people say or imply that something is explainable by "science", what I believe they mean is that it is explainable in terms that do not render the current common understandings of some major scientific field moot.

Now, in practice, all people's internal maps of understanding are so severely limited that studying any deep scientific problem (solved or not, as long as they didn't already understand it) would, in fact, radically change their understanding of the world, even if they were not learning anything in the process that scientists in the field don't already know backwards and forwards. I'm a geek and read lots of science, so I've known all sorts of things about the effects of quantum mechanics on how I should understand the world since I was 14, but the moment when I finally got the math of the wave equation (after finally deciding to bang my head on the math as long as necessary) was nonetheless transformative.

So I agree with you completely. The fact that something is understood, if it was once a deep mystery, is no reason for anyone to treat it as trivial.

Comment by michael_sullivan on Positive Bias: Look Into the Dark · 2007-08-28T20:02:31.000Z · score: 3 (3 votes) · LW · GW

It seems very normal to expect that the rule will be more restrictive or arithmetic in nature. But if I am supposed to be sure of the rule, then I need to test more than just a few possibilities. Priors are definitely involved here.

Part of the problem is that we are trained like monkeys to make decisions on underspecified problems of this form all the time. I've hardly ever seen a "guess the next [number|letter|item] in the sequence" problem that didn't have multiple answers. But most of them have at least one answer that feels "right" in the sense of being simplest, most elegant, most obvious, or within typical bounds given basic assumptions about problems of that type.

I'm the sort of accuracy-minded prick who would keep testing until he was very close to certain what the rule was, and would probably take forever.

An interesting version of this phenomenon is the game "Bang! Who's dead?". One person starts the game, says "Bang!", and some number of people are metaphorically dead, based on a rule that the other participants are supposed to figure out (which is, AFAIK, the same every time, but I'm not saying it here). The only information the starter will give is who is dead each time.

Took me forever to solve this, because I tend to have a much weaker version of the bias you consider here. But realistically, most of my mates solved this game much faster than I did. I suspect that this "jump to conclusions" bias is useful in many situations.

Comment by michael_sullivan on Absence of Evidence Is Evidence of Absence · 2007-08-14T18:52:10.000Z · score: -1 (1 votes) · LW · GW

If sabotage increases the probability, lack of sabotage necessarily decreases the probability.

That's true on average, but different types of sabotage evidence may have different effects on the probability, some negative, some positive. It's even conceivable, though unlikely, for sabotage, on average, to decrease the probability.

Comment by michael_sullivan on Absence of Evidence Is Evidence of Absence · 2007-08-13T14:32:09.000Z · score: 15 (16 votes) · LW · GW

The particular observation of no sabotage was evidence against, and could not legitimately be worked into evidence for.

You are assuming that there are only two types of evidence, sabotage v. no sabotage, but there can be much more differentiation in the actual facts.

Given Frank's claim, there is a reasoning model for which your claim is inaccurate. Whether this is the model Earl Warren had in his head is an entirely different question, but here it is:

We have some weak independent evidence that a fifth column exists, giving us a prior probability of >50%. We have good evidence that some Japanese Americans are disaffected, with a prior of 90%+. We believe that a fifth column which is organized will attempt a significant coordinated sabotage event, possibly holding off on any and all sabotage until said event. We also believe that the disaffected who are here would, if there is no fifth column, engage in small acts of sabotage on their own with high probability.

Therefore, if there are small acts of sabotage that show no large-scale organization, this is weak evidence of a lack of a fifth column. If there is a significant sabotage event, this is strong evidence of a fifth column. If there is no sabotage at all, this is weak evidence of a fifth column. Not all sabotage is alike; it's not a binary question.
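A toy numerical version of this reasoning model, with every likelihood invented purely for illustration, shows how the three possible observations push the posterior in different directions:

```python
prior_fifth_column = 0.6  # assumed: weak independent evidence, just over 50%

# Likelihood of each observation under each hypothesis (all numbers assumed):
# an organized fifth column holds off on small sabotage; disaffected
# individuals acting alone commit small acts with high probability.
likelihood = {
    #                  P(obs | fifth column), P(obs | none)
    "no sabotage":     (0.55,                 0.10),
    "small sabotage":  (0.15,                 0.85),
    "large sabotage":  (0.30,                 0.05),
}

def posterior(observation, prior):
    """Bayesian update of P(fifth column) on one observation."""
    l_fc, l_none = likelihood[observation]
    return (l_fc * prior) / (l_fc * prior + l_none * (1 - prior))

for obs in likelihood:
    print(obs, round(posterior(obs, prior_fifth_column), 3))
# No sabotage raises the posterior, small sabotage lowers it,
# and large sabotage raises it sharply: sabotage is not binary.
```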

Now, this is a nice rationalization after the fact. The question is: if there had been rare small acts of sabotage, what is the likelihood that this would have been taken by Warren and others in power as evidence that there was no fifth column? I submit that it is very unlikely, and your criticism of their actual logic would thus be correct. But we can't know for certain, since they were never presented with that particular problem. And in fact, I wish that you, or someone like you, had been on hand at the hearing to ask the key question: "Precisely what would you consider to be evidence that the fifth column does not exist?"

Of course, whether widespread internment was a reasonable policy, even if the logic they were using were not flawed, is a completely separate question, on which I'd argue that very strong evidence should be required to adopt such a severe policy (if we are willing to consider it at all), not merely of a fifth column, but of widespread support for it. It is hard to come up with a plausible set of priors where "no sabotage" could possibly imply a high probability of that situation.

Comment by michael_sullivan on Two More Things to Unlearn from School · 2007-07-13T14:18:38.000Z · score: 9 (8 votes) · LW · GW

All textbooks should contain a few deliberately placed errors that students should be capable of detecting. This way if a student is confused he might suspect it is because his textbook is wrong.

Starting that in the current culture would be...interesting, to say the least.

I still recall vividly a day that I found an error in my sixth grade math textbook and pointed it out in class. The teacher, who clearly understood that day's lesson less well than I did, concocted some kind of just so story to explain the issue which had clear logical inconsistencies, which I also pointed out, along with a plausible just so story of my own of how the error could have happened innocently.

I ended up being mocked by both teacher and students as someone who "thinks he knows everything". Because of course, we all know that the textbook author not only does know everything, but is incapable of making typographical errors.

Oddly, at the time I was remarking on the error to stand up for a classmate who was expressing confusion. She couldn't understand why her (correct) answer to a question was wrong.

Comment by michael_sullivan on Scope Insensitivity · 2007-05-14T17:01:57.000Z · score: 51 (51 votes) · LW · GW

I'm not sure I buy that this is completely about scope insensitivity rather than marginal utility and people thinking in terms of their fair share of a Kantian solution. Or put differently, I think the scope insensitivity is partly inherent in the question, rather than a bias of the people answering.

Let's say I'd be willing to spend $100 to save 10 swans from gruesome deaths. How much should I, personally, be willing to spend to save 100 swans from the same fate? $1000? $10,000 for 1,000 swans? What about 100,000 swans -- $1,000,000?

But I don't have $1,000,000, so I can't agree to spend that much, even if I believe that it is somehow intrinsically worth that much. When I'm looking at what I personally spend, I'm comparing my ideas about the value of saving swans to the personal utility I give up by spending that money. $100 is a night out. $1000 is a piece of furniture or a small vacation. $10,000 is a car or a year's rent. $100,000 is a big chunk of my net worth and a sizable percentage of what I consider FU money. As I go up the scale my pain increases non-linearly, and my personal pain is what I'm measuring here.
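One conventional way to model that non-linearly increasing pain is log utility of wealth. A sketch, with all numbers assumed for illustration:

```python
from math import log

wealth = 200_000  # assumed net worth in dollars

def utility_cost(spend, wealth=wealth):
    """Utility lost by spending `spend` out of `wealth`, under log utility."""
    return log(wealth) - log(wealth - spend)

for spend in (100, 1_000, 10_000, 100_000):
    print(spend, round(utility_cost(spend), 5))
# Each step multiplies the dollars by 10, but the utility cost grows
# faster than 10x: $100,000 here is half of net worth, and giving it
# up hurts far more than 1000 nights out.
```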

So considering a massive problem like saving 2 million swans, I might take the Kantian approach. If say, 10% of people were willing to put $50 toward it, that seems like it would be enough money, so I'll put $50 toward it figuring that I'd rather live in a world where people are willing to do that than not.

Like many interpretations of studies like this, I think you're pulling the trigger on an irrationality explanation too fast. I believe that what people are thinking here is much more complicated than you're giving them credit for, and with an appropriate model their responses might not appear to be innumerate.

It's a hard question to ask in a way that scales appropriately, because money only has value based on scarcity, so you can't say "If you were emperor of a region with unlimited money to spend, what would it be worth to save N swans?" because the answer is just "as much as it takes". Money only has value if it is scarce, and what you're really interested in is: using 2007 US dollars as units, how much other consumption should be forgone to save N swans? But people can only judge that accurately from their own limited perspective, where they have only so much consumption capacity to go around.

Comment by michael_sullivan on Useless Medical Disclaimers · 2007-03-19T18:51:48.000Z · score: 3 (3 votes) · LW · GW

Two classic objections to regulation are that (a) it infringes on personal freedom and (b) the individual always knows more about their own situation than the regulator. However, my proposed policy addresses both of these issues: rather than administering a math test, we can ask each individual whether or not they're innumerate. If they do declare themselves to be innumerate, they can decide for themselves the amount of the tax to pay.

What do you think? Would this tax give people an incentive to become less innumerate, as standard economics would predict?

I'm not sure I'm getting the joke. Obviously few or no rational actors would declare themselves innumerate under this scheme, and it doesn't appear to provide any incentive whatsoever to become less innumerate under standard economic theory.

Am I missing something, or was your post?