Comments

Comment by AndySimpson on Rationality Quotes: January 2011 · 2011-01-04T19:01:56.063Z · LW · GW

"...natural selection built the brain to survive in the world and only incidentally to understand it at a depth greater than is needed to survive. The proper task of scientists is to diagnose and correct the misalignment." -- E. O. Wilson

Comment by AndySimpson on Rationality Quotes: January 2011 · 2011-01-04T18:56:41.740Z · LW · GW

"Fanatics may suppose, that dominion is founded on grace, and that saints alone inherit the earth; but the civil magistrate very justly puts these sublime theorists on the same footing with common robbers, and teaches them by the severest discipline, that a rule, which, in speculation, may seem the most advantageous to society, may yet be found, in practice, totally pernicious and destructive." -- David Hume

More of an anti-fanaticism quotation, but it seems to belong.

Comment by AndySimpson on Pain · 2009-08-03T17:45:05.430Z · LW · GW

Pain is broadly not preferred. That is to say, an absence of cognition is preferred to the cognition of pain. This makes the question easy for a preference utilitarian, who holds there is nothing impeding the value of the preferences of subjects: Badness attaches to pain when a subject would rather not be feeling it. When a subject prefers pain for whatever reason, there is nothing wrong with it. For objective moral systems outside of preference utilitarianism, the question is a little more threatening.

Comment by AndySimpson on The Nature of Offense · 2009-07-24T10:38:46.751Z · LW · GW

I have no idea what I'm wading into here, but a few things occurred to me reading this:

Taking offense to something relies on status and perhaps more significantly on interpellation. Interpellation and its inherent insistence on dignity create barriers to what I'll call effective communication and introduce a rhetoric of respect. If we wish to be rationalists, really and truly, it seems like we must have a discourse that avoids insisting on respect for anyone or anything. We must all get thick skins, be willing to hear ourselves treated as objects of outside analysis and be willing to be ignored when we have bad ideas. Unwise, "offensive" comments like the one that seemed to kick off this discussion can be assayed because they are examples of poor thinking rather than because they are causes of emotional distress. Here, when it gets down to serious business, we should each have no more merit or status than our own arguments give us.

However, I have no idea how to sum this up in a maxim or otherwise implement this. What I offer is not a solution but an objective. I hope others can flesh it out.

Comment by AndySimpson on Not Technically Lying · 2009-07-06T09:38:31.297Z · LW · GW

The Pope is a good neutral third party. He has taken the consolation prize of being the World's Most Moral Man because he can't be Vladimir Putin or Barack Obama, both of whom have more friends and more power.

Comment by AndySimpson on Media bias · 2009-07-06T09:15:41.063Z · LW · GW

Two corollary explanations come to mind. First, writing uses a wider variety of registers and styles than spoken language. Forms and usages that would sound exaggerated or affected in spoken language are socially appropriate in writing. Writing is constructed over time and predominantly "for the record," so it uses precise, unforgiving language that suits the specific context of the writing. This is why the first line of a Wikipedia article on some topic in math, poetry, or physics is often indecipherable to a lay reader, even an educated one, without further reading. Spoken language, on the other hand, is first and foremost a form of communication from a speaker to a listener, and is composed and interpreted in real time, even if it's guided by notes. This makes it more fluid and colloquial, and more likely to employ a register that the speaker and listener will both understand readily. Since successful writers use the more precise, ossified language and successful speakers use the more fluid one, they diverge through memetic evolution, as suggested.

The second explanation has more to do with the way writing is taught. I don't know how much it applies to technical writing, maybe somebody can share their experience on that point. Since the Victorian era, prose has embraced brevity. The briefest explanation that still conveys the broad meaning of an author's idea is usually treated as the best stylistically. This sacrifices precision for a kind of clarity, but in a field like mathematics, precision is clarity. Typical admonitions about brevity of style, then, render useless attempts to explain big, scary concepts. Lecturers, however, have the opportunity to pursue digressions and explain minutiae in half-organized ways and still hold the attention of an audience because the lecturer can easily signal the importance of a difficult intermediate step to the wider narrative in a way that would be clunky and perhaps abrupt in writing.

Comment by AndySimpson on The Aumann's agreement theorem game (guess 2/3 of the average) · 2009-06-10T09:50:29.886Z · LW · GW

Here is my question: Is there any payoff whatsoever for everyone drawing?
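For readers unfamiliar with the game in the post title, the equilibrium logic can be sketched as iterated best responses. This is a minimal illustration of my own, not drawn from the thread:

```python
def iterate_guess(start=50.0, rounds=10):
    """Iterate the naive best response in the 2/3-of-the-average game."""
    g = start
    history = [g]
    for _ in range(rounds):
        g = (2.0 / 3.0) * g  # if everyone guesses g, the winning guess is (2/3)*g
        history.append(g)
    return history

hist = iterate_guess()
# each round shrinks the guess by a factor of 2/3, driving it toward 0
```

Each round of reasoning multiplies the guess by 2/3, so fully iterated reasoning converges to 0 — which is why the question of a payoff for everyone drawing at 0 matters.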

Comment by AndySimpson on Link: The Case for Working With Your Hands · 2009-05-28T22:17:02.523Z · LW · GW

Understood. I should've made it clear I was responding specifically to

A large part of the satisfaction of motorcycle work that Crawford describes comes from the fact that such work requires one to confront reality, however harsh it may be. Reality cannot be placated by hand-waving, Powerpoint slides, excuses, or sweet talk. But the very harshness of the challenge means that when reality yields to the finesse of a craftsman, the reward is much greater.

Comment by AndySimpson on Link: The Case for Working With Your Hands · 2009-05-28T21:23:27.726Z · LW · GW

Reality is not very harsh when all you're dealing with is a broken motorcycle or a program that won't compile. When you're dealing with public policy, which even in its best form is usually social triage, deciding who gets what and who will be left unemployed, poor, sick, in debt, unfunded, oppressed, or dead, the facts have a much greater sting.

And, as MichaelVassar points out, political success is usually pretty clear cut, at least in the long run. Just ask Walter Mondale or John McCain.

Comment by AndySimpson on Do Fandoms Need Awfulness? · 2009-05-28T21:09:11.001Z · LW · GW

This seems to broaden the discussion considerably from works of art with fandoms to anything with a following. I think you'll agree that there's a noticeable difference between the attitude of otaku toward anime and F1 followers toward F1 cars and races.

Comment by AndySimpson on Do Fandoms Need Awfulness? · 2009-05-28T20:48:08.267Z · LW · GW

This strikes me as the right answer. Things like Star Trek and Tolkien are incredibly powerful for very small subsets of the population because their creators make risky aesthetic and narrative choices. It isn't so much that fans feel they must come to the defense of their preferred works, but that those works speak to them in rare and intense ways that are really distasteful to most people. So fans bask in the uncommon power of their fan-objects and disregard prevailing opinion. People aren't as fanatical about things like Indiana Jones or Animal Farm because their appeal is shallow and broad: everyone seems to agree that Indiana Jones is a sympathetic and entertaining character and Animal Farm is a clever allegory, but they only speak to one thing, and one thing that is widely understood. Star Trek, by comparison, is an immersive universe that goes down peculiar and deep paths that explore culture, power, ethics, and history among other things. It is not so much that all fan-objects possess objective awfulness, but they all do sacrifice wide appeal for a constrictive spiritual completeness.

Comment by AndySimpson on Homogeneity vs. heterogeneity (or, What kind of sex is most moral?) · 2009-05-24T18:05:21.452Z · LW · GW

As other commenters have suggested, what is moral is not reducible to what is natural. This assumption, which underlies the entire post, is left totally unaddressed. I understand that genetic fitness is relevant to morality because people must endure, but this doesn't seem to demand that the extent of morals be fitness. I would love a post that explains morality as inherently and solely about fitness.

This post flies from one topic to another very quickly, and I can't understand all the connections between topics. Why is the human designer of transhumanity suddenly free to choose a new moral chassis for his creation, and why should he care about the moral success of the transhumans? Shouldn't he create a transhumanity that maximizes his own fitness?

More broadly, are we talking about real transhumans or a human-designed strong AI?

Comment by AndySimpson on Homogeneity vs. heterogeneity (or, What kind of sex is most moral?) · 2009-05-24T17:16:05.390Z · LW · GW

Also, organisms are always adaptation-executors rather than direct fitness-maximizers.

Comment by AndySimpson on Least Signaling Activities? · 2009-05-22T18:01:37.216Z · LW · GW

On first glance, the answer that came to mind was accidental death or serious injury due to sheer incompetence, like walking off a cliff. Something that has a massive survival cost and only communicates failure seems like it couldn't be signaling. Mistakes are revealing, after all. But this kind of signaling happens all the time, mostly as a flawed means of signaling courage or simply drawing attention.

It struck me then that the question of what is "least signaling" may not be useful for determining states of mind, that every behavior can be an attempt at signaling. All that changes is the size of the audience and the success of the signaling. Conversely, a behavior that is usually associated with signaling can occur for perfectly honest or private reasons. (This is the pretense of polite society, that someone "meant nothing by it" even when "it" is dressing in a frock coat and top hat or, alternatively, stripping half naked. But that is for another thread.) The point is we are not bound to always think in a signaling way when we're involved in behavior that readily signals.

Comment by AndySimpson on A Parable On Obsolete Ideologies · 2009-05-16T10:38:06.235Z · LW · GW

Colonel F suggests the worst kind of compromise between the optimal and the real. Political actors must not overlook reality, as many of the great revolutionaries of history did, but neither should they bend their agendas to it, as Chamberlain, Kerensky, and so many tepid liberals and social democrats did. To do so is to surrender without even fighting. This is especially true for political actors with a true upper hand, like Eisenhower or MacArthur after World War II. They must control the conversation, they must push the Overton window away from competing ideologies and towards their own, because all advantages are tentative. There is no sense compromising with a broken enemy.

That said, it is clearly unwise to be overtly punitive after a victory because punishment suggests weakness on the part of the victor, it suggests an order that can only be maintained by retaliation and fear. This is why the Emperor remained on the throne in Japan and initiatives like the Morgenthau Plan were discarded. The Emperor was not the enemy, Germany was not the enemy: the ideologies of militant nationalism were the enemy.

To me, Colonel Y is obviously correct. I guess this is because I don't buy the analogy. Religion is emergent, pervasive, and broadly well-intentioned. Nobody ever defeated it in the field of battle, because it never waged open war against civilization. On the contrary, it has cemented itself as part of civilization. Nazism, however, was transient, antagonistic to civilization, and destructive. Even if it were rendered metaphorical, it would make more problems than it would ever solve. There was a German identity before the Nazis and, as we've seen, there is one afterward.

Comment by AndySimpson on Rationalist Role in the Information Age · 2009-04-30T22:55:11.904Z · LW · GW

The thing is, I think Wikipedia beat you to the punch on this one. They may not be Yudkowskian, big-R Rationalists, but they are, broadly-speaking, rational. And they do an incredibly effective job of pooling, assessing, summarizing, and distributing the best available version of the truth already. Even people of marginal source-diligence can get a clear view of things from Wikipedia, because extensive arguments have already distilled what is clearly true, what is accepted, what is speculation, and what is on the fringe.

I encourage you to bring the clarity of thought taught in the Less Wrong community to Wikipedia by contributing.

That said, it would be pretty cool if they'd implement a karma-like system for Wikipedia contributors. It would make vandals, fools, trolls, noobs, editors in good standing, and heroic contributors easily recognizable.

Comment by AndySimpson on Rationalist Role in the Information Age · 2009-04-30T22:42:42.165Z · LW · GW

NPOV does not stand for "No point of view." Nor does it mean "balance between competing points of view." Check out this and this. NPOV requires that Wikipedia take the view of an uninvolved observer, and it is supplemented by verifiability, which requires that Wikipedia take an empirical, secondary point of view that credits established academia.

So content disputes are usually settled by evaluating claims as true or false through verification. Those who continue to object to a claim once it has been established do not have to be included in a consensus. That is why Wikipedia is able to assert the truth of the Armenian Genocide, the Holocaust, and the moon landings.

Comment by AndySimpson on Epistemic vs. Instrumental Rationality: Approximations · 2009-04-28T08:50:20.345Z · LW · GW

So what lesson does a rationalist draw from this? What is best for the Bayesian mathematical model is not best in practice? Conserving information is not always "good"?

Also,

I will simply rationalize some other explanation for the destruction of my apartment.

This seems distinctly contrary to what an instrumental rationalist would do. It seems more likely he'd say "I was wrong, there was actually an infinitesimal probability of a meteorite strike that I previously ignored because of incomplete information/negligence/a rounding error."

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T09:39:36.807Z · LW · GW

On the whole, we're agreed, but I still don't know how I'm supposed to choose values.

This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.

I think this tactic works best when you're dealing with a particular constituency that agrees on some creed that they hold to be objective. Usually, when you call your opponent a bad person, you're playing to your base, not trying to grab the center.

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T09:32:34.307Z · LW · GW

I think we are close. Do you think enjoyment and pain can be reduced to or defined in terms of preference? We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also. Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.

To me, preference is significant because it usually underlies the start of desirable cognitions or the end of undesirable ones, in me and other conscious things. The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized. That is the whole hand-off from evolution to "objective" morality; from there, the faculties of rational discipline and the minimal framework of society take over. Is it too much?

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T08:57:27.811Z · LW · GW

Peaceful coexistence is not something I object to. Neither does anything oblige agents to perfectly align their values; each is free to choose. I strongly endorse people with wildly different values cooperating in areas of common interest: I'm firmly in Anton LaVey's corner on civil liberties, for instance. It should be recognized, though, that some are clearly more wrong than others, because some people get poor information and others reason poorly through akrasia or inability. Anton LaVey was not trying hard enough. I think the question is worth asking, because it is the basis of building the minimal framework of rules from each person's judgement: How are we supposed to choose values?

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T08:38:15.762Z · LW · GW

Why do you think it needs to be confronted? ... I don't however feel the need to 'prove' that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle - I simply take it as a given.

I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?

Most problems in the world seem to arise from conflicting goals, either internally or between different people. I'm primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts.

Again, this is the ultimately important part. Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want. Further, we discipline ourselves so that our goals are clear and consistent. All I'm saying is that you may want to look into the basis of your own goals and systematize them to enhance clarity.

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T08:11:29.106Z · LW · GW

In theory, the westerners would just be sending their money to desperately poor people.

I'm not an economist, but I think you could model that as a kind of demand. And I don't think I stipulated to there being a transfer of wealth.

Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.

For me, the interesting question is how one goes about choosing "terminal values." I refuse to believe that it is arbitrary or that all paths are of equal validity. I will contend without hesitation that John Stuart Mill was a better mind, a better rationalist, and a better man than Anton LaVey. My own thinking on these lines leads me to the conclusion of an "objective" morality, that is to say one with expressible boundaries and one that can be applied consistently to different agents. How do you choose your terminal values?

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T06:46:43.081Z · LW · GW

Ok, here is what I don't agree with:

Choosing those goals is not something that rationality can help much with - the best it can do is try to identify where goals are not internally consistent.

I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn't we try to pin down and either discard or accept some version of "purpose," as a sort of first instrumental rationality?

I mention objectivity because I don't think you can have any useful ethics without some static measure of comparability, some goal, however loose, that each person can pursue. There's little to discuss if you don't, because "everything is permitted." That said, I think ethics has to understand each person's competence to self-govern. Your utility function is important to everyone, but nobody knows how to maximize your utility function better than you. Usually. Ethics also has to bend to reality, so the more "important" thing isn't agreement on theoretical questions, but cooperation towards mutually-agreed goals. So I'm in substantial agreement with:

Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.

And I would enjoy thoroughly a post on this topic.

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T05:39:54.773Z · LW · GW

It's hard to reconcile any western lifestyle with traditional utilitarianism though so if that's your main concern with cryonics perhaps you need to reconsider your ethics rather than worry about cryonics.

One of the beauties of utilitarianism is that its ethics can adapt to different circumstances without losing objectivity. I don't think every "western lifestyle" is necessarily reprobate under utilitarianism. First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild. We can't all afford to be Gandhi. The rub is trying to avoid being a part of really harmful, unsustainable things like commercial ocean fishing or low fuel-efficiency cars without causing an ethically greater amount of inconvenience or economic harm.

All that said, I'd be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.

Comment by AndySimpson on This Didn't Have To Happen · 2009-04-24T05:00:15.585Z · LW · GW

This may be a naïve question, but could someone make or link me to a good case for cryonics?

I know there's a fair probability that we could each be revived in the distant future if we sign up for cryonics, and that is worth the price of admission, but that always struck me as a misallocation of resources. Wouldn't it be better, for the time being, if we dispersed all the resources used on cryonics to worthwhile causes like iodized salt, clean drinking water, or childhood immunization and instead gave up our organs for donation after death? Isn't the cryonics thing one big fuzzy, or at least a luxury?

Comment by AndySimpson on LessWrong Boo Vote (Stochastic Downvoting) · 2009-04-24T04:35:48.093Z · LW · GW

Why a 0.3 chance? Is that totally arbitrary? Also, it seems like a "boo" button would quickly become a means for people to indulge in inappropriate down-voting and feel insulated from responsibility for the outcome. It would also be a tempting false compromise between actually down-voting and doing nothing. Usually, one or the other is the right choice.
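As I understand the proposal, a "boo" would only register as a downvote with some fixed probability. A minimal sketch of that mechanism, where the 0.3 figure is taken from the question above and the function name is my own:

```python
import random

def apply_boo(score, p=0.3, rng=random.random):
    """Apply a 'boo' that registers as a -1 only with probability p."""
    return score - 1 if rng() < p else score
```

Passing `rng` explicitly makes the randomness testable; in production it would default to the system generator.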

Comment by AndySimpson on LessWrong Boo Vote (Stochastic Downvoting) · 2009-04-24T04:22:36.496Z · LW · GW

I really do think we're all getting too worked up over the minutia of the karma system.

Agreed, but:

This isn't a game.

We must admit that to a great extent, it is. We are all attempting to make ourselves appear more useful to the community, and karma is the only quantitative way to tell if we're making progress. Like so many things, it feels like it trivializes but it is there for a purpose.

Comment by AndySimpson on Escaping Your Past · 2009-04-23T01:04:45.283Z · LW · GW

An important, so-often-useful distinction. This reminds me of the Buddhist notion of fetters. Fetters are personal features that impair your attainment of enlightenment and bind you to suffering. You can cast them off, but in order to do so, you have to cut the crap and practice doing without them, with the full knowledge that it may take many lifetimes to free yourself. It is not sufficient to announce your adhesion to the creed of enlightenment. The only things that make you do better are the things that make you do better. Everything else is window-dressing, or at best means to that end.

On another note...

I feel bad blogging about rationality, given that I'm so horribly, ludicrously bad at it. I'm also horribly, ludicrously bad at writing.

Is that hyperbolic self-effacement I detect?

Comment by AndySimpson on Atheist or Agnostic? · 2009-04-21T23:30:59.422Z · LW · GW

I used to be worried about this, too. Then I found this beautifully concise term that resolves the whole question and ends semantic arguments over this arbitrary, imaginary distinction: agnostic atheist. This correctly describes me and I think it describes most other people who would call themselves agnostic or atheist. I encourage you to spread the term, and, when it's necessary or convenient, collapse the term into what you mean: atheist, which signifies only a lack of positive theism.

Also, Bertrand Russell explored this question thoroughly in his essay, "Am I an Atheist or an Agnostic?" I commend it as well for anyone who is confused about how to identify themselves.

On a side-quibble, I'm also careful about saying I'm "an atheist," with the article. I'm not "an" atheist in the same way a Methodist is a Methodist: my atheism doesn't mean I'm part of a discrete association of people. I don't go to atheist non-church with my fellow atheists on my unholy day. Think of how odd and even offensive it would seem, for instance, if we said each person with blue eyes was "a blue-eyed." Why? Socially, we would falsely be tagging him or her as merely a part of a greater faction of blue-eyed people. This is how nouns work in English: we have a set of social assumptions about "a doctor", but no such assumptions about "someone trained in medicine."

So "I am atheist" or, if you must, "I am agnostically atheist," work well.

Comment by AndySimpson on Well-Kept Gardens Die By Pacifism · 2009-04-21T22:55:13.301Z · LW · GW

Which was terrible and sitting at -1? I don't understand. All I was trying to indicate is that I've noticed a pronounced deviation from standard upvoting and downvoting practices in this thread, mostly towards downvoting.

Comment by AndySimpson on Well-Kept Gardens Die By Pacifism · 2009-04-21T22:52:41.161Z · LW · GW

Really must set up my LessWrong dev environment so I can add a patch to show both upvotes and downvotes!

Indeed. If that is the only change to this site's system or ethic that comes out of this discussion, it will have been worth it.

Comment by AndySimpson on Well-Kept Gardens Die By Pacifism · 2009-04-21T16:06:37.224Z · LW · GW

Agreed. What seems to be happening, funnily enough, is an echo chamber: Eliezer said "you must downvote bad comments liberally if you want to survive!" and so everyone's downvoting everyone else's comments on this thread.

Comment by AndySimpson on Well-Kept Gardens Die By Pacifism · 2009-04-21T13:54:25.296Z · LW · GW

I have the same apprehension. I'm somewhere between "complete poser" and "well-established member of the community": I found out about this movement around 50 days ago, started reading and lurking, and then started posting. When I read the original post, I felt a little pang of guilt. Am I a fool running through your garden?

I'm doing pretty well for myself in the little Karma system, but I find that often I will post things that no one responds to, or that get up-voted or down-voted once and then left alone. I find that the only things that get down-voted more than once or twice are real attempts at trolling or otherwise hostile comments. Then again, many posts that I find insightful and beneficial to the discussion rarely rise above 2 or 3 karma points. So I'm left to wonder if my 1-point posts are controversial but good, above average but nothing special, or just mediocre and uninteresting.

Something that shows the volume of up- and down-votes as well as the net point score might provide more useful feedback.
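The suggestion in the last paragraph amounts to exposing the raw vote counts alongside the net score. A minimal sketch of the data involved — field names are hypothetical, not LessWrong's actual schema:

```python
from dataclasses import dataclass

@dataclass
class VoteTally:
    upvotes: int = 0
    downvotes: int = 0

    @property
    def net(self):
        return self.upvotes - self.downvotes

    def display(self):
        # show volume and direction, e.g. "+3 / -2 (net +1)", not a bare "1"
        return f"+{self.upvotes} / -{self.downvotes} (net {self.net:+d})"
```

A score of +3/-2 and a score of +1/-0 both net to 1, but they signal very different reader reactions — which is exactly the ambiguity described above.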

Comment by AndySimpson on The Sin of Underconfidence · 2009-04-20T23:13:56.788Z · LW · GW

gjm asks wisely:

What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?

The central thrust of Eliezer's post is a true and important elaboration of his concept of improper humility, but doesn't it overlook a clear and simple political reality? There are reputational effects to public failure. It seems clear that those reputational effects often outweigh whatever utility is gained from an empirical "test" of one's own abilities: this is why international relations theory isn't a rigorous empirical science. We live in an irrational kaleidoscope of power, driven by instinct and emotion, ordered only fleetingly by rhetoric and guile. In this situation, we need to keep our cards close to our chest if we want to win.

Mulciber adds something along the same lines:

By increasing the challenge the way you suggest, you may very well be acting rationally toward the goal of testing yourself, but you're not doing all you can to cut the opponent. To rationally pursue winning the debate, there's no excuse for not doing your research.

And Eliezer does seem to approve of this mode of thinking in some cases:

Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost. In that case you make everything as easy for yourself as possible. To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.

So, to sum up my concern, how is this principle of pragmatism reconciled to your choice not to prepare? Isn't it best to test yourself in the peace and safety of your dojo, or in circumstances where the stakes are not high, and use every means available to resist on the actual field of battle?

Comment by AndySimpson on The Epistemic Prisoner's Dilemma · 2009-04-19T14:47:57.681Z · LW · GW

But in this case, someone with a degree of astronomical knowledge comparable to yours, acting in good faith, has come up to you and has said "I'm 99% confident that a meteor will hit your house today. You should leave." Why not investigate his claim before dismissing it?

Comment by AndySimpson on The Epistemic Prisoner's Dilemma · 2009-04-18T20:57:31.162Z · LW · GW

What if you're wrong?

Comment by AndySimpson on Rationality Quotes - April 2009 · 2009-04-18T20:54:13.174Z · LW · GW

I get the same sense.

Comment by AndySimpson on Rationality Quotes - April 2009 · 2009-04-18T19:55:05.097Z · LW · GW

Facts do not cease to exist because they are ignored.

--Aldous Huxley

Comment by AndySimpson on Rationality Quotes - April 2009 · 2009-04-18T19:51:53.318Z · LW · GW

Life is short, and truth works far and lives long: let us speak the truth.

--Arthur Schopenhauer

Comment by AndySimpson on Rationality Quotes - April 2009 · 2009-04-18T19:50:55.151Z · LW · GW

...natural selection built the brain to survive in the world and only incidentally to understand it at a depth greater than is needed to survive. The proper task of scientists is to diagnose and correct the misalignment.

-E. O. Wilson

Comment by AndySimpson on Rationality Quotes - April 2009 · 2009-04-18T19:39:23.820Z · LW · GW

Before we study Zen, the mountains are mountains and the rivers are rivers. While we are studying Zen, however, the mountains are no longer mountains and the rivers are no longer rivers. But then, when our study of Zen is completed, the mountains are once again mountains and the rivers once again rivers.

-- Buddhist saying

Comment by AndySimpson on The Epistemic Prisoner's Dilemma · 2009-04-18T19:15:58.404Z · LW · GW

It seems like you assume implicitly that there's an equal probability of the other doctor defecting: (0 + 10,000)/2 < (5,000 + 15,000)/2. That makes sense in the original prisoner's dilemma, but given that you can communicate, why assume this?
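
The comparison above can be made concrete. A minimal sketch, taking the payoffs quoted in the comment (0 or 10,000 if you cooperate, 5,000 or 15,000 if you defect, depending on the other doctor's choice) and treating the probability of the other doctor defecting as a free parameter rather than fixing it at one half; the function name and payoff assignment are mine, for illustration:

```python
def expected_value(p_other_defects: float, defect: bool) -> float:
    """Expected payoff for one strategy, given the chance the other doctor defects.

    Payoffs (hypothetical, read off the comment):
      cooperate -> 0 if the other defects, 10,000 otherwise
      defect    -> 5,000 if the other defects, 15,000 otherwise
    """
    if defect:
        return p_other_defects * 5_000 + (1 - p_other_defects) * 15_000
    return p_other_defects * 0 + (1 - p_other_defects) * 10_000

# At p = 0.5 this recovers the two averages in the comment:
assert expected_value(0.5, defect=False) == 5_000
assert expected_value(0.5, defect=True) == 10_000
```

Varying `p_other_defects` shows how sensitive (or insensitive) the comparison is to the equal-probability assumption being questioned.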

Comment by AndySimpson on The Trouble With "Good" · 2009-04-17T15:48:25.241Z · LW · GW

In utilitarianism, sometimes some animals can be more equal than others. It's just that their lives must be of greater utility for some reason. I think sentimental distinctions between people would be rejected by most utilitarians as a reason to consider them more important.

Comment by AndySimpson on The Trouble With "Good" · 2009-04-17T15:15:32.952Z · LW · GW

That is a good question for a statistician, and I am not a statistician.

One thing that leaps to mind, however, is two-boxing on Newcomb's Problem using assumptions about the prior probability of box B containing $1,000,000. Some new work using math that I don't begin to understand suggests that either response to Newcomb's problem is defensible using Bayesian nets.

There could be more trivial cases, too, where a person inputs unreasonable prior probabilities and uses cargo-cult statistics to support some assertion.
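
The trivial case above can be sketched with made-up numbers: feed an extreme enough prior into Bayes' theorem and even strong contrary evidence leaves the desired assertion nearly intact. The function and the specific numbers are mine, purely for illustration:

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Evidence favouring not-H by a factor of 100 (likelihood ratio 1/100)...
# ...barely dents a prior of 0.999999:
p = posterior(0.999999, 1 / 100)
assert p > 0.99  # the "supported" assertion survives strong contrary evidence
```

The arithmetic itself is unimpeachable; the abuse lives entirely in the input prior, which is the point of the comment.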

Also, it's struck me that a frequentist statistician might call most Bayesian uses of the theorem "abuses."

I'm not sure those are really good examples, but I hope they're satisfying.

Comment by AndySimpson on The Trouble With "Good" · 2009-04-17T11:39:24.568Z · LW · GW

I'm not sure this is always a bad thing.

It may be useful shorthand to say "X is good", but when we forget the specific boundaries of that statement and only remember the shorthand, it becomes a liability. When we decide that the statement "Bayes' Theorem is valid, true, and useful in updating probabilities" collapses into "Bayes' Theorem is good," we invite the abuse of Bayes' Theorem.

So I wouldn't say it's always a bad thing, but I'd say it introduces unnecessary ambiguity and contributes to sub-optimal moral reasoning.

Comment by AndySimpson on Bayesians vs. Barbarians · 2009-04-16T07:51:13.681Z · LW · GW

What army of free-market mercenaries could seriously hope to drive the modern US Armed Forces, augmented by a draft, to capitulation? Perhaps more relevantly, what army of free-market mercenaries could overcome the fanatical, disciplined mass of barbarians?

What I'm inferring from your comment is that a rational society could defend itself using market mechanisms, not central organization, if the need ever arose. Those mechanisms of the market might do well in supplying soldiers to meet a demand for defense, but I'm skeptical of the ability of the blind market to plan a grand strategy or defeat the enemy in battle. It's also very difficult to take your business elsewhere when you've hired men with guns to stop an existential threat and they don't do a good job of it. In order to defend a society, first there must be understanding that there is a society and that it's worth defending.

Comment by AndySimpson on Bayesians vs. Barbarians · 2009-04-15T11:39:37.170Z · LW · GW

This is a thoughtful, thorough analysis of some of the inherent problems with organizing rational, self-directing individuals into a communal fighting force. What I don't understand is why you view it as a special problem that needs special consideration.

Society is an agreement among a group of people to cooperate in areas of common concern. The society as one body defends the personal safety and livelihood of its component individuals and it furnishes them with certain guarantees of livability and fair play. In exchange, the component individuals pledge to defend the integrity of the society and contribute to it with their labor and ingenuity. This happens and it works because Pareto improvements are best achieved through long-term schemes of cooperation rather than one-off interactions. The obligation to collective defense, then, happens at the moment of social contract and it needs no elaboration. Even glancingly rational people in pseudo-rational societies recognize this on some level, and when society is threatened, they will go to its defense. So, there is no real incentive to defect against society when there is a draft to fight an existential threat because the gains of draft-dodging are greatly outweighed by the risk of the fall of civilization.

I think you go too far in saying that modern drafts are "a tool of kings playing games in need of toy soldiers." The model of the draft can be abused, as it was in the US during the Vietnam War, where there was no existential threat and draft-dodging was the smart move, but it worked remarkably well during World War II when a truly threatening horde of barbarians did emerge.

Along these lines, why is it that a lottery and chemical courage "is the general policy that gives us the highest expectation of survival?" Why couldn't we do the job with traditional selective-service optimization for fitness, intelligence, and psychological stability, coupled with the perfectly rational understanding that risking life in combat is better than guaranteeing societal collapse by running from battle?

Reading through your post, especially your suggestions for a coordinated response, I found myself thinking about the absurd spectacle of the Army of Mars in Kurt Vonnegut's The Sirens of Titan. New soldiers could get any kind of ice cream they wanted, right after their memories were wiped and implants were installed to beam the persistent "rent, rent, rented-a-tent" of a snare drum to their minds whenever they were made to march in formation. Somehow I don't think Vonnegut was suggesting an improvement.

Comment by AndySimpson on GroupThink, Theism ... and the Wiki · 2009-04-14T02:07:13.289Z · LW · GW

Rationalism isn't exclusively or even necessarily empirical. Just ask Descartes.

Comment by AndySimpson on Marketing rationalism · 2009-04-13T09:51:31.380Z · LW · GW

I think coming to agreement on terms through a dialectic is something most everyone can agree to engage in, and I don't think it's offensive to or beyond the scope of rationality. Socrates' way is the sort of meta-winning way, the way that, if fully pursued, will arrive at the conclusion of rationality.

For instance, in any one of those cases, I could start with a dialectic about problem-solving in everyday life, or at least general cases, and proceed to the principle that rationality is the best way. I'd try to come to agreement about the methods we use to diagnose a car problem, calculate how much we owe in taxes, or decide to enter an intersection, and extrapolate to epistemology from there. The philosopher, the Christian, and the hedonist all use reason, not will-to-power, faith, or desire to fix and drive their cars and pay their taxes, and this gives the evangelist of reason a method of proving the epistemological assertion that there is such a thing as truth, which we encounter in passing, and that rationality is the optimal way to approach it.