The ideas you're not ready to post

post by JulianMorrison · 2009-04-19T21:23:42.999Z · LW · GW · Legacy · 264 comments

I've often had half-finished LW post ideas and crossed them off for a number of reasons, mostly because they were too rough or undeveloped and I didn't feel expert enough. Other people might worry their post would be judged harshly, feel overwhelmed, worry about topicality, or simply want some community input before posting.

So: this is a special sort of open thread. Please post your unfinished ideas and sketches for LW posts here as comments, if you would like constructive critique, assistance and checking from people with more expertise, etc. Just pile them in without worrying too much. Ideas can be as short as a single sentence or as long as a finished post. Both subject and presentation are on topic in replies. Bad ideas should be mined for whatever good can be found in them. Good ideas should be poked with challenges to make them stronger. No being nasty!

264 comments

Comments sorted by top scores.

comment by Daniel_Burfoot · 2009-04-21T06:59:09.443Z · LW(p) · GW(p)

The Dilbert Challenge: you are working in a company in the world of Dilbert. Your pointy-haired boss comes to you with the following demand:

"One year from today, our most important customer will deliver us a request for a high-quality reliable software system. Your job and the fate of the company depends on being able to develop and deploy that software system within two weeks of receipt of the specifications. Unfortunately we don't currently know any of the requirements. Get started now."

I submit that this preposterous demand is really a deep intellectual challenge, the basic form of which arises in many different endeavors. For example, it's reasonable to believe that at some point in the future, humanity will face an existential threat. Given that we will not know the exact nature of that threat until it's almost upon us, how can we prepare for it today?

Replies from: cousin_it, thomblake, None
comment by cousin_it · 2009-04-21T13:02:12.898Z · LW(p) · GW(p)

Wow. I'm a relatively long-time participant, but never really "got" the reasons why we need something like rationality until I read your comment. Here's thanks and an upvote.

comment by thomblake · 2009-04-21T15:24:31.606Z · LW(p) · GW(p)

That's one of the stated objectives of computer ethics (my philosophical sub-field) - to determine, in general, how to solve problems that nobody's thought of yet. I'm not sure how well we're doing at that so far.

comment by [deleted] · 2009-04-21T10:54:25.595Z · LW(p) · GW(p)

deleted

comment by MBlume · 2009-04-20T03:11:59.991Z · LW(p) · GW(p)

On the Care and Feeding of Rationalist Hardware

Many words have been spent here in improving rationalist software -- training patterns of thought which will help us to achieve truth, and reliably reach our goals.

Assuming we can still remember so far back, Eliezer once wrote:

But if you have a brain, with cortical and subcortical areas in the appropriate places, you might be able to learn to use it properly. If you're a fast learner, you might learn faster - but the art of rationality isn't about that; it's about training brain machinery we all have in common.

Rationality does not require big impressive brains any more than the martial arts require big bulging muscles. Nonetheless, I think it would be rare indeed to see a master of the martial arts willfully neglecting the care of his body. Martial artists of the wisest schools strive to improve their bodies. They jog, or lift weights. They probably do not smoke, or eat unhealthily. They take care of their hardware so that the things they do will be as easy as possible.

So, what hacks exist which enable us to improve and secure the condition of our mental hardware? Some important areas that come to mind are:

  • sleep
  • diet
  • practice
Replies from: Vladimir_Golovin, AngryParsley, Drahflow, blogospheroid, jimmy, None
comment by Vladimir_Golovin · 2009-04-20T09:23:54.082Z · LW(p) · GW(p)

I'd definitely want to read about a good brain-improving diet (I have no problems with weight, so I'd prefer not to mix these two issues).

comment by AngryParsley · 2009-04-20T09:13:15.270Z · LW(p) · GW(p)

I agree. LW doesn't have many posts about maintaining and improving the brain.

I would also add aerobic exercise to your list, and possibly drugs. For example, caffeine or modafinil can help improve concentration and motivation. Unfortunately they're habit-forming and have various health effects, so it's not a simple decision.

Replies from: randallsquared
comment by randallsquared · 2009-04-20T21:36:54.490Z · LW(p) · GW(p)

I've only had modafinil once (it was amazing in the concentration-boosting department), but I have a lot of experience with caffeine, and for me its effects are primarily on mood. Large amounts of caffeine destroy concentration, offsetting any improvements, and, like other drugs, the effect grows weaker the longer you take it. On the plus side, caffeine is only weakly addictive, so you can just stop every now and then to reset things, which I do every few months.

comment by Drahflow · 2009-04-20T08:53:51.813Z · LW(p) · GW(p)

While we are at it:

  • caffeine
  • meditation
  • music
  • mood
  • social interaction

Also, which hacks are available to better interface our mental hardware with the real world:

  • information presentation
  • automated information filtering
comment by blogospheroid · 2009-04-21T06:00:07.661Z · LW(p) · GW(p)

Increasing the level of fruit in my diet helped me maintain a positive mood for longer. I tried it when I was alone for a while in a foreign country, so I'm not sure if it was a placebo effect.

comment by jimmy · 2009-04-21T06:18:16.255Z · LW(p) · GW(p)

Piracetam and other "nootropics" are worth checking out.

Piracetam supposedly helps with memory and cognition by increasing blood flow to the brain or something... I got some to play around with and will let you guys know if anything interesting happens.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-04-21T06:28:39.622Z · LW(p) · GW(p)

Piracetam supposedly helps with memory and cognition by increasing blood flow to the brain or something... I got some to play around with and will let you guys know if anything interesting happens.

Piracetam works by influencing acetylcholine. Vinpocetine and ginkgo biloba are examples of vasodilators (they work by increasing blood flow to the brain).

I (strongly) recommend adding a choline supplement when supplementing with piracetam (and the other *racetams). You burn through choline more quickly when using them and so can end up with mediocre results and sometimes a headache if you neglect a choline supplement.

Also give the piracetam a couple of weeks before you expect to feel the full impact.

The imminst.org forums have a useful subforum on nootropics that is worth checking out.

Replies from: jimmy
comment by jimmy · 2009-04-21T16:56:29.454Z · LW(p) · GW(p)

Thanks for the info.

I was planning on trying it without the choline first to see if it was really needed.

Any ideas on how to actually test performance?

Replies from: badger
comment by badger · 2009-04-21T19:57:06.181Z · LW(p) · GW(p)

Seth Roberts tracked the influence of omega-3 on brain function via arithmetic tests in R:

http://www.blog.sethroberts.net/2009/01/05/tracking-how-well-my-brain-is-working/ http://www.blog.sethroberts.net/2007/04/14/omega-3-and-arithmetic-continued/

It's a little hard to distinguish the benefit from practice and the benefit from omega-3, so ideally you'd alternate periods of supplement and no supplement.
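
Here is a minimal sketch (in Python rather than R, with entirely made-up numbers) of the kind of comparison such a log supports: arithmetic-test times grouped into alternating "on"/"off" supplement blocks, so practice effects are less likely to masquerade as a supplement effect.

```python
# Minimal sketch (not Seth Roberts' actual protocol): compare arithmetic-test
# times logged during alternating "supplement" and "no supplement" blocks.
# All numbers below are invented for illustration.
import statistics

# Hypothetical log: (day, phase, mean seconds per arithmetic problem)
log = [
    (1, "off", 2.31), (2, "off", 2.28), (3, "off", 2.35),
    (4, "on",  2.19), (5, "on",  2.12), (6, "on",  2.17),
    (7, "off", 2.30), (8, "off", 2.27),
    (9, "on",  2.14), (10, "on", 2.16),
]

def phase_mean(phase):
    return statistics.mean(t for _, p, t in log if p == phase)

diff = phase_mean("off") - phase_mean("on")
print(f"off-phase mean: {phase_mean('off'):.3f} s")
print(f"on-phase mean:  {phase_mean('on'):.3f} s")
print(f"apparent speed-up: {diff:.3f} s per problem")
# Caveat from the comment above: practice also speeds you up over time,
# which is why the on/off blocks are alternated rather than run back-to-back.
```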

Replies from: Desrtopa
comment by Desrtopa · 2011-04-13T15:27:21.307Z · LW(p) · GW(p)

Also, ideally you wouldn't know when you were getting omega-3 and when you were getting a placebo during the course of the experiment.

comment by [deleted] · 2009-04-20T11:15:39.255Z · LW(p) · GW(p)

If you are going to spend time researching this, I suggest including the agents of short-term cognitive decline (cognitive impairment in jargon). I once scored 103 on an unofficial (but normed) online IQ test after drinking 3 whiskeys the night before, and feeling just a little bit unmotivated. Depression is also known to, uh, depress performance.

comment by PhilGoetz · 2009-04-20T02:09:12.861Z · LW(p) · GW(p)

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other, because they compress information differently.

Analogy: Take some problem domain in which each data point is a 500-dimensional vector. Take a big set of 500D vectors and apply PCA to them to get a new reduced space of 25 dimensions. Store all data in the 25D space, and operate on it in that space.

Two programs exposed to different sets of 500D vectors, which differ in a biased way, will construct different basis vectors during PCA, and so will reduce all future vectors into different 25D spaces.

In just this way, two people with life experiences that differ in a biased way (due e.g. to socioeconomic status, country of birth, or culture) will construct different underlying compression schemes. You can give them each a text with the same words in it, but the representations each constructs internally are incommensurate; they exist in different spaces, which introduce different errors. When they reason on their compressed data, they will reach different conclusions, even if they are using the same reasoning algorithms and are executing them flawlessly. Furthermore, it would be very hard for them to discover this, since the compression scheme is unconscious. They would be more likely to believe that the other person is lying, nefarious, or stupid.
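
A scaled-down numerical sketch of the analogy (toy data, with 50 dimensions reduced to 3 rather than 500 to 25, and each agent's "biased life experience" simulated as a different random covariance structure): the two agents run identical PCA code, yet end up with incommensurate 3-number codes for the same new observation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 50  # ambient dimensionality

def biased_experience(seed, n=2000):
    """Data whose variance is concentrated in a random, agent-specific subspace."""
    r = np.random.default_rng(seed)
    basis = np.linalg.qr(r.normal(size=(D, D)))[0]
    scales = np.geomspace(10.0, 0.1, D)          # a few directions dominate
    return r.normal(size=(n, D)) * scales @ basis.T

def pca_basis(X, k=3):
    """Top-k principal directions of X (rows are data points)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]                                 # shape (k, D)

basis_a = pca_basis(biased_experience(seed=1))
basis_b = pca_basis(biased_experience(seed=2))

# Both agents compress the *same* new observation into 3 numbers...
x = rng.normal(size=D)
code_a, code_b = basis_a @ x, basis_b @ x

# ...but the codes live in different subspaces, so they aren't comparable,
# and each agent reconstructs a different approximation of x.
recon_a, recon_b = basis_a.T @ code_a, basis_b.T @ code_b
print("agent A reconstruction error:", np.linalg.norm(x - recon_a))
print("agent B reconstruction error:", np.linalg.norm(x - recon_b))
print("distance between the two reconstructions:", np.linalg.norm(recon_a - recon_b))
```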

Replies from: ChrisHibbert, David_Gerard, Daniel_Burfoot, John_Maxwell_IV, MendelSchmiedekamp
comment by ChrisHibbert · 2009-04-20T05:19:52.936Z · LW(p) · GW(p)

If you're going to write about this, be sure to account for the fact that many people report successful communication in many different ways. People say that they have found their soul-mate, many of us have similar reactions to particular works of literature and art, etc. People often claim that someone else's writing expresses an experience or an emotion in fine detail.

comment by David_Gerard · 2011-04-13T14:37:31.948Z · LW(p) · GW(p)

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other, because they compress information differently.

FWIW, this is one of the problems postmodernism attempts to address: the bit that's a series of exercises in getting into other people's heads to read a given text.

Replies from: Jade
comment by Jade · 2012-08-20T15:46:51.681Z · LW(p) · GW(p)

Does it work for understanding non-human peoples?

comment by Daniel_Burfoot · 2009-04-21T06:41:31.562Z · LW(p) · GW(p)

Yeah. I thought about this a lot in the context of the Hanson/Yudkowsky debate about the unmentionable event. As was frequently pointed out, both parties aspired to rationality and were debating in good faith, with the goal of getting closer to the truth.

Their belief was that two rationalists should be able to assign roughly the same probability to the same sequence of events X. That is, if the event X is objectively defined, then the problem of estimating p(X) is an objective one and all rational persons should obtain roughly the same value.

The problem is that we don't - maybe can't - estimate probabilities in isolation from other data. All estimates we make are really of conditional probabilities p(X|D), where D is a person's unique, huge background dataset. The background dataset primes our compression/inference system. To use the Solomonoff idea, our brains construct a reasonably short code for D, and then use the same set of modules that were helpful in compressing D to compress X.
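
A toy illustration of that point, with made-up counts: two agents apply the identical Beta-Binomial updating rule, but condition on different background datasets D, and so assign very different probabilities to the same future event X.

```python
# Toy illustration (made-up counts): two agents apply the *same* updating rule
# -- a Beta(1, 1) prior on the frequency of event X, updated by counting --
# but condition on different background datasets D, so they assign different
# probabilities p(X | D) to the very same future event.
def posterior_prob(successes, failures, prior_a=1, prior_b=1):
    """Posterior mean of a Beta-Binomial model: p(next observation is X)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

agent_1_background = (40, 10)   # D1: has seen X happen often
agent_2_background = (5, 45)    # D2: has rarely seen X happen

p1 = posterior_prob(*agent_1_background)
p2 = posterior_prob(*agent_2_background)
print(f"agent 1: p(X | D1) = {p1:.2f}")   # about 0.79
print(f"agent 2: p(X | D2) = {p2:.2f}")   # about 0.12
# Both computations are flawless applications of the same algorithm;
# the disagreement lives entirely in the conditioning data D.
```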

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-20T17:39:51.862Z · LW(p) · GW(p)

No idea what PCA means, but this sounds like a very mathematical way of expressing an idea that is often proposed by left-wingers in other fields.

comment by MendelSchmiedekamp · 2009-04-20T03:40:42.181Z · LW(p) · GW(p)

I want to write about this too, but almost certainly from a very different angle, dealing with communication and the flow of information. And perhaps at some point I will have the time.

comment by Richard_Kennaway · 2009-04-22T11:01:52.384Z · LW(p) · GW(p)

There is a topic I have in mind that could potentially require writing a rather large amount, and I don't want to do that unless there is some interest, rather than suddenly dumping a massive essay on LW without any prior context. The topic is control theory (the engineering discipline, not anything else those words might suggest). Living organisms are, I say (following Bill Powers, whom I've mentioned before), built of control systems, and any study of people that does not take that into account is unlikely to progress very far. Among the things I might write about are these:

  • Purposes and intentions are the set-points of control systems. This is not a metaphor or an analogy.

  • Perceptions do not determine actions; instead, actions determine perceptions. (If that seems either unexceptionable or obscure, try substituting "stimulus" for "perception" and "response" for "action".)

  • Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

  • Inner conflict is, literally, a conflict between control systems that are trying to hold the same variable in two different states.

  • How control systems behave is not intuitively obvious, until one has studied control systems.

This is the only approach to the study of human nature I have encountered that does not appear to me to mistake what it looks like from the inside for the underlying mechanism.
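
As a minimal sketch of the third bullet above (toy first-order room dynamics, numbers invented for illustration, not taken from Powers): a negative-feedback loop that holds a perceived temperature at its set-point without any prediction, model of its environment, or inference.

```python
# A minimal negative-feedback controller in this spirit: it holds a perceived
# temperature at its set-point purely by accumulating output against the
# current error. It contains no model of the room, makes no predictions,
# and does no inference -- yet it controls.
set_point = 21.0     # the "intention": perceive 21 degrees
gain = 0.05          # how strongly the error adjusts the output each step

temperature = 15.0   # actual room state (unknown dynamics, from the loop's view)
output = 0.0         # heater drive, accumulated over time

for step in range(200):
    perception = temperature              # what the system senses
    error = set_point - perception        # reference signal minus perception
    output += gain * error                # act on the present error only

    # Environment, opaque to the controller: heating plus leakage toward a
    # 10-degree disturbance (a cold day outside).
    temperature += output - 0.1 * (temperature - 10.0)

print(f"temperature after 200 steps: {temperature:.2f} (set-point {set_point})")
```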

What say you all? Vote this up or down if you want, but comments will be more useful to me.

Replies from: Gray, pjeby, rhollerith, JulianMorrison, cousin_it, Vladimir_Nesov, Daniel_Burfoot
comment by Gray · 2011-04-20T03:15:39.165Z · LW(p) · GW(p)

I wouldn't dump a huge essay on the site. It seems that this medium has taken on the form of dividing the material into separate posts, and then stringing them together into a sequence. Each post should be whole in itself, but may presume that readers already have the background knowledge contained in previous posts of the sequence.

I've thought about writing to try to persuade people here into a form of virtue theory, but before that I would want to write a post attacking anti-naturalist ethics. I would use the same sort of form.

comment by pjeby · 2009-04-22T14:46:44.041Z · LW(p) · GW(p)

I agree with some of your points -- well, all of them if we're discussing control systems in general -- but a couple of them don't quite apply to brains, as the cortical systems of brains in general (not just in humans) do use predictive models in order to implement both perception and behavior. Humans at least can also run those models forward and backward for planning and behavior generation.

The other point, about actions determining perceptions, is "sorta" true of brains, in that eye saccades are a good example of that concept. However, not all perception is like that; frogs for example don't move their eyes, but rely on external object movement for most of their sight.

So I think it'd be more accurate to say that where brains and nervous systems are concerned, there's a continuous feedback loop between actions, perceptions, and models. That is, models drive actions; actions generate raw data that's filtered through a model to become a perception, which may update one or more models.

Apart from that though, I'd say that your other three points apply to people and animals quite well.

comment by rhollerith · 2009-04-22T12:40:21.467Z · LW(p) · GW(p)

Heck yeah, I want to see it. I suggest adopting Eliezer's modus operandi of using a lot of words. And every time you see something in your draft post that might need explanation, post on that topic first.

comment by JulianMorrison · 2009-04-22T13:17:37.306Z · LW(p) · GW(p)

It sounds like you want to write a book! But a post would be much appreciated.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-04-22T21:49:00.671Z · LW(p) · GW(p)

There are several books already on the particular take on control theory that I intend to write about, so I'm just thinking in terms of blog posts, and keeping them relevant to the mission of LW. I've just realised I have a shortage of evenings for the rest of this week, so it may take some days before I can take a run at it.

comment by cousin_it · 2009-04-22T11:09:26.622Z · LW(p) · GW(p)

I'd love to see this as a top-level post. Here's additional material for you: online demos of perceptual control theory, Braitenberg vehicles.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-04-22T21:47:08.156Z · LW(p) · GW(p)

I know the PCT site :-) It was Bill Powers' first book that introduced me to PCT. Have you tried the demos on that site yourself?

Replies from: cousin_it
comment by cousin_it · 2009-04-23T09:41:55.340Z · LW(p) · GW(p)

Yes, I went through all of them several years ago. Like evolutionary psychology, the approach seems to be mostly correct descriptively, even obvious, but not easy to apply to cause actual changes. (Of course utility function-based approaches are much worse.)

comment by Vladimir_Nesov · 2009-04-22T14:41:59.389Z · LW(p) · GW(p)

Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

But they should act according to a rigorous decision theory, even though they often don't. It seems to be an elementary enough statement, so I'm not sure what you are asserting.

Replies from: cousin_it
comment by cousin_it · 2009-04-23T09:47:21.638Z · LW(p) · GW(p)

"Should" statements cannot be logically derived from factual statements. Population evolution leads to evolutionarily stable strategies, not coherent decision theories.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-23T11:46:52.770Z · LW(p) · GW(p)

"Should" statements come from somewhere, somewhere in the world (I'm thinking about that in the context of something close to "The Meaning of Right"). Why do you mention evolution?

Replies from: cousin_it
comment by cousin_it · 2009-04-23T20:57:52.558Z · LW(p) · GW(p)

In that post Eliezer just explains in his usual long-winded manner that morality is our brain's morality instinct, not something more basic and deep. So your morality instinct tells you that agents should follow rigorous decision theories? Mine certainly doesn't. I feel much better in a world of quirky/imperfect/biased agents than in a world of strict optimizers. Is there a way to reconcile?

(I often write replies to your comments with a mild sense of wonder whether I can ever deconvert you from Eliezer's teachings, back into ordinary common sense. Just so you know.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-23T21:05:28.475Z · LW(p) · GW(p)

To simplify one of the points a little: there are simple axioms that are easy to accept (in some form). Once you grant them, the structure of decision theory follows, forcing some conclusions you intuitively disbelieve. A step further, looking at the reasons the decision theory arrived at those conclusions may persuade you that you indeed should follow them, that you were mistaken before. No hidden agenda figures into this process; since it doesn't require interacting with anyone, it may theoretically be wholly personal, you against math.

Replies from: cousin_it
comment by cousin_it · 2009-04-23T21:19:34.664Z · LW(p) · GW(p)

Yes, an agent with a well-defined utility function "should" act to maximize it with a rigorous decision theory. Well, I'm glad I'm not such an agent. I'm very glad my life isn't governed by a simple numerical parameter like money or number of offspring. Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-23T21:38:39.289Z · LW(p) · GW(p)

Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!

No joy in that. We are ignorant and helpless in attempts to find this answer accurately. But we can still try: we can still infer some answers, find the cases where our intuitive judgment systematically goes wrong, and make it better!

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-14T15:04:20.442Z · LW(p) · GW(p)

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

What if some people don't prefer to be more self-aware than they currently are, or their true preferences indeed lie in the direction of less self-awareness?

Replies from: JGWeissman, wedrifid, Vladimir_Nesov
comment by JGWeissman · 2011-04-15T03:24:32.536Z · LW(p) · GW(p)

Then it would be right, for instrumental reasons, to be as self-aware as we need to be during the crunch time when we are working to produce (or support the production of) a non-sentient optimizer (or at least another sort of mind that doesn't have such self-crippling preferences), which can be aware on our behalf and reduce or limit our own self-awareness if that actually turns out to be the right thing to do.

comment by wedrifid · 2011-04-14T16:57:14.835Z · LW(p) · GW(p)

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

Careful. Some people get offended if you say things like that. Aversion to publicly admitting that they prefer not to be aware is built in as part of the same preference.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-14T19:42:54.591Z · LW(p) · GW(p)

OTOH, if it also comes packaged with an inability to notice public assertions that they prefer not to be aware, then you're safe.

Replies from: wedrifid
comment by wedrifid · 2011-04-15T03:08:16.193Z · LW(p) · GW(p)

If only... :P

comment by Vladimir_Nesov · 2011-04-14T15:36:08.391Z · LW(p) · GW(p)

Then how would you ever know? Rational ignorance is really hard.

comment by Daniel_Burfoot · 2009-04-22T13:17:29.501Z · LW(p) · GW(p)

I don't necessarily believe you, but I would be happy to read what you write :-) I would also be happy to learn more about control theory. To comment further would require me to touch on unmentionable subjects.

comment by PhilGoetz · 2009-04-20T01:58:04.493Z · LW(p) · GW(p)

We are Eliza: A whole lot of what we think is reasoned debate is pattern-matching on other people's sentences, without ever parsing them.

I wrote a bit about this in 1998.

But I'm not as enthused about this topic as I was then, because then I believed that parsing a sentence was reasonable. Now I believe that humans don't parse sentences even when reading carefully. The bird the cat the dog chased chased flew. Any linguist today would tell you that's a perfectly fine English sentence. It isn't. And if people can't parse grammatical structures with just two levels of recursion, I doubt recursion, and generative grammars, are involved at all.

Replies from: pangloss, JulianMorrison, Risto_Saarelma
comment by pangloss · 2009-04-20T08:18:02.703Z · LW(p) · GW(p)

I believe that linguists would typically claim that it is formed by legitimate rules of English syntax, but point out that there might be processing constraints on humans that eliminate some syntactically well-formed sentences from the category of grammatical sentences of English.

comment by JulianMorrison · 2009-04-20T02:42:11.177Z · LW(p) · GW(p)

Eh, I could read it, with some stack juggling. I can even force myself to parse the "buffalo" sentence ;-P

Replies from: William
comment by William · 2009-04-20T07:57:38.696Z · LW(p) · GW(p)

You can force yourself to parse the sentence but I suspect that the part of your brain that you use to parse it is different from the one you use in normal reading and in fact closer to the part of the brain you use to solve a puzzle.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-20T08:49:11.001Z · LW(p) · GW(p)

I puzzle what goes where, but the bit that holds the parse once I've assembled it feels the same as normal.

Replies from: randallsquared
comment by randallsquared · 2009-04-20T21:46:00.085Z · LW(p) · GW(p)

The result isn't as important as the process in this case. Even if the result is stored the same way, for the purpose of William's statement it's only necessary that the process is sufficiently different.

comment by Risto_Saarelma · 2011-04-14T05:46:40.444Z · LW(p) · GW(p)

A bit like described in this Stephen Bond piece?

comment by Psy-Kosh · 2009-04-26T19:12:50.407Z · LW(p) · GW(p)

I'm kind of thinking of doing a series of posts gently spelling out, step by step, the arguments for Bayesian decision theory. Part of this is for myself: I read Omohundro's vulnerability argument a while back, but felt there were missing bits that I had to personally fill in, assumptions I had to sit and think on before I could really say "yes, obviously that has to be true". There are also some things that I think I can generalize a bit or restate a bit, etc.

So, as much for myself, to organize and clear that up, as for others, I want to do a short series of "How not to be stupid (given unbounded computational power)", in which each post focuses on one or a small number of related rules/principles of Bayesian decision theory and epistemic probabilities, and gently derives them from the "don't be stupid" principle. (Again, based on Omohundro's vulnerability arguments and the usual Dutch book arguments for Bayesian stuff, but stretched out and filled in with the details that I personally felt the need to work out, that I felt were missing.)

And I want to do it as a series, rather than a single blob post so I can step by step focus on a small chunk of the problem and make it easier to reference related rules and so on.

Would this be of any use to anyone here though? (maybe a good sequence for beginners, to show one reason why Bayes and Decision Theory is the Right Way?) Or would it be more clutter than anything else?

Replies from: Eliezer_Yudkowsky, Cyan, Vladimir_Nesov, JulianMorrison
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-26T19:27:30.065Z · LW(p) · GW(p)

It's got my upvote.

comment by Cyan · 2009-04-26T19:58:35.392Z · LW(p) · GW(p)

I have a similar plan -- however, I don't know when I'll get to my post and I don't think the material I wanted to discuss would overlap greatly with yours.

comment by Vladimir_Nesov · 2009-04-26T19:54:10.585Z · LW(p) · GW(p)

Can you characterize a bit more concretely what you mean, by zooming in on a tiny part of this planned work? It's no easy task to go from common sense to math without shooting both your feet off in the process.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-04-27T02:16:26.506Z · LW(p) · GW(p)

Basically, I want to reconstruct, slowly, the dutch book and vulnerability arguments, but step by step, with all the bits that confused me filled in.

The basic common sense rule that these are built on is "don't accept a situation in which you know you automatically lose" (where "lose" is used to the same level of generality that "win" is in "rationalists win.")

One of the reasons I like dutch book/vulnerability arguments is that each step ends up being relatively straightforward in getting from that principle to the math. (Sometimes an additional concept needs to be introduced, not so much proven as defined and made explicit.)
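
For concreteness, here is the standard textbook-style Dutch book illustration of that principle (a generic example, not the planned derivation itself): an agent whose degrees of belief in an event and its negation sum to more than 1 will accept a pair of bets that loses money no matter what happens.

```python
# A standard Dutch book illustration: if your "probabilities" for an event A
# and its negation sum to more than 1, you will pay fair-seeming prices for a
# pair of bets that together lose money however the world turns out -- i.e.
# you accept a situation in which you know you automatically lose.
p_A, p_not_A = 0.6, 0.6          # incoherent: they sum to 1.2

stake = 1.0                       # each bet pays `stake` if it wins
# A price of p * stake looks fair to the agent for each bet, so it buys both.
cost = p_A * stake + p_not_A * stake

for a_happens in (True, False):
    payout = stake                # exactly one of the two bets pays off
    print(f"A happens: {a_happens}, net result: {payout - cost:+.2f}")
# Output: -0.20 in both cases. Coherent probabilities, which sum to 1 over
# exhaustive mutually exclusive outcomes, are exactly what rule this out.
```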

comment by JulianMorrison · 2009-04-26T19:40:43.787Z · LW(p) · GW(p)

Sounds interesting.

comment by MBlume · 2009-04-20T03:08:53.076Z · LW(p) · GW(p)

This doesn't even have an ending, but since I'm just emptying out the drafts folder:

Memetic Parasitism

I heard a rather infuriating commercial on the radio today. There's no need for me to recount it directly -- we've all heard the type. The narrator spoke of the joy a woman feels in her husband's proposal, of how long she'll remember its particulars, and then, for no apparent reason, transitioned from this to a discussion of shiny rocks, and where we might think of purchasing them.

I hardly think I need to belabor the point, but there is no natural connection between shiny rocks and promises of monogamy. There was not even any particularly strong empirical connection between the two until about a hundred years ago, when some men who made their fortunes selling shiny rocks decided to program us to believe there was.

What we see here is what I shall call memetic parasitism. We carry certain ideas, certain concepts, certain memes to which we attach high emotional valence. In this case, that meme is romantic love, expressed through monogamy. An external agent contrives to derive some benefit by attaching itself to that meme.

Now, it is important to note when describing a Dark pattern that not everything which resembles this pattern is necessarily dark. Carnation attempts to connect itself in our minds to the Burns and Allen show. Well, on reflection, it seems this is right. Carnation did bring us the Burns and Allen show. It paid the salary of each actor, each writer, each technician who created the show each week. Carnation deserves our gratitude, and any custom which may result from it. Romantic love, by contrast, existed for many centuries before the shiny-rock-sellers came along, and they have done nothing to enhance it.

Of course, I think most of us have seen this pattern before. This comic makes the point rather well, I think.

So, right now, I know that the shiny-rock-sellers want to exploit me, this outrages me, and I choose to have nothing to do with them. How do we excite people's shock and outrage at the way the religions have tried to exploit them?

Replies from: Nanani, JulianMorrison
comment by Nanani · 2009-04-22T01:07:18.371Z · LW(p) · GW(p)

A Series of Defense Against the Dark Arts would not be unwelcome, especially for those who haven't gone through the OB backlog. Voting up.

comment by JulianMorrison · 2009-04-20T03:28:56.921Z · LW(p) · GW(p)

Anti-advertising campaigners have tried. The trouble is that their advocacy was immediately parasitized by shiny-rock sellers of the political sort, and people tend to reject or accept both messages at once.

comment by JulianMorrison · 2009-04-19T23:19:03.166Z · LW(p) · GW(p)

Buddhism.

What it gets wrong. Supernatural stuff - rebirth, karma in the magic sense, prayer. Thinking Buddha's cosmology was ever meant as anything more than an illustrative fable. Renunciation. Equating positive and negative emotions with grasping. Equating the mind with the chatty mind.

What it gets right. Meditation. Karma as consequences. There is no self, consciousness is a brain subsystem, emphasis on the "sub" (Cf. Drescher's "Cartesian Camcorder" and psychology's "system two"). The chatty mind is full of crap and a huge waste of time, unless used correctly. Correct usage includes noticing mostly-subconscious thought loops (Cf. cognitive behavioral therapy). A lot of everyday unreason does stem from grasping, which roughly equates to "magical thinking" or the idea that non-acknowledgment of reality can change it. This includes various vices and dark emotions, including the ones that screw up attempted rationality.

What rationalists should do. Meditate. Notice themselves thinking. Recognize grasping as a mechanism. Look for useful stuff in Buddhism.

Why I can't post. Not enough of an expert. Not able to meditate myself yet.

Replies from: SoullessAutomaton, Drahflow, blogospheroid
comment by SoullessAutomaton · 2009-04-19T23:34:43.967Z · LW(p) · GW(p)

It actually strikes me that a series of posts on "What can we usefully learn from X tradition" would be interesting. Most persistent cultural institutions have at least some kind of social or psychological benefit, and while we've considered some (cf. the martial arts metaphors, earlier posts on community building, &c.) there are probably others that could be mined for ideas as well.

comment by Drahflow · 2009-04-20T08:42:55.364Z · LW(p) · GW(p)

I'd be similarly interested in covering philosophical Daoism, the path to wisdom I follow, and believe to be mostly correct.

Things they get wrong: Some of them believe in rebirth, too much reverence for "ancient masters" without good reevaluation, some believe in weird miracles.

Things they get right: Meditation, a purely causal view of the world, free will as a local illusion, a relaxed attitude to pretty much everything (-> less bias from social influence and fear of humiliation), the insight that akrasia is best overcome not by willpower but by adjusting yourself to feel that what you need to do is right, apparently ways to actually help you (at least me) with that, and a decent way to accept death as something natural.

Replies from: gwern, JulianMorrison
comment by gwern · 2009-04-21T03:05:07.455Z · LW(p) · GW(p)

Things they get wrong: Some of them believe in rebirth, too much reverence for "ancient masters" without good reevaluation, some believe in weird miracles.

I kept waiting for 'alchemy' and immortality to show up in your list!

I recently read through an anthology of Taoist texts, and essentially every single thing postdating the Lieh Tzu or the Huai-nan Tzu (-200s) was absolute rubbish, but the preceding texts were great. I've always found this abrupt disintegration very odd.

Replies from: David_Gerard
comment by David_Gerard · 2011-04-13T14:42:35.333Z · LW(p) · GW(p)

I kept waiting for 'alchemy' and immortality to show up in your list!

Know what alchemy's good for? Art and its production. Terrible chemistry, great for creation of art.

Know what's actually a good text for this angle on alchemy? Promethea by Alan Moore, in which he sets out his entire system. (Not only educational, but a fantastic book that is at least as good as his famous '80s stuff.)

Replies from: None
comment by [deleted] · 2011-04-13T14:55:19.155Z · LW(p) · GW(p)

Respectfully disagree. I found Promethea to be poorly executed. There was a decent idea somewhere in there, but I think he was too distracted by the magic system to find it.

One exception -- the aside about how the Christian and Muslim Prometheas fought during the Crusades. That was nicely done.

Replies from: David_Gerard
comment by David_Gerard · 2011-04-14T14:49:37.926Z · LW(p) · GW(p)

Yeah, the plot suffers bits falling off the end. Not the sides, thankfully. I think it's at least as coherent as Miracleman, and nevertheless remains an excellent exposition of alchemy and art.

comment by JulianMorrison · 2009-04-20T08:54:39.592Z · LW(p) · GW(p)

Daoism flunks badly on nature-worship.

comment by blogospheroid · 2009-04-21T05:15:29.970Z · LW(p) · GW(p)

Not enough of an expert on Buddhism, but I live its mother religion, Hinduism. There are enough similarities for me to comment on a few of your points.

Rebirth - The question of which part of your self you choose to identify with is a persistent theme on OB/LW. When X and Y conflict and you choose to align yourself with X instead of Y, WHO OR WHAT has made that decision? One might say the consensus in the mind, or give more modern answers. The point is that there are desires and impulses which stem from different levels of personality within you. There are animal impulses, basic human impulses (evo-psych), and societal drives. There are many levels to you. The persistent question in almost all the dharma religions is: what do you choose to identify with? Even in rebirth, the memories of past lives are erased, and the impulses that drove you most strongly at your time of death decide where in the next life you would be. If you are essentially still hungering for stuff, the soul would be sent to stations where that hunger can be satiated. If you are essentially at peace, having lived a full life, you will go to levels that are subtler and presumably more abstract. You become more soul and less body, in a crude sense.

Vedanta does believe in souls. I'm holding out for a consistent theory of everything of physics before I drop my beliefs about that one.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-21T13:20:51.550Z · LW(p) · GW(p)

I'm holding out for a consistent theory of everything of physics

Would you understand one?

Replies from: blogospheroid
comment by blogospheroid · 2009-04-22T09:23:47.818Z · LW(p) · GW(p)

I would try very hard to understand a theory that has been proclaimed by the majority of scientists as a true TOE.

In particular, I would try to understand if there is a possibility of transmission of information that is similar to the transmigration of the soul. If there is no such comfort in the new theory, I assume I will spend a very difficult month and then get back on my feet with a materialist's viewpoint.

comment by PhilGoetz · 2009-04-20T01:38:09.641Z · LW(p) · GW(p)

Aumann agreements are pure fiction; they have no real-world applications. The main problem isn't that no one is a pure Bayesian. There are 3 bigger problems:

  • The Bayesians have to divide the world up into symbols in exactly the same way. Since humans (and any intelligent entity that isn't a lookup table) compress information based on their experience, this can't be contemplated until the day when we derive more of our mind's sensory experience from others than from ourselves.
  • Bayesian inference is slow; pure Bayesians would likely be outcompeted by groups that used faster, less-precise reasoning methods, which are not guaranteed to reach agreement. It is unlikely that this limitation can ever be overcome.
  • In the name of efficiency, different reasoners would be highly orthogonal, having different knowledge, different knowledge compression schemes and concepts, etc.; reducing the chances of reaching agreement. (In other words: If two reasoners always agree, you can eliminate one of them.)

This would probably have to wait until May.

Replies from: conchis
comment by conchis · 2009-04-20T20:50:19.368Z · LW(p) · GW(p)

"Pure fiction" and "no real world application" seem overly strong. Unless you are talking about individuals actually reaching complete agreement, in which case the point is surely true, but relatively trivial.

The interesting question (real world application) is surely how much more we should align our beliefs at the margin.

Also, whether there are any decent quality signals we can use to increase others' perceptions that we are Bayesian, which would then enable us to use each others' information more effectively.

comment by swestrup · 2009-04-21T22:15:08.524Z · LW(p) · GW(p)

I think there's a post somewhere in the following observation, but I'm at a loss as to what lesson to take away from it, or how to present it:

Wherever I work I rapidly gain a reputation for being both a joker and highly intelligent. It seems that I typically act in such a way that when I say something stupid, my co-workers classify it as a joke, and when I say something deep, they classify it as a sign of my intelligence. As best I can figure, it's because at one company I was strongly encouraged to think 'outside the box', and one good technique I found for that was to just blurt out the first technological idea that occurred to me when presented with a technological problem, but to do so in a non-serious tone of voice. Often enough the idea is one that nobody else has thought of, or has automatically dismissed for what, in retrospect, were insufficient reasons. Other times it's so obviously stupid an idea that everyone thinks I'm making a joke. It doesn't hurt that I often do deliberately joke.

I don't know if this is a technique others should adopt or not, but I've found it has made me far less afraid of appearing stupid when presenting ideas.

comment by MendelSchmiedekamp · 2009-04-20T03:52:16.260Z · LW(p) · GW(p)

Willpower building as a fundamental art. And some of the less obvious pitfalls, including the dangers of akrasia-circumvention techniques which simply shunt willpower from one place to another, and of overstraining, which damages your willpower reserves.

I need to hunt back down some of the cognitive science research on this before I feel comfortable posting it.

Replies from: pjeby, ciphergoth, matt
comment by pjeby · 2009-04-20T04:47:47.981Z · LW(p) · GW(p)

...the dangers of akrasia-circumvention techniques which simply shunt willpower from one place to another, and of overstraining, which damages your willpower reserves.

Easy answer: don't use willpower. Ever.

I quit it cold turkey in late 2007, and can count on one hand the number of times I've been tempted to use it since.

(Edit to add: I quit it in order to force myself to learn to understand the things that blocked me, and to learn more effective ways to accomplish things than by pushing through resistance. It worked.)

Replies from: conchis, MrShaggy, MendelSchmiedekamp
comment by conchis · 2009-04-20T12:24:26.875Z · LW(p) · GW(p)

don't use willpower. Ever.

Could you do a post on that?

Replies from: PhilGoetz, John_Maxwell_IV
comment by PhilGoetz · 2009-04-20T17:59:50.535Z · LW(p) · GW(p)

Consider cognitive behavioral therapy. You don't get someone to change their behavior by telling them to try really hard. You get them to convince themselves that they will get what they want if they change their behavior.

People do what they want to do. We've gone over this in the dieting threads.

comment by MrShaggy · 2009-04-25T03:59:19.612Z · LW(p) · GW(p)

My idea that I'm not ready to post is now: find a way to force pjeby to write regular posts.

comment by MendelSchmiedekamp · 2009-04-20T15:38:41.240Z · LW(p) · GW(p)

By all means do post. Clarification would be welcome, since we're almost certainly not using the term willpower in the same way.

Replies from: pjeby
comment by pjeby · 2009-04-20T16:36:12.977Z · LW(p) · GW(p)

Clarification would be welcome, since we're almost certainly not using the term willpower in the same way.

I'm using it to mean relying on conscious choice in the moment to overcome preference reversal: forcing yourself to do something that, at that moment, you'd prefer not to, or not to do something that you'd prefer to.

What I do instead, is find out why my preference has changed, and either:

  1. Remove that factor from the equation, either by changing something in my head, or in the outside world, or

  2. Choose to agree with my changed preference, for the moment. (Not all preference reversals are problematic, after all!)

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-20T16:54:01.105Z · LW(p) · GW(p)

From that usage your claim makes much more sense.

Willpower in my usage is more general: it applies whenever impulses are overridden or circumvented. In your example, it includes the conspicuous consumption you describe, but also more subtle costs, like the cognitive computation of determining the "why" and forestalling the impulse while removing internal or external factors.

My main point is that willpower is a limited resource that ebbs and flows during cognitive computation, often due to changing costs. But it can be trained up, conserved, and refreshed effectively, if certain hazards can be avoided.

Replies from: pjeby
comment by pjeby · 2009-04-20T19:36:56.079Z · LW(p) · GW(p)

Willpower in my usage is more general, when impulses are overridden or circumvented.

I don't see how that's any different from what I said. How is an "impulse" different from a preference reversal? (i.e., if it's not a preference reversal, why would you need to override or circumvent it?)

comment by Paul Crowley (ciphergoth) · 2009-04-20T07:32:06.324Z · LW(p) · GW(p)

I repeat my usual plea at this point: please read Breakdown of Will before posting on this.

Replies from: pjeby, MendelSchmiedekamp
comment by pjeby · 2009-04-20T16:04:05.031Z · LW(p) · GW(p)

I repeat my usual plea at this point: please read Breakdown of Will before posting on this.

That book doesn't actually contain any solutions to anything, AFAICT. The two useful things I've gotten from it that enhanced my existing models were:

  1. The idea of conditioned appetites, and

  2. The idea that "reward" and "pleasure" are distinct.

There were other things that I learned, of course, like his provocative reward-interval hypothesis that unifies the mechanism of things like addiction, compulsion, itches and pain on a single, time-based scale. But that's only really interesting in an intellectual-curiosity sort of way at the moment; I haven't figured out anything one can DO with it, that I couldn't already do before.

Even the two useful things I mentioned, are mostly useful in explaining why certain things happen, and why certain of my techniques work on certain things. They don't really give me anything that can be turned into actual improvements on the state of the art, although they do suggest some directions for stretching what I apply some things to.

Anyway, if you're already familiar with the basic ideas of discounting and preference reversal, you're not going to get a lot from this book in practical terms.

OTOH, if you think it'd be cool to know how and why your bargains with yourself fail, you might find it interesting reading. But I'm already quite familiar with how that works on a practical level, and the theory really adds nothing to my existing practical advice of, "don't do that!"

(Really, the closest the book comes to giving any practical advice is to vaguely suggest that maybe willpower and intertemporal bargaining aren't such good ideas. Well, not being a scientist, I can state it plainly: they're terrible ideas. You want coherent volition across time, not continuous conflict and bargaining.)

comment by MendelSchmiedekamp · 2009-04-20T12:05:15.642Z · LW(p) · GW(p)

I'll take a closer look at it.

comment by CannibalSmith · 2009-04-20T03:42:46.876Z · LW(p) · GW(p)

Some bad ideas on the theme "living to win":

  • Murder is okay. There are consequences, but it's a valid move nonetheless.
  • Was is fun. In fact, it's some of the best fun you can have as long as you don't get disabled or killed permanently.
  • Being a cult leader is a winning move.
  • Learn and practice the so-called dark arts!
Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-20T18:00:35.647Z · LW(p) · GW(p)

Was is fun.

"War", I think you mean.

comment by JulianMorrison · 2009-04-20T00:26:48.131Z · LW(p) · GW(p)

What would a distinctively rationalist style of government look like? Cf. Dune's Bene Gesserit government by jury: what if a quorum of rationalists reaching Aumann Agreement could make a binding decision?

What mechanisms could be put in place to stop politics being a mind-killer?

Why not posted: undeveloped idea, and I don't know the math.

Replies from: XFrequentist, blogospheroid
comment by XFrequentist · 2010-09-10T00:25:18.125Z · LW(p) · GW(p)

This is a year late, but it's simply not ok that Futarchy not be mentioned here.

So there you are.

comment by blogospheroid · 2009-04-21T05:53:36.368Z · LW(p) · GW(p)

Mencius Moldbug believes that if we were living in a world of many mini sovereign corporations that compete for citizens, then they would be forced to be rational. They would seek every way to keep paying customers (taxpayers).

Another Dune idea could be relevant here - the God Emperor. Have a really long-lived guy be king. He cannot take the shortcuts that many others do, and has to think properly about how to govern.

Addendum - I understand that this is a system builder's perspective, and not an entrepreneur's perspective, i.e. a meta answer rather than an answer, sorry for that.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-21T08:39:47.397Z · LW(p) · GW(p)

That sounds like an evolution-style search, and he ought to be more careful, evolution only optimizes for the utility function - in this case, the ability to trap and hold "customers".

I would categorize that among the pre-rational systems of government - alongside representative democracy, kings, constitutions, etc. A set of rules or a single decider to do the thinking for a species who can't think straight on their own.

I was more interested in what a rationalist government would be like.

comment by steven0461 · 2009-04-25T16:28:49.692Z · LW(p) · GW(p)

I'm vaguely considering doing a post about skeptics. It seems to me they might embody a species of pseudo-rationality, like Objectivists and Spock. (Though it occurs to me that if we define "S-rationality" as "being free from the belief distortions caused by emotion", then "S-rationality" is both worthwhile and something that Spock genuinely possesses.) If their supposed critical thinking skills allow them to disbelieve in some bad ideas like ghosts, Gods, homeopathy, UFOs, and Bigfoot, but also in some good ideas like cryonics and not in other bad ideas like extraterrestrial contact, ecological footprints, p-values, and quantum collapse, then how does the whole thing differ from loyalty to the scientific community? Loyalty to the scientific community isn't the worst thing, but there's no need to present it as independent critical thinking.

I'm sure there are holes in this line of thought, so all criticism is welcome.

Replies from: Annoyance
comment by Annoyance · 2009-04-25T16:40:26.943Z · LW(p) · GW(p)

"but also in some good ideas like cryonics and not in other bad ideas like extraterrestrial contact, ecological footprints, p-values, and quantum collapse,"

Your listing of 'bad' and 'good' ideas reveals more about your personal beliefs than any supposed failings of skeptics.

Replies from: steven0461
comment by steven0461 · 2009-04-25T16:50:46.763Z · LW(p) · GW(p)

OK, so can you name any idea that you think is bad, is accepted/fashionable in science-oriented circles, but is rejected by skeptics for the right reasons?

Replies from: Annoyance
comment by Annoyance · 2009-04-25T17:33:56.662Z · LW(p) · GW(p)

Whether I think some idea is bad is completely irrelevant. What matters is whether I can show that there are compelling rational reasons to conclude that it's bad. There are lots of claims that I suspect may be true but that I cannot confirm or disprove. I don't complain about skeptics not disregarding the lack of rational support for those claims, nor do I suggest that the nature of skepticism be altered so that my personal sacred cows are spared.

Replies from: steven0461
comment by steven0461 · 2009-04-25T18:14:33.264Z · LW(p) · GW(p)

Do you believe, then, that there are no ideas that are accepted/fashionable in science-oriented circles, yet that have rational support against them? I wouldn't have listed the ideas that I listed if I didn't think I could rationally refute them as being true, coherent, or useful.

If it's not the case that 1) such ideas exist and 2) skeptics disagree with them, then what's the point of all their critical thinking? Why not just copy other people's opinions and call it a day? Is skepticism merely about truth-advocating and not truth-seeking?

comment by byrnema · 2009-04-20T05:26:04.354Z · LW(p) · GW(p)

Yet another post from me about theism?

This time, pushing for a more clearly articulated position. Yes, I realize that I am not endearing myself by continuing this line of debate. However, I have good reasons for pursuing it.

  • I really like LW and the idea of a place where objective, unbiased truth is The Way. Since I idealistically believe in Aumann’s Agreement theorem, I think that we are only a small number of debates away from agreement.

  • To the extent to which LW aligns itself with a particular point of view, it must be able to defend that view. I don’t want LW to be wrong, and am willing to be a nuisance to make sure.

  • If defending atheism is not a first priority, can we continue using religion as a convenient example of irrationality, even as the enemy of rationality?

  • There is a definite sense that theism is not worth debating, that the case is "open-and-shut". If so, it should be straightforward to draft a master argument. (Five separate posts of analogies is not strong evidence in my Bayesian calculation that the case is open-and-shut.)

  • A clear and definitive argument against theism would make it possible for theists (and yourselves, as devil's advocates) to debate specific points that are not covered adequately in the argument. (If you are about to downvote me on this comment, think about how important it would be to permit debate on an ideology that is important to this group. Right now it is difficult to debate whether religion is rational because there is no central argument to argue with.)

  • Relative to the ‘typical view’, atheism is radical. How does a religious person visiting this site become convinced that you’re not just a rationality site with a high proportion of atheists?

Replies from: MBlume, Nanani, ciphergoth, saturn, spriteless
comment by MBlume · 2009-04-20T05:50:18.188Z · LW(p) · GW(p)

(Um, this started as a reply to your comment but quickly became its own "idea I'm not ready to post" on deconversions and how we could accomplish them quickly.)

Upvoted. It took me months of reading to finally decide I was wrong. If we could put that "aha" moment in one document... well, we could do a lot of good.

Deconversions are tricky though. Did anyone here ever read Kissing Hank's Ass? It's a scathing moral indictment of mainline Christianity. I read it when I was 15 and couldn't sleep for most of a night.

And the next day, I pretty much decided to ignore it. I deconverted seven years later.

I believe the truth matters, and I believe you do a person a favor by deconverting them. But if you've been in for a while, if you've grown dependent on, for example, believing in an eternal life... there's a lot of pain in deconversion, and your mind's going to work hard to avoid it. We need to be prepared for that.

If I were to distill the reason I became an atheist into a few words, it would look something like:

Ontologically fundamental mental things don't make sense, but the human mind is wired to expect them. Fish swim in a sea of water, humans swim in a sea of minds. But mental things are complicated. In order to understand them you have to break them down into parts, something we're still working hard to do. If you say "the universe exists because someone created it," it feels like you've explained something, because agents are part of the most fundamental building blocks from which you build your world. But agency, intelligence, desire, and all the rest, are complicated properties which have a specific history here on earth. Sort of like cheesecake. Or the foxtrot. Or socialism.

If somebody started talking about the earth starting because of cheesecake, you'd wonder where the cheesecake came from. You'd look in a history book or a cookbook and discover that the cheesecake has its origins in the Roman empire, as a result of, well, people being hungry, and as a result of cows existing, and on and on, and you'd wonder how all those complex causes could produce a cheesecake predating the universe, and what sense it would make cut off from the rich causal net in which we find cheesecakes embedded today. Intelligence should not be any different. Agency trips up Occam's razor, because humans are wired to expect there to always be agents about. But an explanation of the universe which contains an agent is an incredibly complicated theory, which only presents itself to us for consideration because of our biases.

A complicated theory that you never would have thought of in the first place had you been less biased is not a theory that might still be right -- it's just plain wrong. In the same sense that, if you're looking for a murderer in New York City, and you bring a suspect in on the advice of one of your lieutenants, and then it turns out the lieutenant picked the suspect by reading a horoscope, you have the wrong guy. You don't keep him there because he might be the murderer after all, and you may as well make sure. With all of New York to canvass, you let him go, and you start over. So too with agency-based explanations of the universe's beginning.

I've rambled terribly, and were that a top-level post, or a "master argument" it would have to be cleaned up considerably, but what I have just said is why I am an atheist, and not a clever argument I invented to support it.

Replies from: David_Gerard, PhilGoetz, John_Maxwell_IV, orthonormal, Jack
comment by David_Gerard · 2011-04-13T14:23:48.382Z · LW(p) · GW(p)

Ontologically fundamental mental things don't make sense, but the human mind is wired to expect them. Fish swim in a sea of water, humans swim in a sea of minds.

These two sentences, particularly the second, just explained for me why humans expect minds to be ontologically fundamental. Thank you!

Replies from: shokwave
comment by shokwave · 2011-04-13T14:38:28.782Z · LW(p) · GW(p)

Thank you for bringing this post to my attention! I'm going to use those lines.

comment by PhilGoetz · 2009-04-20T17:57:19.814Z · LW(p) · GW(p)

If somebody started talking about the earth starting because of cheesecake, you'd wonder where the cheesecake came from. You'd look in a history book or a cookbook and discover that the cheesecake has its origins in the Roman Empire, as a result of, well, people being hungry, and as a result of cows existing, and on and on, and you'd wonder how all those complex causes could produce a cheesecake predating the universe, and what sense it would make cut off from the rich causal net in which we find cheesecakes embedded today. Intelligence should not be any different. Agency trips up Occam's razor, because humans are wired to expect there to always be agents about. But an explanation of the universe which contains an agent is an incredibly complicated theory, which only presents itself to us for consideration because of our biases.

You're right; yet no one ever sees it this way. Before Darwin, no one said, "This idea that an intelligent creator existed first doesn't simplify things."

Here is something I think would be useful: a careful information-theoretic explanation of why God must be complicated. When you explain to Christians that it doesn't make sense to say complexity originated because God created it, since God must then be complicated himself, Christians reply with one of two things (and I'm generalizing here because I've heard these replies so many times):

  • God is outside of space and time, so causality doesn't apply. (I don't know how to respond to this.)
  • God is not complicated. God is simple. God is the pure essence of being, the First Cause. Think of a perfect circle. That's what God is like.

It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information.

Of course, putting this explanation on LW might do no good to anybody.

Replies from: Nick_Tarleton, jimmy, pangloss
comment by Nick_Tarleton · 2009-04-21T16:57:24.177Z · LW(p) · GW(p)

It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information.

Keep in mind that if this complexity was derived from looking at external phenomena, or at the output of some simple computation, it doesn't reduce the prior probability.

comment by jimmy · 2009-04-21T06:12:57.669Z · LW(p) · GW(p)

It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information.

Except that the library of all possible books includes the Encyclopedia Brittanica but is far simpler.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-21T09:43:53.533Z · LW(p) · GW(p)

Except that the library of all possible books includes the Encyclopedia Brittanica but is far simpler.

Presumably, God can also distinguish between "the set of books with useful information" and "the set of books containing only nonsense". That is quite complex indeed.

Replies from: jimmy
comment by jimmy · 2009-04-21T16:52:29.449Z · LW(p) · GW(p)

I'm afraid I wasn't clear. I am not arguing that "god" is simple or that it explains anything. I'm just saying that god's knowledge is compressible into an intelligent generator (AI).

The source code isn't likely to be 10 lines, but then again, it doesn't have to include the Encyclopedia Britannica to tell you everything the encyclopedia can, once it grows up and learns.

F=m*a is enough to let you draw out all physically possible trajectories from the set of all trajectories, and it is still rather simple.
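A toy sketch of that compression point in Python (the constants, names, and step size are arbitrary illustrative choices, not anything canonical): a dozen lines of code stand in for an arbitrarily long table of trajectory points.

import math

# A tiny "generator": F = m*a with gravity as the only force.
# The program stays a few lines long no matter how many
# trajectory points you ask it to produce.
def trajectory(v0, angle_deg, g=9.81, dt=0.01):
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        yield (x, y)
        vy += -g * dt   # a = F/m = -g for free fall
        x += vx * dt
        y += vy * dt

points = list(trajectory(v0=20.0, angle_deg=45.0))
print(len(points), points[:3])

Shrinking dt makes the listing of points as long and as detailed as you like, while the generator stays the same size -- which is the sense in which a simple rule can "contain" a huge body of facts.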

comment by pangloss · 2009-04-21T06:31:36.209Z · LW(p) · GW(p)

You say: You're right; yet no one ever sees it this way. Before Darwin, no one said, "This idea that an intelligent creator existed first doesn't simplify things."

I may have to look up where it gets argued, but I am pretty sure people challenged that before Darwin.

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-20T17:56:54.461Z · LW(p) · GW(p)

It might be why you're an atheist, but do you think it would have swayed your Christian self much? I highly doubt that your post would come near to deconverting anyone. Many religious people believe that souls are essential for creativity and intelligence, and they won't accept the "you're wired to see intelligence" argument if they disbelieve in evolution (not uncommon).

To deconvert people to atheism quickly, I think you need a sledgehammer. I still haven't found a really good one. Here are some areas that might be promising:

  1. Ask them why God won't drop a grapefruit from the sky to show he exists. "He loves me more than I can imagine, right? And more than anything he wants me to know him, right? And he's all-powerful, right?" Then, whatever their response: "Why does God consider blindly believing in him in the absence of evidence virtuous? Isn't that the sort of thing a made-up religion would say about their god to keep people faithful?"

  2. The Problem of Evil: why do innocent babies suffer and die from disease?

  3. I've heard there are lots of contradictions in the bible. Maybe someone who is really dedicated could find some that are really compelling. Personally, I'm not interested enough in this topic to spend time reading religious texts, but more power to those who are.

A few moderately promising ones: Why does God heal cancer patients but not amputees? Why do different religious denominations disagree, when they could just ask God for the answer? Why would a benevolent God send people who happened to be unlucky enough not to hear about him to eternal damnation?

Replies from: Alexandros, orthonormal, MBlume
comment by Alexandros · 2009-04-23T21:42:01.652Z · LW(p) · GW(p)

I think a very straightforward contradiction is here: http://skepticsannotatedbible.com/contra/horsemen.html

2 Samuel and 1 Chronicles are supposed to be parallels, telling the same story. Yet one of them probably lost or gained a zero along the way. Many Christians who see this are forced to retreat to a 'softer' interpretation of the Bible that allows for errors in transcription, etc. It's the closest to a quick 'n' dirty sledgehammer I have ever had. And a follow-up: Why hasn't this been discussed in your church? Surely a group of truth-seekers wouldn't shy away from such fundamental criticisms, even just to defuse them.

comment by orthonormal · 2009-04-21T01:46:04.373Z · LW(p) · GW(p)

Problem is, theists of reasonable intelligence spend a good deal of time honing and rehearsing their replies to these. They might be slightly uneasy with their replies, but if the alternative is letting go of all they hold dear, then they'll stick to their guns. Catching them off guard is a slightly better tactic.

Or, to put it another way: if there were such a sledgehammer lying around, Richard Dawkins (or some other New Atheist) would be using it right now. Dawkins uses all the points you listed, and more; and the majority of people don't budge.

comment by MBlume · 2009-04-23T06:12:31.587Z · LW(p) · GW(p)

do you think it would have swayed your Christian self much?

Well...it did sway my Christian self. My Christian self generated those arguments and they, with help from Eliezer's writings against self-deception, annihilated that self.

comment by orthonormal · 2009-04-23T04:40:47.261Z · LW(p) · GW(p)

That's as good an exposition of this point as any I've seen. It deserves to be cleaned up and posted visibly, here on LW or somewhere else.

Replies from: MBlume
comment by MBlume · 2009-04-23T05:00:29.527Z · LW(p) · GW(p)

thanks =)

comment by Jack · 2009-04-20T20:38:27.988Z · LW(p) · GW(p)

So

  1. (x): x is a possible entity. The more complicated x is, the less likely it is to exist, controlling for other evidence.

  2. (x): x is a possible entity. The more intelligent x is, the more complicated x is, controlling for other properties.

  3. God is maximally intelligent.

:. God's existence is maximally unlikely unless there is other evidence or unless it has other properties that make its existence maximally more likely.

(Assume intelligent to refer to the possession of general intelligence)

I think most theists will consent to (1), especially given that it's implicit in some of their favorite arguments. They consent to (3), unless they mean "God" as merely a cosmological constant or first cause, in which case we're having a completely different debate. So the issue is (2). I'm sure some of the cognitive science types can give evidence for why intelligence is necessarily complicated. There is, however, definitive evidence for the correlation of intelligence and complexity: human brains are vastly more complex than the brains of other animals, computers get more complicated the more information they hold, etc. It might actually be worth making the distinction between intelligence and the holding of data. It is a lot easier to see that the more information something contains, the more complicated it is, since one can just compare two sets of data, one bigger than the other, and see that one is more complicated. Presumably, God needs to contain information on everyone's behavior, the events that happen at any point in time, prayer requests, etc.

Btw, is there a way for me to use symbolic logic notation in xml?

Replies from: MBlume, byrnema
comment by MBlume · 2009-04-20T21:03:36.876Z · LW(p) · GW(p)

hmm...if we can get embedded images to work, we're set.

http://www.codecogs.com/png.latex?\int_a^b\frac{1}{\sqrt{x}}dx

Click that link, and you'll get a rendered png of the LaTeX expression I've placed after the ?. Replace that expression with another and, well, you'll get that too. If you're writing a top-level post, you can use this to pretty quickly embed equations. Not sure how to make it useful in a comment though.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-20T21:54:03.136Z · LW(p) · GW(p)

Here it is:

Source code:

![](http://www.codecogs.com/png.latex?\\int\_a^b\\frac\{1\}\{\\sqrt\{x\}\}dx)

(It was mentioned before.)
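For instance (untested, and Ix / Cx here are just placeholder predicate letters for "x is intelligent" / "x is complicated"), a quantified premise like Jack's could presumably be escaped the same way:

![](http://www.codecogs.com/png.latex?\\forall\\,x\\,\\colon\\,Ix\\rightarrow\\,Cx)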

Replies from: MBlume
comment by MBlume · 2009-04-20T22:22:50.018Z · LW(p) · GW(p)

awesome =)

comment by byrnema · 2009-04-20T21:06:09.368Z · LW(p) · GW(p)

I think you are looking at this from an evolutionary point of view? Then it makes sense to make statements like "more and more complex states are less likely" (i.e., they take more time) and "intelligence increases with the complexity" (of organisms).

Outside this context, though, I have trouble understanding what is meant by "complicated" or why "more intelligent" should be more complex. In fact, you could skip right from (1) to (3) -- most theists would be comfortable asserting that God is maximally complex. However, in response to (1) they might counter with -- if something does exist, you can't use its improbability to negate that it exists.

Replies from: Jack
comment by Jack · 2009-04-20T22:58:44.797Z · LW(p) · GW(p)

  1. I'm not sure most theists would be comfortable asserting that God is maximally complex.

  2. The Wikipedia article Complexity looks helpful.

  3. It is true that if something does exist you can't use its improbability to negate its existence. But this option is allowed for in the argument; "unless there is other evidence or it has other properties that make its existence maximally more likely". So if God is, say, necessary, then he is going to exist no matter his likelihood. What this argument does is set a really low prior for the probability that God exists. There is never going to be one argument that proves atheism because no argument is going to rule out the existence of evidence the other way. The best we can do is give a really low initial probability and wait to hear arguments that swing us the other way AND show that some conceptions of God are contradictory or impossible.

Edit: You're right, though, if you mean that there is a problem with the phrasing "maximally unlikely" when there is still a chance of existence. Certainly "maximally unlikely" cannot mean "0".

comment by Nanani · 2009-04-22T00:55:20.908Z · LW(p) · GW(p)

really like LW and the idea of a place where objective, unbiased truth is The Way.

Something about this phrase bothers me. I think you may be confused as to what is meant by The Way. It isn't about any specific truth, much less Truth. It is about rationality, ways to get at the truth and update when it turns out that truth was incomplete, or facts change, and so on.

Promoting an abstract truth is very much -not- the point. I think it will help your confusion if you can wrap your head around this. My apologies if these words don't help.

comment by Paul Crowley (ciphergoth) · 2009-04-20T07:27:47.774Z · LW(p) · GW(p)

I would prefer us not to talk about theism all that much. We should be testing ourselves against harder problems.

Replies from: MBlume
comment by MBlume · 2009-04-20T07:31:45.567Z · LW(p) · GW(p)

Theism is the first and oldest problem. We have freed ourselves from it, yes, but that does not mean we have solved it. There are still churches.

If we really intend to make more rationalists, theism will be the first hurdle, and there will be an art to clearing that hurdle quickly, cleanly, and with a minimum of pain for the deconverted. I see no reason not to spend time honing that art.

Replies from: ciphergoth, cabalamat
comment by Paul Crowley (ciphergoth) · 2009-04-20T07:45:11.093Z · LW(p) · GW(p)

First, the subject has been discussed to death. Second, our target audience at this stage is almost entirely atheists; you start on the people who are closest. Insofar as there are theists we could draw in, we will probably deconvert them more effectively by raising the sanity waterline and having them drown religion without our explicit guidance on the subject; this will also do more to improve their rationality skills than explicit deconversion.

Replies from: MBlume
comment by MBlume · 2009-04-20T07:54:08.384Z · LW(p) · GW(p)

sigh You're probably right.

I have a lot of theists in my family and in my social circle, and part of me still wants to view them as potential future rationalists.

Replies from: Vladimir_Nesov, JulianMorrison, gjm
comment by Vladimir_Nesov · 2009-04-20T16:31:33.545Z · LW(p) · GW(p)

We should teach healthy habits of thought, not fight religion explicitly. People should be able to feel horrified by the insanity of supernatural beliefs for themselves, not argued into considering them inferior to the alternatives.

comment by JulianMorrison · 2009-04-20T12:43:00.747Z · LW(p) · GW(p)

When you don't have a science, the first step is to look for patterns. How about assembling an archive of de-conversions that worked?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T20:29:04.683Z · LW(p) · GW(p)

The problem with current techniques is that nothing works reliably. If you can go so high as to have a document that works to deconvert 10% of educated theists, then you can start examining for regularities in what worked and didn't work. The trouble is reaching that high initial bar.

Replies from: David_Gerard, pjeby, JulianMorrison
comment by David_Gerard · 2011-04-13T14:20:40.103Z · LW(p) · GW(p)

The problem with current techniques is that nothing works reliably. If you can go so high as to have a document that works to deconvert 10% of educated theists, then you can start examining for regularities in what worked and didn't work. The trouble is reaching that high initial bar.

The first place that springs to mind to look is deconversion-oriented documents that theists warn each other off and which they are given prepared opinions on. The God Delusion is my favourite current example - if you ever hear a theist dissing it, ask if they've read it; it's likely they won't have, and will (hopefully) be embarrassed by having been caught cutting'n'pasting someone else's opinions. What others are there that have produced this effect?

Replies from: Alicorn
comment by Alicorn · 2011-04-13T14:24:49.492Z · LW(p) · GW(p)

People are more willing than you might think to openly deride books they admit that they have never read. I know this because I write Twilight fanfiction.

Replies from: wedrifid, Zetetic, David_Gerard
comment by wedrifid · 2011-04-13T14:54:24.044Z · LW(p) · GW(p)

People are more willing than you might think to openly deride books they admit that they have never read.

Almost as if there are other means than just personal experience by which to collect evidence.

"Standing on the shoulders of giants hurling insults at Stephenie Meyer's."

comment by Zetetic · 2011-04-13T19:45:13.466Z · LW(p) · GW(p)

I am very curious about your take on those who attack Twilight for being anti-feminist, specifically for encouraging young girls to engage in male-dependency fantasies.

I've heard tons of this sort of criticism from men and women alike, and since you appear to be the de facto voice of feminism on Less Wrong, I would very much appreciate any insight you might be able to give. Are these accusations simply overblown nonsense in your view? If you have already addressed this, would you be kind enough to post a link?

Replies from: Alicorn, AstroCJ
comment by Alicorn · 2011-04-13T21:05:30.293Z · LW(p) · GW(p)

I really don't want to be the voice of feminism anywhere. However, I'm willing to be the voice of Twilight apologism, so:

Bella is presented as an accident-prone, self-sacrificing human, frequently putting herself in legitimately dangerous situations for poorly thought out reasons. If you read into the dynamics of vampire pairing-off, which I think is sufficiently obvious that I poured it wholesale into my fic, this is sufficient for Edward to go a little nuts. Gender needn't enter into it. He's a vampire, nigh-indestructible, and he's irrevocably in love with someone extremely fragile who will not stop putting herself in myriad situations that he evaluates as dangerous.

He should just turn her, of course, but he has his own issues with considering that a form of death, which aren't addressed head-on in the canon at all; he only turns her when the alternative is immediate death rather than slow gentle death by aging. So instead of course he resorts to being a moderately controlling "rescuer" - of course he does things like disable her car so she can't go visiting wolves over his warnings. Wolves are dangerous enough to threaten vampires, and Edward lives in a world where violence is a first or at least a second resort to everything. Bella's life is more valuable to him than it is to her, and she shows it. It's a miracle he didn't go spare to the point of locking her in a basement, given that he refused to make her a vampire. (Am I saying Bella should have meekly accepted that he wanted to manage her life? No, I'm saying she should have gotten over her romantic notion that Edward needed to turn her himself and gotten it over with. After she's a vampire in canon, she's no longer dependent - emotionally attached, definitely, and they're keeping an eye on her to make sure she doesn't eat anybody, but she's no longer liable to be killed in a car accident or anything and there's no further attempt ever to restrict her movement. She winds up being a pivotal figure in the final battle, which no one even suggests keeping her away from.)

Note that gender has nothing to do with any of this. The same dynamic would play out with any unwilling-to-turn-people vampire who mated to any reckless human. It's fully determined by those personality traits, this vampire tendency, and the relative fragility of humans. So, to hold that this dynamic makes Twilight anti-feminist is to hold one of the following ridiculous positions:

  • the mate bond as implied in the series is intrinsically anti-feminist (even though there's nothing obviously stopping it from playing out with gay couples, or female vampires with male humans)

  • it was somehow irresponsible to choose to write a heterosexual human female perspective character (...?)

  • it was antifeminist to write a vampire love interest who wasn't all for the idea of turning his mate immediately (it is completely unclear how Edward's internal turmoil about whether turning is death has anything to do with feminism in the abstract, so his individual application of this quandary to Bella can't be much more so)

Other feminist accusations fail trivially. Bella doesn't get an abortion. So? She doesn't want one! It's called "pro-choice", not "pro-attacking-a-pregnant-woman-because-your-judgment-overrides-hers". Etcetera.

Replies from: HonoreDB, Zetetic, shokwave
comment by HonoreDB · 2011-04-13T21:34:47.742Z · LW(p) · GW(p)

I haven't read Twilight, and I don't criticize books I haven't read, but I do object in general to the idea that something can't be ideologically offensive just because it's justified in-story.

Birth of a Nation, for example, depicts the founding of the Ku Klux Klan as a heroic response to a bestial, aggressive black militia that's been terrorizing the countryside. In the presence of a bestial, aggressive black militia, forming the KKK isn't really a racist thing to do. But the movie is still racist as all hell for contriving a situation where forming the KKK makes sense.

Similarly, I'd view a thriller about an evil international conspiracy of Jewish bankers with profound suspicion.

Replies from: Alicorn
comment by Alicorn · 2011-04-13T21:39:39.999Z · LW(p) · GW(p)

I think it's relevant here that vampires are not real.

Replies from: HonoreDB
comment by HonoreDB · 2011-04-13T21:45:37.192Z · LW(p) · GW(p)

Well, sure, but men who think women need to stay in the kitchen for their own good are. What makes Twilight sound bad is that it's recreating something that actually happens, and something that plenty of people think should happen more, in a context where it makes more sense.

Replies from: Alicorn
comment by Alicorn · 2011-04-13T21:48:11.892Z · LW(p) · GW(p)

There are other female characters in the story. Alice can see enough to dance circles around the average opponent. Rosalie runs around doing things. Esme's kind of ineffectual, but then, her husband isn't made out to be great shakes in a fight either. Victoria spends two books as the main antagonist. Jane is scary as hell. And - I repeat - the minute Bella is not fragile, there is no more of the objectionable attitude.

Replies from: HonoreDB
comment by HonoreDB · 2011-04-13T22:04:34.950Z · LW(p) · GW(p)

That doesn't necessarily mean that the Edward/Bella dynamic wasn't written to appeal to patriarchal tendencies, and just arose naturally from the plot. I'm completely unequipped to argue about whether or not this was the case. But I'm pretty confident the reason people who haven't read the book think it sounds anti-feminist is that we assume that Stephenie Meyer started with the Edward-Bella relationship and built the characters and the world around it.

comment by Zetetic · 2011-04-14T04:28:29.181Z · LW(p) · GW(p)

Alicorn,

First of all, thanks for taking the time to give an in-depth response.

I personally have misgivings similar to those expressed by HonoreDB, insofar as it seems that although the fantastical elements of the story do 'justify' the situation in a sense, they appear to be designed to do so.

I felt that these sort of plot devices were essentially a post hoc excuse to perpetuate a sort of knight-in-shining-armor dynamic in order to tantalize a somewhat young and ideologically vulnerable audience in the interest of turning a quick buck.

Then again, I may be being somewhat oversensitive, or I may be letting my external biases (I personally don't care for the young adult fantasy genre) cloud my judgment.

comment by shokwave · 2011-04-14T05:09:48.793Z · LW(p) · GW(p)

I don't credit Stephenie Meyer with enough intelligence to have figured out this line of reasoning. I think it's most likely that Meyer created situations so that Edward could save Bella, and due to either lack of imagination or inability to notice, the preponderance of dangerous situations (and especially dangerous people) ended up very high - high enough to give smarter people ideas like "violence is just more common in that world."

That said, my views on Twilight are extremely biased by my social group.

Replies from: Alicorn
comment by Alicorn · 2011-04-14T05:32:28.232Z · LW(p) · GW(p)

My idea that violence is common in the Twilight world is not primarily fueled by danger to Bella in particular. I was mostly thinking of, say, Bree's death, or the stories about newborn armies and how they're controlled, or the fact that the overwhelming majority of vampires commit murder on a regular basis.

comment by AstroCJ · 2011-04-13T20:31:25.625Z · LW(p) · GW(p)

I have a friend currently researching this precise topic; she adores reading Twilight and simultaneously thinks that it is completely damaging for young women to be reading. The distinction she drew, as far as I understood it, was that (1) Twilight is a very, very alluring fantasy - one day an immortal, beautiful man falls permanently in love with you for the rest of time and (2) canon!Edward is terrifying when considered not through the lens of Bella. Things like him watching her sleep before they'd spoken properly; he's not someone you want to hold up as a good candidate for romance.

(I personally have not read it, though I've read Alicorn's fanfic and been told a reasonable amount of detail by friends.)

comment by David_Gerard · 2011-04-13T14:52:03.226Z · LW(p) · GW(p)

Yes, but catching them out can be fun :-)

comment by pjeby · 2009-04-21T12:17:16.518Z · LW(p) · GW(p)

The problem with current techniques is that nothing works reliably. If you can go so high as to have a document that works to deconvert 10% of educated theists, then you can start examining for regularities in what worked and didn't work. The trouble is reaching that high initial bar.

It seems to me that Derren Brown once did some sort of demonstration in which he mass-converted some atheists to theists, and/or vice versa. Perhaps we should investigate what he did. ;-)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-21T12:26:47.799Z · LW(p) · GW(p)

(Updated following Vladimir_Nesov's comment - thanks!)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-21T15:13:59.106Z · LW(p) · GW(p)

Even where it's obvious, you should add textual description for the links you give. This is the same courtesy as not saying just "Voted up", but adding at least some new content in the same note.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-21T15:47:04.219Z · LW(p) · GW(p)

Fixed, thanks!

comment by JulianMorrison · 2009-04-21T08:26:50.309Z · LW(p) · GW(p)

The problem with current techniques is that nothing works reliably.

You sound real sure of that. Since it's you saying it, you probably have data. Can you link it so I can see?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-21T08:59:59.634Z · LW(p) · GW(p)

If something worked that reliably, wouldn't we know about it? Wouldn't it, for example, be seen many times in one of these lists of deconversion stories?

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-22T01:08:49.372Z · LW(p) · GW(p)

That only rules out the most surface-obvious of patterns. And I doubt anyone has tried deconverting someone in an MRI machine. It's too early to give up.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-22T11:50:17.942Z · LW(p) · GW(p)

No-one's giving up, but until we find such a way we have to proceed in its absence.

comment by gjm · 2009-04-20T12:46:09.494Z · LW(p) · GW(p)

They are potential future rationalists. They're even (something like) potential present rationalists; that is, someone can be a pretty good rationalist in most contexts while remaining a theist. This is precisely because the internal forces discouraging them from changing can be so strong.

comment by cabalamat · 2009-04-20T17:44:47.879Z · LW(p) · GW(p)

Theism is the first and oldest problem. We have freed ourselves from it, yes, but that does not mean we have solved it. There are still churches.

Indeed. When a community contains more than a critical number of theists, their irrational decision making can harm themselves and the whole community. By deconverting theists, we help them and everyone else.

I'd like to see a discussion on the best ways to deconvert theists.

Replies from: CronoDAS
comment by CronoDAS · 2009-04-20T22:19:10.745Z · LW(p) · GW(p)

Capture bonding seems to be an effective method of changing beliefs.

comment by saturn · 2009-04-21T05:11:15.635Z · LW(p) · GW(p)

Here's the open-and-shut case against theism: People often tell stories to make themselves feel better. Many of these stories tell of various invisible and undetectable entities. Theory 1 is that all such stories are fabrications; Theory 2 is that an arbitrary one is true and the rest are fabrications. Theory 2 contains more burdensome detail but doesn't predict the data better than Theory 1.

Although to theists this isn't a very convincing argument, it is a knock-down argument if you're a Bayesian wannabe with sane priors.
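To make the bookkeeping explicit (a minimal sketch, with D standing for the observed body of feel-better stories):

\frac{P(T_2 \mid D)}{P(T_1 \mid D)} = \frac{P(D \mid T_2)}{P(D \mid T_1)} \cdot \frac{P(T_2)}{P(T_1)} = \frac{P(T_2)}{P(T_1)} \ll 1

The two theories assign the same likelihood to the data, so the posterior odds just equal the prior odds, and the prior odds are heavily against the theory that singles out one story as true.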

comment by spriteless · 2009-04-21T01:26:13.377Z · LW(p) · GW(p)

Y'all are misunderstanding theists' main reason for belief when you attack its likelihood. They don't think God sounds likely, but that it's better to assume God exists so you can at least pretend your happiness is justified; God gives hope, and hopelessness is the enemy. That's the argument you'd need to undermine to deconvert people. I'm not articulate enough to do that, so I link someone who writes for a living instead. http://gretachristina.typepad.com/greta_christinas_weblog/2008/11/a-safe-place-to-land.html

Replies from: byrnema
comment by byrnema · 2009-04-21T03:48:19.939Z · LW(p) · GW(p)

Right, it would be easier to deconvert if you give some hope about the other side. An analogous idea at LW is leaving a line of retreat.

Note: for editing (italics, etc), there's a Help button on the lower right hand corner of the comment box.

Replies from: spriteless
comment by spriteless · 2009-04-27T12:55:21.869Z · LW(p) · GW(p)

Thank you. Eliezer's an interesting read, but I prefer to link to rationalists outside this community when possible... enough people have already read his work that I'd want to get in some new ideas, and because we need more girls.

comment by byrnema · 2009-04-20T03:19:30.780Z · LW(p) · GW(p)

A criticism of practices on LW that are attractive now but which will hinder "the way" to truth in the future; that lead to a religious idolatry of ideas (a common fate of many "in-groups") rather than objective detachment. For example,

(1) linking to ideas in original posts without summarizing the main ideas in your own words and how they apply to the specific context -- as this creates short-cuts in the brain of the reader, if not in the writer

(2) Use of analogies without formally defining the ideas behind them leads to content not only saying more than it intends to (or more than it strictly should) but also having meta-meanings that are attractive but dangerous because they're not explicit. [edit: "formally" was a poor choice of words, "clearly" is my intended meaning]

And any other examples people think of, now and as LW develops.

Replies from: pangloss, PhilGoetz
comment by pangloss · 2009-04-20T06:01:43.322Z · LW(p) · GW(p)

I am not sure I agree with your second concern. Sometimes premature formalization can take us further off track than leaving things with intuitively accessible handles for thinking about them.

Formalizing things, at its best, helps reveal the hidden assumptions we didn't know we were making, but at its worst, it hard-codes some simplifying assumptions into the way we start talking and thinking about the topic at hand. For instance, as soon as we start to formalize sentences of the form "If P, then Q" as material implication, we adopt an analysis of conditionals that straitjackets them into the role of an extensional (truth-functional) semantics. It is not uncommon for someone who just took introductory logic to train themselves into forcing natural language into this mold, rather than evaluating the adequacy of the formalism for explaining natural language.

comment by PhilGoetz · 2009-04-20T18:03:03.827Z · LW(p) · GW(p)

(1) linking to ideas in original posts without summarizing the main ideas in your own words and how they apply to the specific context -- as this creates short-cuts in the brain of the reader, if not in the writer

I plan to keep doing this; it saves time.

(2) Use of analogies without formally defining the ideas behind them leads to content not only saying more than it intends to (or more than it strictly should) but also having meta-meanings that are attractive but dangerous because they're not explicit. [edit: "formally" was a poor choice of words, "clearly" is my intended meaning]

Isn't this inherent in using analogies? Are you really saying "Don't use analogies"?

Replies from: byrnema
comment by byrnema · 2009-04-20T20:43:44.050Z · LW(p) · GW(p)

I like analogies. I think they are useful in introducing or explaining an idea, but shouldn't be used as a substitute for the idea.

comment by MBlume · 2009-04-20T03:02:49.619Z · LW(p) · GW(p)

Winning Interpersonally

cousin_it would like to know how rationality has actually helped us win. However, in his article, he completely gives up on rationality in one major area, admitting that "interpersonal relationships are out."

Alex strenuously disagrees, asking "why are interpersonal relationships out? I think rationality can help a great deal here."

(And, of course, I suppose everyone knows my little sob-story by now.)

I'd like to get a read from the community on this question.

Is rationality useless -- or worse, a liability when dealing with other human beings? How much does it matter if those human beings are themselves self-professed rationalists? It's been noted that Less Wrong is incredibly male. I have no idea whether this represents an actual gender differential in desire for epistemic rationality, but if it does, it means most male Less Wrongers should not expect to wind up dating rationalists. Does this mean that it is necessary for us to embrace less than accurate beliefs about, e.g., our own desirability, that of our partner, various inherently confused concepts of romantic fate, or whatever supernatural beliefs our partners wish to defend? Does this mean it is necessary to make the world more rational, simply so that we can live in it?

(note: this draft was written a while before Gender and Rationality, so there's probably some stuff I'd rewrite to take that into account)

Replies from: pjeby, None, anonymouslyanonymous, MendelSchmiedekamp, Alicorn, cousin_it, Nanani, SoullessAutomaton
comment by pjeby · 2009-04-20T15:42:58.695Z · LW(p) · GW(p)

Is rationality useless -- or worse, a liability when dealing with other human beings?

Only if you translate this into meaning you've got to communicate like Spock, or talk constantly about things that bore, depress, or shock people, or require them to think when they want to relax, etc.

(That article, btw, is by a guy who figured out how to stop being so "rational" in his personal relationships. Also, as it's a pickup artist's blog, there may be images or language that some will find offensive or NSFW. YMMV.)

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-20T21:31:58.061Z · LW(p) · GW(p)

That article seems kind of dodgy to me. Do people really fail to realize that the behaviors he describes are annoying and will alienate people?

The article also gets on my nerves a bit because it assumes that learning to be socially appealing to idiots is 1) difficult and 2) rewarding. Probably I'm just not in his target demographic, so oh well.

Replies from: pjeby
comment by pjeby · 2009-04-20T22:01:51.274Z · LW(p) · GW(p)

Do people really fail to realize that the behaviors he describes are annoying and will alienate people?

Well, he did, and I did, so that's a sample right there.

The article also gets on my nerves a bit because it assumes that learning to be socially appealing to idiots is 1) difficult and 2) rewarding.

Sounds like you missed the part of the article where he pointed out that thinking of those people as "idiots" is snobbery on your part. The value of a human being's life isn't really defined by the complexity of the ideas that get discussed in it.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-20T22:12:52.620Z · LW(p) · GW(p)

The value of a human being's life isn't really defined by the complexity of the ideas that get discussed in it.

No, but the value to me of interacting with them is. I would like nothing more than to know that they live happy and fulfilling lives that do not involve me.

Also, "snobbery" is a loaded term. Is there a reason I am obligated to enjoy the company of people I do not like?

Replies from: pjeby
comment by pjeby · 2009-04-20T22:18:04.742Z · LW(p) · GW(p)

No, but the value to me of interacting with them is. I would like nothing more than to know that they live happy and fulfilling lives that do not involve me.

Sounds like you also missed the part about acquiring an appreciation for the more experiential qualities of life, and for more varieties of people.

Also, "snobbery" is a loaded term.

More so than "idiots"? ;-)

Is there a reason I am obligated to enjoy the company of people I do not like?

Only if you want to increase your opportunities for enjoyment in life, be successful at endeavors that involve other people, reduce the amount of frustration you experience at family gatherings... you know, generally enjoying yourself without needing to have your brain uploaded first. ;-)

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-20T22:50:06.259Z · LW(p) · GW(p)

Sounds like you also missed the part about acquiring an appreciation for the more experiential qualities of life, and for more varieties of people.

I do have an appreciation for those things. I find them enjoyable, distracting, but ultimately unsatisfying. That's like telling someone who eats a healthy diet to acquire an appreciation for candy.

More so than "idiots"? ;-)

Haha, I wondered if you would call me on that. You are right, of course, and for the most part my attitude towards people isn't as negative as I made it sound. I was annoyed by the smug and presumptuous tone of that article.

Only if you want to increase your opportunities for enjoyment in life, be successful at endeavors that involve other people, reduce the amount of frustration you experience at family gatherings... you know, generally enjoying yourself without needing to have your brain uploaded first. ;-)

I do fine enjoying myself as it is, and it's not like I can't work with people--I'm talking only about socializing or other leisure-time activities. And as far as that goes, I absolutely fail to see the benefit of socializing with at least 90% of the people out there. They don't enjoy the things I enjoy and that's fine; why am I somehow deficient for failing to enjoy their activities?

Like I said, I don't think I'm really in the target demographic for that article, and I'm not really sure what you're trying to convince me of, here.

Replies from: pjeby
comment by pjeby · 2009-04-20T23:06:03.789Z · LW(p) · GW(p)

I'm not really sure what you're trying to convince me of, here.

I'm not trying to convince you of anything. You asked questions. I answered them.

I do have an appreciation for those things. I find them enjoyable, distracting, but ultimately unsatisfying. That's like telling someone who eats a healthy diet to acquire an appreciation for candy.

Hm, so who's trying to convince who now? ;-)

I was annoyed by the smug and presumptuous tone of that article.

Interesting. I found its tone to be informative, helpful, and compassionately encouraging.

And as far as that goes, I absolutely fail to see the benefit of socializing with at least 90% of the people out there. They don't enjoy the things I enjoy and that's fine; why am I somehow deficient for failing to enjoy their activities?

Who said you were? Not even the article says that. The author wrote, in effect, that he realized that he was being a snob and missing out on things by insisting on making everything be about ideas and rightness and sharing his knowledge, instead of just enjoying the moments, and by judging people with less raw intelligence as being beneath him. I don't see where he said anybody was being deficient in anything.

My only point was that sometimes socializing is useful for winning -- even if it's just enjoying yourself at times when things aren't going your way. I personally found that it limited my life too much to have to have a negative response to purely- or primarily- social interactions with low informational or practical content. Now I have the choice of being able to enjoy them for what they are, which means I have more freedom and enjoyment in my life.

But notice that at no time or place did I use the word "deficiency" to describe myself or anyone else in that. Unfulfilled potential does not equal deficiency unless you judge it to be such.

And if you don't judge or fear it to be such, why would the article set you off? If you were really happy with things as they are, wouldn't you have just said, "oh, something I don't need", and gone on with your life? Why so much protest?

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-20T23:25:39.312Z · LW(p) · GW(p)

I don't see where he said anybody was being deficient in anything.

This was the impression I got from the article's tone, as well as your previous comments--an impression of "you should do this for your own good". If that was not the intent, I apologize; it is easy to misread tone over the internet.

And if you don't judge or fear it to be such, why would the article set you off? If you were really happy with things as they are, wouldn't you'd have just said, "oh, something I don't need", and went on with your life? Why so much protest?

Because there have been other times when people expressed opinions about what I ought to be doing for enjoyment (cf. the kind of helpfulness described as optimizing others) and I find it irritating. It's a minor but persistent pet peeve.

I remarked on the article originally mainly because the advice it offered seemed puzzlingly obvious.

Replies from: pjeby
comment by pjeby · 2009-04-20T23:37:00.867Z · LW(p) · GW(p)

This was the impression I got from the article's tone, as well as your previous comments--an impression of "you should do this for your own good".

Ah. All I said in the original context was that rationality is only an obstacle in social situations if you use it as an excuse to make everything about you and your ideas/priorities/values, and gave the article as some background on the ways that "rational" people sometimes do that. No advice was given or implied.

As for the article's tone, bear in mind that it's a pickup artist's blog (or more precisely, the blog of a trainer of pickup artists).

So, his audience is people who already want to improve their social skills, and therefore have already decided it's a worthy goal to do so. That's why the article doesn't attempt to make a case for why someone would want to improve their social skills -- it is, after all, a major topic of the blog.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-20T23:50:55.862Z · LW(p) · GW(p)

So, his audience is people who already want to improve their social skills, and therefore have already decided it's a worthy goal to do so. That's why the article doesn't attempt to make a case for why someone would want to improve their social skills -- it is, after all a major topic of the blog.

Yes, this is what I meant when I said I probably wasn't in the target demographic--my social skills are acceptable, but my desire to socialize is fairly low.

Anyway, sorry for the pointless argument, heh.

comment by [deleted] · 2009-04-20T12:08:44.790Z · LW(p) · GW(p)

del

comment by anonymouslyanonymous · 2009-04-20T15:17:15.071Z · LW(p) · GW(p)

It's been noted that Less Wrong is incredibly male. I have no idea whether this represents an actual gender differential in desire for epistemic rationality, but if it does, it means most male Less Wrongers should not expect to wind up dating rationalists. Does this mean that it is necessary for us to embrace less than accurate beliefs about, e.g., our own desirability, that of our partner, various inherently confused concepts of romantic fate, or whatever supernatural beliefs our partners wish to defend? Does this mean it is necessary to make the world more rational, simply so that we can live in it?

"We commonly speak of the sex 'drive', as if it, like hunger, must be satisfied, or a person will die. Yet there is no evidence that celibacy is in any way damaging to one's health, and it is clear that many celibates lead long, happy lives. Celibacy should be recognised as a valid alternative sexual lifestyle, although probably not everyone is suited to it." -J. S. Hyde, Understanding Human Sexuality, 1986

Source.

Replies from: MBlume, army1987
comment by MBlume · 2009-04-20T17:22:00.249Z · LW(p) · GW(p)

I have been in a happy, mutually satisfying romantic/sexual relationship once in my life. We had one good year together, and it was The. Best. Year. Of. My. Life. I know people say that when something good happens to you, you soon adjust, and you wind up as happy or as sad as you were before, but that was simply not my experience. I'd give just about anything to have that again. Such is my utility function, and I do not intend to tamper with it.

Replies from: anonymouslyanonymous, MTGandP, PhilGoetz, anonymouslyanonymous
comment by anonymouslyanonymous · 2009-04-20T23:07:47.831Z · LW(p) · GW(p)

People differ. All I'm trying to say is this: telling someone something is a necessary precondition for their leading a meaningful life, when that is not the case, is likely to create needless suffering.

Replies from: MBlume
comment by MBlume · 2009-04-21T17:15:33.389Z · LW(p) · GW(p)

indeed

comment by MTGandP · 2015-07-07T04:59:12.246Z · LW(p) · GW(p)

This is really remarkable to read six years later, since, although I don't know you personally, I know your reputation as That Guy Who Has Really Awesome Idyllic Relationships.

comment by PhilGoetz · 2009-04-20T18:06:13.174Z · LW(p) · GW(p)

I've read several times that that feeling lasts 2-3 years for most people. That's the conventional wisdom. I've read once that, for some people, it lasts their whole life long. (I mean, once in a scholarly book. I've read it many times in novels.)

Replies from: MBlume
comment by MBlume · 2009-04-20T18:25:18.879Z · LW(p) · GW(p)

I rather suspect I might be one of those people. It's been over three years since I first fell for her, and over nine months since those feelings were in any way encouraged, and I still feel that attachment today.

If it turns out I am wired to stay in love for the long term, that'd certainly be a boon under the right circumstances.

Rather sucks now though.

Replies from: Jack
comment by Jack · 2009-04-20T23:11:54.593Z · LW(p) · GW(p)

Don't know if it applies to you. But I imagine a very relevant factor is whether or not you get attached to anyone else.

comment by anonymouslyanonymous · 2009-04-20T22:57:36.141Z · LW(p) · GW(p)

People differ. All I'm saying is this: telling people something is absolutely necessary for them to have a meaningful life, when that thing is not absolutely necessary for them to have a meaningful life, is likely to produce needless suffering.

comment by A1987dM (army1987) · 2012-10-02T18:14:38.185Z · LW(p) · GW(p)

there is no evidence that celibacy is in any way damaging to one's health

Er...

Replies from: shminux
comment by shminux · 2012-10-02T18:19:40.196Z · LW(p) · GW(p)

That's involuntary celibacy, not a lifestyle choice.

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-02T18:43:57.129Z · LW(p) · GW(p)

I guess the male LessWrongers that MBlume was thinking about in the ancestor comment haven't chosen that.

Replies from: shminux
comment by shminux · 2012-10-02T18:54:17.243Z · LW(p) · GW(p)

Right, but that's not what the quote you replied to was about.

comment by MendelSchmiedekamp · 2009-04-20T03:46:05.563Z · LW(p) · GW(p)

I have much I could say on the subject of interpersonal application of rationality (especially to romantic relationships), much of it positive and promising. Unfortunately I don't know yet how well it will match up with rationality as it's taught in the OB/LW style - which will decide how easy that is for me to unpack here.

Replies from: MBlume
comment by MBlume · 2009-04-20T03:48:46.368Z · LW(p) · GW(p)

Well, this thread might be a good place to start =)

ETA: I don't think anything should ever be said against an idea which is shown to work. If its epistemic basis is dodgy, we can make a project of shoring it up, but the fact that it works means there's something supporting it, even if we don't yet fully understand it.

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-20T15:15:59.723Z · LW(p) · GW(p)

What I do need to do is think more clearly (for which now is not the best time) about whether or not the OB/LW flavor of rationality training is something which can communicate the methods I've worked out.

Then it's a matter of trade-offs between forcing the OB/LW flavor and trying to use a related but better-fitting flavor. Which means computing estimates on culture, implicit social biases, and expectations. All of which takes time and experiments, many of which I expect to fail.

Which I suppose exemplifies the very basics of what I've found works - individual techniques can be dangerous because when over-generalized there are simply new biases to replace old ones. Instead, forget what you think you know and start re-building your understanding from observation and experiment. Periodically re-question the conclusions you make, and build your understanding from bite size pieces to larger and larger ones.

Which has everything to do with maintaining rational relationships with non-rational, and even deeply irrational people, especially romantic ones. But this takes real work, because each relationship is its own skill, its own "technique", and you need to learn it on the fly. On the plus side, if you get good at it you'll be able to learn how to deal with complex adaptive systems quickly - sort of a meta-skill, as it were.

comment by Alicorn · 2009-04-20T14:52:59.721Z · LW(p) · GW(p)

There are people who will put up with a relentlessly and honestly rationalist approach to one's friendship or other relationship with them. However, they are rare and precious, and I use the words "put up with" instead of "enjoy and respond in kind" because they do it out of affection, and (possibly, in limited situations) admiration that does not inspire imitation. Not because they are themselves rationalists, reacting rationally to the approach, but because they just want to be friends enough to deal.

comment by cousin_it · 2009-04-20T11:52:51.914Z · LW(p) · GW(p)

To expand on my phrase "interpersonal relationships are out"...

Talking to people, especially the opposite sex, strongly exercises many subconscious mechanisms of our brain. Language, intonation, emotion, posture -- you just can't process everything rationally as it comes at you in parallel at high bandwidth. Try dancing from first principles; you'll fail. If you possess no natural talent for it, you have no hope of winning an individual encounter through rationality. You can win by preparation - slowly develop such personal qualities as confidence, empathy and sense of humor. I have chosen this path; it works.

Replies from: None
comment by [deleted] · 2009-04-20T14:55:35.282Z · LW(p) · GW(p)

deleted

Replies from: cousin_it
comment by cousin_it · 2009-04-20T15:28:36.214Z · LW(p) · GW(p)

If by rational you mean successful, then yes. If you mean derived from logic, then no. I derived it from intuition.

Replies from: None
comment by [deleted] · 2009-04-20T23:50:06.371Z · LW(p) · GW(p)

deleted

comment by Nanani · 2009-04-22T01:02:06.629Z · LW(p) · GW(p)

Rationality helping in relationships (here used to mean all interpersonal, not just romance) :

  • Use "outside view" to figure out how your interactions look to others; not only to the person you are talking to but also to the social web around you.

  • Focus on the goals, yours and theirs. If these do not match, the relationship is doomed in the long run, romantic or not.

  • Obviously, the whole list of cognitive biases and how to counter them. When you -know- you are doing something stupid, catching yourself rationalizing it and what not, you learn not to do that stupid thing.

comment by SoullessAutomaton · 2009-04-20T10:05:06.765Z · LW(p) · GW(p)

Is rationality useless -- or worse, a liability when dealing with other human beings? How much does it matter if those human beings are themselves self-professed rationalists?

The answers to this are going to depend strongly on how comfortable we are with deception when dealing with irrational individuals.

Replies from: None
comment by [deleted] · 2009-04-20T12:17:44.928Z · LW(p) · GW(p)

deleted

comment by Lawliet · 2009-04-20T02:02:15.153Z · LW(p) · GW(p)

I'd be interested in reading (but not writing) a post about rationalist relationships, specifically the interplay of manipulation, honesty and respect.

Seems more like a group chat than a post, but let's see what you all think.

Replies from: jscn, Alicorn
comment by jscn · 2009-04-20T19:54:33.571Z · LW(p) · GW(p)

I've found the work of Stefan Molyneux to be very insightful with regard to this (his other work has also been pretty influential for me).

You can find his books for free here. I haven't actually read his book on this specific topic ("Real-Time Relationships: The Logic of Love") since I was following his podcasting and forums pretty closely while he was working up to writing it.

Replies from: Lawliet
comment by Lawliet · 2009-04-21T03:51:29.574Z · LW(p) · GW(p)

Do you think you could summarise it for everybody in a post?

Replies from: jscn
comment by jscn · 2009-04-22T01:04:44.250Z · LW(p) · GW(p)

I'm not confident I could do a good job of it. He proposes that most problems in relationships come from our mythologies about ourselves and others. In order to have good relationships, we have to be able to be honest about what's actually going on underneath those mythologies. Obviously this involves work on ourselves, and we should help our partner to do the same (not by trying to change them, but by assisting them in discovering what is actually going on for them). He calls his approach to this kind of communication the "Real-Time Relationship."

To quote from the book: "The Real-Time Relationship (RTR) is based on two core principles, designed to liberate both you and others in your communication with each other:

  1. Thoughts precede emotions.
  2. Honesty requires that we communicate our thoughts and feelings, not our conclusions."

For a shorter read on relationships, you might like to try his "On Truth: The Tyranny of Illusion". Be forewarned that, even if you disagree, you may find either book an uncomfortable read.

comment by Alicorn · 2009-04-20T03:05:19.020Z · LW(p) · GW(p)

This sounds very interesting, but I don't think I'm qualified to write it either.

comment by PhilGoetz · 2009-04-20T01:47:06.767Z · LW(p) · GW(p)

(rationalism:winning)::(science:results)

We've argued over whether rationalism should be defined as that which wins. I think this is isomorphic to the question whether science should be defined as that which gets good results.

I'd like to look at the history of science in the 16th-18th centuries, to see whether such a definition would have been a help or a hindrance. My priors say that it would have been a hindrance, because it wouldn't have kicked contenders out of the field rapidly.

Under position 1, "science = good results", you would have competition only on the level of individual theories. If the experimental approach to transforming metals won out over mystical Hermetic formulations, that would tell you nothing about whether you would expect an experimental approach to crop fertilization to win out over prayer to the gods.

Position 2, that science is a methodology that turns out to have good results, lets epistemologies, or families of theories, compete. You can group a whole bunch of theories together and call them "scientific", and a whole bunch of other theories together and call them "tradition", and other theories together and call them "mystic", etc.; and test the families against each other. This gives you much stronger statistics. This is probably what happened.

comment by Vladimir_Golovin · 2009-04-20T13:09:33.768Z · LW(p) · GW(p)

The ideal title for my future post would be this:

How I Confronted Akrasia and Won.

It would be an account of my dealing with akrasia, which has so far resulted in eliminating two decade-long addictions and finally being able to act according to my current best judgment. I also hope to describe a practical result of using these techniques (I specified a target in advance and I'm currently working towards it).

Not posted because:

  1. The techniques are not yet tested even on myself. They worked perfectly for about a couple of months, but I wasn't under any severe stress or distraction. Also, I think they don't involve much willpower (defined as forcing myself to do something), but perhaps they do -- and the only way to find out is to test them in a situation when my 'willpower muscles' are exhausted.

  2. The ultimate test would be the practical result which I plan to achieve using these techniques, and it's not yet reached.

Replies from: orthonormal
comment by orthonormal · 2009-04-23T04:43:30.994Z · LW(p) · GW(p)

The ideal title for my future post would be this:

How I Confronted Akrasia and Won.

That is (perhaps) unintentionally hilarious, BTW.

comment by PhilGoetz · 2009-04-20T01:54:56.538Z · LW(p) · GW(p)

Regarding all the articles we've had about the effectiveness of reason:

Learning about different systems of ethics may be useless. It takes a lot of time to learn all the forms of utilitarianism and their problems, and all the different ethical theories. And all that people do is look until they find one that lets them do what they wanted to do all along.

IF you're designing an AI, then it would be a good thing to do. Or if you've already achieved professional and financial success, and got your personal life in order (whether that's having a wife, having a family, whatever), and are in a position of power, it would be good to do. But if you're a grad student or a mid-level manager, it may be a big waste of time. You've already got big obvious problems to work on; you don't need to study esoteric theories of utility to find a problem to work on.

Replies from: conchis
comment by conchis · 2009-04-20T20:45:18.839Z · LW(p) · GW(p)

Also potentially useful if you're involved in any way in policy formation. (And yes, even when there are political constraints).

In practice, I find the most useful aspects of having a working knowledge of lots of different ethical systems is that it makes it easier to:

(a) quickly drill down to the core of many disagreements. Even if they're not resolvable, being able to find them quickly often saves a lot of pointless going around in circles. (There are network externalities involved here as well. Knowing this stuff is more valuable when other people know it too.)

(b) quickly notice (or suspect) when apparently sensible goal sets are incompatible (though this is perhaps more to do with knowing various impossibility theorems than knowing different ethical systems).

comment by steven0461 · 2009-04-29T15:48:54.359Z · LW(p) · GW(p)

Any interest in a top-level post about rationality/poker inter-applications?

Replies from: Alicorn, AllanCrossman
comment by Alicorn · 2009-04-29T15:57:24.064Z · LW(p) · GW(p)

I would be interested if I knew how to play poker. Does your idea generalize to other card games (my favorite is cassino, I'd love to figure out how to interchange cassino strategies with rationality techniques), or is something poker-specific key to what you have to say?

Replies from: steven0461
comment by steven0461 · 2009-04-29T16:10:00.000Z · LW(p) · GW(p)

Does your idea generalize to other card games

Mostly I think it doesn't. Some of it may generalize to games in general.

comment by AllanCrossman · 2009-04-29T15:54:11.253Z · LW(p) · GW(p)

Yes.

comment by pangloss · 2009-04-23T17:40:20.510Z · LW(p) · GW(p)

The Implications of Saunt Lora's Assertion for Rationalists.

For those who are unfamiliar, Saunt Lora's Assertion comes from the novel Anathem, and expresses the view that there are no genuinely new ideas; every idea has already been thought of.

A lot of purportedly new ideas can be seen as, at best, a slightly new spin on an old idea. The parallels between Leibniz's views on the nature of possibility and Arnauld's objection, on the one hand, and David Lewis's views on the nature of possibility and Kripke's objection, on the other, are but one striking example. If there is anything to the claim that we are, to some extent, stuck recycling old ideas, rather than genuinely/interestingly widening the range of views, it seems as though this should have some import for rationalists.

Replies from: David_Gerard
comment by David_Gerard · 2011-04-13T14:09:51.768Z · LW(p) · GW(p)

It would first require a usable definition of "genuinely new" not susceptible to goalpost-shifting and that is actually useful for anything.

Replies from: None
comment by [deleted] · 2011-04-13T21:42:48.973Z · LW(p) · GW(p)

That was part of the joke in Anathem. Saunt Lora's assertion had actually first been stated by Saunt X, but it also occurs in the pre-X writings of Saunt Y, and so on...

comment by abigailgem · 2009-04-20T15:42:36.846Z · LW(p) · GW(p)

Scott Peck, author of "The Road Less Travelled", which was extremely popular ten years ago, theorised that people mature through stages, and can get stuck at a lower level of maturity. From memory, the stages were:

  1. Selfish, unprincipled
  2. Rule-following
  3. Rational
  4. Mystical.

Christians could be either rule-following -- a stage of maturity most people could leave behind in their teens, needing a big friendly policeman in the sky to tell them what to do -- or Mystical.

Mystical people had a better understanding of the World because they did not expect it to be "rational", following a rationally calculable and predictable course. This fits my map in some ways: there are moments when I relate better to someone if I rely on instinct, rather than calculating what is going on, just as I can hit something better if I let my brain do the work rather than try to calculate a parabolic course for the rock.

I am not presenting his "stage four" as well as he could. If you like, I could read up on it in his books, including "Further along the RLT", "The RLT and beyond", and "The Different Drum" (I used to be a fan, and still hold him in respect).

You could then either decide you were convinced by Scott Peck, or come up with ways to refute him.

Would you like an article on this? Or would you rather just read about him on wikipedia?

Wikipedia says,

Stage IV is the stage where an individual starts enjoying the mystery and beauty of nature. While retaining skepticism, he starts perceiving grand patterns in nature. His religiousness and spirituality differ significantly from that of a Stage II person, in the sense that he does not accept things through blind faith but does so because of genuine belief. Stage IV people are labeled as Mystics.

Replies from: Nanani
comment by Nanani · 2009-04-22T00:50:41.128Z · LW(p) · GW(p)

I suspect most, if not all, regulars will dismiss these stages as soon as reading convinces them that the words "rational" and "mystical" are being used in the right sense. That is, few here would be impressed by "enjoying the mystery of nature".

However it might be useful for beginners who haven't read through the relevant sequences. Voted up.

Replies from: pjeby
comment by pjeby · 2009-04-22T02:31:52.853Z · LW(p) · GW(p)

I suspect most, if not all, regulars will dismiss these stages as soon as reading convinces them that the words "rational" and "mystical" are being used in the right sense. That is, few here would be impressed by "enjoying the mystery of nature".

I don't think that "enjoying the mystery of nature" is an apt description of that last stage. My impression is more that it's about appreciating the things that can't be said; i.e., of the "he who speaks doesn't know, and he who knows doesn't speak" variety.

There are some levels of wisdom that can't be translated verbally without sounding like useless tautologies or proverbs, so if you insist on verbal rationality as the only worthwhile knowledge, then such things will remain outside your worldview. So in a sense, it's "mystical", but without being acausal, irrational, or supernatural.

comment by Drahflow · 2009-04-20T10:06:25.430Z · LW(p) · GW(p)

I have this idea in my mind that my value function differs significantly from that of Eliezer. In particular, I cannot agree to blowing up Huygens in the Baby-Eater scenario he presented.

To summarize shortly: He gives a scenario which includes the following problem:

Some species A in the universe has as a core value the creation of unspeakable pain in its newborn. Some species B has as a core value the removal of all pain from the universe. And there is humanity.

In particular there are (among others) two possible actions: (1) Enable B to kill off all of A, without touching humanity, but kill off some humans in the process. (2) Block all access between all three species, leading to a continuation of the acts of A, killing significantly fewer humans in the process.

Eliezer claims action 1 is superior to 2, and I cannot agree.

First, a reason why my intuition tells me that Eliezer got it wrong: Consider the situation with wild animals, say in Africa. Lions killing gazelles in the thousands. And we are not talking about clean, nice killing; we are talking about taking bites out of living animals. We are talking about slow, agonizing death. And we can be pretty certain about the qualia of that experience, just by extrapolating from brain similarity and our own small painful experiences. Yet I don't see anybody trying to stop the lions, and I think that is right.

For me the only argument for killing off species A goes like: "I do not like pain" -> "Pain has negative utility" -> "Incredible pain has incredibly negative utility" -> "Incredible pain needs to be removed from the universe". That sounds wrong to me at the last step. Namely, I feel that our value function ought to (and actually does) include a term which discounts things happening far away from us. In particular, I think that the value of things happening somewhere in the universe which are (by the scenario) guaranteed not to have any effect on me is exactly zero.

But more importantly, it sounds wrong at the second-to-last step, which claims that incredible pain has incredibly negative utility. Why do we dislike our own pain? Because it is the hardware response closing the feedback loop for our brain in the case of stupidity. It's evolution's way of telling us "don't do that". Why do we dislike pain in other people? Due to sympathy, i.e. due to the reduced efficiency of said people in our world.

Do I feel more sympathy towards mammals than towards insects? Yes. Do I feel more sympathy towards apes than towards other mammals? Again, yes. So the trend seems to indicate that I feel sympathy towards complex thinking things.

Maybe that's only because I am a complex thinking thing, but then again, maybe I just value possible computation. Computation generally leads to knowledge, and knowledge leads to more action possibilities. And more diversity in the things carrying out computation will probably lead to more diversity in knowledge, which I consider A Good Thing. Hence, I opt for saving species A, thus creating a lot more pain, but also some more computation.

As you can probably tell, my line of reasoning is not quite clear yet, but I feel that I have a term in my value function here that some other people seem to lack, and I wonder whether that's because of a misunderstanding or because of genuinely different value functions.

Replies from: SoullessAutomaton, PhilGoetz
comment by SoullessAutomaton · 2009-04-20T21:24:22.079Z · LW(p) · GW(p)

(1): Enable B to kill off all of A, without touching humanity, but kill off a some humans in the process. (2): Block all access between all three species, leading to a continuation of the acts of A, kill significantly less humans in the process.

I seem to recall that there was no genocide involved; B intended to alter A such that they would no longer inflict pain on their children.

The options were:

  1. B modifies both A and humanity to eliminate pain; also modifies all three races to include parts of what the other races value.
  2. Central star is destroyed, the crew dies; all three species continue as before.
  3. Human-colonized star is destroyed; lots of humans die, humans remain as before otherwise; B is assumed to modify A as planned above to eliminate pain.
comment by PhilGoetz · 2009-04-20T17:50:20.798Z · LW(p) · GW(p)

Eliezer claims action 1 is superior to 2, and I cannot agree.

Does Eliezer's position depend on the fact that group A is using resources that could otherwise be used by group B, or by humans?

Group B's "eliminate pain" morality itself has mind-bogglingly awful consequences if you think it through.

comment by swestrup · 2009-04-21T22:22:48.323Z · LW(p) · GW(p)

Lurkers and Involvement.

I've been thinking that one might want to make a post, or post a survey, that attempts to determine how much folks engage with the content on Less Wrong.

I'm going to assume that there are far more lurkers than commenters, and far more commenters than posters, but I'm curious as to how many minutes, per day, folks spend on this site.

For myself, I'd estimate no more than 10 or 15 minutes, but it might be much less than that. I generally only read the posts from the RSS feed, and only bother to check the comments on one in five. Even then, if there are a lot of comments, I don't bother reading most of them.

One of the reasons I don't post is that I often find it takes me 20-30 minutes to put my words into a shape that I feel is up to the rather high standard of posting quality here, and I'm generally not willing to commit that much of my time to this site.

I think the question of how much of their time an average person thinks a site is worth to them is an important metric, and one we may wish to try to measure with an eye to increasing the average for this site.

Heck, that might even get me posting more often.

comment by MartinB · 2009-04-21T16:08:51.504Z · LW(p) · GW(p)

Putting together a rationalist toolset: including all the methods one needs to know, but also -- very much so -- the real-world knowledge that helps one get along or get ahead in life. It doesn't have to be reinvented, just pointed out and evaluated.

In short: I expect members of the rationality movement to dress well when it's needed. To be in reasonable shape. To /not/ smoke. To know about positive psychology. To know how to deal with people. And to find ways to be a rational & happy person.

Replies from: thomblake
comment by thomblake · 2009-04-21T16:28:15.794Z · LW(p) · GW(p)

I expect members of the rationality movement to dress well when it's needed. To be in reasonable shape. To /not/ smoke. To know about positive psychology. To know how to deal with people. And to find ways to be a rational & happy person.

I disagree with much of this. Not sure what 'reasonable shape' means, but I'm not above ignoring physical fitness in the pursuit of more lofty goals. Same with smoking - while I'll grant that there are more efficient ways to get the benefits of smoking, for an established smoker-turned-rationalist it might not be worth the time and effort to quit. And I'm also not sure what you mean by 'positive psychology'.

Replies from: MartinB
comment by MartinB · 2009-04-22T22:53:56.980Z · LW(p) · GW(p)

Just some examples. It might be that smoking is not as bad as it's currently presented. Optimizing one's lifestyle for higher chances of survival seems reasonable to me, but might not be everyone's choice. What I do not find useful in any instance are grumpy rationalists who scorn the whole world. Do you agree with the importance of "knowledge about the real world"?

Regarding positive psychology: look up Daniel Gilbert and Martin Seligman. Both gave nice talks on TED.com and have something to say about happiness.

comment by Simulacra · 2009-04-21T03:04:47.502Z · LW(p) · GW(p)

There have been some calls for applications of rationality: how can this help me win? This, combined with the popularity of and discussion surrounding "Stuck in the middle with Bruce", gave me an idea for a potential series of posts relating to LWers' pastimes of choice. I have a feeling most people here have a pastime, and if rationalists should win, there should be some way to map the game to rational choices.

Perhaps articles discussing "how rational play can help you win at x" and "how x can help you think more rationally" would be worthwhile. I'm sure there are games or hobbies that multiple people share (as was discovered relating to Magic) and even if no one has played a certain game the knowledge gained from it should be generalizable and used elsewhere (as was the concept of a Bruce).

I might be able to do a piece on Counter-Strike (perhaps generalized to FPS style games) although I haven't played in several years.

I know I would be interested in more discussion of how Magic and rationality work together. In fact I almost went out and picked up a deck to try it out again (I haven't played since Ice Age, when I was but a child) but remembered I don't know anyone I could play with right now anyway, which is probably why I don't.

comment by pre · 2009-04-20T10:03:15.187Z · LW(p) · GW(p)

memetic engineering

The art of manipulating the media, especially news, and public opinion. Sometimes known as "spin-doctoring" I guess, but I think the memetic paradigm is probably a more useful one to attack it from.

I'd love to understand that better than I do. Understanding it properly would certainly help with evangelism.

I fear that very few people really do grok it though, certainly I wouldn't be capable of writing much relevant about it yet.

Replies from: Emile
comment by Emile · 2009-04-20T12:57:43.893Z · LW(p) · GW(p)

I'm not sure that's something worth studying here - it's kinda sneaky and unethical.

Replies from: pre, Simulacra
comment by pre · 2009-04-20T13:35:33.885Z · LW(p) · GW(p)

Oh, so we're just using techniques which win without being sneaky? Isn't 'sneaky' a good, winning strategy?

Rationality's enemies are certainly using these techniques. Should we not study them, if only with a view to finding an antidote?

comment by Simulacra · 2009-04-21T02:25:37.826Z · LW(p) · GW(p)

I would say it is certainly something worth studying; understanding how it works would be invaluable. We can decide whether or not we want to use it to further our goals once we understand it (hopefully not before -- using something you don't understand is generally a bad thing, imho). If we decide not to use it, the knowledge would help us educate others and perhaps prevent the 'dark ones' from using it.

Perhaps something à la James Randi: create an ad whose first half uses some of the techniques and whose second half explains the mechanisms used to control inattentive viewers, with a link to somewhere with more information on how it's done and why people should care.

comment by Alicorn · 2009-04-19T22:11:55.684Z · LW(p) · GW(p)

I have more to say about my cool ethics course on weird forms of utilitarianism, but unlike with Two-Tier Rationalism, I'm uncertain of how germane the rest of these forms are to rationalism.

I have a lot to say about the Reflection Principle but I'm still in the process of hammering out my ideas regarding why it is terrible and no one should endorse it.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-19T22:41:02.532Z · LW(p) · GW(p)

I have a lot to say about the Reflection Principle but I'm still in the process of hammering out my ideas regarding why it is terrible and no one should endorse it.

I'm not sure what Reflection Principle you're referring to here. Google suggests two different mathematical principles but I'm not seeing how either of those would be relevant on LW, so perhaps you mean something else?

Replies from: Alicorn
comment by Alicorn · 2009-04-20T02:04:04.103Z · LW(p) · GW(p)

The Reflection Principle, held by some epistemologists to be a constraint on rationality, holds that if you learn that you will believe some proposition P in the future, you should believe P now. There is complicated math about what you should do if you have degree of credence X in the proposition that you will have credence Y in proposition P in the future and how that should affect your current probability for P, but that's the basic idea. An alternate formulation is that you should treat your future self as a general expert.
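For concreteness (this formulation is my addition, not Alicorn's wording), the schematic form of the constraint, as in van Fraassen's statement of Reflection, is:

```latex
% Reflection Principle, schematically: your current credence in P,
% conditional on learning that your future credence in P will be r, should equal r.
\mathrm{Cr}_{\mathrm{now}}\!\left(P \;\middle|\; \mathrm{Cr}_{\mathrm{future}}(P) = r\right) \;=\; r
```

The "complicated math" the comment alludes to generalizes this to the case where you are uncertain what your future credence will be.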

Replies from: SoullessAutomaton, JulianMorrison
comment by SoullessAutomaton · 2009-04-20T09:26:27.307Z · LW(p) · GW(p)

Reminds me a bit of the LW (ab)use of Aumann's Agreement Theorem, heh--at least with a future self you've got a high likelihood of shared priors.

Anyway, I know arguments from practicality are typically missing the point in philosophical arguments, but this seems to be especially useless--even granting the principle, under what circumstance could you become aware of your future beliefs with sufficient confidence to change your current beliefs based on such?

It seems to boil down mostly to "If you're pretty sure you're going to change your mind, get it over with". Am I missing something here?

Replies from: Alicorn
comment by Alicorn · 2009-04-20T14:59:54.289Z · LW(p) · GW(p)

Well, that's one of my many issues with the principle - it's practically useless, except in situations that it has to be formulated specifically to avoid. For instance, if you plan to get drunk, you might know that you'll consider yourself a safe driver while you are (in the future) drunk, but that doesn't mean you should now consider your future, drunk self a safe driver. Sophisticated statements of Reflection explicitly avoid situations like this.

comment by JulianMorrison · 2009-04-20T03:11:03.239Z · LW(p) · GW(p)

Well that's pretty silly. You wouldn't treat your present self as a general expert.

Replies from: Alicorn
comment by Alicorn · 2009-04-20T15:01:07.356Z · LW(p) · GW(p)

Wouldn't you? You believe everything you believe. If you didn't consider yourself a general expert, why wouldn't you just follow around somebody clever and agree with them whenever they asserted something? And even then, you'd be trusting your expertise on who was clever.

comment by beoShaffer · 2012-09-09T03:28:13.145Z · LW(p) · GW(p)

I started an article on the psychology of rationalization, but stopped due to a mixture of time constraints and not finding many high-quality studies.

comment by Karl · 2011-04-19T22:42:37.757Z · LW(p) · GW(p)

It seems to me possible to create a safe oracle AI. Suppose that you have a sequence predictor which is a good approximation of Solomonoff induction but which runs in reasonable time. This sequence predictor can potentially be really useful (for example, predict future SIAI publications from past SIAI publications, then proceed to read the article which gives a complete account of Friendliness theory...) and is not dangerous in itself. The question, of course, is how to obtain such a thing.

The trick relies on the concept of a program predictor. A program predictor is a function which predicts, more or less accurately, the output of the program it takes as its input (note that when we refer to a program, we mean a program without side effects that just calculates an output), but within reasonable time. If you have a very accurate program predictor, then you can obviously use it to get a good approximation of Solomonoff induction which runs in reasonable time. But of course, this just displaces the problem: how do you get such an accurate program predictor?

Well, suppose you have a program predictor which is good enough to be improved on. Then you use it to find the program of less than N bits in length (with N sufficiently big, of course) which maximizes a utility function measuring how accurate that program's output is as a program predictor, given that it generates this output in fewer than T steps (where T is a reasonable number given the hardware you have access to). Then you run that program. Check the accuracy of the obtained program predictor. If it's insufficient, repeat the process. You should eventually obtain a very accurate program predictor. QED.

So we've reduced our problem to the problem of creating a program predictor good enough to be improved upon. That should be possible. In particular, it is related to the problem of logical uncertainty. If we can get a passable understanding of logical uncertainty, it should be possible to build such a program predictor using it. Thus a minimal understanding of logical uncertainty should be sufficient to obtain AGI. In fact, even without such understanding, it may be possible to patch together such a program predictor...
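A minimal sketch, purely to make the structure of that loop concrete: every name below (`seed_predictor`, `predicted_quality`, and so on) is my own placeholder rather than anything specified above, and each stubbed-out body stands in for a genuinely hard open problem.

```python
# Hypothetical sketch of the "use a program predictor to find a better program predictor" loop.

def seed_predictor(program):
    """A crude initial guess at a program's output -- 'good enough to be improved on'."""
    raise NotImplementedError  # presumably built from some theory of logical uncertainty

def run(program, max_steps):
    """Actually execute a side-effect-free program for at most max_steps steps."""
    raise NotImplementedError

def predicted_quality(predictor, program, max_steps):
    """Use the *current* predictor to estimate how accurate program's output would be
    as a program predictor, assuming it produces that output within max_steps steps."""
    raise NotImplementedError

def measured_accuracy(candidate, benchmark_programs):
    """Check a candidate predictor against programs we can afford to actually run."""
    raise NotImplementedError

def bootstrap(candidate_programs, benchmark_programs, max_steps, target):
    """candidate_programs: programs of at most N bits (enumerating them all is itself
    infeasible; in practice this would have to be some kind of search)."""
    predictor = seed_predictor
    while measured_accuracy(predictor, benchmark_programs) < target:
        # Pick the candidate the current predictor expects to be the best predictor.
        best = max(candidate_programs,
                   key=lambda p: predicted_quality(predictor, p, max_steps))
        predictor = run(best, max_steps)  # the found program's output is the new predictor
    return predictor
```

If the loop stalls, the seed predictor wasn't actually "good enough to be improved on", which is where the whole proposal lives or dies.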

comment by pangloss · 2009-04-27T19:03:12.617Z · LW(p) · GW(p)

The Verbal Overshadowing effect, and how to train yourself to be a good explicit reasoner.

comment by dclayh · 2009-04-21T22:48:23.574Z · LW(p) · GW(p)

Contents of my Drafts folder:

  • A previous version of my Silver Chair post, with more handwringing about why one might not stop someone from committing suicide.
  • A post about my personal motto ("per rationem, an nequequam", or "through reason, or not at all"), and how Eliezer's infamous Newcomb-Box post did and didn't change my perspective on what rationality means.
  • A listing of my core beliefs related to my own mind, beliefs/desires/etc., with a request for opinions or criticism.
  • A post on why animals in particular and any being not capable of rationality/ethics in general don't get moral consideration. This one isn't posted only because my proof has a hole.
comment by steven0461 · 2009-04-21T16:53:13.731Z · LW(p) · GW(p)

Great thread idea.

Frequentist Pitfalls:

Bayesianism vs Frequentism is one thing, but there are a lot of frequentist-inspired misinterpretations of the language of hypothesis testing that all statistically competent people agree are wrong. For example, note that:

  • p-values are not posteriors (interpreting them this way usually overstates the evidence against the null; see also Lindley's paradox)
  • p-values are not likelihoods
  • confidence doesn't mean confidence
  • likelihood doesn't mean likelihood
  • statistical significance is a property of test results, not hypotheses
  • statistical significance is not effect size
  • statistical significance is not effect importance
  • p-values aren't error probabilities
  • the 5% threshold isn't magical

In a full post I'd flesh all of these out, but I'm considering not doing so because it's kind of basic and it turns out Wikipedia already discusses most of this surprisingly well.
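To make the first point concrete, here's a toy calculation in the spirit of Lindley's paradox (my numbers; the Gaussian prior on the effect under the alternative is an illustrative assumption): with a large sample, a result that is "just significant" at p ≈ 0.05 can leave well over 90% posterior probability on the null.

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def two_sided_p(z):
    # P(|Z| >= z) for a standard normal test statistic
    return math.erfc(abs(z) / math.sqrt(2))

n = 10_000         # sample size (made up for illustration)
sigma = 1.0        # known sampling standard deviation
tau = 1.0          # assumed prior spread of the effect under the alternative
z = 1.96           # "just significant at the 5% level"
xbar = z * sigma / math.sqrt(n)

p_value = two_sided_p(z)                           # ~0.05, regardless of n
like_h0 = normal_pdf(xbar, sigma**2 / n)           # marginal likelihood under H0: effect = 0
like_h1 = normal_pdf(xbar, tau**2 + sigma**2 / n)  # under H1: effect ~ N(0, tau^2)
bf01 = like_h0 / like_h1                           # Bayes factor in favour of the null
posterior_h0 = bf01 / (1 + bf01)                   # with 50/50 prior odds

print(f"p-value      = {p_value:.3f}")       # ~0.050
print(f"P(H0 | data) = {posterior_h0:.3f}")  # ~0.94 for these numbers
```

The point isn't that these particular numbers are privileged, just that the p-value and the posterior are answers to different questions.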

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-21T17:16:19.552Z · LW(p) · GW(p)

More generally, the semantics of posteriors, and of probability in general, comes from the semantics of the rest of the model -- of the prior, state space, variables, etc. It's incorrect to attribute any kind of inherent semantics to a model, which, as you note, happens quite often when frequentist semantics suddenly "emerges" in probabilistic models. It is a kind of mind projection fallacy, where the role of the territory is played by the math of the mind.

Replies from: steven0461
comment by steven0461 · 2009-04-21T17:25:26.435Z · LW(p) · GW(p)

To return to something we discussed in the IRC meetup: there's a simple argument why commonly-known rationalists with common priors cannot offer each other deals in a zero-sum game. The strategy "offer the deal iff you have evidence of at least strength X saying the deal benefits you" is defeated by all strategies of the form "accept the deal iff you have evidence of at least strength Y > X saying the deal benefits you", so never offering and never accepting if offered should be the only equilibrium.

This is completely off-topic unless anyone thinks it would make an interesting top-level post.

ETA: oops, sorry, this of course assumes independent evidence; I think it can probably be fixed?
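A quick simulation sketch of the adverse-selection intuition behind that argument (my construction, using the independent-evidence assumption flagged in the ETA; the thresholds and noise level are arbitrary): if the acceptor demands even slightly stronger evidence than the offerer, the offerer loses on average on the trades that actually go through.

```python
import random

def simulate(n_trials, offer_threshold, accept_threshold, noise=1.0, seed=0):
    """Zero-sum deal worth V to the offerer and -V to the acceptor, V ~ N(0, 1).
    Each side sees only an independent noisy signal of its own payoff."""
    rng = random.Random(seed)
    offerer_total, trades = 0.0, 0
    for _ in range(n_trials):
        v = rng.gauss(0, 1)
        signal_offerer = v + rng.gauss(0, noise)    # offerer's evidence about its gain
        signal_acceptor = -v + rng.gauss(0, noise)  # acceptor's evidence about its gain
        if signal_offerer > offer_threshold and signal_acceptor > accept_threshold:
            offerer_total += v
            trades += 1
    return trades, (offerer_total / trades if trades else float("nan"))

# Offerer requires evidence of strength X = 0.5; acceptor requires the stricter Y = 1.0.
trades, avg_offerer_gain = simulate(200_000, offer_threshold=0.5, accept_threshold=1.0)
print(trades, avg_offerer_gain)  # the offerer's average gain per completed trade is negative
```

Since any offering threshold can be exploited this way, unwinding the reasoning leaves "never offer, never accept" as the equilibrium, which is the claim above.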

comment by byrnema · 2009-04-21T03:32:06.923Z · LW(p) · GW(p)

What an argument for atheism would look like.

Yesterday, I posted a list of reasons why I think it would be a good idea to articulate a position on atheism.

I think many people are interested, in theory, in developing some sort of deconversion programme (for example, see this one), and perhaps in creating a library of arguments and counter-arguments for debates with theists.

While I have no negative opinion of these projects, my ambition is much more modest. In a cogent argument for atheism, there would be no need to debate particular arguments. It would be much more effective (and evincing of an open-and-shut case) to just present the general line of reasoning. For example, the following argument would be a beginning (from someone who is just trying to guess what your arguments are):


Example: An Articulated Atheistic Position

First, we reject the claim that any empirical observation is the sleight of hand of a Supreme Being, on the grounds that if you begin with such a hypothesis, then you find yourself in an unacceptable epistemological position in which you cannot know anything.

Therefore, we interpret all empirical evidence at face value, using the scientific method, and accept scientific theory, limited as ever by the scientist's view that theories are modified as needed to incorporate conflicting evidence. Thus, we reject a literal interpretation of the Bible, and the supernatural, as these explicitly contradict the notion that we can interpret empirical evidence at face value.

By this, we have limited the acceptable definition of God to one that obeys the natural laws of the universe. While this leaves open the possibility of a passive, non-intrusive God, we present the principle that one does not assume the existence of something if there is no need to do so.


A definition like the one above is good in that a theist can read it, see exactly where assumptions differ, and decide if they want to engage in a debate -- or not. If they think God created an illusory world to test our faith, there's no point arguing about empirical evidence of anything.

I wanted to give this definition as a crude example of the simplicity I would like to see; I in no way intend to speak for atheists by making a final statement on the main arguments -- I would expect this to be edited.

As another example, I would like to give an argument for theism that I think could be a good articulation of their position, if someone held these particular views. The value being that the assumptions are explicit, so you can find exactly where your views differ.


Example: An Articulated Theistic Position

The existence of God can neither be proved nor demonstrated through any empirical evidence, but is knowable through reasoned reflection and intuition. All knowledge of the properties of God are inferred through experience, and thus knowledge of God is evolving and not absolute.

comment by MBlume · 2009-04-20T03:13:41.836Z · LW(p) · GW(p)

Thank you for this post -- I feel a bit lighter somehow having all those drafts out in the open.

Replies from: byrnema
comment by byrnema · 2009-04-20T03:41:12.831Z · LW(p) · GW(p)

I also think this post is a great idea. I've written 3 posts that were, objectively, not that appropriate here. Perhaps I should have waited until I knew more about what was going on at LW, but I'm one of those students that has to ask a lot of questions at first, and I'm not sure how long it would have taken me to learn the things that I wanted to know otherwise.

Along these lines, what do you guys think of encouraging new members (say, with Karma < 100) to always mini-post here first? [In Second Life, there was a 'sandbox area' where you could practice building objects.] Here on LW, it would be (and is, now that it's here) immensely useful to try out your topic and gauge what the interest would be on LW.

Personally, I would have been happy to post my posts (all negative scoring) somewhere out of the main thoroughfare, as I was just fishing for information and trying to get a feel for the group rather than wanting to make top-level statements.

Replies from: MBlume
comment by MBlume · 2009-04-20T03:46:35.053Z · LW(p) · GW(p)

I definitely think this is a post that should stay visible, whether because we start stickying a few posts or because somebody reposts this monthly.

I don't know whether we need guidelines about when people should post here, and definitely don't think we need a karma cutoff. I think just knowing it's here should be enough.

comment by MBlume · 2009-04-20T02:58:50.770Z · LW(p) · GW(p)

Let's empty out my draft folder then....

Counterfactual Mugging v. Subjective Probability

A couple weeks ago, Vladimir Nesov stirred up the biggest hornet's nest I've ever seen on LW by introducing us to the Counterfactual Mugging scenario.

If you didn't read it the first time, please do -- I don't plan to attempt to summarize. Further, if you don't think you would give Omega the $100 in that situation, I'm afraid this article will mean next to nothing to you.

So, those still reading, you would give Omega the $100. You would do so because if someone told you about the problem now, you could do the expected utility calculation 0.5U(-$100)+0.5U(+$10000)>0. Ah, but where did the 0.5s come from in your calculation? Well, Omega told you he flipped a fair coin. Until he did, there existed a 0.5 probability of either outcome. Thus, for you, hearing about the problem, there is a 0.5 probability of your encountering the problem as stated, and a 0.5 probability of your encountering the corresponding situation, in which Omega either hands you $10000 or doesn't, based on his prediction. This is all very fine and rational.

So, new problem. Let's leave money out of it, and assume Omega hands you 1000 utilons in one case, and asks for them in the other -- exactly equal utility. What if there is an urn, and it contains either a red or a blue marble, and Omega looks, maybe gives you the utility if the marble is red, and asks for it if the marble is blue? What if you have devoted considerable time to determining whether the marble is red or blue, and your subjective probability has fluctuated over the course of your life? What if, unbeknownst to you, a rationalist community has been tracking evidence of the marble's color (including your own probability estimates), and running a prediction market, and Omega now shows you a plot of the prices over the past few years?

In short, what information do you use to calculate the probability you plug into the EU calculation?
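A toy version of the calculation being asked about (my framing; the utility numbers and candidate probabilities are just illustrative): with symmetric stakes, the sign of the answer -- and hence the decision -- depends entirely on which probability estimate you plug in, flipping at p = 0.5.

```python
# Expected utility of the policy "hand over the utilons when asked" in the marble variant,
# relative to the baseline of refusing (EU = 0).

def eu_of_paying(p_red, utilons=1000):
    """p_red: your probability that the marble is red (the branch where Omega pays you,
    conditional on predicting that you'd pay in the blue branch)."""
    return p_red * utilons + (1 - p_red) * (-utilons)

for p in (0.3, 0.5, 0.7):   # e.g. your lifetime average, your current estimate, the market price
    print(p, eu_of_paying(p))
```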

Replies from: ciphergoth, Vladimir_Nesov
comment by Paul Crowley (ciphergoth) · 2009-04-20T07:37:13.078Z · LW(p) · GW(p)

This is probably mean of me, but I'd prefer if the next article about Omega's various goings-on set out to explain why I should care about what the rational thing to do in Omega-ish situations.

comment by Vladimir_Nesov · 2009-04-20T10:04:59.357Z · LW(p) · GW(p)

So, those still reading, you would give Omega the $100.

It's a little too strong; I think you shouldn't give away the $100, because you are just not reflectively consistent. It's not you who could've run the expected utility calculation to determine that you should give it away. If you persist, by the time you must do the action it's not in your interest anymore; it's a lost cause. And that is the subject of another post that has been lying in draft form for some time.

If you are strong enough to be reflectively consistent, then ...

In short, what information do you use to calculate the probability you plug into the EU calculation?

You use your prior for probabilistic valuation, structured to capture expected subsequent evidence on possible branches. According to the evidence and possible decisions on each branch, you calculate the expected utility of all of the possible branches, find a global feasible maximum, and perform the component decision from it that fits the real branch. The information you have doesn't directly help in determining the global solution; it only shows which of the possible branches you are on, and thus which role you should play in the global decision, which mostly applies to the counterfactual branches. This works if the prior/utility is something inside you; it works worse if you have to mine information from the real branch for it in the process. Or, for more generality, you can consider yourself to be cooperating with your counterfactual counterparts.

The crux of the problem is that you care about counterfactuals; once you attain this, the rest is business as usual. When you are not being reflectively consistent, you let the counterfactual goodness slip through your fingers, turning to myopically optimizing only what's real.

comment by infotropism · 2009-04-19T23:44:54.718Z · LW(p) · GW(p)

I have an idea I need to build up, about simplicity: how to build your mind and beliefs up incrementally, layer by layer; how perfection is achieved not when there's nothing left to add, but when there's nothing left to remove; how simple-minded people are sometimes the ones to state simple, true ideas that others have lost sight of -- others who are too clever and sophisticated, whose knowledge is like a house of cards or a bag of knots; genius, learning, growing up, creativity correlated with age, zen. But I really need to do a lot more searching before I can put something together.

Edit: and I post this here because if someone else wants to dig into the idea and work on it with me, that would be a pleasure.

Replies from: ciphergoth, PhilGoetz
comment by Paul Crowley (ciphergoth) · 2009-04-20T07:38:05.701Z · LW(p) · GW(p)

Do you understand Solomonoff's Universal Prior?

Replies from: infotropism
comment by infotropism · 2009-04-20T10:02:07.247Z · LW(p) · GW(p)

Not the mathematical proof.

But I understand the idea that if you don't yet have any observational data, you assign the prior probability of a hypothesis by looking at its complexity.

Complexity, defined by looking for the smallest compressed bitstring program, for each possible Turing machine, that generates this hypothesis as its output when run on that machine (and that is the reason why it's intractable unless you have infinite computational resources, yes?).

The longer the bitstring, the less likely the hypothesis (and this has to do with the idea that there are more possible strings at larger lengths: a one-bit string can be in 2 states, a two-bit one in 2^2 = 4 states, a three-bit one in 2^3 = 8 states, and so on).

Then you somehow combine the probabilities for all pairs of (Turing machine + program) into one overall probability?

(I'd love to understand that formally)
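For the record, the standard formulation (my addition -- it fixes a single universal prefix machine U rather than averaging over machines) assigns each string x the total weight of all programs that produce it, each program of length |p| contributing 2^-|p|:

```latex
% Solomonoff's universal prior over strings x, for a fixed universal prefix machine U.
% Here "U(p) = x*" means "program p makes U output something that begins with x".
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\lvert p \rvert}
```

The shortest such program dominates the sum, which is the link to Kolmogorov complexity, and the invariance theorem says that switching to a different universal machine changes M(x) by at most a constant factor -- which is why no averaging over machines is needed.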

comment by PhilGoetz · 2009-04-20T01:57:19.198Z · LW(p) · GW(p)

I'm skeptical of the concept as presented here. Anything with the phrase "how perfection is achieved" sets up a strong prior in my mind saying it is completely off-base.

More generally, in evolution and ecosystems I see that simplicity is good temporarily, as long as you retain the ability to experiment with complexity. Bacteria rapidly simplify themselves to adapt to current conditions, but they also experiment a lot and rapidly acquire complexity when environmental conditions change. When conditions stabilize, they then gradually throw off the acquired complexity until they reach another temporary simple state.

Replies from: JulianMorrison, JulianMorrison, infotropism
comment by JulianMorrison · 2009-04-20T02:39:34.112Z · LW(p) · GW(p)

The Occam ideal is "simplest fully explanatory theory". The reality is that there never has been one. They're either broken in "the sixth decimal place", like Newtonian physics, or they're missing bits, like quantum gravity, or they're short of evidence, like string theory.

comment by JulianMorrison · 2009-04-20T02:28:36.583Z · LW(p) · GW(p)

The Occam ideal is "simplest fully explanatory theory". The reality is: sometimes you don't have a fully explanatory theory at all, only a broken mostly-explanatory theory. Sometimes the data isn't good enough to support any theory. Sometimes you have a theory that's obviously overcomplicated but no idea how to simplify it. And sometimes you have a bunch of theories, no easy way to test them, and it's not obvious which is simplest.

comment by infotropism · 2009-04-20T02:23:09.624Z · LW(p) · GW(p)

So maybe, to rephrase the idea, we want to strive to achieve something as close as we can to perfection -- optimality?

If we do, we may then start laying the groundwork, as well as collecting practical advice and general methods on how to do that. Not a step-by-step, absolute guide to perfection -- rather, the first draft of one idea that would be helpful in aiming towards optimality.

Edit: also, that's a Saint-Exupéry quote that illustrates the idea; I don't mean it literally, just as a general guideline.

comment by deanba · 2009-04-20T20:26:52.416Z · LW(p) · GW(p)

"Telling more than we can know" Nisbett & Wilson

I saw this on Overcoming Bias a while ago, thanks to Pete Carlton: www.lps.uci.edu/~johnsonk/philpsych/readings/nisbett.pdf

I hope you all read this. What are the implications? Can you tell me a story ;)

Here is my story and I am sticking to it!

I have a sailing friend who makes up physics from whole cloth; it is frightening and amusing to watch, and almost impossible to correct without some real drama. It seems one only has to be close in horseshoes, hand grenades, and life.