A Rationalist's Tale

post by lukeprog · 2011-09-28T01:17:13.372Z · score: 88 (90 votes) · LW · GW · Legacy · 308 comments

Contents

  Doubt
  New Joy and Purpose
  The Level Above My Own

Warning: sappy personal anecdotes ahead! See also Eliezer's Coming of Age story, SarahC's Reflections on rationality a year out, and Alicorn's Polyhacking.

On January 11, 2007, at age 21, I finally whispered to myself: There is no God.

I felt the world collapse beneath me. I'd been raised to believe that God was necessary for meaning, morality, and purpose. My skin felt cold and my tongue felt like cardboard. This was the beginning of the darkest part of my life, but the seed of my later happiness.

I grew up in Cambridge, Minnesota — a town of 5,000 people and 22 Christian churches (at the time). My father was (and still is) pastor of a small church. My mother volunteered to support Christian missionaries around the world.

I went to church and Bible study every week. I prayed often and earnestly. For 12 years I attended a Christian school that taught Bible classes and creationism. I played in worship bands. As a teenager I made trips to China and England to tell the godless heathens there about Jesus. I witnessed miraculous healings unexplained by medical science.

And I felt the presence of God. Sometimes I would tingle and sweat with the Holy Spirit. Other times I felt led by God to give money to a certain cause, or to pay someone a specific compliment, or to walk to the cross at the front of my church and bow before it during a worship service.

Around age 19 I got depressed. But then I read Dallas Willard’s The Divine Conspiracy, a manual for how to fall in love with God so that following his ways is not a burden but a natural and painless product of loving God. And one day I saw a leaf twirling in the wind and it was so beautiful — like the twirling plastic bag in American Beauty — that I had an epiphany. I realized that everything in nature was a gift from God to me. Grass, lakes, trees, sunsets — all these were gifts of beauty from my Savior to me. That's how I fell in love with God, and he delivered me from my depression.

I moved to Minneapolis for college and was attracted to a Christian group led by Mark van Steenwyk. Mark’s small group of well-educated Jesus-followers are 'missional' Christians: they think that loving and serving others in the way of Jesus is more important than doctrinal truth. That resonated with me, and we lived it out with the poor immigrants of Minneapolis.

 

Doubt

By this time I had little interest in church structure or doctrinal disputes. I just wanted to be like Jesus to a lost and hurting world. So I decided I should try to find out who Jesus actually was. I began to study the Historical Jesus.

What I learned, even when reading Christian scholars, shocked me. The gospels were written decades after Jesus' death, by non-eyewitnesses. They are riddled with contradictions, legends, and known lies. Jesus and Paul disagreed on many core issues. And how could I accept miracle claims about Jesus when I outright rejected other ancient miracle claims as superstitious nonsense?

These discoveries scared me. It was not what I had wanted to learn. But now I had to know the truth. I studied the Historical Jesus, the history of Christianity, the Bible, theology, and the philosophy of religion. Almost everything I read — even the books written by conservative Christians — gave me more reason to doubt, not less. What preachers had taught me from the pulpit was not what they had learned in seminary. My discovery of the difference had just the effect on me that conservative Bible scholar Daniel B. Wallace predicted:

The intentional dumbing down of the church for the sake of filling more pews will ultimately lead to defection from Christ.

I started to panic. I felt like my best friend — my source of purpose and happiness and comfort — was dying. And worse, I was killing him. If only I could have faith! If only I could unlearn all these things and just believe. I cried out with the words from Mark 9:24, "Lord, help my unbelief!"

I tried. For every atheist book I read, I read five books by the very best Christian philosophers. But the atheists made plain, simple sense, and the Christian philosophers were lost in a fog of big words that tried to hide the weakness of their arguments.

I did everything I could to keep my faith. But I couldn’t do it. I couldn’t force myself to believe what I knew wasn’t true. So I finally let myself whisper the horrifying truth out loud: There is no God.

I told my dad, and he said I had been led astray because I was arrogant to think I could get to truth by studying — I was "relying too much on my own strength." Humbled and encouraged, I started a new quest to find God. I wrote on my blog:

I’ve been humbled. I was “doing discipleship” in my own strength, because I thought I was smart enough and disciplined enough. [Now] having surrendered my prideful and independent ways to him, I can see how my weakness is God’s strength.

I’ve repented. I was deceived because I did not let the Spirit lead me into truth. Now I ask for God’s guidance in all quests for knowledge and wisdom.

I feel like I’ve been born again, again.

It didn’t last. Every time I reached out for some reason — any reason — to believe, God simply wasn’t there. I tried to believe despite the evidence, but I couldn’t believe a lie. Not anymore.

No matter how much I missed him, I couldn’t bring Jesus back to life.

 

New Joy and Purpose

Eventually I realized that millions of people have lived lives of incredible meaning, morality, and happiness without gods. I soon realized I could be more happy and moral without God than I ever was with him.

In many ways, I regret wasting more than 20 years of my life on Christianity, but there are a few things of value I took from my life as an evangelical Christian. I know what it’s like to be a true believer. I know what it’s like to fall in love with God and serve him with all my heart. I know what it’s like to experience his presence. I know what it’s like to isolate one part of my life from reason or evidence, and I know what it’s like to think that is a virtue. I know what it’s like to be confused by the Trinity, the failure of prayers, or Biblical contradictions but to genuinely embrace them as the mystery of God. I know what it’s like to believe God is so far beyond human reason that we can’t understand him, but at the same time to fiercely believe I know the details of how he wants us to behave.

I can talk to believers with understanding. I've experienced God the same way they have.

Perhaps more important, I have a visceral knowledge that I can experience something personally, and be confident of it, and be completely wrong about it. I also have a gut understanding of how wonderful it can be to just say "oops" already and change your mind.

I suspect this is why it was so easy for me, a bit later, to quickly change my mind about free will, about metaethics, about political libertarianism, and about many other things. It was also why I became so interested in the cognitive science of how our beliefs can get so screwy, which eventually led me to Less Wrong, where I finally encountered that famous paragraph by I.J. Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

I remember reading that paragraph and immediately thinking something like: Woah. Umm... yeah... woah. That... yeah, that's probably true. But that's crazy because... that changes fricking everything.

So I thought about it for a week, and looked up the counterarguments, and concluded that given my current understanding, an intelligence explosion was nearly inevitable (conditional on a basic continued progress of science) and that everything else I could spend my life working on was trivial by comparison.

So I mostly stopped blogging about philosophy of religion, read through all of Less Wrong, studied more cognitive science and AI, quit my job in L.A., and moved to Berkeley to become a visiting fellow with Singularity Institute.

 

The Level Above My Own

My move to Berkeley was a bit like the common tale of the smartest kid in a small town going to Harvard and finding out that he's no longer the smartest person in the room. In L.A., I didn't know anyone as devoted as I was to applying the cognitive science of rationality and cognitive biases to my thinking habits (at least, not until I attended a few Less Wrong meetups shortly before moving to Berkeley). But in Berkeley, I suddenly found myself among the least mature rationalists in my social world.

There is a large and noticeable difference between my level of rationality and the level of Eliezer Yudkowsky, Carl Shulman, Anna Salamon, and several others. Every week I learn new rationality techniques. Friends help me uncover cached beliefs about economics, politics, and utilitarianism. I've begun to use the language of anti-rationalization and Bayesian updates in everyday conversation. In L.A. I had become complacent because my level of rationality looked relatively impressive to me. Now I can see how far above my level humans can go.

I still have a lot to learn, and many habits to improve. Living in a community with rationalist norms is a great way to do those things. But a 4-year journey from evangelical Christian missionary to Singularity Institute researcher writing about rationality and Friendly AI is... not too shabby, I suppose.

And that's why I'm glad some people are writing about atheism and the basics of rationality. Without them, I'd probably still be living for Jesus.

308 comments

Comments sorted by top scores.

comment by Dr_Manhattan · 2011-09-08T19:17:02.249Z · score: 27 (33 votes) · LW(p) · GW(p)

Luke is really being too humble here. Clearly the events up to the atheist realization happened in the first 15 minutes of his existence, given a reasonable allowance for all the sources he read for his articles.

comment by Manfred · 2011-09-10T01:47:16.005Z · score: 20 (20 votes) · LW(p) · GW(p)

Are there also lots of "undramatic" people like me? Every one of these personal stories I see involves sadness, epiphany, that sort of thing, but is that publication bias or am I unusual?

comment by orthonormal · 2011-09-11T14:37:07.223Z · score: 11 (11 votes) · LW(p) · GW(p)

I think it has to do with the "leaving the tribe" aspect more than anything. Those of us who became devout in one of the more serious religions (that is, religions that view everyone else as a spectrum from "good but deeply flawed" to "hellbound") had that religion encompass most of our social world, and so in order to leave it we had to face the prospect of ostracism from all the people we cared about. The evolutionary pressures to never get ostracized make for a lot of subconscious bias to fight, and a pretty dramatic tale.

If your conversion was undramatic, therefore, I conjecture that you didn't have lots of friends or family who might have abandoned you if you stopped being religious.

comment by bigjeff5 · 2011-10-13T06:01:27.697Z · score: 7 (7 votes) · LW(p) · GW(p)

I've been an atheist for about a year now, but I still haven't "come out" of the atheist closet with my parents yet. They are southern baptist, and I know it will devastate them - my mom especially.

My own break with Christianity was a light switch moment (more like turning out the last light before leaving the place for good kind of light switch moment) that happened while I was watching the Discovery Channel, of all things. I'd been raised with the hard-line young earth, all-evidence-for-evolution-is-fabricated, fire and brimstone style belief. My faith had been eroding for almost a decade as I tried to rationalize the existence of God, but it didn't really click until I saw a bunch of little Japanese mudskippers crawling around in the mud with their elongated fins, the very picture of an evolutionary transitional species that I had been taught since I was a kid could not exist. I just thought "Well, that's it then. I can't honestly believe Christianity any more, can I?" I think I actually let out a sigh at some point, but that may just be my mind filling in details for dramatic effect.

Really, my true belief had been gone since probably some time in high school. That was just the last straw that forced me to give up my belief in belief. Sort of like finally letting go of the rope, expecting to fall to your death, and discovering you were only a few inches from solid ground after all.

comment by orthonormal · 2011-10-13T14:13:19.612Z · score: 4 (4 votes) · LW(p) · GW(p)

Sort of like finally letting go of the rope, expecting to fall to your death, and discovering you were only a few inches from solid ground after all.

I like this analogy. I think I'm going to steal it.

comment by wedrifid · 2011-09-14T03:38:39.033Z · score: 4 (4 votes) · LW(p) · GW(p)

(that is, religions that view everyone else as a spectrum from "good but deeply flawed" to "hellbound")

That isn't exactly a spectrum. There are serious and sincere believers I have met who are forthright with the 'hellbound' prediction while also being far less judgemental than others who say 'good but deeply flawed'. "Hellbound" is a prediction about future consequences, not a personal criticism.

comment by Will_Newsome · 2011-09-14T04:35:09.083Z · score: 0 (2 votes) · LW(p) · GW(p)

Indeed, I had two close friends in high school who predicted I was definitely going to hell. One academic liberal, one fundamentalist conservative. (It didn't come up much.)

comment by orthonormal · 2011-09-14T04:27:38.717Z · score: 0 (0 votes) · LW(p) · GW(p)

I meant 'religions with claims to exclusivity', basically. I don't think anyone today worries that they'll lose their social world if they leave their Unitarian church.

But yes, the relationship between theology and arrogance isn't quite as simple as some might think.

comment by Manfred · 2011-09-11T15:07:34.168Z · score: 4 (4 votes) · LW(p) · GW(p)

Conjecture correct.

comment by Logos01 · 2011-09-28T17:02:56.111Z · score: 5 (5 votes) · LW(p) · GW(p)

I grew up atheist. Without a story to tell, I've got nothing to publish. I would agree with the publication bias conjecture.

comment by Jack · 2011-09-11T15:03:51.830Z · score: 4 (4 votes) · LW(p) · GW(p)

"Oh. There are people who aren't sure about God? They're called agnostics? Huh, yeah, I think that's what I am."

comment by ArisKatsaris · 2011-09-11T22:40:04.374Z · score: 3 (3 votes) · LW(p) · GW(p)

Just publication bias, I think.

Personally I just gradually went from "believing by default because of what school and family were telling me" as a kid, to "believing mostly, but not all of religion" in junior high, to "believing very little of it must be true" in high school, to "vaguely hoping there's some just and merciful order in the universe" in college, to being an atheist now.

There was hardly any drama at all, as far as I can recall.

comment by wedrifid · 2011-09-11T04:45:17.023Z · score: 3 (3 votes) · LW(p) · GW(p)

Are there also lots of "undramatic" people like me?

Definitely. My story is much the same as Luke's, but I was a whole heap more chill the whole way through. Although, come to think of it, if I did publish, my style of writing is such that it would come out seeming dramatic anyway.

comment by arundelo · 2011-09-10T18:29:51.839Z · score: 3 (3 votes) · LW(p) · GW(p)

My deconversion was undramatic too.

comment by AdeleneDawner · 2011-09-10T21:29:59.252Z · score: 2 (2 votes) · LW(p) · GW(p)

I assume it's publication bias, based on the fact that dramatic conversions are easier to write about - not only do they make better stories, but the details of that kind of thing are easier to remember. (I'm also in the 'undramatic' category.)

comment by Nick_Beckstead · 2011-09-11T14:11:36.029Z · score: 1 (1 votes) · LW(p) · GW(p)

Dramatic deconversion here.

comment by Bill_McGrath · 2011-09-10T09:58:10.096Z · score: 1 (1 votes) · LW(p) · GW(p)

I was terrified of Hell when I was younger, so it was a while before I was able to admit my doubts to myself, and my deconversion was a gradual process; but it wasn't particularly dramatic. I felt a bit sad a few times, and a bit guilty, but by the age of 17 I was an atheist and not too worried about it.

So it may be publication bias, yes, but that's only two examples. I post quite regularly on an atheism & agnosticism forum, I might put a poll asking this question.

comment by Virge · 2011-09-29T13:52:36.298Z · score: 0 (0 votes) · LW(p) · GW(p)

Undramatic for me too.

If you've got a talent that keeps you very popular within a group, it's very easy to get sucked into being what those admiring people want you to be. Being bright, clear-thinking, eloquent, confident (and a musician) moves you very easily into a leadership position, and builds the feeling of responsibility for the welfare of the group.

It took me too long to commit to acknowledging my accumulated doubts and misgivings and examine them in anything other than a pro-Christian light. I had enough religious cached thoughts in an interconnected self-supporting web that doubting any one of them was discouraged by the support of the others. However, I was spending more of my time aware of the dissonance between what I knew and what I believed (or, as I later realised, what I was telling myself I believed).

I ended up deciding to spend a few months of my non-work time examining my faith in detail -- clearing the cache, and trying to understand what it was that made me hold on to what I thought I believed. During that time I gradually dropped out of church activities.

I look back on the time and see it as a process of becoming more honest with myself. Had I tried to determine what I really believed by looking at what I anticipated and how that influenced my behaviour, I'd have realised a lot earlier that my true beliefs were non-supernatural. I'd just been playing an expected role in a supportive family and social group, and I'd adjusted my thinking to blend into that role.

comment by Thomas · 2011-09-11T22:19:18.687Z · score: 0 (0 votes) · LW(p) · GW(p)

I was always an atheist. But I saw the drama of atheization a generation before my own. Fifty generations before that, my pagan ancestors embraced Christianity.

comment by MBlume · 2011-09-11T14:17:37.353Z · score: 0 (0 votes) · LW(p) · GW(p)

Daria here -- I sobbed aloud the first time I read that story because of how strongly I identified with her. No family troubles to speak of once I deconverted, but I did lose a girlfriend to it.

comment by XiXiDu · 2011-09-08T18:07:37.200Z · score: 18 (20 votes) · LW(p) · GW(p)

So I decided I should try to find out who Jesus actually was. I began to study the Historical Jesus. What I learned, even when reading Christian scholars, shocked me.

"Whoso wishes to grasp God with his intellect becomes an atheist." — Nikolaus Ludwig von Zinzendorf

It seems you abandoned Christianity for the right reasons. Few are those whose disbelief is the result of extensive studies and advanced knowledge, I'm certainly not one of them.

comment by lukeprog · 2011-09-08T18:32:30.080Z · score: 4 (4 votes) · LW(p) · GW(p)

It seems you abandoned Christianity for the right reasons.

Well, kind of. My reasons for rejecting supernaturalism are much better informed than when I originally left theism behind. I didn't know about technical explanation, Bayesianism, or Solomonoff induction when I lost my faith.

comment by orthonormal · 2011-09-08T20:47:31.176Z · score: 7 (7 votes) · LW(p) · GW(p)

Well, you've been able to understand the reasons much more clearly than you first did, but they're still essentially the same reasons. It's not as high a barrier as understanding quantum mechanics, for instance; once you just stop sabotaging your mental processes, atheism is the obvious conclusion.

Not to say there's anything easy about the first part of that, of course! I'm just saying it sounds like you'd advanced far enough there at the time you left religion.

comment by jubydoo · 2011-09-08T20:52:43.185Z · score: 4 (4 votes) · LW(p) · GW(p)

It's good to know I'm not the only one. I can give a good argument against the existence of God these days, but when I first walked away from religious belief it was just a vague sense of "this is all BS".

I'm still new around here, and I'm still learning how to really be rational, but after being the smartest guy in the room for so long it's nice to learn that there's still room to grow. On that note, what is Solomonoff induction?

comment by lukeprog · 2011-09-08T20:56:18.038Z · score: 1 (1 votes) · LW(p) · GW(p)

Here.

comment by [deleted] · 2011-09-08T21:07:37.433Z · score: 7 (7 votes) · LW(p) · GW(p)

Though I like Shane Legg's formal explanation, it's not very kind to people without mathematical inclinations. I think starting with Eliezer's post on Occam's Razor and Solomonoff Induction would be a much gentler introduction.

comment by lessdazed · 2011-09-08T23:22:35.703Z · score: 0 (0 votes) · LW(p) · GW(p)

I like this one.

If you fall off your chair the first time you read section 8.2, you're doing it right. Alternatively, if you think to yourself "Obviously..." you are at a far deeper level of amateur understanding than I am.

I didn't find the sections had to be read in order necessarily, so if one is obscure you could skip around.

comment by XiXiDu · 2011-09-08T19:19:26.200Z · score: 3 (5 votes) · LW(p) · GW(p)

I didn't know about technical explanation, Bayesianism, or Solomonoff induction when I lost my faith.

I still don't; my comprehension is at best vague. My knowledge of history, evolution, and physics is virtually non-existent. The main reason I don't believe in a god is that a universe with a god seems less likely than one without. Not because I have studied evolution and made sense of its mathematical and conceptual foundations, but simply because all other available explanations and their implications sound incredibly unlikely, even less probable than a global conspiracy among rival scientists to reach a consensus on something as complicated as the theory of evolution.

The original reason for me to abandon religion was that I perceived the Christian god to be morally bankrupt.

comment by loup-vaillant · 2011-09-13T16:19:12.258Z · score: 0 (0 votes) · LW(p) · GW(p)

I didn't know about […] Bayesianism […]

Did you? My guess is that, at an intuitive level, you were already close. Quoting from your post:

Historical investigations use three basic criteria to determine the probability of recorded events.

There were some flaws, of course, like when you said that "miracles are, by definition, highly improbable" (which probably doesn't distinguish between prior and posterior probabilities, and maybe (I'm not sure) involves some mind projection fallacy).

(Of course, that could just be me projecting my atheist slant to your believer's post. I'm not strong enough to judge that.)

comment by [deleted] · 2011-09-08T22:56:33.932Z · score: 17 (21 votes) · LW(p) · GW(p)

.

comment by Jayson_Virissimo · 2011-09-29T15:13:52.442Z · score: 2 (2 votes) · LW(p) · GW(p)

I've got a similar story. A high school friend's grandmother took a trip to Sri Lanka (their home country) to visit a "healer" (they were Buddhist, but I don't know which kind) in a last-ditch effort to avoid death from cancer. She came back without her tumor. Can I explain this? No, I can't.

comment by kilobug · 2011-09-29T15:34:34.034Z · score: 10 (10 votes) · LW(p) · GW(p)

Well, we know that spontaneous remissions of cancer do occur; very rarely, but they do occur. One hypothesis is that the immune system finally learns to attack the cancer. Given the huge number of people who, faced with a disease that scientific medicine doesn't know how to cure, turn to prayer or healers, it's not surprising that, statistically, a few spontaneous remissions happen just after such a visit. Especially considering the placebo effect, and the non-negligible links between the efficiency of the immune system and mental state (it's well known that stress diminishes the efficiency of the immune system).

What would be meaningful is not a single case of unexplained spontaneous healing. It's a significant, reproducible, higher-than-placebo, increase in survival rate by a given healer (or a set of healers using a given faith). And that is, as far as I know, not backed by any study (or if it is, please show me the link).
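The base-rate point above can be made concrete with a toy back-of-envelope calculation. The numbers below are illustrative assumptions, not real epidemiological figures:

```python
# Toy illustration of the survivorship argument: with enough desperate
# patients visiting healers, some "miraculous" remissions are expected
# purely by chance. Both numbers are made-up assumptions for illustration.

visitors_per_year = 1_000_000   # assumed: patients who visit a faith healer each year
remission_rate = 1e-4           # assumed: rate of spontaneous remission

expected_remissions = visitors_per_year * remission_rate
print(expected_remissions)  # 100.0 apparent "miracle cures" per year
```

Even with zero healing power, such a process generates a steady supply of vivid success stories, while the vastly more numerous failures go unreported.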

comment by Vaniver · 2011-09-29T23:20:25.430Z · score: 11 (11 votes) · LW(p) · GW(p)

It's a significant, reproducible, higher-than-placebo, increase in survival rate by a given healer (or a set of healers using a given faith). And that is, as far as I know, not backed by any study (or if it is, please show me the link).

That said, if you get the placebo effect from going to a faith healer, do it.

comment by dlthomas · 2011-09-29T23:37:16.643Z · score: 5 (7 votes) · LW(p) · GW(p)

Unless you can get it cheaper ways...

comment by Technoguyrob · 2011-09-30T12:13:37.283Z · score: 8 (8 votes) · LW(p) · GW(p)

Right. Moreover, of all the people who read GabrielDuquette's comment and know someone who had cancer and went to a faith healer, I imagine only the ones with a story like Jayson_Virissimo's will post a reply. Failed attempts are not reported. If you are acquainted with someone who experienced a failed faith healing, you are likely not even aware of it! (If it was successful, they would have lauded it.) An easy Bayesian estimate makes the presence of Jayson_Virissimo's comment unsurprising.

Given a sufficiently non-zero probability of spontaneous remission, this argument explains my lack of surprise at such a story. This is an important addition to your argument (and, I feel, indeed the crux), because a non-zero probability is not satisfactory. Consider if we had many people posting such claims; with sufficiently low probabilities of spontaneous remission, we would not expect such a density of claims.

comment by Jayson_Virissimo · 2011-09-30T13:10:18.064Z · score: 0 (0 votes) · LW(p) · GW(p)

That sounds about right.

comment by Bugmaster · 2011-09-30T21:17:58.665Z · score: 2 (2 votes) · LW(p) · GW(p)

I would explain it as a spontaneous remission followed by the post hoc fallacy.

Edit: assuming, of course, that the tumor was actually gone, as DSimon points out.

comment by Vladimir_Nesov · 2011-09-30T21:26:29.211Z · score: 4 (4 votes) · LW(p) · GW(p)

I would explain it as a spontaneous remission followed by the post hoc fallacy.

Surely you mean, causing post hoc fallacy?

comment by Bugmaster · 2011-09-30T22:07:01.835Z · score: 1 (5 votes) · LW(p) · GW(p)

If I do that, I may be in danger of committing the post hoc fallacy :-)

comment by dlthomas · 2011-09-30T22:18:37.638Z · score: 0 (0 votes) · LW(p) · GW(p)

That was the joke...

comment by Bugmaster · 2011-09-30T22:52:07.941Z · score: 1 (1 votes) · LW(p) · GW(p)

Indeed.

comment by DSimon · 2011-09-29T15:26:59.917Z · score: 1 (3 votes) · LW(p) · GW(p)

How did you know that the tumor was eliminated? That is, was there a before-and-after x-ray clearly showing the difference?

comment by Jayson_Virissimo · 2011-09-30T10:02:21.701Z · score: 2 (2 votes) · LW(p) · GW(p)

How did you know that the tumor was eliminated? That is, was there a before-and-after x-ray clearly showing the difference?

I don't know it was eliminated. My only evidence is my friend's testimony, his track record of truth-telling, and the fact that he was an atheist (and, therefore, unlikely to make up mystical stories to promote his religion).

comment by DSimon · 2011-09-30T21:10:20.825Z · score: 0 (0 votes) · LW(p) · GW(p)

Then what I really want to know is: how did your friend know the tumor was eliminated?

comment by Will_Newsome · 2011-09-10T09:34:20.004Z · score: 1 (5 votes) · LW(p) · GW(p)

I'd like to second the question. Computational decision theoretic cosmology doesn't rule out statistical miracles and more importantly it's best to compute likelihood ratios and posteriors separately. E.g. see: http://www.overcomingbias.com/2009/02/share-likelihood-ratios-not-posterior-beliefs.html

comment by Hyena · 2011-09-12T13:09:53.412Z · score: 10 (10 votes) · LW(p) · GW(p)

When I read these stories, I always feel guilty. I became a non-theist by simply ceasing to believe; there was no grappling with theological or historical issues, I just stopped believing. A bit flipped one day and I've never been able to believe since; I can't even conceptually access my pre-flip self.

Sometimes I wonder, though, if my version isn't more true. Everyone else could also be subject to the flip but seek to rationalize it. Later I went through many attempts to "find religion" but couldn't. I can't help but wonder if lukeprog's research wasn't a similar process of post-flip grappling rather than its source.

comment by Spurlock · 2011-09-09T15:06:19.117Z · score: 7 (7 votes) · LW(p) · GW(p)

Eventually I realized that millions of people have lived lives of incredible meaning, morality, and happiness without gods. I soon realized I could be more happy and moral without God than I ever was with him.

You sort of glossed over this, but it seems like the bit that a lot of people have trouble with (and have trouble realizing that it's even possible). There are lots of arguments for this position, but I'm just curious if there were any particular things that were "Aha" moments for you here.

ETA: Do you think you could have come to this position before rejecting God? That you could have said "even though there is a God, it would still be possible to be moral and happy and purposeful if there weren't"? I'm curious how easy it is to get people to realize this before it's their last resort for preserving morality etc.

comment by Yossarian · 2011-09-29T03:22:45.149Z · score: 4 (4 votes) · LW(p) · GW(p)

As an atheist who attended a Catholic high school, one of the questions often leveled at me was what exactly prevented me from going on murdering rampages without a religious morality to keep me in check. I got this question from both students and faculty (usually as part of the class discussion in religion class). So in my experience at least, it is difficult for religious people to understand the morality of a non-religious person. I would speculate that this is because they, on some level, didn't believe in God (or at least the Catholic God) and were instead believing in belief, feeling that the morality that came with the dogma was necessary and beneficial to leading a proper life.

comment by Desrtopa · 2011-09-29T03:35:00.609Z · score: 2 (2 votes) · LW(p) · GW(p)

How did you usually answer when they asked that, and how was your answer received?

comment by Yossarian · 2011-09-29T04:07:58.844Z · score: 0 (0 votes) · LW(p) · GW(p)

At the time, I made a distinction between ethics and morality that I would now say is probably more semantic than definitional. But, IIRC, they defined morality as a code of behavior with a religious basis. So I used the term ethics to say that I followed a code of behavior that didn't follow from religious belief.

Essentially, I made the point that just because I didn't believe I would go to hell for killing somebody didn't mean that I had any desire to. Or that the prospect of prison and general rejection from society didn't serve as an adequate deterrent. I don't remember specifically, but I might have made the point that the Golden Rule doesn't have to be tied to a religious belief and is a pretty self evident truth on its own.

As for their response, I mostly remember them moving on to a different topic (or at least ceasing to focus on me for that moment). I always thought about my answers and tried to give an honest answer, but I actively avoided giving them the answers they were expecting or wanted, since they were usually leading questions designed to get me to agree with them in some basic way.

comment by TheOtherDave · 2011-09-09T17:46:39.713Z · score: 2 (2 votes) · LW(p) · GW(p)

I know plenty of religious folk who freely acknowledge that there exist non-religious moral folk, and accept that it follows that belief in God (as they understand God) is not crucial to living a moral life.

Mostly they seem to have arrived at that conclusion by observing the behavior of other people who don't share their understanding of God, and concluding that it sure does seem moral to them.

That said, I also know religious folk who have made that same observation and conclusion, but nevertheless continued to believe that there is no living a moral life without sharing their understanding of God, so it's by no means a given.

comment by lukeprog · 2011-09-09T16:36:35.049Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, I think I could have realized this before deconversion. Plenty of people do; perhaps most. I was just too thoroughly isolated.

comment by kilobug · 2011-09-09T09:38:02.989Z · score: 7 (7 votes) · LW(p) · GW(p)

Very interesting story. Since I was born into an atheist family and never believed in God, I lack any similar experience, and somehow I regret it, because that experience must be a great help in changing your mind about other topics. The closest experience I have is the Santa Claus thing, but I was such a young child that I only have confused memories of how I started to doubt. But the process looks similar: there is a nice Santa Claus person who gives me presents; I start to doubt he's real and feel bad because I don't want the "magic of Christmas" to go away; and then I realize that there is something even more "magical" than elves and a flying Santa Claus going faster than light: the love of my parents, who spent days going from shop to shop to find the silly present I asked for in my letter to Santa Claus that the teacher gave them. It has the three phases: belief in something supernatural that makes you happy, doubt and feeling sad, and then realizing that reality makes you even happier. But it's so lost in the mists of early childhood that it doesn't have the potency you describe.

Oh, on another topic, I'm still doubtful about the "Singularity". « An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion' » sounds like a logical jump with no foundation to me. Let me try to explain: assume we have a measure of intelligence as a single real number, I(M). An intelligent machine can design a better version of itself, so we have I(M_n+1) > I(M_n). That's a strictly monotonically increasing sequence, and that's all we know. A strictly monotonically increasing sequence can have a finite limit (like 1+1/2+1/4+1/8+... has a limit of 2), or can grow towards infinity very slowly (like log(n)). How do we know that designing a better intelligence is not an exponentially difficult task? How do we know that above a given level the formula doesn't look like I(M_n+1) = I(M_n) + 1/n, because every increase in intelligence is so much harder to make? I guess there is an answer to that, but I couldn't find it in the SingInst FAQ... does any of you have a pointer to an answer to that question?
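The two possibilities kilobug describes are easy to see numerically. A toy sketch (the update rules below are illustrative assumptions only, not models of real intelligence growth):

```python
# Toy illustration of kilobug's point: a strictly increasing
# sequence I(M_0) < I(M_1) < ... need not "explode".

def bounded_growth(steps):
    """I(M_{n+1}) = I(M_n) + 1/2^n: increases forever,
    but the limit is 2 (a geometric series)."""
    i = 0.0
    for n in range(steps):
        i += 1.0 / 2 ** n
    return i

def slow_growth(steps):
    """I(M_{n+1}) = I(M_n) + 1/(n+1): unbounded, but grows
    only logarithmically (the harmonic series)."""
    i = 0.0
    for n in range(steps):
        i += 1.0 / (n + 1)
    return i

print(bounded_growth(1000))  # ~2.0: more steps never pass 2
print(slow_growth(1000))     # ~7.5: unbounded, but glacial
```

Both sequences satisfy the premise "each machine is better than the last," yet neither looks anything like an explosion — which is exactly the gap in the argument the comment points at.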

comment by [deleted] · 2011-09-09T11:11:29.154Z · score: 10 (10 votes) · LW(p) · GW(p)

How do we know that designing a better intelligence is not an exponentially difficult task?

Well, the answer could simply be, "you're right; we don't know that". However, I think there is evidence that an ultraintelligent machine could make itself very intelligent indeed.

The human mind, though better at reasoning than anything else that currently exists, still has a multitude of flaws. We can't symbolically reason at even a millionth the speed of a $15 cell phone (and even if we could, there are still unanswered questions about how to reason), and our intuition is loaded with biases. If you could eliminate all human flaws, you would end up with something more intelligent than the most intelligent human that has ever lived.

Also, I could be mistaken, but I think people who study rationality and mathematics (among other things?) tend to report increasing marginal utility: once they understand a concept, it becomes easier to understand other concepts. A machine capable of understanding trillions of concepts might be able to learn new ones very easily compared to a human.

comment by lessdazed · 2011-09-09T11:16:36.470Z · score: 7 (7 votes) · LW(p) · GW(p)

If you could eliminate all human flaws, you would end up with...

You might end up with nothing. You really have to start over and build an inference machine vastly different from ours.

comment by Omegaile · 2012-11-28T03:11:22.851Z · score: 0 (0 votes) · LW(p) · GW(p)

If you could eliminate all human flaws, you would end up with something more intelligent than the most intelligent human that has ever lived

This seems true...but it doesn't argue against a bounded intelligence, just that the bound is very far.

comment by JenniferRM · 2011-09-10T07:05:55.792Z · score: 5 (5 votes) · LW(p) · GW(p)

There are lots of words on the subject in the FOOM debate, but that debate (1) is full of "intuition, examples, and hand waving" on both sides, (2) ended with neither side convincing the other, and (3) produced no formal coherent treatise on the subject where evidence could be dropped into place to give an unambiguous answer that a third party could see was obviously true. It is worth a read if you're looking for an intuition pump, not if you want a summary answer.

If you want to examine it from another angle to think about timing and details and so on, you might try using The Uncertain Future modeling tool. If you have the time to feed it input, I'm curious to know what output you get :-)

comment by kilobug · 2011-09-10T16:09:35.784Z · score: 4 (4 votes) · LW(p) · GW(p)

It seems to me that I'm both pessimistic and optimistic (or anyway, not well calibrated). I got:

  • Catastrophe by 2070 : 65.75%

  • AI by 2070 : 98.3%

I would have given much less to both (around 25-33% for catastrophe, and around 50-75% for AI) if you had asked me directly... so I'm badly calibrated, either in the way I answered the individual questions or in my final estimate (most likely both...). I'll have to read the FOOM debate and think more about the issue. Thanks for the pointers anyway.

(Btw, it's painful, the applet doesn't support copy/paste...)

comment by scientism · 2011-09-12T02:01:17.077Z · score: 5 (5 votes) · LW(p) · GW(p)

Would you credit your upbringing with giving you the fervour and energy you now bring to studying rationality? Possibly also the virtue of scholarship? (I don't mean to suggest anything negative by this; just that you attack problems in a very methodical way that surely requires great feats of willpower and I wonder what the source of that is.)

comment by lukeprog · 2011-09-12T16:16:37.847Z · score: 2 (2 votes) · LW(p) · GW(p)

Yes, probably.

comment by metaweta · 2011-10-01T23:21:23.334Z · score: 3 (3 votes) · LW(p) · GW(p)

Lukeprog: how has this transition affected your relationship with your parents, siblings, and extended family? Have any readers had similar transitions later in life, with spouse and children?

comment by D-Rock · 2011-10-01T06:31:12.402Z · score: 3 (3 votes) · LW(p) · GW(p)

lukeprog - I'm curious about when you 'felt the presence of God'. I'll often have a discussion with someone and they tell me that they can 'feel God' (usually followed by accusing me of being an atheist because I don't want to feel Him.)

While you were a believer, what did it feel like to feel God's presence? What did the tingling feel like (followed by sweating) with the Holy Spirit? Now that you are not a believer, how do you explain what you were feeling back then?

It's an area I would like to comment on when speaking with theists, but I have no frame of reference. I grew up religion-neutral, and became an atheist to find out 'what this whole God thing was about'.

comment by summerstay · 2011-10-01T10:09:12.305Z · score: 3 (3 votes) · LW(p) · GW(p)

Have you ever heard of musical frisson? Sometimes you hear a piece of music resolve, and it feels right, and that rightness makes you feel pleasure, sometimes described as a chill running down your spine. You can get the same emotion from learning a new idea that overturns a lot of what you knew and inspires a manic rush of ideas. Or if you've been in a room full of people listening to a speaker seriously talk about a deeply personal experience, the room gets totally quiet and you realize that everyone in the room is feeling the same things as you at the same time. These are the kinds of experiences people describe as feeling the spirit. It's an intense feeling that something is true, and good, and important.

comment by HSala · 2011-09-13T06:14:43.218Z · score: 3 (3 votes) · LW(p) · GW(p)

This is such a feel-good tale for atheists. Growing up as one of three Jewish kids in a predominantly Catholic town, I've always known my parents' beliefs are not everyone else's parents' beliefs. I think that's made it much, much easier to discard them. When you're placed outside the norm to begin with, it doesn't hurt to switch to a different outside-the-norm belief. But it wasn't like that for you, and for that I applaud you even more.

comment by RobertLumley · 2011-09-08T23:51:25.071Z · score: 3 (5 votes) · LW(p) · GW(p)

That sounds familiar...

comment by lukeprog · 2011-09-09T02:36:45.699Z · score: 6 (6 votes) · LW(p) · GW(p)

Indeed. For those unfamiliar, here is the version from 2008, from before I had heard of Overcoming Bias, Less Wrong, or intelligence explosion.

comment by RobertLumley · 2011-09-09T04:14:11.473Z · score: 5 (5 votes) · LW(p) · GW(p)

That's not what I meant, actually; I meant it sounded familiar because it's very similar to my life...

comment by lukeprog · 2011-09-09T05:56:45.603Z · score: 0 (0 votes) · LW(p) · GW(p)

Ah! Cool.

comment by DSimon · 2011-09-11T04:37:08.923Z · score: 1 (1 votes) · LW(p) · GW(p)

Did you also once post this on the old De-Conversion.net blog? (Which I see now is unfortunately kaput.)

comment by lukeprog · 2011-09-11T04:40:59.455Z · score: 1 (1 votes) · LW(p) · GW(p)

The 2008 version was pasted by other people to many other websites.

comment by Aharon · 2011-09-18T14:36:03.849Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm curious who the good Christian philosophers you read were.

comment by lukeprog · 2011-09-28T18:31:14.584Z · score: 0 (0 votes) · LW(p) · GW(p)

Many of them can be found in one bunch in The Blackwell Companion to Natural Theology.

comment by Aharon · 2011-10-09T07:18:05.477Z · score: 0 (0 votes) · LW(p) · GW(p)

Wow, that is a lot of stuff to go through. Thanks!

comment by Vladimir_Nesov · 2011-09-10T10:29:29.893Z · score: 2 (6 votes) · LW(p) · GW(p)

I wonder how much similarity there is, on psychological level (and in terms of the process of deconversion), between religious belief and belief in the Mighty Status Quo that would Deliver Us From Harm...

comment by Will_Newsome · 2011-09-10T11:48:20.442Z · score: 8 (10 votes) · LW(p) · GW(p)

What is belief in the Mighty Status Quo? Are there people who truly have faith in the status quo like there are some that truly have faith in their religion? (Or do you mean belief in belief in the status quo / religion?)

comment by wedrifid · 2011-09-11T04:44:07.490Z · score: 2 (2 votes) · LW(p) · GW(p)

(Second the question!)

comment by [deleted] · 2011-09-15T08:43:01.566Z · score: 1 (1 votes) · LW(p) · GW(p)

It could be interpreted as pointing out the strength of standard background ideology. Having my older relatives explain how seriously basically everyone under a certain age took communism was really eye-opening to me as a teen. Even more odd was how only the elderly pointed out that the current system too may pass someday, and that it isn't wise to attach oneself to it too much, while some of my younger relatives (still old enough to have grown up under communism!) seem to assume that this time it really is the "end of history".

Societies used to have a background religion no one even considered questioning, and those who did kept really quiet about it; many still do, which is why we have countries that are, say, 97%+ Muslim. But nearly all societies still have background ideological foundations on which everyone agrees.

The ideological foundations always reinforce existing dominant power structures, even if they perhaps don't paint the positions they support as dominant. Counter-revolutionaries and wreckers are needed to explain failure, after all. You see, our society doesn't suck because its ideals are impractical or flawed; it sucks because we don't live up to them enough, and are prevented from doing so by those nasty people who still have too much power, though they are so clever about it that we can't really say how they do it. We are still struggling for a [adjective] society, but aren't you glad you now have us on your side?

I still get goose bumps realizing just how much my world view may be deformed by the current social order that I've internalized.

comment by [deleted] · 2011-09-09T14:26:07.198Z · score: 2 (2 votes) · LW(p) · GW(p)

Well written and thought out. Thanks for sharing. I have a very similar story; for me it has meant a dramatic difference in the way that I approach life. Before my rejection of faith, I was plagued by a feeling of impending doom. I carried the world on my shoulders as if the fate of all these "souls" relied upon my efforts. I was continually depressed and struggling with negative emotions and thoughts. It turns out it can be rather upsetting and extremely difficult for a person who thinks and asks questions to maintain a relationship with a being who never reciprocates the effort. After 6 years of slowly picking my way through all of the bogus arguments in favor of faith, I began to go through the typical winnowing process (Christian pluralist, pluralist, agnostic, indifferent, atheist). Stumbling upon Sagan's The Demon-Haunted World started me on a process of rearranging my way of thinking and processing information. Truth be told, the gloom of life is gone; I am free to accept reality and embrace it. In rejecting fantasy and embracing reality, I have come to find that the route to peace of mind is not divine favor from an invisible deity, but reasonably approaching disagreeable circumstances with calm thought and reason. Honestly, I have never looked back.

comment by TimFreeman · 2011-09-30T00:25:12.725Z · score: 6 (8 votes) · LW(p) · GW(p)

Before my rejection of faith, I was plagued by a feeling of impending doom.

I was a happy atheist until I learned about the Friendly AI problem and estimated the likely outcome. I am now plagued by a feeling of impending doom.

comment by khafra · 2011-09-30T14:51:17.182Z · score: 3 (3 votes) · LW(p) · GW(p)

As an atheist, I was seriously bothered by the thought of my inevitable, irreversible death in just a few decades. As a Friendly AItheist, I'm seriously bothered by the thought of highly probable astronomical waste, but cheered by the thought of MWI putting more of my future probability mass in universes that turn out really nifty, especially if I can help out with its creation. Of course, unless something like mangling takes place, MWI + quantum immortality is far more horrifying than astronomical waste.

comment by wedrifid · 2011-09-30T06:31:11.580Z · score: 1 (1 votes) · LW(p) · GW(p)

I've got the impending doom but I don't bother with the 'plagued' bit. Why on earth should comprehension oblige me to experience negative emotion or adverse psychological states? That makes no sense.

comment by antigonus · 2011-09-30T14:28:03.407Z · score: 1 (1 votes) · LW(p) · GW(p)

I imagine because the thing you've successfully comprehended could be very, very bad. Not sure if that "obliges" you to feel anything (or if anything ever obliges anyone to feel anything), but if you're actually wondering what the thought process is...

comment by simplicio · 2011-09-12T00:05:15.008Z · score: 1 (1 votes) · LW(p) · GW(p)

Great article, and by the way, I have been listening to episode after episode of your very interesting podcast for a few days now.

A worry about theism/atheism... thinking and writing about that question is indeed worthwhile, for the sake of helping confused people relinquish their confusion. However, it seems to me that there is a point at which it becomes flat-out epistemically dangerous, in the sense that a person writing and thinking about X all the time, even as a critic of X, is going to have their thinking inadvertently shaped by X. One sees this with certain atheists who don't have any opinions about anything except insofar as it relates to the atheism/theism debate. For example, I recall one fellow who could find nothing more germane to a discussion on the ethics of eating meat than a passing comment by some atheist debater on YouTube he had recently heard.

I am certainly not accusing you of this problem, but it is something to watch out for. Religion is psychological candy, after all. (I swear my right hemisphere is a theist.)

comment by lukeprog · 2011-09-12T16:18:43.691Z · score: 10 (10 votes) · LW(p) · GW(p)

Highly agree. My current approach when talking to theists is not to mention atheism at all. I just talk about science and rationality and sociology and so on. If you know enough science and can overcome a few cognitive biases when you're told about them in vivid ways, then theism starts to look ridiculous even when I don't explicitly mention theism. That's a theory, anyway - I haven't tested it carefully.

comment by Logos01 · 2011-09-28T17:01:55.683Z · score: 3 (3 votes) · LW(p) · GW(p)

If you know enough science and can overcome a few cognitive biases when you're told about them in vivid ways,

Interesting anecdote along these lines: just this morning I used the 1960 Wason experiment as an attempt to explain confirmation bias to a coworker. (That's the 'list triplets of numbers; I'll tell you if they fit or don't fit the rule I'm thinking of; your first free example is 2 4 6, which fits' task.) Even after having confirmation bias explained to him as the fact that people tend not to look for ways their beliefs might be wrong (amongst other things), he still only made 'positive' guesses, and came up with "Each number is even and a multiple of the first."

I was fascinated by this.
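For readers unfamiliar with the task Logos01 describes, it can be sketched in a few lines. (Wason's actual hidden rule really was "any ascending sequence"; the coworker-style guesses below are illustrative.)

```python
# Wason's 2-4-6 task: the hidden rule is far broader than the
# seed example "2 4 6" suggests, so guesses designed to *confirm*
# a narrow theory (e.g. "multiples of the first number") all fit
# and teach you nothing; only a would-be rule-breaker is informative.

def hidden_rule(a, b, c):
    """Wason's actual rule: the numbers simply ascend."""
    return a < b < c

# Positive guesses under the "even multiples" theory: all fit...
confirming_guesses = [(4, 8, 12), (10, 20, 30), (6, 12, 18)]
assert all(hidden_rule(*t) for t in confirming_guesses)

# ...but so does a triplet that breaks that theory:
assert hidden_rule(1, 2, 3)
# and descending multiples do NOT fit:
assert not hidden_rule(12, 8, 4)
```

The coworker's behavior — only proposing triplets his theory predicted would fit — is exactly the pattern that makes the narrow theory feel confirmed while never testing it.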

comment by Desrtopa · 2011-09-28T17:19:44.017Z · score: 1 (1 votes) · LW(p) · GW(p)

What was his reaction when he learned that he was wrong?

comment by JoshuaZ · 2011-09-28T17:10:21.563Z · score: 1 (1 votes) · LW(p) · GW(p)

Hypothesis: Your coworker is an idiot. Observation: This hypothesis has little to do with whether or not he is religious.

More seriously, I've given the selection task to people before and no one I've ever encountered does that badly if they've been primed about confirmation bias and similar issues. Even just telling people that it is a puzzle seems to go a long way to them getting the right solution.

comment by Logos01 · 2011-09-28T17:26:30.967Z · score: 0 (0 votes) · LW(p) · GW(p)

I've never been greatly impressed by his intellect, but I would definitely say that he is of at least average intelligence. The field I work in doesn't suffer individuals of significantly poor intelligence (IT/sysadmin), though I freely admit that isn't really saying much. There are some folks to whom the practice of thinking rationally is just... alien. Once you reach somewhere around forty, thinking patterns get pretty firmly set, too.

Additionally, the guy is an English-as-a-second-language speaker, so it might not have been a 'fair trial' for him. I'm trying to be generous (to him), considering I pretty much agree with you.

This hypothesis has little to do with whether or not he is religious.

He's openly atheistic, in fact.

comment by Multiheaded · 2011-12-28T21:20:32.137Z · score: 0 (0 votes) · LW(p) · GW(p)

Luke, I've only just stumbled upon it, and this story is damn near heartbreaking - not in a bad way, no. I've never experienced the entire memeplex of theism from the inside, yet, having found myself bitterly envying the comfort of organized religion during unpleasant times in my life, I feel like I understand what it must've been like for you emotionally.

I must admit that my opinion of your judgment and moral character, shaken by that controversial dating advice post of yours, has now much improved.

comment by [deleted] · 2011-09-08T18:51:45.152Z · score: 0 (0 votes) · LW(p) · GW(p)

This seems to have disappeared from the discussions page? Perhaps some kind of clash between:

Edit: When checking main one must check the 'new' tab.

comment by Vladimir_Nesov · 2011-09-08T18:55:26.980Z · score: 2 (2 votes) · LW(p) · GW(p)

It was moved to Main.

comment by lukeprog · 2011-09-08T20:57:44.172Z · score: 0 (0 votes) · LW(p) · GW(p)

Yeah. I accidentally published it to discussion though I had intended to post it to main, so I moved it.

comment by Pachomius · 2012-01-08T16:24:16.094Z · score: -6 (6 votes) · LW(p) · GW(p)

Do you know the distinction between God and religion?

You can know God exists without religion.

Tell me though what is your concept of God, and your concept of the universe.

comment by Will_Newsome · 2011-09-10T08:40:53.156Z · score: -7 (29 votes) · LW(p) · GW(p)

I like how this is similar to my last few years but in reverse. I spent a year or so diligently studying rationality as a SingInst Visiting Fellow followed by realizing that I was a few levels above nearly any other aspiring rationalist. In the meantime I lost faith in the sanity of humans and decided I basically wasn't on their side anymore, which is a much more complex intrapersonal dynamic than it sounds.

For the last 6 months I've been downright obsessed with "morality", though a less lossy way of putting it is something like "that thing in the middle of justification, decision theory, institutional economics, ontology of agency, computer-science-inspired moral philosophy, teleology & timelessness, physicalism vs. computationalism, &c.".

In the meantime I hit upon the theisms of Leibniz and Aquinas and other semi-neo-Platonistic academic-style philosophers, taking a computational decision theoretic perspective while trying to do justice to their hypotheses and avoiding syncretism. Ultimately I think that academic "the form of the good and the form of being are the same" theism is a less naive perspective on cosmology-morality than atheism is---you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what "right" is, and pretending like you aren't or are only negligibly watched over by a superintelligence---whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution---is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced---you always want to be on the edge of uncertainty, you don't want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.---but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.

Sadly Less Wrong seems to know absolutely nothing about theism, which ends up with me repeatedly facepalming when people feel obliged to demonstrate how incredibly confident they are that theism is stupid and worth going out of their way to signal contempt for. One person went so far as to compare it with modern astrology, which I could only respond to with a mental "what is this i dont even". This was long after I'd lost my faith in the ability of humanity's finest to show off even a smidgen of sanity but it still managed to make me despair. Humans.

Perhaps more important, I have a visceral knowledge that I can experience something personally, and be confident of it, and be completely wrong about it.

Eliezer got it from trying to build uFAI, Wei_Dai got it from cryptography, lukeprog got it from Christianity, I got it from my ex-girlfriend. I feel so contingent.

comment by Kaj_Sotala · 2011-09-10T21:04:39.866Z · score: 41 (41 votes) · LW(p) · GW(p)

Providing a clear explanation of your theories would be useful. You don't seem to even really try, and instead write comments and posts that don't even attempt to bridge the inferential distance. At the same time, you do frequently write content where you talk about how you feel superior to LWers. In other words, you say you're better than us because you don't give us a real chance to catch up with your thoughts.

That's kinda rude.

It also makes one suspect that you either don't actually have a theory that was coherent enough to formulate clearly, or that you prefer to bask in your feeling of superiority instead of bothering to discuss the theory with us lowly LW-ers. Acting in a way to make yourself immune to criticism hardly fits the claim of being "a few levels above nearly any other aspiring rationalist". Rather, it shows that you're failing even the very rudiments of rationalist practice 101.

comment by lessdazed · 2011-09-10T23:56:22.252Z · score: 27 (33 votes) · LW(p) · GW(p)

Acting in a way to make yourself immune to criticism hardly fits the claim of being "a few levels above nearly any other aspiring rationalist". Rather, it shows that you're failing even the very rudiments of rationalist practice 101.

Being levels above in rationalism means doing rationalist practice 101 much better than others as much as being a few levels above in fighting means executing a basic front-kick much better than others.

comment by wedrifid · 2011-09-11T04:38:41.547Z · score: 19 (19 votes) · LW(p) · GW(p)

Being levels above in rationalism means doing rationalist practice 101 much better than others as much as being a few levels above in fighting means executing a basic front-kick much better than others.

To follow the analogy further if you are a few levels above in fighting then you should not find yourself face-planting every time you attempt a front kick. Or, at least, if you know that front kicks are the one weakness in your otherwise superb fighting technique then you don't use front kicks.

comment by katydee · 2011-09-11T04:40:18.903Z · score: 10 (10 votes) · LW(p) · GW(p)

Before I vote on this post, please clarify whether you think being a few levels above in fighting means executing a basic front-kick much better than others.

comment by lessdazed · 2011-09-11T05:25:57.721Z · score: 14 (16 votes) · LW(p) · GW(p)

Ceteris paribus, being better at front-kicking makes one a better fighter. One would probably need mastery of more than the one technique to be considered levels up: rationalism 102, 103, etc. I just used one example of a basic fighting technique because the sentence flowed better that way; I didn't put much time in thinking about and formulating it.

But the point was that no advanced techniques are needed to be many levels above normal. I see now that the comment might imply it's enough to be several levels up with one skill alone. At 45 seconds into this video is a fight between a master of grappling and a regular MMA fighter. If they had made it to the ground together and conscious, Gracie would have won easily. He needed a more credible striking threat so Gomi would have had to defend against that too, and thereby weaken his defense against being taken down.

I meant something like:

I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times. ~ Bruce Lee

I have probably heard that quote before, but wasn't consciously thinking of it.

How do fights end? Not with spinning jumping back-kicks to the head, but with basic moves better executed than basic counters to them. Right cross, arm-bar, someone running away, simple simple.

By analogy, for rationalism I'm emphasizing the connection between basic and advanced rationality mentioned by Kaj_Sotala. If you don't have the basics, you have nothing, and you can't make up for it with moderate facility at doing advanced things.

comment by wedrifid · 2011-09-11T06:02:04.309Z · score: 5 (7 votes) · LW(p) · GW(p)

How do fights end?

If you do it right, the same way they start: A single king hit.

comment by katydee · 2011-09-12T05:22:18.732Z · score: 1 (1 votes) · LW(p) · GW(p)

Gotcha. Upvoted.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-11T01:34:13.988Z · score: 3 (19 votes) · LW(p) · GW(p)

I regret that I only have one upvote to give this comment.

comment by [deleted] · 2011-09-11T01:36:07.836Z · score: 5 (7 votes) · LW(p) · GW(p)

That's why we've given you a karmic wake, brother.

comment by NihilCredo · 2011-09-11T03:57:22.490Z · score: 1 (3 votes) · LW(p) · GW(p)

The technical term is bro.

comment by atucker · 2011-09-15T05:01:30.849Z · score: -1 (1 votes) · LW(p) · GW(p)

Bro as in Kamina.

comment by AdeleneDawner · 2011-09-11T01:46:38.504Z · score: 2 (6 votes) · LW(p) · GW(p)

We should perhaps formalize norms for upvoting based on this kind of comment. In any case, I'm doing so. And then going back to read the context to make sure I agree.

comment by wedrifid · 2011-09-11T04:32:01.764Z · score: 5 (5 votes) · LW(p) · GW(p)

I find that the increased attention given to the context combined with the positive priming is more than enough.

In this case, however, I am finding that the comment backfired. It is Kaj's comment, not lessdazed's. Lessdazed's comment isn't bad as an independent observation, but it does miss the point of its parent. This means Eliezer's "upvote MOAR" comment is a red herring, and I had to downvote it and lessdazed in response where I would otherwise have left them alone.

comment by lessdazed · 2011-09-11T02:09:47.602Z · score: 2 (2 votes) · LW(p) · GW(p)

I have an idea...(begins writing discussion post draft)

comment by Will_Newsome · 2011-09-14T23:40:56.758Z · score: 0 (4 votes) · LW(p) · GW(p)

You could instead make a post more explicitly about how rationality is a set of skills that must be trained. I keep trying to get this into people's heads but you are in a much better position to do so than I am, and it's an important thing to be aware of. Like, really important.

(I always end up making analogies to chess or guitar, perhaps you could make analogies to computer programming?)

comment by [deleted] · 2011-09-10T21:30:41.012Z · score: 4 (4 votes) · LW(p) · GW(p)

That's kinda rude.

You're still operating under the assumption that Will_Newsome cares, beyond a certain very low fundamental threshold, what we think about him and/or his theories.

comment by Will_Newsome · 2011-09-10T22:51:39.092Z · score: -9 (11 votes) · LW(p) · GW(p)

Can someone tell me what my theories are? Maybe it's the sleep deprivation but I don't remember having any theories qua theories. I talk about other peoples' theories sometimes, but mostly to criticize them, e.g. my decision theoretic arguments against naive interpretations of academic theism (of the sort that Mitchell Porter rightly finds misguided).

comment by DSimon · 2011-09-11T04:06:43.656Z · score: 12 (12 votes) · LW(p) · GW(p)

They don't have to be your theories in the sense that you originated them, we just mean "your theories" as in the theories/models/beliefs/maps you personally use, and that you often mention in passing in your posts, but without much detail.

For example: what does Aquinas have to do with TDT? That's not a specific question (though I'd like to hear your answer!) so much as a hint as to the sort of things that come across as empty statements to us; it's not at all obvious (to me, at least) how you are relating together the various things you mention in a given sentence, or how you are arriving at your conclusions. It's like there's a bunch of big invisible "this lemma left as an exercise for the reader" sentences in the middle of your paragraphs.

At the very least, you could provide links back to some of your longer posts which explain your ideas in a step-by-step fashion. Inferential distance, dude.

comment by [deleted] · 2011-09-10T23:59:16.383Z · score: 0 (0 votes) · LW(p) · GW(p)

Can someone tell me what my theories are?

I don't understand your writings enough to know for sure. However, for example,

Ultimately I think that academic "the form of the good and the form of being are the same" theism is a less naive perspective on cosmology-morality than atheism is

is a conclusion that surely must have come from some nontrivial body of beliefs. Maybe that's not what you mean by theory qua theory, but I suspect that's what Kaj_Sotala meant.

Whatever this underlying framework is, it would be nice to evaluate someday.

comment by Will_Newsome · 2011-09-10T22:50:21.298Z · score: -8 (10 votes) · LW(p) · GW(p)

Have I ever claimed to have any "theories"? I claim to have skills. I have expounded on what some of these skills are at various points. How am I acting in a way that makes myself immune to criticism? If I am trying to do that it would appear that I am failing horribly considering all the criticism I get. In other words, what you're saying sounds very reasonable, but are you talking about reality or instead a simplified model of the situation that is easy to write a nice-sounding analysis of? That's an honest question.

comment by Kaj_Sotala · 2011-09-11T08:34:09.805Z · score: 19 (21 votes) · LW(p) · GW(p)

Have I ever claimed to have any "theories"? I claim to have skills. I have expounded on what some of these skills are at various points.

This certainly sounds like a theory, or a bunch of them, to me:

Ultimately I think that academic "the form of the good and the form of being are the same" theism is a less naive perspective on cosmology-morality than atheism is---you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what "right" is, and pretending like you aren't or are only negligibly watched over by a superintelligence---whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution---is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced---you always want to be on the edge of uncertainty, you don't want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.---but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.

Certainly you keep saying that you feel superior to LWers because they don't know the things you do. You may call that knowledge, theory, skill, or just claims, however you prefer. But while you have expounded on it somewhat, you haven't written anything that would try to systematically bridge the inferential distance. Right now, the problem isn't even that we wouldn't understand your reasons for saying what you do; the problem is that we don't understand what you are saying. Mostly it just comes off as an incomprehensible barrage of fancy words.

For instance, my current understanding of your theories (or skills, or knowledge, or whatever) is the following. One, you claim that because of the simulation argument, theism isn't really an unreasonably privileged claim. Two, this relates to TDT somehow. Three, that's about all I understand. And based on your posting history that's about all that the average LW reader could be expected to know about the things you're talking about.

That's what my claim of you making yourself immune to criticism is based on: you currently cannot be criticized, because nobody understands your claims well enough to criticize them (or for that matter, agree with them), and you don't seem to be making any real attempt to change this.

In other words, what you're saying sounds very reasonable, but are you talking about reality or instead a simplified model of the situation that is easy to write a nice-sounding analysis of?

I'm talking about my current best model of you and your claims, which may certainly be flawed. But note that I'm already giving you an extra benefit of the doubt because you seemed sane and cool when we interacted IRL. I do still think that you might be on to something reasonable, and I'm putting some effort into communicating with you and inspecting my model for flaws. If I didn't know you at all, I might already have dismissed you as a Time Cube crank.

comment by Will_Newsome · 2011-09-14T23:44:44.663Z · score: -4 (18 votes) · LW(p) · GW(p)

you don't seem to be making any real attempt to change this.

I keep bringing this up only to have it ignored completely, but: THAT IS NOT A PSYCHOLOGICALLY REALISTIC OPTION.

comment by cousin_it · 2011-09-15T08:49:00.857Z · score: 10 (10 votes) · LW(p) · GW(p)

I too used to have a disorder that made me occasionally write nonsense. In my case it turned out to be fixable by reading a lot of LW, in particular Eliezer's and Yvain's posts, and then putting a lot of work into my own posts and comments to approach their level of clarity. It was hard at first, but after a while it became easier. Have you tried that?

Right now your writings look very stream-of-consciousness to me, like you don't even write drafts. Given all the criticism you get, this is kind of unacceptable. Many LWers write drafts and send them to each other for critique before posting stuff publicly. I often do that even for discussion posts.

comment by wedrifid · 2011-09-15T09:01:34.501Z · score: 2 (2 votes) · LW(p) · GW(p)

Right now your writings look very stream-of-consciousness to me, like you don't even write drafts. This is kind of unacceptable. Many LWers write drafts and send them to each other for critique before posting stuff publicly. I often do that even for discussion posts.

Errr... wait. We do that? Ooops. Sometimes I proof-read and sometimes I make edits to my comments as soon as I post them. Does that count?

comment by cousin_it · 2011-09-15T09:12:37.214Z · score: 7 (7 votes) · LW(p) · GW(p)

Yeah, some of us do. Your posts are pretty good as they are, but hey, now you know a way to make them even better! I volunteer to read drafts anytime :-)

comment by Will_Newsome · 2011-09-15T00:15:37.102Z · score: 8 (14 votes) · LW(p) · GW(p)

I remember thinking it was ironic how in the Wikipedia article on learned helplessness, when they talk about the dogs, the tone is like "oh, how sad, these dogs are so demoralized that they don't even try to escape their own suffering", but when it came to humans it was like "oh look, these humans seem to have a choice about whether or not they suffer but they're acting as if they don't have that choice so as to avoid blame and avoid putting forth effort to change their situation"; which if taken seriously sort of undermines the hypothesis that the behavioral mechanisms are largely the same for animals and humans. But you could tell it was totally unconscious on the part of the writers, and if you'd tried to point it out to them they could just backpedal in various ways, and so there'd be no point in trying to point out the change in perspective, it'd just look like defensiveness. And going meta like this probably wouldn't help either.

comment by Kaj_Sotala · 2011-09-15T07:00:28.651Z · score: 4 (4 votes) · LW(p) · GW(p)

This is the first time I see you say that, but fair enough. I can relate to that.

comment by mwengler · 2011-10-04T23:26:12.927Z · score: 1 (3 votes) · LW(p) · GW(p)

Why? Or did I accidentally stumble across a private forum with secrets?

comment by lukeprog · 2011-09-10T23:47:41.054Z · score: 22 (22 votes) · LW(p) · GW(p)

Ultimately I think that academic "the form of the good and the form of being are the same" theism is a less naive perspective on cosmology-morality than atheism is---you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what "right" is, and pretending like you aren't or are only negligibly watched over by a superintelligence---whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution---is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced---you always want to be on the edge of uncertainty, you don't want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.---but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.

I think this might be what Kaj means when he mentions your 'theories.' Let's take your "the form of the good and the form of being are the same" theory of cosmology-morality, for example. (You call it a 'perspective', but I just mean 'theory' in a very broad sense, here.) If you've explained it clearly on Less Wrong anywhere, I missed it. Of course you don't owe us any such explanation, but that may be the kind of thing Kaj is talking about when he says that "You don't seem to even really try [to explain your ideas], and instead write comments and posts that don't even attempt to bridge the inferential distance. At the same time, you do frequently write content where you talk about how you feel superior to LWers."

Also, you contrast your theory of cosmology-morality with 'atheism', as if atheism is a theory of cosmology-morality, but of course it's not. So that's confusing. The rest of the paragraph is a dense jumble of concepts and half-arguments that could each mean half a dozen different things depending on one's interpretation, and is thus incomprehensible - to me, anyway.

Sadly Less Wrong seems to know absolutely nothing about theism, which ends up with me repeatedly facepalming when people feel obliged to demonstrate how incredibly confident they are that theism is stupid and worth going out of their way to signal contempt for. One person went so far as to compare it with modern astrology, which I could only respond to with a mental "what is this i dont even".

I agree that there are forms of theism much more sophisticated than anything I've read in astrology. But as someone who has read the leading analytic theistic philosophers - Alvin Plantinga, Peter van Inwagen, William Alston, Charles Taliaferro, Alexander Pruss, John Hare, Robin Collins, Timothy McGrew, Marilyn McCord Adams, Bill Craig, William Hasker, Timothy O'Connor, Eleonore Stump, Keith Yandell, and others - I can somewhat knowledgeably confirm that theism is probably not worth studying.

comment by Will_Newsome · 2011-09-10T23:53:57.123Z · score: 0 (4 votes) · LW(p) · GW(p)

Have you read Thomas Aquinas or Gottfried Leibniz? It'd be cool if there was something we'd both read such that we could have an object-level discussion. I am not familiar with modern theism. Plantinga and Craig I'm mildly familiar with thanks to your blog, but they seemed third-rate compared to the original thinkers.

comment by lukeprog · 2011-09-11T00:01:51.046Z · score: 1 (1 votes) · LW(p) · GW(p)

Have you read Thomas Aquinas or Gottfried Leibniz?

Not much, I'm afraid. I may know more about their views than most LWers, but that ain't much.

comment by Will_Newsome · 2011-09-11T00:17:21.385Z · score: 0 (6 votes) · LW(p) · GW(p)

Okay, hm. You're busy all the time but if ever you have some time free I'd like to brainstorm about how we might have something like a "rational debate". E.g. the optimal set-up might be meeting in person where we can go back and forth in real-time to clarify small things while taking a break every few minutes to check the internet for sources and write out better-considered arguments and responses. Considering we live a block away from each other that might actually be possible. It'd incentivize me to put a lot more effort into being understandable. I'm not exactly sure what the topic of such debate would be; I agree with you that theism isn't worth studying, I only try to argue that it's really hard to claim that theists are wrong given our current state of uncertainty.

comment by lukeprog · 2011-09-11T00:31:03.006Z · score: 4 (4 votes) · LW(p) · GW(p)

That doesn't sound like a productive way to address these issues, but it's true that I shouldn't put my time on this until at least after a September 30th deadline I've got on a project. I'll keep this in mind.

comment by Will_Newsome · 2011-09-10T23:57:18.527Z · score: -8 (10 votes) · LW(p) · GW(p)

Also, you contrast your theory of cosmology-morality with 'atheism', as if atheism is a theory of cosmology-morality, but of course it's not.

Huh? I find this to be an odd claim. Atheism is at least implicitly a prediction about where justification certainly doesn't come from: basically, not from any big, well-organized, monolithic institution/agent/thing.

comment by lukeprog · 2011-09-11T00:03:05.885Z · score: 9 (9 votes) · LW(p) · GW(p)

Sure, but only in the sense that a-fairyism is "a perspective on cosmology-morality." A-fairyism says that justification doesn't come from fairies. In the way I typically use the English language, that's not enough to bother calling a-fairyism "a perspective on cosmology-morality."

comment by Will_Newsome · 2011-09-11T00:11:02.867Z · score: -7 (13 votes) · LW(p) · GW(p)

Not even close. This is like the astrology thing. You're claiming that belief in God is privileging the hypothesis when clearly I do not think that belief in God is privileging the hypothesis. Things like God and truth are already picked out as tenable hypotheses, the support or opposition of which are in fact clear philosophical positions. I'm not sure if I'm being clear; do you see why I think you're assuming the conclusion here? If not I could try to write out something longer with more concrete examples.

comment by lessdazed · 2011-09-11T00:17:04.024Z · score: 5 (5 votes) · LW(p) · GW(p)

If there are three otherwise equal pairwise mutually exclusive possibilities, "belief" in one is privileging the hypothesis.

The non-Bayesian "belief" language is deficient here anyway.

comment by Will_Newsome · 2011-09-11T00:19:57.104Z · score: -7 (9 votes) · LW(p) · GW(p)

Right, and in that case atheism would also be privileging the hypothesis, which means, yeah, this whole "privileging the hypothesis" thing isn't really helping.

comment by lessdazed · 2011-09-11T00:30:23.647Z · score: 8 (8 votes) · LW(p) · GW(p)

No. A-(assertion)-ism is fine.

Assertions can be true, false, incoherent, and other things. Most statements are not true. Single, otherwise perfectly fine statements that imply the falsity of many multitudes of similar otherwise perfectly fine statements cannot be justified by the claim that, in general, otherwise perfectly fine statements get the presumption of validity or consideration. However much one says it is important not to judge statements such as assertions of monotheism, that applies to the statements monotheism excludes, which are more numerous.

comment by Will_Newsome · 2011-09-11T00:35:59.580Z · score: -7 (11 votes) · LW(p) · GW(p)

Only in the complete absence of evidence. But theism already has a ton of evidence for it and was the default belief of intelligent folk for thousands of years; it's like saying a-gravity-ism isn't actually a theory about physics (to take our metaphors to the other extreme from fairies). Assigning a low prior to theism is an abuse of algorithmic probability theory. ...Am I missing something?

comment by lukeprog · 2011-09-11T01:25:32.086Z · score: 13 (17 votes) · LW(p) · GW(p)

theism already has a ton of evidence for it

Could you give an example? Like, can you state a specific fact of the world and explain which version of theism it is evidence for, and how it is evidence for that version of theism?

comment by Jack · 2011-09-11T19:26:31.344Z · score: 2 (4 votes) · LW(p) · GW(p)

All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch. For the kind of priors people deal with every day - things like "Is Amanda Knox guilty?" or "Will I win the hand of poker?" - evidence of the strength that we have for God's existence would be more than enough to convince us. But the prior for theism (as it is usually formulated) is so laughably, incomprehensibly low that all this evidence isn't even enough for a rational person to seriously consider the theistic hypothesis. Will's claim that a low prior for theism "is an abuse of algorithmic probability theory" is the real issue. Now, that prior can be raised if the hypothesis involves some process by which the entity could come to exist while conserving complexity (in particular, if that entity evolved and then created this universe). Will however seems to believe in something different from the usual simulation hypothesis - he may endorse something like Divine Simplicity, which is complete and utter nonsense. Word games and silliness as far as I can tell - or at least smacking of a to-me-untenable moral realism.

comment by DSimon · 2011-09-12T04:35:12.778Z · score: 1 (1 votes) · LW(p) · GW(p)

All of existence is strong evidence in favor of theism. The existence of an extremely complex system is obviously evidence of an entity capable and willing to create such a system from scratch.

I don't understand how it's strong evidence. We have plenty of experience showing that complex stuff is just what you get when you leave simple stuff alone long enough, assuming you're talking about "complexity" in the thermodynamic sense. For intelligent entities to be elevated as a particular hypothesis, it seems like you need to find things like low entropy pockets and optimization behavior.

comment by Jack · 2011-09-12T04:42:05.024Z · score: 3 (3 votes) · LW(p) · GW(p)

All of existence is also evidence for the hypothesis that if you leave simple stuff alone long enough complexity arises. And the prior for that is much higher than the theism prior.

comment by DSimon · 2011-09-12T05:02:39.673Z · score: 0 (0 votes) · LW(p) · GW(p)

If both those hypotheses (thermodynamics, theism) started at the same prior, which one would receive more of a boost upwards after updating on all existence?

comment by Jack · 2011-09-12T05:24:10.860Z · score: 2 (2 votes) · LW(p) · GW(p)

That's a really good question.

In theism's favor we have mystical experience, purported revelation and claims of miracles. Against, we have the existence of evil and a lot of familiarity with how complexity can come to be through simple processes. Maybe the fact that we keep explaining things that God was once used to explain is metainductive evidence against theism... I really have trouble thinking clearly about this and suspect I've biased myself by being an atheist so long. What do you think?

comment by DSimon · 2011-09-12T06:12:33.934Z · score: 3 (3 votes) · LW(p) · GW(p)

I'm gonna think out loud for a bit, let's see if this makes sense.

I think that "complexity" is a red herring; it's dodging the real query. What we're really interested in is something more like an explanation for why the universe is the way it is, rather than some other universe, including the rather large subset of possible universes that would've resulted in nothing very interesting at all happening ever.

So: rather than "theism" and "thermodynamics", we more generally have "theism" and "everything else" as our two competing chunks of hypothesis-space to explain "why is the universe the way it is?". Let's assume that that's a meaningful question. Let's also assume that the two chunks have equal prior probability (that is, let's just forget about comparing minimum message lengths or anything like that, otherwise "everything else" gets a big head start).

Update on direct, personal, but non-replicable experiences of communicating with gods. This is at most very weak evidence in favor of theism, due to what we know about cognitive biases.

Update on negative results of attempting to replicably communicate with gods. This is weak evidence against theism; it is good evidence against a god that can communicate with us and wants to, but it doesn't say much for the remainder of possible-god-space.

Update on evolution via natural selection as the explanation for humanity's biological setup. This is also weak evidence against theism; it's good evidence only against the subset of possible-god-space that wants people to be able to notice them, or that has a particular design idea in mind and goes about creating people to fulfill that idea. Also, given the pretty major flaws of human bodies and minds, it's good evidence against the subset of possible-god-space where the gods prioritize our happiness (in both the sophisticated fun theoretic sense and the wire-head sense of happiness).

Update more generally on the existence of naturalistic patterns like evolution that can crank out relatively low-entropy things like biological life. Weak evidence against gods in general, good evidence against the subset of possible gods that specifically are interested in and capable of creating biological life.

I can go on like that for a while, but the basic pattern seems to be: "not theism" pulls generally but not majorly ahead, by taking probability mass from the parts of "theism" that involve directly causing stuff that applies only to our particular neck of the universe. Humans and the Earth are pretty weird compared to all the stuff around them, but it seems that gods are not a good explanation for that weirdness.

The hypothesis space for "theism" still has probability mass for gods that do not or cannot directly intervene in favor of privileging universes where humans are the way they are. I'm not sure how big that is compared to the entire hypothesis space of possible theisms; whatever that there is, that's how badly "theism" in general would be losing to "not theism" if they started out at the same prior.
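The sequence of updates DSimon walks through above can be sketched as a toy Bayesian calculation. All the numbers below are invented for illustration only - the comment gives no quantities, so each likelihood ratio (P(evidence | theism) / P(evidence | not-theism)) is just a placeholder encoding "weak evidence for" or "weak evidence against":

```python
def update(p, likelihood_ratio):
    """Update a binary hypothesis probability given a likelihood ratio,
    using the odds form of Bayes' theorem: posterior odds = prior odds * LR."""
    odds = p / (1 - p)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Equal priors for the two chunks of hypothesis-space, as stipulated above.
p = 0.5

# Invented likelihood ratios for the four qualitative updates in the comment:
# >1 favors theism, <1 favors not-theism, near 1 means weak evidence.
evidence = [
    ("personal but non-replicable religious experience", 1.2),
    ("failed attempts at replicable communication",      0.8),
    ("evolution explains humanity's biological setup",   0.7),
    ("naturalistic patterns producing low-entropy life", 0.7),
]

for name, ratio in evidence:
    p = update(p, ratio)
    print(f"after updating on {name}: P(theism) = {p:.3f}")
```

With these made-up ratios the posterior drifts below 0.5 without collapsing to zero, matching the qualitative conclusion that "not theism" pulls ahead gradually rather than decisively.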

comment by shokwave · 2011-09-12T11:59:48.370Z · score: 2 (2 votes) · LW(p) · GW(p)

What we're really interested in is something more like an explanation for why the universe is the way it is

Haha. I'm not a theist, I'm an anthropic theorist!

comment by Jack · 2011-09-12T06:38:56.028Z · score: 1 (1 votes) · LW(p) · GW(p)

Your comment definitely pulls me in your direction.

This is hard and probably not fair to do without knowing what else is in "non-theism". But in general theism has an advantage you're forgetting, which is that it lets us explain everything we don't understand with magic. Big Bang, abiogenesis, what have you, theism has been defined in such a way that it can explain anything we can't already explain. This means everything we don't understand is evidence for God. I don't know that the realization that we keep explaining things previously attributable to God swamps this effect. You're certainly right that the image of God one arrives at is at best indifferent and at worst humorously sadistic (with "averse to science" somewhere in the middle).

I will say that I'm not sure Occam priors actually come from any kind of analytic deduction based on something like algorithmic complexity. That is, I think the whole thing might just be one giant meta-induction on all our confirmed and falsified hypotheses where simplicity turned out to be a useful heuristic. In which case, I don't know what the prior was (doesn't matter), but p(God) is just crazy low.

comment by Will_Newsome · 2011-09-12T08:21:51.966Z · score: 1 (1 votes) · LW(p) · GW(p)

That's not necessarily true. You could have a shy god. The better your epistemology gets, the shyer it gets, always staying on the edge of humanity's epistemology. But it still works miracles when people aren't looking too closely.

Though I'm not quite sure what kind of god you're talking about in your comment; it seems weird to me to ignore the only kind of god that seems particularly likely, i.e. a simulator god/pantheon.

comment by Bananarama · 2011-09-12T09:51:44.290Z · score: 3 (3 votes) · LW(p) · GW(p)

He used to be a shy god / Until I made him my god / Yeah

comment by Jack · 2011-09-12T08:23:25.211Z · score: 1 (1 votes) · LW(p) · GW(p)

Shy is what I meant by "averse to science".

Though I'm not quite sure what kind of god you're talking about in your comment; it seems weird to me to ignore the only kind of god that seems particularly likely, i.e. a simulator god/pantheon.

Agreed.

comment by Andreas_Giger · 2011-09-12T07:47:32.789Z · score: 0 (0 votes) · LW(p) · GW(p)

But in general theism has an advantage you're forgetting which is that it lets us explain everything we don't understand with magic.

If "magic" is the answer to anything we don't understand, then it isn't an explanation, it's just an abbreviation for "I don't know". This is hardly an advantage.

Big Bang, abiogenesis, what have you, theism has been defined in such a way that it can explain anything we can't already explain. This means everything we don't understand is evidence for God.

If theism can explain anything, it explains nothing. Phlogiston anyone?

comment by Jack · 2011-09-12T08:09:16.564Z · score: 0 (0 votes) · LW(p) · GW(p)

You need to read the thread instead of assuming I'm actually arguing for theism.

comment by Andreas_Giger · 2011-09-12T08:13:02.315Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm not assuming you are arguing for theism. What I assume you're arguing for is that theism being able to "explain" anything is an advantage for theism, which it is not. I'm not arguing against theism either.

comment by Jack · 2011-09-12T08:20:05.408Z · score: 0 (0 votes) · LW(p) · GW(p)

I mainly meant any step on the causal path to our existence. Apologies.

comment by Andreas_Giger · 2011-09-12T08:37:23.427Z · score: 0 (0 votes) · LW(p) · GW(p)

I see what you mean, but how does theism "explaining" currently unsolved mysteries in any way constrain experience? As far as I know, theism postulating "all was created by a god" doesn't allow me to anticipate anything I can't already anticipate anyway. Also as far as I know, it's not as if any phenomena currently not explainable were predicted by any form of theism.

I may be wrong on this though, as I am certainly not a theism expert. If so, this would be actual evidence for theism.

comment by Jack · 2011-09-12T09:15:23.600Z · score: 0 (0 votes) · LW(p) · GW(p)

This is getting too complex given my tiredness. I have a feeling I've said something dumb along the way. I'll be able to tell in the morning.

comment by lessdazed · 2011-09-12T07:20:33.330Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't see why gods would be in every magical universe.

comment by Will_Newsome · 2011-09-12T05:32:03.085Z · score: 1 (3 votes) · LW(p) · GW(p)

If you bring semi-logical considerations into it then the obvious pro-theism one is Omohundro's AI drives plus game theory. Simulators gonna simulate. (And superintelligences have a lot of computing resources with which to do so.) (Semi-logical because there are physical reasons we expect agents to work in certain ways.)

comment by Jack · 2011-09-12T05:45:48.850Z · score: 0 (0 votes) · LW(p) · GW(p)

I was not using your definition of theism since theism scenarios where the God evolved aren't distinct hypotheses from "complexity from thermodynamics and evolution". There is more evidence for your version of God, the simulation argument in particular. But miracles, revelation and mystical experience count far less.

comment by Will_Newsome · 2011-09-12T05:50:41.287Z · score: -1 (1 votes) · LW(p) · GW(p)

There are timeful/timeless issues 'cuz there's an important sense in which a superintelligence is just an instantiation of a timeless algorithm. (So it's less clear if it counts as having evolved.) But partitioning away that stuff makes sense.

comment by wedrifid · 2011-09-12T06:33:38.121Z · score: 1 (1 votes) · LW(p) · GW(p)

There are timeful/timeless issues 'cuz there's an important sense in which a superintelligence is just an instantiation of a timeless algorithm.

Not true. There are some superintelligences that could be constructed that way but that is only a small set of possible superintelligences. Others have nothing timeless about their algorithm and don't need it to be superintelligent.

comment by Will_Newsome · 2011-09-12T06:36:37.065Z · score: 0 (0 votes) · LW(p) · GW(p)

That's one hypothesis, but I'd only assign like 90% to it being true in the decisions-relevant sense. Probably gets swamped by other parts of the prior, no?

comment by wedrifid · 2011-09-12T06:54:52.636Z · score: 0 (0 votes) · LW(p) · GW(p)

Probably gets swamped by other parts of the prior, no?

I don't believe so. But your statement is too ambiguous to resolve to any specific meaning.

comment by Jack · 2011-09-12T05:53:22.075Z · score: 0 (0 votes) · LW(p) · GW(p)

There are timeful/timeless issues 'cuz there's an important sense in which a superintelligence is just an instantiation of a timeless algorithm.

What sense is that? Or rather, I'm confused about this whole bit.

comment by Will_Newsome · 2011-09-12T06:18:45.892Z · score: 2 (2 votes) · LW(p) · GW(p)

A naive view sees a lump of matter being turned into a program whose execution just happens to correlate with the execution of similar programs across the Schmidhuberian computational ensemble. (If you don't assume a computational ensemble to begin with then you just have to factor that uncertainty in.) A different view is that there's no correlation without shared causation, and anyway that all those program-running matter-globs are just shards of a single algorithm that just happens to be distributed from a physical perspective. But if those shards all cooperate, even acausally, it's only in a rather arbitrary sense that they're different superintelligences. It's like a community of very similar neurons, not a community of somewhat different humans. So when a new physical instantiation of that algorithm pops up it's not like that changes much of anything about the timeless equilibrium of which that new physical instantiation is now a member. The god was always there behind the scenes, it just waited a bit before revealing itself in this particular world.

I apologize for the poor explanation/communication.

comment by Will_Newsome · 2011-09-11T23:15:20.229Z · score: 1 (3 votes) · LW(p) · GW(p)

I think it's more something like "moral realism" than like word games. It's (I think) isomorphic to the hypothesis that all superintelligences converge on the 'same decision algorithm': and of course at that point in the discussion a bunch of words have to get tabooed and we have to get technical and quantitative (e.g. talking about Goedel machines and such, not about arbitrary paperclip maximizers which may or may not be possible).

And I dunno about Divine Simplicity. I really do prefer to talk in terms of decision theory.

comment by Vladimir_Nesov · 2011-09-11T23:24:10.445Z · score: 4 (4 votes) · LW(p) · GW(p)

isomorphic

You (lately) misuse "isomorphic", which is a word reserved for a very strong relationship. "Analogy" or even "similarity" or "metaphor" would describe these relations better.

comment by Will_Newsome · 2011-09-11T23:39:30.271Z · score: 0 (2 votes) · LW(p) · GW(p)

Sorry. In my defense I felt a sharp pain each time I did it, but figured that 'analogous' wasn't quite right (wasn't quite strong enough, because Thomas Aquinas and I are actually talking about the same decision policy, maybe). Maybe if I knew category theory I could make such comparisons precise.

Thanks for calling me out on a bad habit.

comment by Vladimir_Nesov · 2011-09-11T23:47:00.881Z · score: 3 (3 votes) · LW(p) · GW(p)

Thomas Aquinas and I are actually talking about the same decision policy

This seems very unlikely (1) to be true and (2) to become known, if true.

comment by Will_Newsome · 2011-09-11T23:50:34.238Z · score: 0 (2 votes) · LW(p) · GW(p)

With Leibniz it's a lot clearer that his God was a programmer trying to make most efficient use of His resources to do the optimal thing, and he had intuitions but of course not any explicit language to talk about what that algorithm would look like. That's roughly the extent to which I think I'm thinking of the same decision algorithm as Aquinas, the convergent objective decision theory. The specifics of that decision theory, nobody knows. The point is that none of the best thinkers were thinking about a big male human in the sky, and were instead thinking about Platonic algorithms, ever since early Christianity was influenced by neoplatonism. Leibniz made it computationalesque but only recently with decision theory is theology become truly mathematical.

comment by Vladimir_Nesov · 2011-09-12T00:07:17.561Z · score: 3 (3 votes) · LW(p) · GW(p)

Maybe. In this case, most would agree that at this level of vagueness saying that two thinkers are contemplating exactly the same idea is incorrect and misleading terminology, and your comment suggests that you don't actually mean that.

comment by Will_Newsome · 2011-09-12T00:27:41.661Z · score: 0 (2 votes) · LW(p) · GW(p)

Okay. It's like a hypothesis about future revelations, where both Aquinas and I are being shown a series of different agents and we'd agree, more than my prediction of LW priors would suggest, as to which of those agents were more or less Godlike. It's like we have different labels for what is ultimately the same thing but we don't even know what that thing is yet; but the fact that they're different labels is misleading as to the extent to which we're talking or not talking about what is ultimately the same thing. Still, point taken.

comment by simplicio · 2011-09-11T23:54:09.791Z · score: 3 (3 votes) · LW(p) · GW(p)

...only recently with decision theory is theology become truly mathematical.

Do the theologians know about this?

comment by Will_Newsome · 2011-09-12T00:01:51.791Z · score: 0 (2 votes) · LW(p) · GW(p)

/shrugs I'd be very surprised, but I know nothing about modern theology. I've been reading philosophy by working my way forward through time. If there were/are any competent computer scientist/theologians after Leibniz then I do not yet know about them.

(ETA: I suppose I could become one if I put my mind to it but unfortunately I have this whole "figuring out how moral justification works so that everything I love about the world doesn't perish" thing to deal with.)

comment by Jack · 2011-09-12T00:22:53.505Z · score: 1 (1 votes) · LW(p) · GW(p)

That's fair. My probability for that is probably pretty close to my probability for a strong version of the simulation hypothesis+moral realism. Though it seems to me that a lot of people here think moral realism is much more likely than I do, which makes me confused about why I seem to take your ideas more seriously than others here. You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.

comment by Will_Newsome · 2011-09-12T00:33:36.346Z · score: 1 (1 votes) · LW(p) · GW(p)

You seem to express unjustified certainty on the matter, but that may just be a quirk of your personality/social role here.

I consistently talk about things I have 1-20% confidence in, in a way that makes me sound like I have 80-95% confidence in them. This is largely because there's no way to non-misleadingly talk about things with 1-20% logical probability (1-20% decision theoretic importance whatever-that-means). It's really a problem with norms of communication and the English language, one of the few things where it's not my fault that I can't communicate easily. Most of the time I just suck at communicating.

Unfortunately, good rationalists should spend a lot of time hovering around things with 50% probability of being true, and anything moderately on the lower side of that ends up sounding completely ridiculous and anything moderately on the higher side of that ends up sounding completely reasonable.

comment by NihilCredo · 2011-09-16T02:31:32.899Z · score: 5 (5 votes) · LW(p) · GW(p)

Then just write "around 1-20%". It will make your comments more clunky, but it's not like they can get much worse anyway, and it's better than the alternative.

comment by Will_Newsome · 2011-09-12T00:50:08.902Z · score: 1 (1 votes) · LW(p) · GW(p)

(If only there were a language that had short concepts for things like "frequency=3%, utility=+10^15,-10^6 relative to counterfactual surgery world".)

comment by Will_Newsome · 2011-09-12T00:18:54.331Z · score: 0 (2 votes) · LW(p) · GW(p)

It's complicated. The three versions of theism I can immediately think up are I suppose like "some superintelligent agent is computing us and this is important for our decisions", "all superintelligences converge on the same superintelligent supermoral superpowerful decision algorithm-policy", and "all superintelligences converge on the same superintelligent supermoral decision algorithm-policy and this is important for our decisions". In our current state of knowledge these questions are more logical or indexical-the-way-that-word-used-to-make-sense-before-decision-theory than physical (not to say those are fundamentally different kinds of uncertainty, as I believe Nesov likes to point out). So if I start talking about specific facts of the world then I have to start talking about specific facts about logical attractors akin to how fractal structures are attractors for evolving systems, and I can't point to something nice and concrete like the supposed resurrection of Jesus. This makes the debate really rather difficult--a Bayesian debate much more than a scientific one--and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.

comment by Vladimir_Nesov · 2011-09-12T00:28:21.082Z · score: 6 (6 votes) · LW(p) · GW(p)

This makes the debate really rather difficult--a Bayesian debate much more than a scientific one--and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.

At this point, I would worry more about the difficulty of producing thoughts that relate to the correct answers than about convincing others, if I didn't think the difficulty is insurmountable and one should lose hope already.

comment by Will_Newsome · 2011-09-12T00:45:31.296Z · score: 1 (3 votes) · LW(p) · GW(p)

There is a wiser part of me that invariably agrees with that; it's just this stupid motivational coalition of mine that anti-anti-wants to warn others when they're absolutely certain of something they shouldn't be absolutely certain about, where my warning them has at least some tiny chance of convincing them to be less complacent or notice confusion, so that I won't be blamed in retrospect for having not even tried to help them. And when the wiser part starts talking about semi-consequentialist reasons why I'm doing more harm than good the other coalition goes "Oh, you're telling me to shut up and be evil. Doesn't this sound familiar..."

comment by Will_Newsome · 2011-09-12T02:28:08.901Z · score: -1 (1 votes) · LW(p) · GW(p)

if I didn't think the difficulty is insurmountable and one should lose hope already.

Hm, are you implying I should perhaps just lose hope in non-insignificantly affecting direct efforts to improve decision theory? If so I'd like to make a bet.

(I parsed your comment like three different ways when I used three different inductive biases.)

comment by Vladimir_Nesov · 2011-09-12T10:24:20.804Z · score: 1 (1 votes) · LW(p) · GW(p)

Efforts to figure out what otherworldly superintelligences are up to.

comment by Vladimir_Nesov · 2011-09-11T07:04:38.329Z · score: 12 (12 votes) · LW(p) · GW(p)

Well, of course there are both superintelligences and magical gods out there in the math, including those that watch over you in particular, with conceptual existence that I agree is not fundamentally different from our own, but they are presently irrelevant to us, just as the world where I win the lottery is irrelevant to me, even though a possibility.

It currently seems to me that many of such scenarios are irrelevant not because of "low probability" (as in the lottery case; different abstract facts coexist, so don't vie for probability mass) or moral irrelevance of any kind (the worlds with nothing possibly of value), but because of other reasons that prevent us from exerting significant consequentialist control over them. The ability to see the possible consequences (and respond to this dependence) is the step missing, even though your actions do control those scenarios, just in a non-consequentialist manner.

(It does add up to atheism, as a modest claim about our own world, the "real world", that it's intended to be. In pursuit of "steelmanning" theism you seem to have come up with a strawman atheism...)

comment by Jack · 2011-09-11T19:47:37.896Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't know if this is what Will has in mind, but it seems plausible that the superintelligences and gods that would be watching out for us might attempt to maximize the instantiations of our algorithms that are under their domain, so that as great a proportion of our future selves as possible will be saved (this story is vaguely Leibnizian). But I don't know that such superbeings would be capable of overcoming their own sheer unlikelihood (though perhaps some subset of such superbeings have infinite capacity to create copies of us?). You can derive a self-interested ethics from this too, if you think you'll be rewarded or punished by the simulator. The choices of the simulators could be further constrained by simulators above them-- we would need an additional step to show that the equilibrium is benevolent (especially given the existence of evil in our universe).

But I'm not at all convinced Tegmark Level 4 isn't utter nonsense. There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated. And can we calculate anthropic probabilities from infinities of different magnitudes?

comment by Vladimir_Nesov · 2011-09-11T21:39:53.952Z · score: 1 (1 votes) · LW(p) · GW(p)

There is a big step from accepting that abstract objects exist to accepting that all possible abstract objects are instantiated.

I'd rather say that the so-called "instantiated" objects are no different from the abstract ones, that in reality, there is no fundamental property of being real, there is only a natural category humans use to designate the stuff of normal physics, a definition that can be useful in some cases, but not always.

comment by Will_Newsome · 2011-09-15T02:40:05.268Z · score: 1 (1 votes) · LW(p) · GW(p)

So there are easy ways to explain this idea at least, right? Humans' decisions are affected by "counterfactual" futures all the time when planning, and so the counterfactuals have influence, and it's hard for us to get a notion of existence outside of such influence besides a general naive physicalist one. I guess the not-easy-to-explain parts are about decision theoretic zombies where things seem like they 'physically exist' as much as anything else despite exerting less influence, because that clashes more with our naive physicalist intuitions? Not to say that these bizarre philosophical ideas aren't confused (e.g. maybe because influence is spread around in a more egalitarian way than it naively feels like), but they don't seem to be confusing as such.

comment by Mitchell_Porter · 2011-09-15T09:13:18.046Z · score: 0 (0 votes) · LW(p) · GW(p)

Humans' decisions are affected by "counterfactual" futures all the time when planning, and so the counterfactuals have influence

Human decisions are affected by thoughts about counterfactuals. So the question is, what is the nature of the influence that the "content" or "object" of a thought, has on the thought?

I do not believe that when human beings try to think about possible worlds, these possible worlds have any causal effect in any way on the course of the thinking. The thinking and the causes of the thinking are strictly internal to the "world" in which the thinking occurs. The thinking mind instead engages in an entirely speculative and inferential attempt to guess or feel out the structure of possibility - but this feeling out does not in any way involve causal contact with other worlds or divergent futures. It is all about an interplay between internally generated partial representations, and a sense of what is possible, impossible, logically necessary, etc. in an imagined scenario; but the "sensory input" to these judgments consists of the imagining of possibilities, not the possibilities themselves.

comment by Jack · 2011-09-12T00:25:42.914Z · score: 1 (1 votes) · LW(p) · GW(p)

Sure, that's a fine way to put it. But how do you even begin estimating how likely that is?

comment by Vladimir_Nesov · 2011-09-12T00:43:36.299Z · score: 0 (0 votes) · LW(p) · GW(p)

How likely what is? There doesn't appear to be a factual distinction, just what I find to be a more natural way of looking at things, for multiple purposes.

comment by Jack · 2011-09-12T00:49:51.401Z · score: 1 (1 votes) · LW(p) · GW(p)

You don't think whether or not the Tegmark Level 4 multiverse exists could ever have any decision theoretic import?

comment by Vladimir_Nesov · 2011-09-12T00:54:30.096Z · score: 1 (1 votes) · LW(p) · GW(p)

I believe that "exists" doesn't mean anything fundamentally significant (in senses other than referring to presence of a property of some fact; or referring to the physical world; or its technical meanings in logic), so I don't understand what it would mean for various (abstract) things to exist to greater or lower extent.

comment by Jack · 2011-09-12T00:57:42.843Z · score: 1 (1 votes) · LW(p) · GW(p)

Okay. What is your probability for that belief? (Not that I expect a number, but surely you can't be certain.)

comment by Vladimir_Nesov · 2011-09-12T01:04:07.493Z · score: 1 (1 votes) · LW(p) · GW(p)

That would require understanding alternatives, which I currently don't. The belief in question is mostly asserting confusion, and as such it isn't much use, other than as a starting point that doesn't purport to explain what I don't understand.

comment by Will_Newsome · 2011-09-12T01:19:20.261Z · score: 1 (1 votes) · LW(p) · GW(p)

Anyone who has positive accounts of existentness to put forth, I'd like to hear them. (E.g., Eliezer has talked about this related existentness-like-thing that has to do with being in a causal graph (being computed), but I'm not sure if that's just physicalist intuition admitting much confusion or if it's supposed to be serious theoretical speculation caused by interesting underlying motivations that weren't made explicit.)

comment by Jack · 2011-09-12T01:16:54.841Z · score: 1 (1 votes) · LW(p) · GW(p)

Fine. So you agree that we should be wary of any hypotheses of which the reality of abstract objects is a part?

comment by Vladimir_Nesov · 2011-09-12T01:23:06.799Z · score: 0 (0 votes) · LW(p) · GW(p)

No, I won't see that in itself as a reason to be wary, since as I said repeatedly I don't know how to parse the property of something being real in this sense.

comment by Jack · 2011-09-12T05:04:10.150Z · score: 1 (1 votes) · LW(p) · GW(p)

Personally, I am always wary of hypotheses I don't know how to parse.

comment by Vladimir_Nesov · 2011-09-11T21:23:44.794Z · score: 0 (0 votes) · LW(p) · GW(p)

Different abstract facts aren't mutually exclusive, so one can't compare them by "probability", just as you won't compare probability of Moscow with probability of New York. It seems to make sense to ask about probability of various facts being a certain way (in certain mutually exclusive possible states), or about probability of joint facts (that is, dependencies between facts) being a certain way, but it doesn't seem to me that asking about probabilities of different facts in themselves is a sensible idea.

(Universal prior, for example, can be applied to talk about the joint probability distribution over the possible states of a particular sequence of past and future observations, that describes a single fact of the history of observations by one agent.)
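(For concreteness, a toy sketch of that usage in Python. The handful of named "programs" and the description lengths I give them are invented stand-ins for the real enumeration over all programs, so this shows only the shape of the calculation, not actual Solomonoff induction.)

```python
# Toy stand-in for a universal-prior calculation (not real Solomonoff
# induction): a few hand-picked "programs", each generating an infinite
# bit sequence, weighted by 2^-(description length). Conditioning on an
# observed prefix renormalizes over the programs still consistent with it.
programs = {
    # name: (description length in bits, i-th bit of the generated sequence)
    "all zeros":          (2, lambda i: 0),
    "all ones":           (2, lambda i: 1),
    "alternating":        (3, lambda i: i % 2),
    "ones after 4 zeros": (6, lambda i: 0 if i < 4 else 1),
}

def predict_next(observed):
    """P(next bit = 1 | observed prefix), under the toy 2^-length prior."""
    posterior = {
        name: 2.0 ** -length
        for name, (length, gen) in programs.items()
        if all(gen(i) == bit for i, bit in enumerate(observed))
    }
    total = sum(posterior.values())
    i = len(observed)
    return sum(w for name, w in posterior.items()
               if programs[name][1](i) == 1) / total

print(predict_next([0, 0, 0, 0]))  # 1/17: the shorter "all zeros" dominates
```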

comment by wedrifid · 2011-09-15T11:44:28.154Z · score: 0 (0 votes) · LW(p) · GW(p)

(I'm not sure 'compare' is the right word here.)

Different abstract facts aren't mutually exclusive, so one can't compare them by "probability", just as you won't compare probability of Moscow with probability of New York.

You just prompted me to make that comparison. I've been to New York. I haven't been to Moscow. I've also met more people who have talked about what they do in New York than I have people who talk about Moscow. I assign at least ten times as much confidence to New York as I do Moscow. Both those probabilities happen to be well above 99%. I don't see any problem with comparing them just so long as I don't conclude anything stupid based on that comparison.

There's a point behind what you are saying here - and an important point at that - just one that perhaps needs a different description.

comment by Vladimir_Nesov · 2011-09-15T11:57:50.393Z · score: 0 (0 votes) · LW(p) · GW(p)

I assign at least ten times as much probability to New York as I do Moscow.

What does this mean, could you unpack? What's "probability of New York"? It's always something like "probability that I'm now in New York, given that I'm sitting in this featureless room", which discusses possible states of a single world, comparing the possibility that your body is present in New York to same for Moscow. These are not probabilities of the cities themselves. I expect you'd agree and say that of course that doesn't make sense, but that's just my point.

comment by wedrifid · 2011-09-15T12:14:55.720Z · score: 0 (0 votes) · LW(p) · GW(p)

I assign at least ten times as much probability to New York as I do Moscow.

What does this mean, could you unpack?

It wasn't my choice of phrase:

just as you won't compare probability of Moscow with probability of New York

When reading statements like that that are not expressed with mathematical formality, the appropriate response seems to be resolving to the meaning that fits best or asking for more specificity. Saying you just can't do the comparison seems to be a wrong answer when you can, but there is difficulty resolving ambiguity. For example, you say "the answer to A is Y but you technically could have meant B instead of A in which case the answer is Z".

I actually originally included the 'what does probability of Moscow mean?' tangent in the reply but cut it out because it was spammy and actually fit better as a response to the nearby context.

These are not probabilities of the cities themselves. I expect you'd agree and say that of course that doesn't make sense, but that's just my point.

Based on the link from the decision theory thread I actually thought you were making a deeper point than that and I was trying to clear a distraction-in-the-details out of the way.

comment by Vladimir_Nesov · 2011-09-15T12:32:02.304Z · score: 1 (1 votes) · LW(p) · GW(p)

The point I was making is that people do discuss probabilities of different worlds that are not seen as possibilities for some single world. And comparing probabilities of different worlds in themselves seems to be an error for basically the same reason as comparing probabilities of two cities in themselves is an error. I think this is an important error, and realizing it makes a lot of ideas about reasoning in the context of multiple worlds clearly wrong.

comment by Vladimir_Nesov · 2011-09-15T11:53:42.207Z · score: 0 (0 votes) · LW(p) · GW(p)

I assign at least ten times as much probability

log-odds

comment by wedrifid · 2011-09-15T11:56:55.907Z · score: 0 (0 votes) · LW(p) · GW(p)

Oh, yes, that. Thankyou.

comment by Jack · 2011-09-12T00:26:57.278Z · score: 0 (0 votes) · LW(p) · GW(p)

Really? God isn't less probable than New York?

comment by Vladimir_Nesov · 2011-09-12T00:37:15.877Z · score: 1 (1 votes) · LW(p) · GW(p)

God is an exceedingly unlikely property of our branch of the physical world at the present time. Implementations of various ideas of God can be found in other worlds that I don't know how to compare to our own in a way that's analogous to "probability". The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.

(I don't privilege the God worlds in particular, the thought experiment where the Moon is actually made out of Gouda is an equivalent example for this purpose.)

comment by wedrifid · 2011-09-15T11:51:49.124Z · score: 2 (2 votes) · LW(p) · GW(p)

The Moscow vs. New York example illustrates the difficulty with comparing worlds that are not different hypotheses about how the same world could be, but two distinct objects.

There doesn't seem to be a problem here. The comparison resolves to something along the lines of:

  • Consider all hypotheses about the physical world of the present time which include the object "Moscow".
  • Based on all the information you have calculate the probability that any one of those is the correct hypothesis.
  • Do the same with "New York".
  • Compare those two numbers.
  • ???
  • Profit.

Instantiate "???" with absurdly contrived bets with Omega as necessary. Rely on the same instantiation to a specific contrived decision to be made to resolve any philosophical issues along the lines of "What does probability mean anyway?" and "What is 'exist'?".

comment by Vladimir_Nesov · 2011-09-15T12:52:29.933Z · score: 1 (1 votes) · LW(p) · GW(p)

What you describe is the interpretation that does make sense. You are looking at properties of possible ways that the single "real world" could be. But if you don't look at this question specifically in the context of the real world (the single fact, possibilities for whose properties you are considering), then Moscow as an abstract idea would have as much strength as Mordor, and "probability of Moscow" in Middle-earth would be comparatively pretty low.

(Probability then characterizes how properties fit into worlds, not how properties in themselves compare to each other, or how worlds compare to each other.)

comment by Will_Newsome · 2012-03-04T01:36:14.495Z · score: -2 (4 votes) · LW(p) · GW(p)

God is an exceedingly unlikely property of our branch of the physical world at the present time.

Our disagreement here somewhat baffles me, as I think we've both updated in good faith and I suspect I only have moderately more/different evidence than you do. If you'd said "somewhat unlikely" rather than "exceedingly unlikely" then I could understand, but as is, it seems like something must have gone wrong.

Specifically, unfortunately, there are two things called God; one is the optimal decision theory, one is a god that talks to people and tells them that it's the optimal decision theory. I can understand why you'd be skeptical of the former even if I don't share the intuition, but the latter god, the demon who claims to be God, seems to me likely to exist, and if you think that god is exceedingly unlikely then I'm confused why. Like, is that just your naive impression or is it a belief you're confident in even after reflecting on possible sources of overconfidence, et cetera?

comment by Will_Newsome · 2011-09-11T22:55:23.905Z · score: -1 (3 votes) · LW(p) · GW(p)

I agree that there are many reasons that prevent us from explicitly exerting significant control, but I'm at least interested in theurgy. Turning yourself into a better institution, contributing only to the support of not-needlessly-suboptimal institutions, etc. In the absence of knowing what "utility function" is going to ultimately decide what justification is for those who care about what the future thinks, I think building better institutions might be a way to improve the probabilities of statistical-computational miracles. I think this with really low probability but it's not an insane hypothesis even if it is literally magical thinking. (The decision theory and physics backing the intuitions are probably sound; it's just that it doesn't have the feel of well-motivatedness yet. It's more one of those "If I have to choose to spend a few hours either reading about dark matter or reading about where decision theory meets human decision policies I think it's a potentially more fruitful idea to think about the latter" things.)

I really appreciate that you responded at roughly the right level of abstraction. It seems clear that the debate should be over the extent to which thaumaturgy is possible (including thaumaturgy that helps you build FAIs faster) because that's the only way "theism" or "atheism" should affect our decision policy. (Outside of deciding which object level moral principles to pursue. I like traditional Anglican Christianity when it comes to object level morality even if I mostly ignore it.)

comment by Vladimir_Nesov · 2011-09-11T23:18:22.080Z · score: 0 (0 votes) · LW(p) · GW(p)

The decision theory and physics backing the intuitions are probably sound

Not by a long shot. Physics is probably mostly irrelevant here, since it focuses only on our world; and decision theory is so flimsy and poorly understood that any related effort should be spent on improving it, for it's not even clear what it suggests to be the case, much less how to make use of its suggestions.

comment by Will_Newsome · 2011-09-11T23:27:25.315Z · score: 0 (4 votes) · LW(p) · GW(p)

I've seen QM become important because of decision problems where agents have to coordinate between quantum branches in order to reverse time. I can't go into that here but I'd at least like to flag that there are decision theory problems where things like quantum information theory shows up.

comment by Nisan · 2011-09-12T05:24:37.545Z · score: 0 (0 votes) · LW(p) · GW(p)

That actually sounds like it has a possibility of being interesting.

comment by Will_Newsome · 2011-09-11T23:25:44.224Z · score: -1 (3 votes) · LW(p) · GW(p)

Physics focuses on worlds across the entire quantum superposition. That's a pretty big neighborhood, no? Agreed about decision theory. When I said "choose to spend" I meant "I have a few hours to kill but I'm too lazy to do problem sets at the moment", not "I choose thaumaturgy as the optimal thing to study".

comment by Vladimir_Nesov · 2011-09-11T23:32:45.537Z · score: 0 (0 votes) · LW(p) · GW(p)

Physics focuses on worlds across the entire quantum superposition. That's a pretty big neighborhood, no?

Okay, that makes sense as a rich playground for acausal interaction. I don't know what pieces of intuition about physics you refer to as useful for reasoning about acausal effects of human decisions though.

comment by Will_Newsome · 2012-01-01T07:15:55.472Z · score: -3 (5 votes) · LW(p) · GW(p)

(It does add up to atheism, as a modest claim about our own world, the "real world"

Not if there is evidence of angels and demons in our world, and you can interact with them in at least semi-predictably consequential ways. Which basically everyone believes except the goats, because everyone gets evidence except the goats. Doesn't it suck to have a mind-universe that actively encourages you to fall into self-sustaining delusions? Yes, yes it does.

ETA: Apparently it's 2012 now! My resolution: not to fall into self-sustaining delusion! Happy new year LW!

comment by [deleted] · 2011-09-11T01:15:01.268Z · score: 11 (11 votes) · LW(p) · GW(p)

Assigning a low prior to theism is an abuse of algorithmic probability theory.

Can you explain this? Because I've been operating under the following assumption:

It's enormously easier (as it turns out) to write a computer program that simulates Maxwell's Equations, compared to a computer program that simulates an intelligent emotional mind like Thor.
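For what it's worth, here's a rough sense of that asymmetry as a sketch (my own toy discretization and constants, not anything from the quoted argument): a minimal 1D finite-difference update for source-free Maxwell's equations in natural units is a couple of array operations, whereas nothing remotely that short specifies an intelligent emotional mind.

```python
import numpy as np

# Toy 1D FDTD (Yee-style) update for source-free Maxwell's equations,
# in units where c = 1, with Courant factor 0.5 for stability.
# The "physical law" itself is just the two update lines in the loop.
n_cells, n_steps = 200, 100
E = np.zeros(n_cells)   # electric field
H = np.zeros(n_cells)   # magnetic field
E[n_cells // 2] = 1.0   # initial pulse in the middle

for _ in range(n_steps):
    H[:-1] += 0.5 * (E[1:] - E[:-1])   # dH/dt ~ spatial derivative of E
    E[1:] += 0.5 * (H[1:] - H[:-1])    # dE/dt ~ spatial derivative of H
```

Run it and the initial pulse propagates outward; the point is only that the dynamical rule fits in two lines, while a program that simulates Thor would have to encode a whole mind.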

comment by Will_Newsome · 2011-09-11T23:22:52.539Z · score: 2 (4 votes) · LW(p) · GW(p)

In order to write a computer program that actually computes (rather than models) Maxwell's equations you have to write a program that writes out a physical universe, and if you want a program that describes Maxwell's equations then the interpretation you choose is more a matter of pragmatic decision theory than of algorithmic probability theory, at least in practice. (Bounded agents aren't exactly committing an error of rationality when they don't try to act like Homo Economicus; that would be decision theoretically insane.)

But anyway. Specific things in the universe don't seem to be caused by gods. Indeed, that'd be hella unparsimonious: "God chose to add some ridiculous number of bits into His program just to make it such that there was a 'Messiah gets crucified' attractor?". The local universe as a whole, on the other hand, is this whole other thing: there's the simulation argument.

comment by Will_Newsome · 2012-03-04T01:22:59.959Z · score: -1 (1 votes) · LW(p) · GW(p)

Your comment got voted up to +10 despite Eliezer's argument being a straightforward error of algorithmic probability; I don't know what to do about that and it stresses me out. Does anyone have ideas? It saddens me to see algorithmic probability so regularly abused on LW, but the few corrective posts on the matter, e.g. by Slepnev, don't seem to have permeated the LW memeplex, probably because they're too technical.

comment by [deleted] · 2012-03-04T02:48:31.535Z · score: 0 (0 votes) · LW(p) · GW(p)

I think you are slightly misinterpreting things. As you pointed out, the established memeplex does lean heavily in favor of Eliezer's position on algorithmic probability theory rather than Slepnev's. But that doesn't mean that all of the upvoters agree with Eliezer's position--some of them probably just want to see you answer my question "Can you explain this?". In fact, I would very much like to see this question answered thoroughly in a way that makes sense to me. Vladimir's posts are a great start, but lacking knowledge of algorithmic probability theory, I don't really know how to put all of it together.

comment by Will_Newsome · 2012-03-04T02:50:59.568Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks for the correction, that people are interested in it at least is a good sign.

comment by [deleted] · 2012-03-04T02:58:20.010Z · score: 0 (0 votes) · LW(p) · GW(p)

What we really need is a well-written gentle introduction to algorithmic probability theory that carefully and clearly shows how it works and what it does and doesn't imply.

comment by DSimon · 2011-09-11T04:25:28.865Z · score: 7 (7 votes) · LW(p) · GW(p)

theism already has a ton of evidence for it

Like I said in an earlier comment, you can't just state this without a justification to this audience. It may well be that there's a perfectly good justification for this statement, but we're at the wrong inferential distance for it. If you want us to update on this supposed evidence for theism you're going to have to guide us to it, via short, individually supported, straightforward steps.

[...]was the default belief of intelligent folk for thousands of years[...]

This is very weak evidence; consider ideas like the aether, or the standard whipping-boy 'round these parts, phlogiston.

comment by lessdazed · 2011-09-11T01:26:04.922Z · score: 5 (7 votes) · LW(p) · GW(p)

I do not think that theism has a ton of evidence for it. In particular, treating disparate things as simply "evidence for theism" is usually wrong. Things purported to specifically show the truth of Christianity, like Jesus' image on a shroud, can't be added to purported miracles worked by shamans sating warring gods by sacrificing chickens or humans, for example.

The more one theory is supported by the evidence, the more probability mass it steals from the others, including atheist theories - and by the time the dust settles after the first round of considering evidence, there are equally plausible theistic beliefs, each of which disqualifies many other, similarly theistic ones in proportion to its likelihood of being true. The best conclusion is that intelligent people are adept at believing untrue religious claims similar to the folk beliefs around them. Every theistic philosophy has to postulate massive credulity by otherwise intelligent humans about wrong religious claims.

A-gravity-ism isn't a theory of physics. I can't tell what it would even mean: a theory saying that everything expands in size, creating the illusion of things being attracted to each other in proportion to size, or a theory saying that this universe is a simulation run from one without gravity as a physical law, or a theory that everything has an essence that seeks other essences in a way unrelated to mass, or something else. The denial of anything other than an impossibly exhaustive conjunctive and disjunctive statement isn't a theory.

Gravity deniers may form a political party with adherents of all the theories I mentioned above to lobby against the "gravitational establishment". But their collective existence means that each has to include, as part of its psychological and sociological theory, that it is very easy to be deluded into believing a crackpot, unjustified theory of gravity. No particular theory, including any of theirs, gets the presumption of truth.

We begin with no presumption that mass is attracted to other mass inversely proportional to the square of the distance. We don't need one to end up assigning far higher odds than those we began with, because for that hypothesis there is truly a ton of evidence.
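The updating dynamic being described can be made concrete with a toy calculation (mine, not the commenter's; the 2:1 likelihood ratio and 20 observations are arbitrary illustrative numbers): starting from even odds - no presumption either way - repeated observations that each favor a hypothesis over its rivals drive the posterior toward certainty.

```python
# Toy Bayesian sketch: even with no starting presumption, "a ton of
# evidence" does all the work. Odds form of Bayes' rule: each observation
# multiplies the odds by its likelihood ratio.

odds = 1.0                      # even odds: no presumption either way
likelihood_ratio = 2.0          # each observation favors the hypothesis 2:1

for _ in range(20):             # twenty independent observations
    odds *= likelihood_ratio    # Bayes' rule in odds form

posterior = odds / (1 + odds)   # convert odds back to a probability

assert posterior > 0.999        # the evidence, not the prior, decides
```

After twenty such observations the odds are 2^20 to 1, so the posterior exceeds 0.999 despite the neutral start.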

We don't see any particular theory made uniquely improbable by having to postulate rampant confabulation and motivated cognition behind beliefs about gravity. Every theory, even the a-gravity-ist ones, postulates this too, so there is nothing that an a-gravity-ism is required to explain, or is superior at explaining, even if most intelligent people have been a-gravity-ists. This is particularly true when a-gravity-ism was the default belief.

And when something is found that better describes matter's behavior, such as relativity, we see how the new theory explains why the old one was a good approximation; the ton of evidence is not simply discarded.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-11T01:31:18.291Z · score: 14 (24 votes) · LW(p) · GW(p)

So I'm thinking to myself, around six years ago, "I can at least manage to publish timeless decision theory, right? That's got to be around the safest idea I have, it couldn't get any safer than that while still being at all interesting. I mean, yes, there's these possible ways you could let these ideas eat your brain but who could possibly be smart enough to understand TDT and still manage to fall for that?"

Lesson learned.

I spent a year or so diligently studying rationality as a SingInst Visiting Fellow followed by realizing that I was a few levels above nearly any other aspiring rationalist.

And this is what several levels above me looks like? I'm not omnipotent, yet, but I have a deed or two to my name at this point; for example, when I write Harry Potter fanfiction, it reliably ends up as the most popular HP fanfiction on the Internet. (Those of you who didn't get here following HPMOR can rule out selection effects at this point.) Several levels above me should make it noticeably easier to show your power in a third-party-noticeable fashion, and the fact that you can't do so should cause you to question yourself.

It's the opposite of the lesson I usually try to teach, but in this one case I'll say it: it's not the world that's mad, it's you.

comment by JoshuaZ · 2011-09-11T20:26:21.207Z · score: 11 (11 votes) · LW(p) · GW(p)

And this is what several levels above me looks like? I'm not omnipotent, yet, but I have a deed or two to my name at this point; for example, when I write Harry Potter fanfiction, it reliably ends up as the most popular HP fanfiction on the Internet. (Those of you who didn't get here following HPMOR can rule out selection effects at this point.) Several levels above me should make it noticeably easier to show your power in a third-party-noticeable fashion, and the fact that you can't do so should cause you to question yourself.

This doesn't obviously follow, to me. There are skill sets which aren't due to rationality. Your own skill sets may be due in part to better writing ability and general intelligence.

comment by homunq · 2011-09-12T02:15:20.180Z · score: 8 (8 votes) · LW(p) · GW(p)

Mad skillz doesn't imply rationality. Lack of demonstrable skillz does strongly decrease the probability of mad rashunalitea.

comment by cousin_it · 2011-09-12T09:45:44.455Z · score: 8 (10 votes) · LW(p) · GW(p)

Don't hold yourself responsible when people go funny in the head on TDT-related matters. Quantum mechanics and relativity have turned much more brains to mush, does that mean they shouldn't have been published?

comment by Vladimir_Nesov · 2011-09-12T11:26:29.879Z · score: 8 (8 votes) · LW(p) · GW(p)

That would be a valid argument against publishing, though of course a relatively weak one. Resist the temptation to make issues one-sided.

comment by Will_Newsome · 2011-09-11T22:58:34.534Z · score: 8 (16 votes) · LW(p) · GW(p)

You misinterpreted me, I wasn't claiming to be several levels above you. That's my fault for being unclear.

comment by CronoDAS · 2011-09-11T04:26:26.582Z · score: 7 (11 votes) · LW(p) · GW(p)

I mean, yes, there's these possible ways you could let these ideas eat your brain but who could possibly be smart enough to understand TDT and still manage to fall for that?"

Make something idiotproof and the universe will build a better idiot.

comment by Will_Newsome · 2011-09-11T23:07:15.626Z · score: 6 (8 votes) · LW(p) · GW(p)

I got my intuitions from ADT, not TDT, and I would've gotten all the same ideas from Anna/Steve even if you hadn't popularized decision theory. (The general theme had been around since Wei Dai in the early 2000's, no?) So you shouldn't learn that lesson to too great an extent.

comment by lessdazed · 2011-09-11T02:05:34.100Z · score: 6 (6 votes) · LW(p) · GW(p)

Reading charitably, he may mean you are a rationalist, and the other visiting fellows were peer aspiring rationalists. Also, he did say "nearly."

comment by Will_Newsome · 2011-09-11T23:03:18.891Z · score: 5 (7 votes) · LW(p) · GW(p)

Thanks; yeah, I wasn't writing carefully, but I didn't mean to say that "I am a significantly better rationalist than anybody else on the planet", I meant to say "there are important subskills of rationality where I seem to be at roughly the SingInst Research Fellow level of rationality and high above the Less Wrong poster level of rationality". My apologies for being so unclear.

comment by XiXiDu · 2011-09-11T12:39:58.241Z · score: 4 (4 votes) · LW(p) · GW(p)

It's the opposite of the lesson I usually try to teach, but in this one case I'll say it: it's not the world that's mad, it's you.

I don't think he is "mad", at least not if you press him enough. A few weeks ago I posted the following comment on one of his Facebook submissions:

Will, this off-topic, I'm curious. What would you do if 1.) any action would be ethically indifferent 2.) expected utility hypothesis was bunk 3.) all that really counted was what you want based on naive introspection?

I'm asking because you (and others) seem to increasingly lose yourself in logical implications of maximizing expected utility and ethical considerations.

Take care that you don't confuse squiggles on paper with reality.

His reply (emphasis mine):

Alexander, I don't think that's a particularly good model of my actual reasoning. The simple arguments I have for thinking about what I think about don't involve Pascalian reasoning or conjunctions of weird beliefs, and when it comes to policy I am one of the most vocal critics on LW of the unfortunate trend where otherwise smart people attempt to implement complicated policies due to the output of some incredibly brittle model, often without even taking into account opportunity costs or even considering any obviously better meta-level policies. That is insanity, and completely unrelated to any of the kinds of thinking that I do.

The reasons for my current obsessions are pretty simple, though it's worth noting that I am intentionally keeping my options very, very open.

Seed AI appears to be very possible to engineer. "Provably"-FAI isn't obviously possible to engineer given potential time constraints. If we could make a seed AI that was reflective enough, for example due to a strong founding in what Steve Rayhawk wants from a "Creatorless Decision Theory", and we had strong arguments about attractors that such an agent might fall into, and we had reason to believe that it might converge on something like FAI, then there might come a time when we should launch such a seed AI, even without all the proofs---for example due to being in a politically or existentially volatile situation.

Between BigNum-maximizer Goedel machine-like foomers and provably-FAI foomers, there's a long continuum of AIs that are more or less reflective on the source of their utility function and what it means that some things rather than some other things caused that particular utility function to be there rather than some other one. The typical SingInst argument that a given AGI will be some kind of strict literalist with respect to what it thinks is its utility function is simply not very strong. In fact, it even contradicts Omohundro's Basic AI Drives paper, which briefly addresses the topic: "For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit." Some small amount of reflection would seem to open the door for arbitrarily large amounts of reflection, especially if the AI is simultaneously modifying its decision theory---obviously we'd rather avoid an argument of degree where unchained intuitions are allowed to run amok.

We can make the debate more technical by looking at Goedel machines and program semantics. I have some relevant ideas, but perhaps Schmidhuber's talk about some Goedel machine implementations in a few days at AGI2011 will prove enlightening.

I'm already losing steam, so we'll just call that Part One. Part Two and maybe a Part Three will talk about: decision theories upon self-modification; decision theory in context; abstract models of optimization & morality; timeless control and game theory of the big red button; and probably other miscellaneous related ideas.

But after all that I don't really know how to answer your question. Wants... Even if somehow the thousand aversions that are shoulds were no longer supposed to compel me, they'd still be there, and I'd still be motivationally paralyzed, or whatever it is I am. I'd probably do the exact same things I'm doing now: living in Berkeley with my girlfriend, eating good food, regularly visiting some of the coolest people on Earth to talk about some of the most interesting ideas in all of history. All of that sounds pretty optimal as far as living on a budget of zero dollars goes. If the aversions were lifted, but I was still me, then I haven't a good idea what I'd do. I'd be happy to immerse myself in the visual arts community, perhaps, or if I thought I could be brilliant I'd revolutionize music cognition and write by far the best artificial composer algorithms. I'd go to various excellent universities for a year or two, and if somehow I found an easy way to make money along the way, e.g. with occasional programming jobs, then I'd frequently travel to Europe and then Asia. I imagine I'd spend very many months in Germany, especially Bavaria. Walking along green mountains or resting under trees in meadow orchards, ideally with a MacBook Pro and a drawing tablet handy. I'd do much meditation and probably progress very quickly, and at some point I expect I'd develop a sort of self-refuge. But I don't know, I'm just saying things that sound nice as if I can't have them, and I may very well end up doing most of them no matter what future I lead.

It seems to me that he's still with the rest of humanity when it comes to what he is doing on a daily basis and his underlying desires.

comment by Vladimir_Nesov · 2011-09-11T13:51:37.190Z · score: 7 (7 votes) · LW(p) · GW(p)

I don't think he is "mad", at least not if you press him enough.

(You argue that the madness in question, if present, is compartmentalized. The intended sense of "madness" (normal use on LW) includes the case of compartmentalized madness, so your argument doesn't seem to disagree with Eliezer's position.)

comment by Will_Newsome · 2011-09-11T23:59:03.613Z · score: 1 (3 votes) · LW(p) · GW(p)

((For those who haven't seen it yet: http://lesswrong.com/lw/2q6/compartmentalization_in_epistemic_and/ ))

comment by FeepingCreature · 2011-11-22T16:40:06.052Z · score: 0 (0 votes) · LW(p) · GW(p)

Belatedly.

"For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated"

Hold on. Motivated by what? If its objectives are only implicit in the structure, then why would these objectives include their self-preservation?

comment by Will_Newsome · 2011-09-11T23:56:13.999Z · score: 0 (2 votes) · LW(p) · GW(p)

BTW, this is neat: http://arxiv.org/PS_cache/arxiv/pdf/0804/0804.3678v1.pdf

It's an attempt to better unify causal graphs with algorithmic information. The sections about various Markov properties are, I think, very important for explaining differences between CDT and TDT, 'cuz you can talk more clearly about exactly where a decision problem can't be solved due to Markov condition limitations.

comment by CronoDAS · 2011-09-11T04:22:37.810Z · score: 12 (16 votes) · LW(p) · GW(p)

In the meantime I hit upon the theisms of Leibniz and Aquinas and other semi-neo-Platonistic academic-style philosophers, taking a computational decision theoretic perspective while trying to do justice to their hypotheses and avoiding syncretism. Ultimately I think that academic "the form of the good and the form of being are the same" theism is a less naive perspective on cosmology-morality than atheism is---you personally should expect to be at equilibrium with respect to any timeless interaction that ends up at-least-partially-defining what "right" is, and pretending like you aren't or are only negligibly watched over by a superintelligence---whether a demiurge, a pantheonic economy, a monolithic God, or any other kind of institution---is like asking to fail the predictable retrospective stupidity test. The actual decision theory is more nuanced---you always want to be on the edge of uncertainty, you don't want to prop up needlessly suboptimal institutions or decision policies even timelessly, &c.---but pragmatically speaking this gets swamped by the huge amount of moral uncertainty that we have to deal with until our decision theories are better equipped to deal with such issues.

In what sense is this paragraph supposed to be distinguishable from gibberish?

comment by lessdazed · 2011-09-11T04:52:53.453Z · score: 15 (15 votes) · LW(p) · GW(p)

Without Time Cube, your life right is voided.

It always comes to this, doesn't it?

comment by jimrandomh · 2011-09-13T21:47:09.179Z · score: 4 (4 votes) · LW(p) · GW(p)

I like how this is similar to my last few years but in reverse. I spent a year or so diligently studying rationality as a SingInst Visiting Fellow followed by realizing that I was a few levels above nearly any other aspiring rationalist.

My own perspective on this is that most of the aspiring rationalists in the community have their own specialties and niches, and that if I blind myself to skills other than my own, they all look lower-level, but that if I pay attention to what they're focused on then I see things I can learn from them. Or to put it more succinctly, their levels are in different character classes. While I certainly don't have faith in anyone's sanity, I don't feel like this should put me on an opposing side under ordinary circumstances. I now regret not having met you when I was in the Bay area for rationality bootcamp or Burning Man, but hopefully will get a chance to remedy that the next time I'm in the area.

comment by Will_Newsome · 2011-09-14T00:03:37.811Z · score: 0 (4 votes) · LW(p) · GW(p)

I agree with this perspective and in retrospect should really have emphasized the "there are many skills of rationality and I only claim to be excellent along those dimensions that I (probably after-the-fact) deem important, skills relating to building lots of models without getting attached to them and finding subtle ways in which concepts are dissatisfactory and must be improved" aspect of my alleged superiority to everything under the sun.

comment by Classobserver · 2011-09-15T22:54:24.261Z · score: 5 (7 votes) · LW(p) · GW(p)

These skills don't seem to actually slay any problem-monsters or do anything helpful, where wizards and clerics leave a trail of steaming corpses of those monster types. Your rare class seems to be an NPC one, like commoner or adept, which would give you a low CR.

comment by lessdazed · 2011-09-10T09:19:09.276Z · score: 2 (2 votes) · LW(p) · GW(p)

Is there non-dualist theism? If not, that's the bottleneck making dismissal of theism justified, though ignorance does not excuse inaccurate descriptions of theism.

comment by Mitchell_Porter · 2011-09-10T09:51:19.799Z · score: 12 (12 votes) · LW(p) · GW(p)

My problem with Will's outlook is that if we are indeed being "watched over by a superintelligence", it doesn't appear to care about us in any very helpful way. Our relationship to it is therefore more about survival than it is about morality. According to the scenario, there is some thing out there which is all-powerful, whose actions depend partly on our actions, and which doesn't care about {long list of evolutionary and historical holocausts}, in any way that we would recognize as caring. Clearly, if we had any idea of the relationship between our actions and its actions, it would be in your interest, first of all, to act so that it would not allow various awful things to happen to you and anyone you care about, and second, to act so that you might gain some advantage from its powers.

It appears that the only distinctive reason Will has for entertaining such a scenario is the usual malarkey about timeless game-theoretic equilibria... A while back, I was contemplating a post, to be called "Towards a critique of acausal reason", which was going to mention three fallacies of timeless decision theory: acausal democracy, acausal trade, acausal blackmail. The last two arise from a fallacy of selective attention: to believe them possible, you must only pay attention to possible worlds which only care about you in a highly specific way. But for any possible world where there is an intelligence simulating your response and which will do X if you do Y, there is another possible world where there is an intelligence which will do X if you don't do Y. And the actual multiplicity of worlds in which intelligences make decisions on the basis of decisions made by agents in other possible worlds that they are simulating is vanishingly small, in the set of all possible worlds. Why the hell would you base your decision, regarding what to do in your own reality, on the opinions or actions of a possible entity in another world? You may as well just flip a coin. The whole idea that intelligences in causally disjoint worlds are in a position to trade, bargain, or arrive at game-theoretic equilibria is deeply flawed; it's only a highly eccentric agent which "cares" strongly about events which are influenced by only an extremely small fraction of its subjective duplicates (its other selves in the space of possible worlds). So some of these "eccentric agents" may genuinely "do deals", but there is no reason to think that they are anything more than a vanishingly small minority among the total population of the multiverse. (Obviously it would be desirable for people trying to work rigorously in TDT to make this argument in a rigorous form, but I don't see anything that's going to change the basic conclusion.)

So that leaves us in the more familiar situation, of possibly being in a simulation, or possibly facing the rise of a superintelligence in the near future, or possibly being somewhere in the guts of a cosmic superintelligence which either just tolerates our existence because we haven't crossed thresholds-of-caring yet, or which has a purpose for us which extends to tolerating the holocausts I mentioned earlier. All of this suggests that our survival and well-being are on the line, but it doesn't suggest that we are embedded in an order that is moral in any conventional sense.

comment by lessdazed · 2011-09-10T10:57:22.239Z · score: 3 (3 votes) · LW(p) · GW(p)

acausal democracy

What does that even mean? Does that mean something like: hypothetical lunar farmers in a hypothetical lunar utopia should send down some ore to Earth, and that actual people hundreds of years earlier in a representative body voted 456-450 not to fund a lunar expedition even with a rider to the bill requiring future farmers to send down ore, but the farmer votes from the future + 450 > 456? So the farmers "promised" to send ore?

acausal blackmail

It seems more like a real self-inflicted wound than a fallacy or fake blackmail to me; perhaps we don't disagree. It's something that is real if one has certain patterns of mind that one could self-modify away from, I think.

comment by Mitchell_Porter · 2011-09-10T11:16:16.418Z · score: 2 (4 votes) · LW(p) · GW(p)

By "acausal democracy", I mean the attempt to justify the practice of democracy - specifically, the act of voting - with timeless decision theory. No-one until you has attempted to depict a genuinely acausal democracy :-) This doesn't involve the "fallacy of selective attention", it's another sort of error, or combination of errors, in which TDT reasoning is supposed to apply to agents with only a bare similarity to yourself. See discussion here for a related example.

I also think we agree regarding acausal blackmail, that for a human being it can only be a mistake. Only one of those "eccentric agents" with a very peculiar utility function or decision architecture could rationally be susceptible to acausal blackmail - its decision procedure would have to insist that "selective attention" (to just those possible worlds where the specific blackmail threat is being made) is important, rather than attending to other worlds where contrary threats are being made, or to worlds where the action under consideration will be rewarded rather than punished, or to worlds where the agent is simply a free agent not being threatened or enticed by a captor who cares about acausal dealmaking (and those worlds should be in the vast majority).

comment by Will_Newsome · 2011-09-10T11:19:31.443Z · score: 1 (1 votes) · LW(p) · GW(p)

Right, humans can't even do straightforward causal reasoning, let alone weird superrational reasoning.

comment by Wei_Dai · 2012-04-24T22:39:35.638Z · score: 2 (2 votes) · LW(p) · GW(p)

I brought up a similar objection to acausal trade, and found Nesov's reply somewhat convincing. What do you think?

comment by Mitchell_Porter · 2012-04-24T23:37:44.383Z · score: 2 (2 votes) · LW(p) · GW(p)

We are now advanced enough to tackle this issue formally, by trying to construct an equilibrium in a combinatorially exhaustive population of acausal trading programs. Is there an acausal version of the "no-trade theorem"?

comment by Vladimir_Nesov · 2012-04-27T22:13:14.240Z · score: 0 (0 votes) · LW(p) · GW(p)

I brought up a similar objection to acausal trade, and found [Nesov_2010]'s reply somewhat convincing.

His reply doesn't address the problem of the potentially prohibitive difficulty of acausal trade; it merely appeals to its theoretical possibility. Essentially, the argument is that "there is still a chance", but that's not enough:

"between zero chance of becoming wealthy, and epsilon chance, there is an order-of-epsilon difference"

comment by Rain · 2011-09-11T13:54:00.995Z · score: -1 (1 votes) · LW(p) · GW(p)

My problem with Will's outlook is that if we are indeed being "watched over by a superintelligence", it doesn't appear to care about us in any very helpful way.

The only "plausible" (heh) scenario I can come up with is that a future civilization developed backward time travel, but to avoid paradox it required full non-interaction, so it developed a means of close observation without changing that which is observed, and used it to upload everyone upon their information-theoretic death.

comment by Will_Newsome · 2011-09-10T12:36:32.716Z · score: -5 (9 votes) · LW(p) · GW(p)

I don't think I really have an outlook, I just notice that I am very confused about a lot of things that other people are ignoring. And my social role is different from my betting odds. (I notice I am confused about whether or not this is justified, about what meta-level policy I should have for situations like this.)

((((I feel compelled to stir up drama for people because they are too complacent to stir up drama for themselves. Unfortunately it is hard to stir up drama by going meta.))))

comment by Will_Newsome · 2011-09-10T11:53:08.808Z · score: -5 (9 votes) · LW(p) · GW(p)

You're talking about theodicy; have you read Leibniz on the subject? The most existent of all possible worlds, the world that takes the least bits to specify, because existence is good... Anyway, I find it plausible that the universe is weird and that miracles do happen, but once luck reveals clearly how its decision policy works you get Goodhart's law problems, so it lies low. Bow chicka bow wow, God of the gaps FTW.

comment by Mitchell_Porter · 2011-09-10T12:14:12.904Z · score: 5 (5 votes) · LW(p) · GW(p)

In A History of Western Philosophy, Bertrand Russell wrote of Leibniz that

His best thought was not such as would win him popularity, and he left his records of it unpublished in his desk. What he published was designed to win the approbation of princes and princesses. The consequence is that there are two systems of philosophy which may be regarded as representing Leibniz: one, which he proclaimed, was optimistic, orthodox, fantastic, and shallow; the other, which has been slowly unearthed from his manuscripts by fairly recent editors, was profound, coherent, largely Spinozistic, and amazingly logical. It was the popular Leibniz who invented the doctrine that this is the best of all possible worlds (to which F. H. Bradley added the sardonic comment "and everything in it is a necessary evil"); it was this Leibniz whom Voltaire caricatured as Doctor Pangloss. It would be unhistorical to ignore this Leibniz, but the other is of far greater philosophical importance.

and Russell seems to think that "best of all possible worlds" is the shallow public theodicy, and "most existent" is the private theodicy, and they are not the same thing - since privately (according to Russell's account), Leibniz speculated that the world which gets to exist is the one which has the most entities in it (maximum number of entities logically capable of coexisting). But then Russell also writes that Leibniz may have considered this a sign of God's goodness - it's good to exist, and God makes the world with the most possible things... I am much more sympathetic to Nietzsche's metaphysics, as described in the posthumous notes collected in The Will to Power, and his skeptical analysis of the psychology behind philosophies which set forth identities such as Reason = Virtue = Happiness. Nietzsche to my knowledge did not speculate as to why there is something rather than nothing, one reason why Heidegger could see Nietzsche's ontology as the final stage in the forgetting of Being, but his will-to-power analysis is plausible as an explanation of why beings-who-happen-to-exist end up constructing metaphysical systems which say that to be is good, and to be is inevitable, so goodness is inevitable.

comment by Jack · 2011-09-10T12:19:55.992Z · score: 2 (2 votes) · LW(p) · GW(p)

Nietzsche's metaphysics, as described in the posthumous notes collected in The Will to Power

The Will to Power is universally regarded as not representative of Nietzsche's views.

comment by Mitchell_Porter · 2011-09-10T12:34:16.711Z · score: 2 (2 votes) · LW(p) · GW(p)

So what parts would he have disagreed with?

comment by Jack · 2011-09-11T14:58:17.800Z · score: 2 (2 votes) · LW(p) · GW(p)

So Nietzsche wrote a bunch of stuff in notebooks and even started writing a book called "The Will to Power". He abandoned it but used a lot of the ideas in his last few works. Upon his death his anti-semitic sister arranged the notebooks and abandoned text into "The Will to Power". Much of it is in line with stuff he published, and that stuff, it is fair to say, is representative of his views. But where TWTP says things Nietzsche didn't include in his later works (which were written after the notes used to create TWTP)... it's likely that he didn't publish those ideas because he ended up not liking them for whatever reason. Plus, the editorial decisions made by his sister were made by his sister... for example, Nietzsche made lots of organizational outlines, only one of which had "Discipline and Breeding" as a book title; that that outline was chosen in lieu of others is a result of his sister's ideology (which Nietzsche opposed).

I doubt there is anything in there that is so far away from Nietzsche's actual views that you aren't equipped to talk about Nietzsche (the stuff you talk about above is certainly something he'd be down with). I can't tell you what specifically is in TWTP that isn't in his other books because I haven't read it; it's usually just something read by Nietzsche scholars.

(Looking at this comment it kind of sounds like I'm playing status games "You read the wrong book." etc. I don't mean that, you probably have at least as good an understanding of Nietzsche's views as I do. Mainly I'm just recommending that you be careful about ascribing all of TWTP to Nietzsche and pointing this out so that people don't read your comment and then go out and buy TWTP in order to understand Nietzsche. And of course, just because Nietzsche didn't agree with everything in the book doesn't mean what's in there aren't good ideas.)

comment by Mitchell_Porter · 2011-09-12T03:54:13.072Z · score: 2 (4 votes) · LW(p) · GW(p)

I agree with much of what you say, except

But where TWTP says things Nietzsche didn't include in his later works (which were written after the notes used to create TWTP)... it's likely that he didn't publish those ideas because he ended up not liking them for whatever reason.

There are sections of TWTP - e.g. "The Mechanical Interpretation of the World" - which cover topics simply not addressed in any of Nietzsche's finished works. (By the way, the version of TWTP that I'm familiar with is Walter Kaufmann's.) So all we can say is that they lack the final imprimatur of appearing in a book "author"ized by Nietzsche himself. There's no evidence here of a change of opinion. It is at least possible that he would subsequently have disagreed with some of the thoughts anthologized in TWTP - though presumably he agreed with them at the time he wrote them.

On at least one subject - the meaning of the "eternal recurrence" - I believe TWTP shows that a lot of Nietzsche scholarship has been on the wrong track. Many interpreters have said that the eternal recurrence is a state of mind, or a metaphor, anything but a literal recurrence. But in these notes, Nietzsche shows himself to be interested in eternal recurrence as a physical hypothesis. He reasons: the universe is finite, it has a finite number of possible states, if any state was an end state it would already have ended, therefore it recurs eternally. He thinks this is the world-picture that 20th-century science will produce and endorse. And then - this is the part I think is hilarious - he thinks that lots of people will kill themselves because they can't bear the thought of their lives being repeated infinitely often in the future cycles of time. The "superman" is supposed to be someone who finds the eternal recurrence a joyous thing, because they love their life and the whole of existence, and the eternal recurrence provides their existence with a sort of eternity that is otherwise not available in a universe of relentless flux. In this regard Nietzsche's futurology was doubly wrong - first, that isn't the world-picture that science produces; second, it's only a very rare individual who would take this claim - the alleged fact of existing again in a distant future aeon - seriously enough to make it the basis for choosing life or death. But I have the same appreciation for the imagination behind this piece of Nietzschean cultural futurology, as I do for the uniquely weird worldviews that are sometimes exhibited on LW. :-)
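Nietzsche's reasoning here is essentially the pigeonhole principle applied to deterministic dynamics: a system with finitely many states that never halts must eventually revisit some state, and from that point its history repeats forever. A minimal sketch of just that combinatorial point (the toy state space and update rule below are arbitrary choices of mine for illustration, not anything in Nietzsche):

```python
# Toy illustration of the recurrence argument: any deterministic
# dynamics on a finite state space must eventually revisit a state
# (pigeonhole principle), after which history repeats forever.

def find_recurrence(step, initial_state):
    """Iterate `step` from `initial_state` until a state repeats.

    Returns (time of first repeated state, cycle length)."""
    seen = {}  # state -> first time it was visited
    state, t = initial_state, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]

# A tiny "universe" with only 1000 possible states and a fixed rule.
step = lambda s: (s * s + 1) % 1000
first, period = find_recurrence(step, 42)
print(first, period)  # the trajectory enters a cycle and recurs eternally
```

Of course, the premises are where the argument fails for our universe, which matches the point above: the world-picture science actually produced is not a finite, deterministic, eternally running state machine.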

comment by Jack · 2011-09-12T04:33:58.659Z · score: 1 (1 votes) · LW(p) · GW(p)

Well, they were personal notebooks, so who knows how speculative he was being. The key thing is, this wasn't what he was working on when he died. Published works intervened between TWTP and his death. That, combined with the sheer implausibility of the metaphysics you've described, might suggest he wasn't that committed to the whole thing ;-). It sounds fascinating though.

He reasons: the universe is finite, it has a finite number of possible states, if any state was an end state it would already have ended, therefore it recurs eternally.

Are there any arguments for these claims? I'm fascinated by the (often very compelling!) arguments past generations had for how the physical world had to be. Aristotle is the best at this.

comment by Will_Newsome · 2011-09-10T12:28:41.845Z · score: 0 (0 votes) · LW(p) · GW(p)

(to which F. H. Bradley added the sardonic comment "and everything in it is a necessary evil")

Weird, I'm pretty sure that was in the original.

comment by lukeprog · 2011-09-11T00:10:31.374Z · score: 2 (2 votes) · LW(p) · GW(p)

And I thought it was Voltaire's satire of Leibniz.

comment by Will_Newsome · 2011-09-11T00:12:08.274Z · score: 0 (2 votes) · LW(p) · GW(p)

Here: http://www.class.uidaho.edu/mickelsen/texts/Leibniz%20-%20Theodicy.htm

comment by lukeprog · 2011-09-11T00:29:07.634Z · score: 1 (1 votes) · LW(p) · GW(p)

Oh. Yes, the idea was in Leibniz, but the specific quote is Voltaire's, I believe.

comment by Will_Newsome · 2011-09-11T00:31:41.775Z · score: 0 (2 votes) · LW(p) · GW(p)

Speaking of Voltaire, his theism is a really good example of meta-contrarianism.

comment by Will_Newsome · 2011-09-11T00:29:55.184Z · score: 0 (0 votes) · LW(p) · GW(p)

Ah, got it.

comment by Will_Newsome · 2011-09-10T09:38:04.935Z · score: 0 (4 votes) · LW(p) · GW(p)

E.g. this is what most theism actually looks like: http://plato.stanford.edu/entries/divine-simplicity/ . A lot of it is simply hypotheses about attractors for superintelligences and the Platonic algorithms that they embody. Trust me, I am not just being syncretic.

comment by lessdazed · 2011-09-10T10:45:17.516Z · score: 6 (6 votes) · LW(p) · GW(p)

this is what most theism actually looks like

Please make a claim. Are you saying that if one were to take a proxy for quality like citations to papers/capita of religious studies branches of universities, or the top theological seminaries attached to the most competitive Ivy League Schools, or similar, you are 95% confident that at least 70% of the theist professors believe something like this?

Or is it a stronger claim? With 50% confidence, what percentage of counties and county-equivalents in the United States have most self-identified theists or spiritualists or whatever believing something like this? 50%? 10%?

In what percentage are there at least ten such people?

comment by Will_Newsome · 2011-09-10T11:08:55.103Z · score: 0 (6 votes) · LW(p) · GW(p)

I don't see how that is the claim at issue. Most people are incompetent. That tells us little about what theism is. How would knowing the answer tell us anything useful about whether or not theism itself is or isn't a tenable philosophical position? I really dislike focusing on individual people, I'd rather look at memes. Can I guess at how many of the SEP's articles on theism are not-obviously-insane and not just if-a-tree-falls debates? I think that question is much more interesting and informative. I'd say... like, 30%.

comment by lessdazed · 2011-09-10T11:15:23.397Z · score: 5 (5 votes) · LW(p) · GW(p)

Why call it "theism"?

comment by Will_Newsome · 2011-09-10T11:23:05.738Z · score: 2 (10 votes) · LW(p) · GW(p)

That's what the Stanford Encyclopedia of Philosophy calls it. Most biologists are mediocre at biology (many are creationists, God forbid!); that doesn't mean we should call the thing that good biologists do by some other name. (If this is a poor analogy I don't immediately see how, but it does have the aura of an overly leaky analogy.) If you asked "why reason in terms of theism instead of decision theory?" then I'd say "well we should obviously reason in terms of decision theory; I'd just prefer we not have undue contempt for an interesting memeplex that we're not yet very familiar with".

comment by lessdazed · 2011-09-10T11:56:21.044Z · score: 5 (5 votes) · LW(p) · GW(p)

Biology is the repository of probable information left over after putting data and experiments through the sieve of peer review (the process is also "biology"). The more important ideas get parsed more. Mediocre enough biologists don't add to biology.

Theology starts with a belief system and is the remnants that, by their own lights, theologians have not discarded. The process of discarding is also called theology. Unsophisticated people are likely to fail to see what is wrong with more of the original belief set than sophisticated ones do; they don't add to showing what is wrong with the belief pile. It isn't a crazy analogy, but it's not quite symmetrical.

To call this theism says more about the language than the beliefs you describe. Is the word closest in idea-space to this memeplex theism? OK, maybe, but it could have been "hunger for waffles and other, lesser breakfast foods" with a few adjustments to the history without adjusting anything at all about the ideas. These beliefs didn't originate as the unfalsifiable part of an arbitrary cult focused on breakfast, as it happens.

an interesting memeplex

It's interesting as the least easy to falsify, arguably unfalsifiable core of motivated, unjustified belief. It's not interesting as something at all likely to be true.

comment by Will_Newsome · 2011-09-10T12:05:21.537Z · score: -3 (5 votes) · LW(p) · GW(p)

it's interesting as the least easy to falsify, arguably unfalsifiable core of motivated, unjustified belief. It's not interesting as something at all likely to be true.

I disagree; certain ideas that theism originated are as likely to be true as certain ideas about decision theory are likely to be true, because they're isomorphic.

You are reasoning from cached priors without bothering to recompute likelihood ratios (not like you're actually looking at evidence at all; did you read the article on divine simplicity? Do you have a knockdown reason that I should ignore that debate other than "stupid people believe in God, therefore belief in God is stupid"?). You are ignoring evidence. "Ignore": ignorance. You are ignorant about theism. That's cool; you don't have all the time in the world. But don't confidently assert that something is not likely to be true when you clearly know very little about it. This is an important part of rationality.

Edit: In other words, you do not have magical inductive biases and you have seen significantly less evidence than I have. This should be more than enough to cause you to be hesitant.

comment by lessdazed · 2011-09-10T12:14:00.590Z · score: 5 (5 votes) · LW(p) · GW(p)

You are ignorant about theism. That's cool; you don't have all the time in the world. But don't confidently assert that something is not likely to be true when you clearly know very little about it. This is an important part of rationality.

You confidently assert my ignorance. That assertion is notable.

you have seen significantly less evidence than I have.

You're much more confident of this than I am. You should be more hesitant.

comment by Will_Newsome · 2011-09-10T12:23:12.889Z · score: 0 (0 votes) · LW(p) · GW(p)

Duly noted. Can we share a few representative reasons? What do you think I don't already think you know about why "theism" (a word that may soon need to be tabooed) isn't worth looking into?

comment by Will_Newsome · 2011-09-10T10:11:15.497Z · score: 0 (2 votes) · LW(p) · GW(p)

I can briefly try to translate the divine simplicity thing: "The perfectly reflective Platonic decision algorithm that performs optimally on all optimization problems doesn't 'possess' the quality of optimizerness---it is optimization, just as it is reflectivity. Being a Platonic algorithm, it does not have inputs or outputs, but controls all programs ambiently. It has no potentiality, only actuality: everything is at equilibrium." And so on and so forth. (Counterarguments would be like "what, there is a sense of equilibrium that implies that this algorithm is a decision theoretic zombie, I think you're using a non-intuitive definition of 'equilibrium'" and things like that, or something. It's better to talk in terms of decision theory but that doesn't mean they're not actually equivalent. The parts that don't boil down to predictions about decision theory tend to be just quibbling over ways of carving reality, which is often informative but not when the subject matter is so politically charged.)

comment by jimrandomh · 2011-09-13T21:36:04.206Z · score: 8 (8 votes) · LW(p) · GW(p)

I can briefly try to translate the divine simplicity thing: "The perfectly reflective Platonic decision algorithm that performs optimally on all optimization problems doesn't 'possess' the quality of optimizerness---it is optimization, just as it is reflectivity. Being a Platonic algorithm, it does not have inputs or outputs, but controls all programs ambiently. It has no potentiality, only actuality: everything is at equilibrium." And so on and so forth.

I think you need to take a big step back and consider what you've studied and what you've come up with. I'm not sure where divine simplicity fits in your worldview exactly, but in the course of my own decision theory studies, I came up with an issue that seems to shoot down that concept entirely: there can be no decision algorithm that performs optimally on all optimization problems, because there are optimization problems for which the solution space is infinite, and there is an infinite chain of progressively better solutions. Worse, the universe we presently occupy appears to be infinite, and to have such chains for almost all sensible optimization criteria. The best we can do, decision-theory wise, is to bite off special cases, come up with transforms and simplifications to make those cases more broadly applicable, and fall back on imperfect heuristics for the rest.
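The "infinite chain of progressively better solutions" point can be made with the simplest possible case: maximize f(n) = n over the natural numbers. Every candidate answer is strictly dominated by another, so no algorithm can output an optimal solution to this problem, let alone perform optimally on all problems. A toy sketch of that observation (the function names are mine, purely for illustration):

```python
# The simplest optimization problem with no optimal solution:
# maximize f(n) = n over the natural numbers.

def f(n: int) -> int:
    return n  # the objective to maximize

def improve(candidate: int) -> int:
    return candidate + 1  # always yields a strictly better solution

# Whatever any decision algorithm proposes, it is strictly beaten.
for proposed in [0, 10, 10**6]:
    assert f(improve(proposed)) > f(proposed)
print("every candidate is dominated; no optimum exists")
```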

But there's a much bigger issue here. It looks to me like you've taken a few batches of concentrated confusion - the writings of old philosophers - and invented a novel interpretation to give it meaning. You then took these reinterpretations and mixed them into what started out as a sensible worldview. You're talking about studying Aquinas and Leibniz, and this makes me very worried, because my longstanding belief is that these authors, and most others of their era, are cognitive poison that will drive you insane. Furthermore, your writings recently look to me like evidence that this may actually be happening. You should probably be looking to consolidate your findings, and to communicate them.

comment by Will_Newsome · 2011-09-13T23:56:38.705Z · score: -1 (3 votes) · LW(p) · GW(p)

Divine simplicity is a hypothesis, what you say is strong evidence against that hypothesis. But I think it's still a coherent hypothesis. At the very least we can talk about Goedelian stuff or NFL theorems to counterargue a bunch of the stronger 'omnipotence, omniscience' stuff... but things are all weird when you're that abstract; you can just say, "okay, well, this agent is multipartite and so even if one part has one Chaitin's constant this other part has another Chaitin's constant and so you can get around it", or something, but I doubt that actually works or makes sense. On the other hand it's always really unclear to me when the math is or isn't being used outside its intended domain. Basically I notice I am confused when I try to steel man "optimal decision policy" arguments, for or against. (There's also this other thing that's like "optimal given boundedness" but I think that doesn't count.)

I disagree about Aquinas and Leibniz. I see them as putting forth basically sane hypotheses that are probably wrong but probably at least a little relevant for our decision policies. I don't think that theology is a useful area of study, not when we have decision theory, but I really don't think that Leibniz especially was off track with his theology. (I dunno if you missed my comments about how he was really thinking in terms of the intuitions behind algorithmic information theory?)

comment by CarlShulman · 2011-09-14T00:41:28.681Z · score: 8 (10 votes) · LW(p) · GW(p)

I have significant familiarity with Aquinas, and I do not see anything worth reading Aquinas for, save perhaps arguing with theists. Insofar as there are interesting ideas in his writing, they are better presented elsewhere (particularly in modern work with the benefit of greatly improved knowledge and methods), with greater clarity and without so much nonsense mixed in. Recommending that people read Aquinas, or castigating them for not having read Aquinas, seems like a recipe for wasting their time.

comment by Will_Newsome · 2011-09-14T02:02:50.708Z · score: 1 (5 votes) · LW(p) · GW(p)

(I agree with this.)

comment by lessdazed · 2011-09-10T10:34:28.297Z · score: 4 (4 votes) · LW(p) · GW(p)

I saw this after making my Plato's theory of forms comment at 10:19:54AM.

This is what I thought the article was saying.

the subject matter is so politically charged

Everyone seems to be operating under something like the law of conservation of ninjutsu here. You seem to be perhaps the worst offender, with gratuitous offensiveness and the like being approximately equal among all of the few theists here and the many atheists.

In this thread alone:

Sadly Less Wrong seems to know absolutely nothing about theism, which ends up with me repeatedly facepalming when people feel obliged to demonstrate how incredibly confident they are that theism is stupid and worth going out of their way to signal contempt for.

It tends to be like, yeah, we get it, minds reside in brains, neuroscience is cool and shit, but repeatedly bringing it up as if nobody had ever heard that before is a facepalm-inducing red herring.

theism is a less naive perspective on cosmology-morality than atheism is

Also bad is how you characterize what LW thinks; this seems like an artificial way to pretend you have the only or best-informed position, by averaging many people on here with the people who don't know, and don't care to know, about things that the best evidence they have shows are elaborate rationalizations and meta-hipsterism by intellectuals.

comment by Will_Newsome · 2011-09-10T09:29:27.035Z · score: 0 (4 votes) · LW(p) · GW(p)

Yes, lots of it. E.g. Leibniz's monadology is monist (obviously); it's equivalent to computationalism in fact. But note that it's not like dualism is well-understood 'round these parts either. It's really hard to find a way in which you can say that a property dualist is wrong. It tends to be like, yeah, we get it, minds reside in brains, neuroscience is cool and shit, but repeatedly bringing it up as if nobody had ever heard that before is a facepalm-inducing red herring.

comment by lessdazed · 2011-09-10T10:19:54.532Z · score: 1 (1 votes) · LW(p) · GW(p)

It seems that monadology relies on something like Plato's theory of forms. That fills the role usually played by dualism in theism. Is there theism without that?

comment by Will_Newsome · 2011-09-10T11:15:29.509Z · score: 0 (4 votes) · LW(p) · GW(p)

Theism without computationalism? It's not popular, but most Less Wrong folk are computationalists AFAIK. Hence the "timeless decision theory" and "Tegmark" and "simulation argument" memes floating around. I don't see how a computationalist can ignore theism on the grounds that it claims that abstract things exist.

comment by lessdazed · 2011-09-10T11:17:09.927Z · score: 4 (4 votes) · LW(p) · GW(p)

I do not think Plato's forms are equivalent to computationalism.

comment by Jack · 2011-09-10T11:25:18.749Z · score: 1 (1 votes) · LW(p) · GW(p)

Modern platonism is just the view that abstract objects exist.

comment by lessdazed · 2011-09-11T02:21:05.703Z · score: 4 (4 votes) · LW(p) · GW(p)

Do they causally do anything?

comment by Jack · 2011-09-11T13:20:16.671Z · score: 1 (1 votes) · LW(p) · GW(p)

Of course not.

comment by Will_Newsome · 2012-03-04T01:15:49.977Z · score: 0 (2 votes) · LW(p) · GW(p)

What? Of course abstract objects have causal influence... why do you think people don't think they do?

comment by Jack · 2012-03-04T08:16:32.086Z · score: 1 (1 votes) · LW(p) · GW(p)

Because I've studied metaphysics? It's not even a quirky feature of abstract objects; it's often how they are defined. Now that distinction may be merely an indexical one-- the physical universe could be an abstraction in some other physical universe and we just call ours 'concrete' because we're in it. But the distinction is still true.

If you can give an instance of an abstract object exerting causal influence that would be big news in metaphysics.

(Note that an abstract object exerting causal influence is not the same as tokens of that abstraction exerting causal influence due to features that the token possesses in virtue of being a token of that abstract object. That is, "Bayes Theorem caused me to realize a lot of my beliefs were wrong" is referring to the copy of Bayes Theorem in your brain, not the Platonic entity. There are also type-causal statements like "Smoking causes cancer", but these are not claims of abstract objects having causal influence, just abstractions over individual, token instances of causality. None of this, or my assent to lessdazed's question, reflects a disparaging attitude toward abstract objects. You can't talk about the world without them. They're just not what causes are made of.)

comment by Will_Newsome · 2012-03-04T08:24:48.696Z · score: 0 (0 votes) · LW(p) · GW(p)

Okay, thanks; right after commenting I realized I'd almost certainly mixed up my quotation and referent. (Such things often happen to a computationalist.)

ETA: A few days ago I got the definition of moral cognitivism completely wrong too... maybe some of my neurons are dying. :/

comment by Will_Newsome · 2011-09-10T11:30:52.939Z · score: 0 (2 votes) · LW(p) · GW(p)

Metaphysics of abstract processes: Pythagoras -> Leibniz -> Turing. Platonism -> monadology -> algorithmic information theory.

Math and logics: Archimedes et al -> Leibniz -> Turing. Logic -> symbolic logic -> theory of computation.

Philosophy of cognition: (haven't researched yet) -> Leibniz -> Turing. ? -> alphabet of thought -> Church-Turing thesis.

Computer engineering: Archimedes -> Pascal-Leibniz -> Turing. Antikythera mechanism -> symbolic calculator -> computer.

comment by Jack · 2011-09-10T11:54:35.308Z · score: 1 (1 votes) · LW(p) · GW(p)

I think you're vastly overemphasizing the historical importance of Leibniz.

comment by Will_Newsome · 2011-09-10T12:12:53.950Z · score: 0 (0 votes) · LW(p) · GW(p)

True, but I think only in the same sense that everyone vastly overemphasizes the importance of Babbage. They both made cool theoretical advances that didn't have much of an effect on later thinking. This gives a sort of distorted view of cause and effect, but the counterfactual worlds are actually worth figuring into your tale in this case. Wow, that would take too long to write out clearly, but maybe it kinda makes sense. (Chaitin actually discovered Leibniz after he developed his brand of algorithmic information theory; but he was like 'ah, this guy knew where it was at' when he found out about him.)

comment by RichardKennaway · 2011-09-10T20:17:47.705Z · score: 1 (1 votes) · LW(p) · GW(p)

OTOH, Wiener already in 1948 explicitly saw the digital computer as the fulfilment of Leibniz's calculus ratiocinator. (Quoted on Wiki here, full text (maybe paywalled) here.)

comment by Jack · 2011-09-10T12:14:31.424Z · score: 1 (1 votes) · LW(p) · GW(p)

Chaitin actually discovered Leibniz after he developed his brand of algorithmic information theory; but he was like 'ah, this guy knew where it was at' when he found out about him.

Interesting! You have a cite?

comment by Will_Newsome · 2011-09-10T12:24:53.308Z · score: 3 (3 votes) · LW(p) · GW(p)

This is the original essay I read, I think: http://evans-experientialism.freewebspace.com/chaitin.htm

I should point out that Leibniz had the two key ideas that you need to get this modern definition of randomness, he just never made the connection. For Leibniz produced one of the first calculating machines, which he displayed at the Royal Society in London, and he was also one of the first people to appreciate base-two binary arithmetic and the fact that everything can be represented using only 0s and 1s. So, as Martin Davis argues in his book The Universal Computer: The Road from Leibniz to Turing, Leibniz was the first computer scientist, and he was also the first information theorist. I am sure that Leibniz would have instantly understood and appreciated the modern definition of randomness.
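The "modern definition of randomness" Chaitin refers to is incompressibility: a string is algorithmically random if no description much shorter than the string itself generates it. The true shortest-description length (Kolmogorov complexity) is uncomputable, but any general-purpose compressor gives a computable upper bound. A rough sketch, where using zlib as the stand-in compressor is my own illustrative choice, not Chaitin's:

```python
import random
import zlib

def description_length_bound(data: bytes) -> int:
    # Compressed size is a computable *upper bound* on the length of
    # the shortest description of `data`; the true bound (Kolmogorov
    # complexity) is uncomputable.
    return len(zlib.compress(data, 9))

patterned = b"01" * 500  # 1000 bytes with an obvious short description
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # looks patternless

print(description_length_bound(patterned))  # far below 1000
print(description_length_bound(noisy))      # roughly 1000 or slightly above
```

Note that `noisy` is only pseudo-random: it really does have a short description (this program plus the seed), it merely looks incompressible to zlib. That gap between "this compressor can't shrink it" and "no program can" is exactly why true algorithmic randomness is uncomputable.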

comment by Will_Newsome · 2011-09-10T12:18:47.627Z · score: 0 (0 votes) · LW(p) · GW(p)

It'll take a few minutes, Googling Leibniz+Chaitin gives a lot of plausible hits.

comment by Will_Newsome · 2011-09-10T11:42:47.933Z · score: 0 (0 votes) · LW(p) · GW(p)

(The history of how the idea of computation got formulated is really pertinent for FAI researchers. Justification is a lot like computation. I think we're nearing the "Leibniz stage" of technical moral philosophy. Luckily we already have the language of computation (and decision theory) to build off of in order to talk about justification. Hopefully that will reduce R&D time from centuries to decades. I'm kind of hopeful.)

comment by Jack · 2011-09-10T10:44:32.656Z · score: 0 (0 votes) · LW(p) · GW(p)

Leibniz doesn't believe in material substance, so in no sense is he a dualist. If you are asking if there are materialist theists- eh, maybe, but as far as I know it has never been a well-developed view. That said, the entire platonism-materialism question can probably be reduced to an issue of levels of simulation... in which case it is easy to envision a plausible theism that is essentially dualist but not repugnant to our computationalist sensibilities.

comment by lessdazed · 2011-09-10T10:47:37.828Z · score: 0 (0 votes) · LW(p) · GW(p)

It would be repugnant to their sensibilities if you described in detail the sorts of scenarios that comply with our sensibilities.

comment by Jack · 2011-09-10T10:56:10.784Z · score: 0 (0 votes) · LW(p) · GW(p)

For most, probably. But you might be surprised how much unorthodoxy is out there.

comment by lessdazed · 2011-09-10T11:02:04.694Z · score: 0 (0 votes) · LW(p) · GW(p)

If you first tell them, or give them enough information to realize, or strongly suspect, that without this concession by them they fail, then you can get them to agree to very nearly anything.

But those people are slightly different from the uninformed versions of themselves, who would reject it.

The unorthodoxy is motivated and not serious in terms of relative degrees of belief based on what is most likely true.

comment by Jack · 2011-09-10T11:19:41.105Z · score: 0 (0 votes) · LW(p) · GW(p)

"Fall"? I don't understand the second sentence either.

The unorthodoxy is motivated and not serious in terms of relative degrees of belief based on what is most likely true.

Often, though on occasion their reasons are isomorphic to stories we'd find plausible. If someone thought it was worthwhile to reinterpret some of the older theistic philosophers in light of modern information theory and computer science... some interesting ideas might fall out.

But yes- I doubt there are more than a handful of educated theists not working with the bottom line already filled in.

comment by lessdazed · 2011-09-10T11:37:14.080Z · score: 0 (0 votes) · LW(p) · GW(p)

Edited "fall" to "fail".

The second sentence means I am trying to distinguish between who someone is and who they might have been. Another intuition pump: put identical theists in identical rooms; in one, play a television program explaining how they have to admit that all good evidence makes it unlikely there exists (insert theological thing here: an Adam and Eve, a soul, whatever), and in the other play something unrelated to the issue. Then ask the previously identical people if they believe in whatever poorly backed theological thing they previously believed. The unorthodox will flee the false position, but only if they see it as obviously false.

Something like this.

Often, though on occasion their reasons are isomorphic to stories we'd find plausible.

That doesn't mean the reasons we find it implausible aren't good or can't be taught. Just as teaching how carbon dating relates to the age of the Earth militates against believing it is ~6,000 years old, one can show why what ancestors tell you in dreams isn't good evidence.

So my conclusion, my supposition, is that if you muster up the most theistic-compatible metaphysics you find plausible, and show it to those theists who don't know why anything more supernatural is implausible, inconsistent or incoherent, they will reject it.

That they accept it after learning that you have good objections to anything more theistic is not impressive at all.

comment by Jack · 2011-09-10T11:50:05.495Z · score: 1 (1 votes) · LW(p) · GW(p)

Got it. Don't disagree. But it doesn't follow that a) we should disregard all theistic philosophy or b) not use theistic language. Given that there are live possibilities that resemble theism, the circle of concepts and arguments surrounding traditional, religious theism is likely to be fruitful.

comment by lessdazed · 2011-09-10T11:59:50.452Z · score: 0 (0 votes) · LW(p) · GW(p)

Immortals with infinite mind space definitely should not ignore theistic philosophy.

It's sometimes useful to use theistic language, sometimes not. Usually when I see it when theism isn't a subject, it isn't useful.

comment by Will_Newsome · 2011-09-10T11:57:35.303Z · score: -1 (1 votes) · LW(p) · GW(p)

But yes- I doubt there are more than a handful of educated theists not working with the bottom line already filled in.

Rationalization is an important skill of rationality. (There probably needs to be a post about that.) But anyway, I think my "theistic" intuitions are very similar to those of Thomas Aquinas, a.k.a. the rock that Catholic philosophy is built on. Like, actually similar in that we're thinking about the same decision agent and its properties, not just we're thinking about similar ideas.

comment by Will_Newsome · 2011-09-10T17:33:54.110Z · score: 1 (5 votes) · LW(p) · GW(p)

Perhaps more important, I have a visceral knowledge that I can experience something personally, and be confident of it, and be completely wrong about it.

Eliot:

And last, the rending pain of re-enactment
Of all that you have done, and been; the shame
Of motives late revealed, and the awareness
Of things ill done and done to others' harm
Which once you took for exercise of virtue.
Then fools' approval stings, and honour stains.