Posts

Are search engines perpetuating our biases? 2011-07-04T01:29:30.887Z
The Joys of Conjugate Priors 2011-05-21T02:41:22.206Z
Occam's Razor, Complexity of Verbal Descriptions, and Core Concepts 2011-04-13T02:12:55.140Z

Comments

Comment by TCB on [SEQ RERUN] Math is Subjunctively Objective · 2012-07-12T07:47:11.454Z · LW · GW

Two sheep plus three sheep equals five sheep. Two apples plus three apples equals five apples. Two Discrete ObjecTs plus three Discrete ObjecTs equals five Discrete ObjecTs.

Arithmetic is a formal system, consisting of a syntax and semantics. The formal syntax specifies which statements are grammatical: "2 + 3 = 5" is fine, while "2 3 5 + =" is meaningless. The formal semantics provides a mapping from grammatical statements to truth values: "2 + 3 = 5" is true, while "2 + 3 = 6" is false. This mapping relies on axioms; that is, when we say "statement X in formal system Y is true", we mean X follows from the axioms of Y.

Again, this is strictly formal, and has no inherent relationship to the world of physical objects. However, we can model the world of physical objects with arithmetic by creating a correspondence between the formal object "1" and any real-world object. Then, we can evaluate the predictive power of our model.

That is, we can take two sheep and three sheep. We can model these as "2" and "3" respectively; when we apply the formal rules of our model, we conclude that there are "5". Then we count up the sheep in the real world and find that there are five of them. Thus, we find that our arithmetic model has excellent predictive power. More colloquially, we find that our model is "true". But in order for our model to be "true" in the "predictive power" sense, the formal system (contained in the map) must be grounded in the territory. Without this grounding, sentences in the formal system could be "true" according to the formal semantics of that system, but they won't be "true" in the sense that they say something accurate about the territory.
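To make the "predictive power" check concrete, here is a minimal sketch in Python (the flocks are just placeholder lists; nothing hinges on the details):

```python
# Territory: two flocks of (placeholder) sheep.
flock_a = ["sheep"] * 2
flock_b = ["sheep"] * 3

# Map: model each flock by the formal object "number of members".
a, b = len(flock_a), len(flock_b)

# Apply the formal rules of arithmetic inside the map.
predicted_total = a + b          # "2 + 3 = 5"

# Return to the territory and count.
actual_total = len(flock_a + flock_b)

# The model is "true" in the predictive-power sense iff the prediction matches the count.
assert predicted_total == actual_total == 5
```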

Of course, the division of the world into discrete objects like sheep is part of the map rather than the territory...

Comment by TCB on [SEQ RERUN] Existential Angst Factory · 2012-07-08T06:25:33.834Z · LW · GW

In this article, Eliezer implies that it's the lack of objective morality which makes life seem meaningless under a materialist reductionist model of the universe. Is this the usual source of existential angst? For me, existential angst always came from "life not having a purpose"; I was always bothered by the thought that no higher power was guiding our lives. I ended up solving this problem by realizing that emergent structures such as society can be understood as a "higher power guiding our lives"; while it's not as agenty as God, it suits my purposes well enough, and I've been free of existential angst ever since.

(I do agree with the main thesis of Eliezer's post; I think I was able to accept my philosophical solution to existential angst because of an increasingly positive outlook on life. I'm just commenting because I'm now very curious about what "existential angst" means to the rest of LessWrong. What does existential angst mean to you?)

Comment by TCB on Less Wrong views on morality? · 2012-07-06T06:42:35.643Z · LW · GW

I agree with what seems to be the standard viewpoint here: the laws of morality are not written on the fabric of the universe, but human behavior does follow certain trends, and by analyzing these trends we can extract some descriptive rules that could be called morals.

I would find such an analysis interesting, because it'd provide insight into how people work. Personally, though, I'm only interested in what is, and I don't care at all about what "ought to be". In that sense, I suppose I'm a moral nihilist. The LessWrong obsession with developing prescriptive moral rules annoys me, because I'm interested in truth-seeking above all other things, and I've found that focusing on what "ought to be" distracts me from what is.

Comment by TCB on The ethics of breaking belief · 2012-05-09T04:08:05.598Z · LW · GW

Aha! I think I was misreading your post, then; I assumed you were presenting truth-seeking as a reason why you wanted your friends to be atheists, as well as a reason why converting them would be moral. Sorry for assuming you didn't know your own motivations!

Comment by TCB on The ethics of breaking belief · 2012-05-09T03:08:38.723Z · LW · GW

It really depends on your own personal moral system (assuming ethical relativism). In order to answer your question, I would need to know what you consider moral. I'll attempt to infer your morals from your post, and then I'll try to answer your question accordingly.

It sounds from your post like you're torn between two alternatives, both of which you consider moral, but which are mutually exclusive. On one hand, it seems that you're morally devoted to the causes of atheism and truth-seeking; thus, you desire to convert others to this cause. But on the other hand, you're morally devoted to your friends' happiness, and you realize that if they do become atheists, then they will lose their social grounding (not to mention the emotional benefits they receive from being religious).

It sounds like you're very devoted to truth-seeking, and that you believe atheism to be the truth. (Side-note: as a Bayesian, I distrust anyone who claims to know "the truth". The point of Bayesianism is that we don't know the truth; all we have are probabilities, and thus we can approach the truth but never attain it.) Anyway, given your devotion to truth-seeking, I would expect you to want to avoid Dark Arts-ish methods. If atheism is true, then we (and your Catholic friends) should want to believe that atheism is true, but we should want to believe it because of empirical evidence and rational argument, not based on the words of some authority figure (especially since authority figures have proven unreliable in the realm of religion).

If you deconvert your friends using Dark Arts-ish methods, but you don't teach them the virtues of truth-seeking, then atheism will become just another religion to them, handed down by new authority figures (you and "Science", for instance). They'll accept atheism in the same way they accepted religion: with blind faith. If your goal is truth-seeking, then you should want to teach your friends skepticism, not atheism. And if you're so interested in converting your friends to atheism that you would sacrifice the virtues of truth-seeking, perhaps you should re-examine your motivations.

You note that the God issue is a source of tension between you and your friends; thus, I suspect that you want your friends to be atheists because it would relieve social tensions, not because you are devoted to spreading the virtues of truth-seeking. Because you are considering using the Dark Arts, it seems to me that your appeal to truth-seeking is a rationalization. So what you're really asking is, "Is it moral to make my friends' lives very difficult in order to relieve a social tension that I find unpleasant?" Most moral systems would say "no".

Perhaps I'm being a bit unfair to you. Perhaps your true goal is to encourage truth-seeking, but it's easier to convert atheists to truth-seekers than it is to convert Catholics. Or perhaps you believe that atheism will make the world a better place by eliminating holy wars and other problems caused by religion. If that's the case, then I apologize for the harshness of this analysis. Also, fwiw, my personal moral system says that converting your friends to atheism would be wrong, so I'm likely to be biased. Take this (and everything else in life, of course) with a grain of salt, and good luck to you with whatever you decide to do. =)

Comment by TCB on To like each other, sing and dance in synchrony · 2012-05-06T23:34:51.119Z · LW · GW

That sounds awesome and not manipulative at all. =)

Comment by TCB on To like each other, sing and dance in synchrony · 2012-04-24T08:09:48.175Z · LW · GW

I'm all for community-building activities, and I'd love to learn to dance, so I think this is an awesome idea. That said, something about the way this post and its comments are worded rubs me the wrong way entirely, and makes me want to avoid rationalist dance meetups and the LessWrong community in general. Since it seems that your goal is to recruit more rationalists, and I've been a long-time lurker on the outskirts of the rationalist community, I figured that it might be helpful if I explained my negative reaction. I've had similar negative reactions to many LessWrong posts, and it's part of why, although I consider myself a staunch Bayesian, I am reluctant to identify as a rationalist.

One of my problems with this post is the academic and impersonal wording used to describe the studies cited. (This complaint does not apply to the first two quoted passages.) Because of the detached and dispassionate wording, I imagine participants entering the rationalist dance meetup thinking "Tonight I'm going to manipulate System 1 into having good feelings about the rationalist community!" To me, this mindset seems incredibly fake: the eternal detachment and third-person analysis of our own experiences prevents us from fully engaging with the events at hand. If I were to attend such an event for community-building purposes, I would instead go into it thinking "Tonight I'm going to meet a bunch of cool people and have a really great time learning to dance!"

On a related note, I dislike the manipulative nature of this post. The opening paragraphs in particular suggest that a meetup organizer should be thinking "I am going to plan activities that will trick participants into bonding socially, in order to lure more people into joining the rationalist community." I think that a better perspective would be "I am going to plan activities which are fun, and which make people feel at home in the rationalist community; they will enjoy this meetup so much that they'll want to become rationalists!" Of course, these two statements are saying roughly the same thing, but the former treats people as pawns to be manipulated, while the latter treats people as... people. Really, if you're going to lead a meetup, you need to have empathy and treat the other participants as human beings with feelings. Otherwise, people are going to think you're a jerk and stop showing up. I don't mean to say that the meetup leaders are actually failing in this regard; rather, I'm claiming that the wording of this post fosters attitudes that are counterproductive to community-building.

Again, I really like this idea. I am replying to this post because the recent call for contrarians has given me the courage to speak up, not because I consider it a particularly egregious example of either of my complaints.

Comment by TCB on [SEQ RERUN] Savanna Poets · 2012-03-04T01:14:26.113Z · LW · GW

I find this post incredibly inspiring, but I feel like it does not directly address one of the main reasons that people do not find scientific explanations emotionally satisfying. When we personify the cosmos, then the universe seems a lot less hostile, and we feel much more connected to it. A primitive man looks up at the sun and thinks, "There is the sun-god, the sky-father, watching over my people." And he is happy because the universe, while capricious, is not apathetic to his life. But a modern man looks up at the sun and thinks "There is a giant ball of gas that lacks any consciousness." And he is sad because man is alone in a vast and impersonal universe indifferent to his plight.

I also don't think that scientific stories are incompatible with human stories. I think the problem with the religions we have is that their views of the cosmos are outdated, not that religious or spiritual beliefs must inherently contradict scientific ones. As an exercise (because I want to be a writer), I frequently try to describe scientific theories to myself in personified, mythological terms, or I try to write myths which reflect our modern understandings of the universe. Consider the following creation myth, which is compatible with the big bang theory of the origin of the cosmos: In the beginning, there was the cosmic mother. She died giving birth to the universe, which burst forth from her womb in a fiery fury of pain. Etc.

Comment by TCB on Fake Reductionism · 2012-03-02T13:41:57.783Z · LW · GW

Allow me to provide some insight, as an erstwhile "anti-reductionist" in the sense that Eliezer uses it here. (In many senses I am still an anti-reductionist.) I think that what is at work here is the conflict between intuition and analysis. However, before I remark on the relevance of these concepts to the experience of a rainbow, I would like to clarify what I mean by the terms "intuition" and "analysis".

The way I understand the mind, at the very deepest level of our consciousness we have our core processes; these are the things we have carried with us from the dawn of our evolution. And somewhere around there are our emotions and our gut reactions. Because these are such fundamental processes, and because they are ingrained in us so deeply, we feel them especially strongly. Emotions add richness and depth to experience.

As I see it, emotion is deeper than intuition, but not much deeper. Because our intuitive thought processes are so close to our emotional thought processes, intuitive thoughts are more likely to inspire emotional experiences. And as I see it, analysis is at the very surface level of our minds: it is our verbal reasoning, to which we have full conscious access. Because analysis is further from emotion than intuition is, it is less likely than intuition to inspire an emotional response. I suspect that it's for this reason that verbal, rational, conscious analyses are often seen as dry and lifeless and lacking any emotional resonance.

Here is what I believe Keats experienced. Before he knew the scientific explanation of the rainbow, he experienced rainbows intuitively and they caused in him a powerful emotional response. When he saw a rainbow, it did not trigger conscious verbal thought, and instead it triggered intuition which triggered emotion. But after he knew the scientific explanation, that verbal experience of the rainbow overrode the intuitive experience. Now, when Keats saw a rainbow, it triggered the conscious analysis level of his mind, and did not trigger intuition or emotion, and thereby were rainbows made less beautiful.

It could also be possible that before Keats knew the scientific explanation of rainbows, he had a very different verbal understanding of them. After all, Keats was a poet, so one would expect him to have been a very verbal thinker. But there are some verbal descriptions which are closer to intuition than others. The more concrete a description, the closer to intuition it is (at least, this is my hypothesis). Intuition is very symbolic, as is well-known from dreams. Abstract concepts are represented by simpler, concrete symbols. Thus, I believe that the more concrete a description is, the more intuitive it is, and the more likely it is to incite an emotional response. Whoever explained the rainbow to Keats probably did so in abstract scientific terms, and thus this description probably did not trigger such an emotional response, and Keats therefore did not think it was beautiful.

I suspect that the reason we scientifically-minded types find scientific explanations beautiful is that we understand them intuitively. Much of learning involves gaining an intuition for a subject. Those who have studied science have gained the intuition required to understand it. What this means, in terms of my model of cognition, is that the words for scientific explanations now activate symbolic, intuitive concepts, which in turn activate emotion. According to my model, then, those who have learned a subject deeply would be more likely to feel emotions when hearing about that subject than those who have not been exposed to it. From my own experiences and from talking to others, it seems like this is largely true.

A final alternative presents itself. Perhaps Keats does feel emotion when presented with the scientific explanation of the rainbow. Perhaps this emotion is negative. When he hears the scientific words, he recalls tedious days in science classes that failed to capture his imagination, and he then associates rainbows with this tedium. Rainbows then become less beautiful because they have been explained in a way that carries negative associations for Keats.

Anyway, what I think is that we need better education that teaches kids the beauty of scientific ideas. Actually, I suspect science fiction novels would be better for this than textbooks and classes; good writers have a way of infusing ideas with beauty, and reading science fiction as a kid seems to enhance enthusiasm for science.

Comment by TCB on I believe it's doublethink · 2012-02-22T14:37:09.662Z · LW · GW

This post reminds me of evidential markers in linguistics (http://en.wikipedia.org/wiki/Evidentiality). Evidential markers are morphemes (e.g. prefixes or suffixes) which, when attached to the verb, describe how the speaker came to believe the fact that he is asserting. These can include things like direct knowledge ("I saw it with my own eyes"), hearsay ("Somebody told me so but I didn't see for myself"), and inference ("I concluded it from my other beliefs"). While evidential markers are less specific than what's described in this post ("Somebody told me" rather than "John told me last Thursday at lunch"), I suspect that speakers of languages with evidential markers would be a lot more inclined to remember the more specific details.*

Does anyone here speak a language with evidential markers? If so, what do you think of the claim (asserted in at least four separate comments here) that these things would be far too difficult to remember and keep track of?

*I suspect this because I've read some articles about languages which use absolute direction (north, south, east, west) instead of subjective direction (left, right, in front of, behind); speakers of these languages develop very good internal compasses and always know which direction they're facing. (Here I'm assuming this is due to nurture rather than nature.) If language can cause people to acquire such a skill, it doesn't seem unreasonable that language could also cause people to acquire a talent for remembering sources of information.

Comment by TCB on On Eliezer's post "The Cluster Structure of Thingspace" · 2011-09-04T08:56:05.755Z · LW · GW

I'm not actually convinced that negative examples are really necessary for learning empirical clusters in thingspace, especially if you're just trying to teach someone a subcategory of a big class they're already familiar with. If someone is already familiar with the concept of "bird" and you want to inform them that there is such a thing as blue jays, it may suffice to show them just a few instances of a blue jay (assuming you don't care whether they learn the terminology). Source: this super cool paper about one-shot learning using hierarchical Bayesian models: http://www.mit.edu/~rsalakhu/papers/MIT-CSAIL-TR-2010-052.pdf

Comment by TCB on Thinking without words? · 2011-07-09T20:50:45.263Z · LW · GW

Interesting point. I certainly agree that concepts/words are not actually atomic, or Platonic ideals, or anything like that. Concrete concepts, in particular, seem to correspond to "empirical clusters in thing-space", or probability distributions over classes of objects in the real world (though of course, even objects in the real world aren't really atomic).

Despite this, most people still view themselves as thinking symbolically, and many people believe themselves to be logical reasoning agents. After reading the first couple chapters of Jaynes, I am very convinced that the mind works probabilistically and does not actually deal with absolutes. Yet at the level of conscious reasoning, we seem to perceive the world in terms of symbolic absolutes. It seems like this could be either verbal or visual, but either way I have difficulty imagining conscious reasoning without symbols, even if more complicated clusters or probability distributions underlie those symbols at a subconscious level. I wonder why this is.

Comment by TCB on Thinking without words? · 2011-07-09T19:36:03.031Z · LW · GW

This article presents evidence that symbols exist in our minds independent of words. http://artksthoughts.blogspot.com/2009/07/concepts-cognition-and-anthropomorphism.html

Actually, it seems extremely unlikely that words would be required for symbolic thinking, considering that any animal advanced enough to base its actions on thought rather than pure reflex would need to have some kind of symbolic representation of the world.

Comment by TCB on Thinking without words? · 2011-07-09T19:17:02.783Z · LW · GW

I did this a few years ago, but I'm not sure exactly how. I wanted to think less verbally because I worried that my thoughts were too constrained by words, which kept me at the very surface level of my consciousness and perhaps inhibited my access to deeper parts of my mind. I think that part of the transformation came about simply because I wanted it to (power of suggestion). It probably also helped that I started watching a lot more films and doing more math. I don't remember the exact process by which I transformed my thought-structure.

Something that I noticed was that my thoughts got much less verbal when I became more emotional. During a few particularly emotionally intense experiences this past year, I found myself less able to reason through thoughts verbally. At the time, I had an impression that my subconscious mind had taken over my thought process, closing my conscious mind out, and denying it the power of words that would let it interfere with the transformations going on inside me.

A few months ago, I decided to start thinking more verbally again, so that it would be easier to fulfill my dreams of being a novelist. I'll try to remember exactly how I did this, but the process is not wholly clear to me. I know that at some point I made a decision to "reprogram myself" to be more verbal, and I think that my desire to transform contributed significantly to the actual transformation. I also made a point of trying to express myself more verbally in my mind. One exercise I did involved looking at things in the world and trying to come up with eloquent verbal descriptions of them. I also started writing more in my journal, and reading a lot more. This wasn't a rigorous scientific experiment, and I didn't keep very careful track of the different things I was doing to reshape my thought-structure, but whatever I did worked, because I think very verbally now.

I'm not sure if this was helpful at all, but I figured I should comment here, since this is something I've actually done. I also don't know how similar these processes are between different people, or whether it matters that I'm female. Furthermore, I'll note that it was much easier to train myself to think verbally again than it was to make myself think less verbally; I was always a very verbal person growing up. Not sure if that's nature or nurture or both.

Comment by TCB on Basics of Animal Reinforcement · 2011-07-06T02:51:49.388Z · LW · GW

Those are valid concerns. Regarding the first, that's why I emphasized the ritual component of sex in a repressed society. I suspect that such a society would have very strict rituals for sex: it must occur only at specific times in specific locations, and in the presence of certain stimuli. Some examples of stimuli are candles or lacy lingerie or dim lighting. An example of a time is night. I've heard lots of comments to the effect that having sex in the middle of the day would be strange, and that sex is strictly a nighttime activity. This could be classified as a "nighttime fetish", perhaps. The ritual component of sex would serve to highlight the ritual times/locations/stimuli, causing them to imprint more strongly than other, non-ritualized components of the sexual act.

Regarding your second objection, while that definitely seems like a possibility, the variations and experimentation would probably mean that no one thing would imprint strongly enough to become a fetish, because its presence wouldn't correlate strongly enough with the sexual act.

Comment by TCB on Basics of Animal Reinforcement · 2011-07-06T02:25:06.538Z · LW · GW

Slightly off-topic thought regarding penny jars and fetish formation:

I've heard that fetishes are more prevalent in cultures where sex is repressed. I always wondered why this would be the case (assuming that it is in fact true). One explanation is associations: if people are raised to think sex is dirty, or that sex is a necessary but base bodily function akin to using the bathroom, then they might fetishize urine or excrement. And if people are raised to think that sex is beastly and animalistic, they might fetishize things that are related to animals and violence.

However, the penny jar experiment suggests another, more "innocent" explanation. Perhaps it's simply that, in sexually repressed cultures, people don't have sex very often, or they do it in a special ritualized setting. If this is the case, then accidents of that ritual setting might become associated with the sexual act itself. So, perhaps the neurons corresponding to the distinctive pink pillows on a lover's bed get wired up with the neurons that correspond to actually having sex. Then later, the pink pillows are enough to cause arousal, and perhaps in extreme cases pink pillows later become /required/ for arousal. This presumably wouldn't happen as much if the sex-location changed frequently, or if the setting was not seen as an important component of the ritual.

The first hypothesis seems to be a better explanation of things like poop fetishes, while the second hypothesis might better explain things like lacy pink lingerie. What do you think?

Also, it goes without saying that I enjoyed your article. =)

Comment by TCB on Are search engines perpetuating our biases? · 2011-07-04T13:54:12.610Z · LW · GW

Oops, you are right; I meant to type pasteurization! I also think that homogenizing milk is bad, but I believe that with lower probability. I'll edit my post, and thanks for the correction. =)

Comment by TCB on The Joys of Conjugate Priors · 2011-05-21T16:34:08.487Z · LW · GW

I would love to see an LW sequence on machine learning! I imagine that LW would have a lot of interesting things to say about the philosophical aspects of ML in addition to the practical aspects.

I'm not sure I'd be qualified to contribute much to such a sequence, since I am just an undergrad, but I did have an outline in mind for an intuitive introduction to MLE and EM. If people would find that interesting, I could certainly post it on LW once it was written up!

I'm fairly inexperienced in ML, so all the models I've worked with are simple enough that they've had conjugate priors. (I think it's really cool that Dirichlet priors can be used for something as complicated as an HMM, but I guess the HMM is still just a whole bunch of multinomials.) I'm less familiar with hierarchical models. What is an example of a model for which it is difficult to use conjugate priors? The only hierarchical process I've heard about is the Dirichlet process, and I was under the impression (based on the name) that it involved Dirichlet priors somewhere; is this incorrect? I have been meaning to read about hierarchical models, so if you know of any good tutorials or papers on them, I would very much appreciate a link!
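For concreteness, here is a minimal sketch of the Dirichlet-multinomial conjugacy I have in mind for the HMM case (the state count and transition counts are made up; each row of an HMM's transition matrix is treated as its own multinomial):

```python
import numpy as np

# Transition probabilities out of one HMM state, modeled as a multinomial
# with a symmetric Dirichlet prior (pseudo-counts of 1 per destination state).
n_states = 3
alpha = np.ones(n_states)

# Made-up observed transition counts out of this state.
counts = np.array([12, 3, 5])

# Conjugacy: the posterior is again Dirichlet, with updated pseudo-counts.
posterior_alpha = alpha + counts

# Posterior mean estimate of the transition probabilities.
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(posterior_mean)
```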

Comment by TCB on The Joys of Conjugate Priors · 2011-05-21T15:55:41.762Z · LW · GW

After rereading this, I agree with you that I emphasized the beta distribution too heavily. This wasn't my intention; I just picked it because it was the simplest conjugate prior I could find. In the next draft of this document, I'll make sure to stress that the beta distribution is just one of many great conjugate priors!

I am a bit confused about what the second point means. Do you mean that conjugate priors are insufficient for capturing the actual prior knowledge possessed?

I did not know that it was controversial to claim that alpha = beta = 1 expresses no prior knowledge! I think I still prefer alpha = beta = 1 to the other choices, since the uniform distribution has the highest entropy of any continuous distribution over [0,1]. What are the benefits of the other two proposals?

Your last complaint is something I was worried about when I wrote this. Part of why I wrote it like that was because I figured people would be more familiar with the MLE/MAP style of prediction. Thanks to your feedback, though, I think I'll change that in my next draft of this document.
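To make the comparison concrete, here is a minimal Beta-Bernoulli sketch under the alpha = beta = 1 prior (the coin-flip counts are made up), contrasting the MLE and MAP point estimates with the full posterior mean:

```python
# Hypothetical data: 7 heads out of 10 flips.
heads, tails = 7, 3

# Uniform Beta(1, 1) prior.
alpha, beta = 1.0, 1.0

# Conjugacy: the posterior is Beta(alpha + heads, beta + tails).
post_a, post_b = alpha + heads, beta + tails

mle = heads / (heads + tails)                     # maximum-likelihood estimate: 0.7
map_est = (post_a - 1) / (post_a + post_b - 2)    # posterior mode; equals the MLE under Beta(1, 1)
posterior_mean = post_a / (post_a + post_b)       # full-Bayes point prediction: 8/12 ~ 0.667

print(mle, map_est, posterior_mean)
```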

Again, thank you so much for the detailed criticism; it is very much appreciated! =)

Comment by TCB on The Joys of Conjugate Priors · 2011-05-21T04:04:10.744Z · LW · GW

Thank you very much for the compliments, and for the honest criticism!

I am still thinking about your comment, and I intend to write a detailed response to it after I have thought about your criticisms more completely. In the meantime, though, I wanted to say that the feedback is very much appreciated!

Comment by TCB on Rapture/Pet Insurance · 2011-05-19T23:11:32.485Z · LW · GW

Perhaps I have a different system of morality than other people who have commented on this topic, but I personally judge actions as "moral" or "immoral" based on the intentions of the doer rather than the consequences. (Assuming morality is relative and not an absolute component of the universe, this seems like a valid moral system.)

If the atheists who run this website are doing so to make money by exploiting the perceived stupidity of their customers, this seems immoral to me. On the other hand, if they are running the service because they honestly want to increase the peace of mind of rapture-believing-in pet owners, then that seems like it would be a moral action. However, knowing people, I really suspect that it's the former.

If the rapture really does happen and this really saves pets (assuming that it is a good thing to save pets), then I would still consider this service immoral. I would rather live in a world where people were compassionate enough that they did not want to trick each other for money (even if they thought each other's beliefs were moronic). Barring that, I'd like to live in a world where people consider tricking each other for money immoral and would refrain from doing it, whether out of some internal moral crisis or fear of external punishment. I hold this opinion even if some of the tricks for money backfire and end up benefiting the trickees more than the trickers.

Comment by TCB on Hollow Adjectives · 2011-05-05T13:37:23.720Z · LW · GW

I suppose I am assuming that the universe operates under some set of formal rules (though they might not be deterministic) independently of our ability to describe the universe using formal rules. I would also say that our inability to comprehend a given contradiction is related to the fact that we are inside the system. If God were outside the system he would not necessarily have this problem.

I disagree with your second point, though. Sure, 1 and 2 are labels for concepts that exist within a formal system we've developed, and sure, we can create an isomorphism to different labels. But I would consider this to be the same formal system. The example I gave (working in the integers mod 2) involves switching to a formal structure that is decidedly not isomorphic to the integers under addition.

Also, sorry if I was unclear - I did not mean to imply that mathematical formalisms as we've developed them are related to the fundamental laws of the universe. I only meant to say that if the universe is a formal system of some sort, and God operates outside that formal system, then it is conceivable that God could switch to a different formal system where things that we consider impossible are not, just like we can switch to a different formal system where 0 and 2 are the same thing. Maybe God could do something analogous and put me in the universe (mod 10 feet) so that if I walk ten feet straight across the room I'll end up where I started; this seems like a contradiction in our universe but is definitely imaginable.

[Quick edit for clarity: maybe it doesn't seem like a contradiction that I could walk ten feet away and end up back where I started, but it does seem like a contradiction that I could walk ten feet and both be ten feet away, and also be exactly where I started. This is what I imagine happening in the universe (mod 10 feet).]
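A throwaway sketch of the arithmetic I have in mind (the step sizes and the 10-foot modulus are arbitrary illustration, not anything deeper):

```python
# In the integers mod 2, "1 + 1" is 0 rather than 2.
print((1 + 1) % 2)   # 0

# The universe (mod 10 feet): positions wrap around every 10 feet.
def walk(position, distance, modulus=10):
    return (position + distance) % modulus

# Walking ten feet straight across the room lands you back where you started...
print(walk(0, 10))   # 0
# ...while walking five feet really does leave you five feet away.
print(walk(0, 5))    # 5
```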

Comment by TCB on Hollow Adjectives · 2011-05-05T06:10:09.347Z · LW · GW

I find this post very interesting, but I disagree with your examples about God. This comment is rather lengthy, and rather off-topic, so I apologize, but I wanted to bring it up because your post features these questions so prominently as examples.

Specifically, I don't think that the answer to the questions about God can be written off so easily as "no". It seems to me that the questions "Could God draw a square circle?" "Could God create a stone so large that even He could not lift it?" are asking about the bounds on omnipotence.

Suppose an omnipotent being exists in a universe, and that universe operates under some fundamental laws that among other things define what can exist vs. what is a contradiction. It seems fairly obvious, based on the standard definition of omnipotence, that the omnipotent being should be able to do all things that do not violate these fundamental laws and cause a contradiction. I'll call this Level 0 Power.

"Could God draw a square circle?" is asking about what I'll call God's Level 1 Power. It is asking "Does an omnipotent being have the power to change the fundamental laws of the universe?", or, if you like, "Can God reprogram the universe?" If God is in charge of the rules, then presumably he could rewrite them such that things which are currently a contradiction are no longer a contradiction. In terms of the circle/square question, this seems kind of silly, since circles and squares are not contained in the universe but are products of formal systems invented by humans. Alternatively we could ask "Can God make 1+1 add up to something other than 2?" and the answer is "Of course; even mathematicians can do this, by redefining the axioms or working in the integers (mod 2) or something." In terms of this example, then, Level 1 Power is asking "If the universe is a formal system of sorts, can God change the axioms?"

Suppose God has Level 1 Power and can change the axioms of the system (or systems) he creates. This isn't so hard to imagine; it's like a human programmer rewriting a piece of the code in the middle of running a simulation. But the question of whether God, if he existed, actually had such a power seems like it would be a reasonable subject of discussion for theologians.

"Could God create a stone so large that even He could not lift it?" is yet a more difficult question. Is it asking about God's Level 1 Power? I think it depends on where omnipotence comes from. If God is omnipotent within the system because that's the way he coded it, then it is asking about God's Level 1 Power: all he has to do is go into the code for the universe and change the part that says he should be able to lift all stones. But if God's omnipotence is something that exists independent of the system, then this question is asking whether God can change the rules which define himself.

Anyway, your answer of "no" to these questions indicates not that the questions are worthless but that you assume an omnipotent being could only have Level 0 Power.

Comment by TCB on Occam's Razor, Complexity of Verbal Descriptions, and Core Concepts · 2011-04-13T03:24:02.898Z · LW · GW

I am aware that my definition of Occam's razor is not the "official" definition. However, it is the definition which I see used most often in discussions and arguments, which is why I chose it. The fact that this definition of Occam's razor is common supports my claim that humans consider it a good heuristic.

Forgive me for my ignorance, as I have not studied Kolmogorov complexity in detail. As you suggest, it seems that human understanding of a "simple description" is not in line with Kolmogorov complexity.

I think the intention of my post may have been unclear. I am not trying to argue that natural language is a good way of measuring the complexity of statements. (I'm also not trying to argue that it's bad.) My intention was merely to explore how humans understand the complexity of ideas, and to investigate how such judgements of complexity influence the way typical humans build models of the world.

The fact that human understanding of complexity is so far from Kolmogorov complexity indicates to me that if an AI were to model its environment using Kolmogorov complexity as a criterion for selecting models, the model it developed would be different from the models developed by typical humans. My concern is that this disparity in understanding of the world would make it difficult for most humans to communicate with the AI.
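For what it's worth, a common computable stand-in for Kolmogorov complexity is compressed length, and here is a minimal sketch of what "select the model with the shortest description" could look like under that proxy. The candidate descriptions are made up for illustration, and compressed length is only a rough upper bound on the true, uncomputable quantity:

```python
import zlib

def description_length(description: str) -> int:
    # Crude, computable proxy for Kolmogorov complexity: bytes after compression.
    # The true quantity is uncomputable; this only gives a rough upper bound.
    return len(zlib.compress(description.encode("utf-8")))

# Made-up candidate model descriptions for the same phenomenon.
candidates = [
    "Angry sky spirits throw bolts of fire when displeased.",
    "Charge separation in storm clouds discharges along ionized channels of air.",
]

# An agent using this criterion prefers the candidate with the shortest encoding,
# which need not match the description a human would call verbally "simplest".
best = min(candidates, key=description_length)
print(best)
```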

Comment by TCB on Human errors, human values · 2011-04-08T14:55:20.995Z · LW · GW

Perhaps I am missing something here, but I don't see why utilitarianism is necessarily superior to rule-based ethics. An obvious advantage of a rule-based moral system is the speed of computation. Situations like the trolley problem require extremely fast decision-making. Considering how many problems local optima cause in machine learning and optimization, I imagine that it would be difficult for an AI to assess every possible alternative and pick the one which maximized overall utility in time to make such a decision. Certainly, we as humans frequently miss obvious alternatives when making decisions, especially when we are upset, as most humans would be if they saw a trolley about to crash into five people. Thus, having a rule-based moral system would allow us to easily make split-second decisions when such decisions were required.

Of course, we would not want to rely on a simple rule-based moral system all the time, and there are obvious benefits to utilitarianism when time is available for careful deliberation. It seems that it would be advantageous to switch back and forth between these two systems based on the time available for computation.

If the rules in a rule-based ethical system were derived from utilitarian concerns, and were chosen to maximize the expected utility over all situations to which the rule might be applied, would it not make sense to use such a rule-based system for very important, split-second decisions?
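A minimal sketch of the switching idea, assuming the agent knows roughly how much time it has; the rule table, options, and utility numbers are all made up for illustration:

```python
import time

# Hypothetical rule table, pre-computed offline from utilitarian considerations.
RULES = {
    "runaway_trolley": "divert to the track with fewer people",
}

def expected_utility(option):
    # Placeholder for a slow, careful utilitarian evaluation of one option.
    time.sleep(0.1)  # stands in for expensive deliberation
    return option["utility"]

def decide(situation, options, deadline_seconds):
    # Fast path: under time pressure, fall back to the pre-computed rule.
    if deadline_seconds < 1.0 and situation in RULES:
        return RULES[situation]
    # Slow path: with time to deliberate, pick the option maximizing expected utility.
    best = max(options, key=expected_utility)
    return best["name"]

# Made-up usage: a split-second decision uses the rule; a leisurely one deliberates.
options = [{"name": "divert", "utility": 4.0}, {"name": "do nothing", "utility": 1.0}]
print(decide("runaway_trolley", options, deadline_seconds=0.2))
print(decide("runaway_trolley", options, deadline_seconds=60.0))
```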