Comments
As far as the complexity of logic theories for the purpose of believing in them: that should be proportional to the size of the minimal Turing machine that would check whether something is an axiom or not. (Of course, in the case of a finite list, approximating it by the total length of the axioms is reasonable, because the Turing machine that does "check if input is equal to the following set:" followed by the set adds only a constant size -- but that approximation breaks down badly for infinite axiom schemas.)
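A toy sketch of that distinction (my own illustration; the axiom strings and the schema are made up):

```python
# Two "axiom checkers", as a crude stand-in for the minimal Turing machine.

FINITE_AXIOMS = {"0 != S(0)", "S(x) = S(y) -> x = y"}  # hypothetical finite list

def is_axiom_finite(s: str) -> bool:
    # The checker for a finite list is basically the list itself plus a
    # constant-size equality test -- hence the "total length" approximation.
    return s in FINITE_AXIOMS

def is_axiom_schema(s: str) -> bool:
    # Checker for the infinite schema "n + 0 = n", one axiom per numeral n:
    # the axiom set is infinite, but the checker stays small, which is why
    # the "total length of the axioms" approximation breaks down here.
    left, sep, right = s.partition(" + 0 = ")
    return sep != "" and left == right and left.isdigit()

print(is_axiom_finite("0 != S(0)"))    # True
print(is_axiom_schema("17 + 0 = 17"))  # True
print(is_axiom_schema("17 + 0 = 18"))  # False
```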
For eating at people's houses: usually people will have enough side-dishes that if one does not make a big deal of it, one can fill up on non-meat dishes. At worst, there's always bread.
For going to a steakhouse -- yes, but at every other place there's usually a vegetarian option, if one tries hard enough.
It does make a good case for being an unannoying vegetarian... but being a strict vegetarian is a useful Schelling point.
Of course e can be evidence even if P(X|e)=P(X) -- it just cannot be evidence for X. It can be evidence for Y if P(Y|e)>P(Y), and this is exactly the case you describe. If Y is "there is a monument and left is red or there is no monument and left is black", then e is (infinite, if Omega is truthful with probability 1) evidence for Y, even though it is 0 evidence for X.
Similarly, you watching your shoelace untied is zero evidence for my shoelaces...
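A worked toy version of the point, with made-up numbers (four equally likely worlds, indexed by whether there is a monument and whether the left box is red -- the uniform prior is my assumption, purely for illustration):

```python
# Illustration only: the uniform prior over four worlds is an assumption.
worlds = [(m, r) for m in (True, False) for r in (True, False)]
p = {w: 0.25 for w in worlds}

def prob(pred, given=lambda w: True):
    num = sum(p[w] for w in worlds if pred(w) and given(w))
    den = sum(p[w] for w in worlds if given(w))
    return num / den

X = lambda w: w[1]          # "left is red"
Y = lambda w: w[0] == w[1]  # "monument and red, or no monument and not red"

# If Omega is truthful, observing e (Omega asserting Y) = conditioning on Y.
print(prob(X), prob(X, given=Y))  # 0.5 0.5 -> e is zero evidence for X
print(prob(Y), prob(Y, given=Y))  # 0.5 1.0 -> e is decisive evidence for Y
```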
No, it is not surprising... I'm just saying that the semantics is impoverished if you only use finite syntactical proof, but not to any degree that can be fixed by just being really, really, really smart.
bryjnar: I think the point is that the metalogical analysis that happens in the context of set theory is still a finite syntactical proof. In essence, all mathematics can be reduced to finite syntactical proofs inside of ZFC. Anything that really, truly requires infinite proof in actual math is unknowable to everyone, supersmart AI included.
Here's how I visualize Goedel's incompleteness theorem (I'm not sure how "visual" this is, but bear with me): I imagine the Goedel construction over the axioms of first-order Peano arithmetic. Clearly, in the standard model, the Goedel sentence G is true, so we add G to the axioms. Now we construct G', a Goedel sentence for this new set, and add it as an axiom, then G'', then G''', etc. Luckily that construction is computable, so we can take the union and add G^ω, a Goedel sentence for the resulting set. We continue on and on, until we reach the first uncomputable countable ordinal, at which point we stop, because we have an uncomputable axiom set. Note that Goedel is fine with that -- you can have a complete first-order Peano arithmetic (it would have non-standard models, but it would be complete!) -- as long as you are willing to live with the fact that you cannot know whether something is a proof or not with a mere machine (and yes, Virginia, humans are also mere machines).
I'm trying to steelman your arguments as much as I can, but I find myself confused. The best I can do is: "I'm worried that people would find LW communities unwelcoming if they do not go to rituals. Further, I'm worried that rituals are a slippery slope: once we start having rituals, they might become the primary activity of LW and make the experience unwelcoming even if non-ritual activities are explicitly open, because it feels more like 'a Church group that occasionally has secular activities'. I'm worried that this will divide people into those who properly mark themselves as 'LWers' and those who don't, thus starting our entropic decay into a cult."
So far, your objections seem to be to this being the primary activity of the LW group, in which -- honestly -- I would join you. But if a regularly meeting LW group also had a Catan night once a week (for Catan enthusiasts, obviously -- if you don't like Catan, don't come) and a filk night once a month (for filk enthusiasts, again), I am not sure this would hasten a descent into a Catan-only or filk-only group. Similarly, if a LW group has a ritual once a year (or even if every LW group has a ritual, and even if it's the same ritual), it doesn't seem likely rituals will become the primary thing the group does.
"There is a rather enormous difference between things I care whether lwers do and things I care whether lw does."
I notice I am confused. LessWrong is a web site, and to some extent a community of people, whom I tend to refer to as "Less Wrongers". If you mean these words the same way I do, then I do not understand -- "LW does something" means "the community does something", which means "many members do something". I'm not really sure how "LW does something" is distinguished from LWers doing it...
Sorry, that's not the context in which I meant it -- I'm sure you're as willing to admit you were wrong as the next rationalist. I mean it in the context of "Barbarians vs. Rationalists" -- if group cohesion is increased by ritual, and group cohesion is useful to the rationality movement, then ritual could be useful. Wanting to dissociate ourselves from the trappings of religion seems like a case of "reversed stupidity" to me...
The same bias to... what? From the inside, the AI might feel "conflicted" or "weirded out" by a yellow, furry, ellipsoid-shaped object, but that's not necessarily a bug: maybe this feeling accumulates and eventually results in creating new sub-categories. The AI won't necessarily get into the argument about definitions, because while part of that argument comes from the neural architecture above, the other part comes from the need to win arguments -- and the evolutionary bias for humans to win arguments would not be present in most AI designs.
Thanks! You have already updated, so I'm not sure if you want to update further, but I'm wondering if you had read Why our kind can't cooperate, and what your reaction to that was?
I used to have a group of friends (some closer than others), and we would all get together and play Settlers of Catan on a given day of the week (~4 years ago; I don't remember which day it was). It consisted of the "same thing" every week (obviously the game turned out differently each time, but still). There was not really room for "nonparticipation", in the sense that if you wanted to hang out with these people that day, you played Catan. Would it upset you if you learned that there was a regular meetup of LW Catan enthusiasts who meet once a week to play?
Some of my closest friends are from the Israeli filking community. There's no "ritual" per se, but we know and love the same songs, we sing them together and not-singing is kinda frowned upon. It's certainly "weird", and even somewhat exclusionary (helped by a bit of justified feeling of persecution from the rest of SF fandom). Would it upset you if you learned that there was a regular meetup of Filk LW enthusiasts who meet once a week to sing together?
I'm really asking these questions (in the sense that I do not find myself certain either way about what your answer will be, although I assign >.5 that it will be "no" on both).
If it is a "no", then it seems these are not your true rejections.
If it is a "yes", you seem to have a wide brush to paint "things I do not want LWers to do."
Which assumptions generated the incorrect predictions? Are you pulling your Bayesian updates backwards through the belief-propagation network given this new evidence? (In other words: updating on a small-probability event should change your mind about a whole host of related beliefs.)
Thanks for posting the ritual booklet. It's fascinating. With my wife being pregnant, I've started looking at things through the eyes of a parent-to-be. Rituals are traditionally a super-familial thing, but one that includes the whole family. Parents take their kids to Church. Parents light the Menorah with their kids. Parents celebrate the Winter Solstice with their kids. Reading through the booklets, I constantly had to revise upwards the age at which I could first take my daughter to such a gathering. There's no "minimum age" to participate in Church, or in the lighting of the candles. I understand many LWers are single people in their 20s, and certainly a lot of NYCers are single people in their 20s. But I found myself wishing for a ritual I could do with a family. Perhaps if I'm sufficiently motivated, I'll try to work something out next year...
BTW: By the standard of the UN Convention Against Torture, Galileo was tortured -- "For the purposes of this Convention, torture means any act by which severe pain or suffering, whether physical or mental, is intentionally inflicted on a person" (notice the "or mental"; it is certainly mental torture to be threatened with physical torture). It seems like at this point we are arguing about definitions, so maybe I'll stop here, but calling the relevant line in "Word of God" false because of that is a bit of an exaggeration.
Is there a new version of the songbook?
If you like spoilers, google "Löwenheim-Skolem" -- the same technique as the proof of the "upwards" part allows you to generate non-standard models of the first-order version of the Peano axioms in a fairly straightforward manner.
It grieves me to note that almost all the arguments in your post could be applied, mutatis mutandis, to why we should teach kids intelligent design as well as evolution.
I am looking forward to the ebooks. I hope you'll provide them in ePub format, for those of us who prefer that. [I was pleased to donate $40, which should soon be matched by my employer as part of the employee-match program, thus getting me double-matched!]
"of all the abilities that humans are granted by their birth this is the one you perform the worst" -- This seems like an odd comparison. Can you really compare my ability to, say, tell stories to 'mind-reading'? It's like comparing my ability to walk to my ability to jump straight up: I can walk for miles, but I can only jump straight up a meter or so -- a 1000:1 ratio -- but I do not feel particularly bad at my ability to jump.
I would definitely believe the AI if it said "humans are worse at discerning states of mind than they think they are" -- but I already believe that; Paul Ekman said the same, with plenty of research to show how a bit of training can make you better at it. "It is obvious you are living in a simulation", as an easy comparison, is way stranger to me -- the above statement would not even rank in the "10 strangest things".
Woo, I found who wrote it. I enjoyed reading it a lot. I liked that the "utopia" showed how utopic utopia can be while still showing the dangers in even slightly badly formed goals.
My initial reaction was "I wish I hadn't known about this", because it made me physically shudder. After the shock and disgust, I forced myself to accept the proposition "There is a company selling bleach as medicine, and people are ingesting it". I am now happy I have seen this, because my model of the world is more accurate, and if I act on my values in accordance with more accurate beliefs, I will be able to do more good.
You don't need to solve the integral for the posterior analytically, you can usually Monte-Carlo your way into an approximation. That technique is powerful enough on reasonably-sized computers that I find myself doubting that this is the only hurdle to superhuman AI.
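For concreteness, a minimal importance-sampling sketch of what I mean (the model, prior, and data are all made up for illustration):

```python
import math
import random

random.seed(0)
data = [1.2, 0.8, 1.1, 0.9, 1.3]  # hypothetical observations

def likelihood(theta):
    # Unnormalized Gaussian likelihood with known unit variance.
    return math.exp(-0.5 * sum((x - theta) ** 2 for x in data))

# Prior: theta ~ Uniform(-5, 5). Draw from the prior, weight by likelihood;
# the normalizing integral is never computed analytically.
samples = [random.uniform(-5, 5) for _ in range(100_000)]
weights = [likelihood(t) for t in samples]
posterior_mean = sum(w * t for w, t in zip(weights, samples)) / sum(weights)
print(posterior_mean)  # ~1.06, the sample mean, as expected with a flat prior
```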
I took it. No SAT scores or classical IQ scores; didn't take the Myers-Briggs (because it's stupid) or the autism one (because, freakin' hell, amateur psychology diagnosis on the 'net).
I'm not sure that adding the conjunction (R(x,y,z) & R(x,y,w) -> z=w) would have made things clearer... I thought it was obvious the hypothetical mathematician was just explaining what kind of steps you need to "taboo addition".
It so happens that the three "big lies" Death mentions are all related to morality/ethics, which is a hard question. But let me take the conversation and change it a bit:
"So we can believe the big ones?"
Yes. Anger. Happiness. Pain. That sort of thing.
"They're not the same at all!"
You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of happiness, one molecule of pain.
In this version, the final argument is still correct -- if I take the universe and grind it down to the finest powder and sieve it, I will not be able to say "woo! that carbon atom is an atom of happiness". Since the penultimate question of this meditation was "Is there anything else", at least I can answer that question.
Clearly, we want to talk about happiness for many reasons -- even if we do not value happiness in itself (for ourselves or others), predicting what will make humans happy is useful for knowing stuff about the world. Therefore, it is useful to find a way that allows us to talk about happiness. Happiness, though, is complicated, so let us put it aside for a minute to ponder something simpler: a solar system. I will simplify here: a solar system is one star and a bunch of planets orbiting around it. Though solar systems affect each other through gravity or radiation, most of the effects on the relative motions inside a solar system come from inside itself, and this pattern repeats itself throughout the galaxy. Much like happiness, being able to talk about solar systems is useful -- though I do not particularly value solar systems in and of themselves, it's useful to have a concept of "a solar system", which describes things with commonalities and allows me to generalize.
If I grind up the universe, I cannot find an atom that is a "solar system" atom -- grinding the universe down destroys the useful "solar system" pattern. For bounded minds, having these patterns leads to good predictive strength without having to figure out each and every atom in the solar system.
In essence, happiness is no different from a solar system -- both are crude words that describe common patterns. It's just that happiness is a feature of minds (mostly human minds, but we talk about how dogs or lizards are happy sometimes, and that's not surprising -- those minds run related algorithms). I cannot say where every atom is in the case of a human being happy, but some atom configurations are happy humans, and some are not.
So: at the very least, happiness and solar systems are part of the causal network of things. They describe patterns that influence other patterns.
Mercy is easier than justice and duty. Mercy is a specific configuration of atoms -- a human behaving in a specific way: even though the human feels they are entitled to cause another human hurt ("feeling entitled" is a set of specific human-mind configurations, regardless of whether "entitlement" actually exists), they do not do so (for specific reasons, etc. etc.). In short, mercy describes specific patterns of atoms, and is part of causal networks.
Duty and justice -- I admit that I'm not sure what my reductionist metaethics are, and so it's not obvious what they mean in the causal network.
You can assume that O will make sure to intervene just enough that two people who are not right for each other will figure it out before they are 18.
I tried the exercise, and came up with an interesting weirdtopia. http://moshez.wordpress.com/2011/06/22/going-outside/
Yes, clearly, a bit after I asked, I learned how to use intuition, and at some point, it became rote. But the bigger point is that this is a special case -- in logic, and in math, there are a lot of truth-preserving transformations, and choosing a sequence of transformations is what doing math is. That interesting interface between logic-as-rigid and math-as-something-exploratory is a big part of the fun in math, and what led me to do enough math to eventually publish a paper. Of course, after that, I went into software engineering, but I never forgot that initial sensation of "oh my god, that is awesome" the first time Moshe_1992 learned that there is no such thing as "moving the 1 from one side of the equation to the other" except as a high-level abstraction.
"I will remark, in some horror and exasperation with the modern educational system, that I do not recall any math-book of my youth ever once explaining that the reason why you are always allowed to add 1 to both sides of an equation is that it is a kind of step which always produces true equations from true equations."
I can now say that my K-12 education was, at least in this one way, better than yours. I must have been 14 at the time, and the realization that you can do that hit me like a ton of bricks, followed closely by another ton of bricks -- choosing what to add is not governed by the laws of math -- you really can add anything, but not everything is equally useful.
E.g., "solve for x, x+1=5"
You can choose to add -1 to both sides, getting "x+1+(-1)=5+(-1)", simplify both sides, get "x=4", and yell "yay" -- but you can also choose to add, say, 37, and get (after simplification) "x+38=42", which is still true, just not useful. My immediate question after that was "how do you know what to choose?" and, long story short, 15 years later I published a math paper... :)
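A quick way to see the "truth-preserving" part in code (sympy here is just my illustration, nothing from the original discussion):

```python
from sympy import Eq, S, Symbol, solveset

x = Symbol('x')
eq = Eq(x + 1, 5)
useful = Eq(eq.lhs + (-1), eq.rhs + (-1))  # simplifies to x = 4
useless = Eq(eq.lhs + 37, eq.rhs + 37)     # x + 38 = 42

# All three equations have the same solution set: the transformation
# preserves truth whether or not the chosen term was useful.
print(solveset(eq, x, S.Reals))       # {4}
print(solveset(useful, x, S.Reals))   # {4}
print(solveset(useless, x, S.Reals))  # {4}
```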
The software was using "untranslatable" as shorthand for "the current version of the software cannot translate a term and so is giving it a numeric designation, so you will be able to see if we use it again" -- probably not even claiming that no future version of the software will be able to translate it, to say nothing of a human who has spent a non-trivial amount of thought on the topic (in TWC's future, there's no AI, which means human thought will do some things no software can do).
Meditation 3: [Hardest of the meditations, for me.] Let us observe the difference between [post-utopian] --> [colonial alienation] and a connected thing (say, [I see you picked an Ace] --> [You see you picked an Ace] with a deck of cards). In the first case, there is no way to settle an argument about whether Ellie is post-utopian or not; we would predict that it would cause arguments between people that never get settled. Anything connected to the causal web is more likely to lead to settleable arguments, at least among people behaving more-or-less rationally. It is not a perfect test, but it does suggest that I expect to see different things from connected networks and unconnected networks, like people changing their minds.
[Cheating, since I already read some of the Zombie sequence, but have not read any replies in this thread] Consciousness causes you to speak of consciousness, which is the result of neurons in your brain firing your jaw muscles (and other muscles, and so on). If it were epiphenomenal enough that no one would talk about it, we wouldn't have this question in the first place.
[Has consciousness] --> [Writes books/blogs on consciousness]
This causally connects consciousness to the universe.
Replying without reading any of the other answers. Apologies in advance for redundancy:
Meditation 1: The psychic cousin is indeed connected to the network of things. Let's assume that it works, for simplicity, on decks of two cards: an Ace and a King.
Probabilities:
- Moshe picked Ace / Cousin says Moshe picked Ace -- 0.4
- Moshe picked King / Cousin says Moshe picked Ace -- 0.1
- Moshe picked Ace / Cousin says Moshe picked King -- 0.1
- Moshe picked King / Cousin says Moshe picked King -- 0.4
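From that joint table, the cousin's report is clearly informative (a quick check; the 0.4/0.1 numbers are the ones above):

```python
joint = {
    ("Ace", "says Ace"): 0.4,
    ("King", "says Ace"): 0.1,
    ("Ace", "says King"): 0.1,
    ("King", "says King"): 0.4,
}

p_says_ace = sum(p for (card, said), p in joint.items() if said == "says Ace")
print(joint[("Ace", "says Ace")] / p_says_ace)  # P(Ace | says Ace) = 0.8 != 0.5
```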
The True Love/Communing is more complicated: Does True Love have any discernible effect? If we assume True Love, say, changes the probability of having a fight (or some property of the fight -- for example, a fight without reconciliation inside of 24 hours), then we should have a diagram:
True Love --> Communing says True Love
    |
    v
No fight
and resulting joint probability distributions. Since "fighting" is something observable (by a trained psychologist, say, who puts them in the "Love Lab" http://www.gottman.com/49847/The-Love-Lab.html) we have connectedness.
Dunno about Russian, but Hebrew has them for sure -- "t'khelet" means "light blue", "kakhol" means "blue". I know quite a few bilingual ~5-year-old kids who, if they're wearing a light blue T-shirt, will scream at you if you say "you have a kakhol T-shirt" in Hebrew, but will happily agree they are wearing a "blue" T-shirt -- thus showing that, with a sufficient lack of reflectivity, two conflicting color systems can coexist in the same individual. (BTW -- "light blue" is just an approximation; t'khelet is a specific shade of light blue.)
"Yet many highly intelligent people with normal rationality have terrible fashion sense"
Hrm, I'm not sure what evidence there is that highly intelligent people have worse fashion sense than equivalent people [let's stick to the category of males, with which I'm most familiar]. It seems to me like "fashion" for males comes down to a few simple rules that a monkey (or, for that matter, any programmer or mathematician) can master. The problem seems to be that (1) one does need to master these rules, and (2) sometimes it means one does not dress comfortably.
I would like to offer a competing hypothesis: nerds have just as much "innate" fashion sense as non-nerds, but they feel that fashion is beneath them, that dressing comfortably is more important than following fashion, or that they would prefer to dress to impress nerds (with T-shirts that say "P(H|E) = P(E|H)*P(H)/P(E)" for example) than to impress non-nerds. In other words, the much simpler hypothesis "dress is usually worn to self-identify as a member of a tribe" is enough to explain nerds' perceived lack of fashion sense.
[For the record, here is how a nerd male can "simulate" a reasonable facsimile of fashion sense: for semi-formal occasions, get a couple of nice suits and wear them. If nobody else would wear a tie, wear a suit without the tie (if your ability to predict whether people will wear a tie is that bad, improve it with explicit Bayesian approximation). For all other occasions, wear dark-colored slacks and a button-down shirt with a compatible color (ask a person you trust about which colors go with which, and keep a table glued to the inside of your closet). Any "nerd" has mastered skills tremendously more complicated than that (hell, correctly writing HTML is more complicated). One can only assume it is lack of motivation, not of ability.]
As an example of a nerd myself, I can definitely say the reason I dress "with a horrible fashion sense" is as a tribal identification scheme. In situations where my utility function would actually suffer because of that, I do the rational thing and wear the disguise of a different tribe... (For example, when going on sales pitches to customers, I let the sales rep in charge of the sale tell me what to wear, down to the socks; at my wedding, I let my wife pick out my clothes; etc.)
I'm not really sure how you can claim "techniques are value-neutral" without assuming what values are. For example, if my values contain a term for someone else's self-esteem, a technique that lowers their self-esteem is not value-neutral. If my values contain a term for "respecting someone else's requests", techniques for overcoming LMR are not value-neutral. Since I've only limited knowledge of the seduction techniques advanced by the community, I did not offer more -- after seeing some of the techniques, I decided that they are decidedly not value neutral, and therefore chose to not engage in them.
I cannot answer for Eliezer, but I can (perhaps) explain why the belief is "visibly insane".
- There is footage of the airplanes flying into the building.
- In hindsight, several engineering organizations that investigated the collapse concluded that a collapse resulting from the fires was likely (http://en.wikipedia.org/wiki/Collapse_of_the_World_Trade_Center).
- In order for it to be a conspiracy, there would have had to be:
  3a. Someone who planted the explosives in a way that would cause an organized collapse.
  3b. People who shipped the explosives.
  3c. People on the inside of FEMA and the other investigating organizations who looked into it.
  3d. People on the inside of the FBI who swept the evidence for explosives under the rug.
  3e. Nobody in the group of 3a-3d who had a change of heart and decided to come clean.
For 3 to be true, too many things have to go right. For the non-conspiracy explanation, all that's needed is the (perhaps slightly surprising) fact that the fire caused a specific kind of collapse. Most "truthers" know about as much about physics as I do (high-school mechanics, some basics in college). So for a given truther to believe that, the truther needs to assume a high degree of certainty for his or her intuitive physics estimation in the fairly subtle area of civil engineering. In fact, they'd have to have a degree of certainty so high that all the elements in 3 are not enough to sway them the other way. That degree of certainty should be reserved for actually trained civil engineers, and perhaps not even then...
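To make the "too many things" point concrete, here is the conjunction arithmetic with numbers I have invented purely for illustration:

```python
# All probabilities below are made up; only the multiplicative structure
# of the argument matters.
steps = {
    "3a: explosives planted for an organized collapse": 0.1,
    "3b: explosives shipped unnoticed": 0.2,
    "3c: FEMA-side investigators complicit": 0.05,
    "3d: FBI-side evidence suppressed": 0.05,
    "3e: nobody ever comes clean": 0.1,
}
p = 1.0
for prob in steps.values():
    p *= prob  # conjunctions shrink multiplicatively
print(p)  # 5e-06 with these invented numbers
```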
It's not that costly if you do it with university students: get two groups of 4 university students. One group is told "test early and often". One group is told "test after the code is integrated". For every bug they fix, measure the effort it takes to fix it (by having them "sign a clock" for every task they do). Then, do analysis on when the bug was introduced (this seems easy post-fixing the bug, which is easy if they use something like Trac and SVN). All it takes is a month-long project that a group of 4 software engineering students can do. It seems like any university with a software engineering department could run it as the project for a single course. Seems to me it's under $50K to fund?
Did anything come of the discussion? I would like to know, since there's a school in San Bruno I would love to give a talk at.
Just one thing that bothered me right off the bat when I read the book -- PLEASE PLEASE attribute songs to the original creators. Otherwise, it looks like you're claiming you wrote the song. That's just unfair... :(
Yes, I do seek knowledge for other reasons, here and elsewhere. But my expectation is that this will not "look like" curiosity, because I expect to make few changes in my behavior based on what I read, and so the importance of it being "true" is likewise diminished. Sure, I would like my beliefs about the brain and AI to be true, but I'm not prepared to spend a LOT of resources to do it -- I'm sure if I were really curious about the role of oxytocin in relationships, I could reach true beliefs faster by spending more resources. There are gradations between "French paintings" and "database performance" in how curious I am about things, I agree, and most of Less Wrong falls somewhere in the middle. The curiosity Luke was alluding to is the all-consuming curiosity about "things I expect belief accuracy to have a large impact on my utility", and I doubt most of Less Wrong falls into that category.
I'm not sure there's an overarching "curiosity" that people have or don't have: I'm very curious about whether a specific kind of database will perform adequately in certain circumstances (long story) but I'm only mildly curious about how to identify which French painter during the 19th century painted which picture. Some art experts, I'm sure, have cultivated the skill to guess within seconds which painter it is for every picture. I wouldn't mind having that skill -- it sounds like a fun skill to have -- but it seems like it would be more resources than it's worth. OTOH, I really want my probability estimations re: the database to reflect reality. Do I need to use AI theory? Doubtful. Probably a little bit of statistics, and even that fairly mild, but I do have to think a lot about how to use my knowledge of databases to design experiments to find the truth out. I'm not sure if that would look "curious" to the lay person (and, of course, there's also a factor of "signaling curiosity" -- I want to make sure that everyone with a stake in the process sees that I've done the due diligence), but nonetheless, I'm truly curious about this (and yes, it could go both ways...I think this is the most important part of curiosity vs. fake curiosity).
When I was genuinely curious about how US immigration law applied to me (and again, it could have gone both ways -- before running any experiments, I made sure to visualize both options and realized I could live with both), I just called an immigration lawyer (and, for a later question, the paralegal who was involved with my visa). In that case I needed very little knowledge from LW -- I didn't apply my knowledge about Bayes, or about heuristics and biases; I just went and asked a professional (of course, in some cases, like wanting to know if a stock will go up, asking a professional is disastrous, but with immigration law, lawyers can estimate probabilities fairly accurately even if they lack formal rationality training).
Those were the two examples of real curiosity from my life that I could think of, that looked nothing like the description here of "real curiosity"....
Luke, can you fix the link to the "Rathmanner & Hutter" article in the references section? It's missing an "http://" in front of it.
It means that that entity's evolved instincts would be out-of-whack with the MML, so if that entity also got to the point where it invented Turing machines, it would see the flaw in its reasoning. This is no different than realizing that Maxwell's equations, though they look more complicated than "anger" to a human, are actually simpler. Sometimes, the intuition is wrong. In the blue/grue case, human intuition happens to not be wrong, but a hypothetical entity is -- and both humans and the entity, after understanding math and computer science, would agree that humans are wrong about anger, and hypothetical entities are wrong about grue. Why is that a problem?
The original problem, as stated, is "valid": a mind with a "grue"-like prior would make the grue prediction, while normal human minds (with a "green"-like prior, mostly as a result of our evolution around colors) would make the "green" prediction. If we want a more neutral prior, we go with "minimum message length" and ask what colors actually are. Grue and green are words in a dictionary, so they do not count for the math -- only Turing machines do. It's simpler to write a Turing machine which puts out "light at XXXhz, light at XXXhz" than one that also takes a time T into account. Therefore, the green prior is more in line with an MML-prior mind. We take MML priors as most compatible with human-like reasoning.
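A crude way to make the comparison tangible (this is not a real Kolmogorov-complexity computation -- program-text length is merely standing in for machine size, and both "programs" are invented):

```python
# Both "programs" are just strings; their lengths stand in for the size of
# the minimal Turing machine. Purely illustrative.
green = "while True: emit('light at XXXhz')"
grue = ("t = 0\n"
        "while True:\n"
        "    emit('light at XXXhz' if t < T else 'light at YYYhz')\n"
        "    t += 1")

print(len(green), len(grue))  # the grue program pays for a clock and a threshold T
```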
That's funny -- I don't consider the FAI thing even remotely "offensive" (perhaps "debatable", in the sense of "I'm not sure how likely it is -- do you have any evidence?" but not "offensive"). I wrote a short story in which the FAI kept human beings humanly-intelligent (though not explained in the story, in my background, it did bring humans to a fairly high minimum, but it did not change the intelligence level overall).
A piece of clothing is fundamentally a tool. Definitions are important so everyone is on the same page. I feel like Wikipedia's first sentence on "tool" accurately describes it
Starting an article with a "proof by definition" does not make me feel overly positive about the article. Why is the definition of "tool" important? Do you think that before we saw that definition, we did not know that (a) clothes help us deal with environmental conditions or (b) clothes change the way some people perceive us?
Overall, I do not understand what this article is doing on Less Wrong. I'm pretty sure there are more effective ways, both time-wise and money-wise, to dress better for whatever social goals one is trying to achieve -- for example, going to the high-end shops, asking for advice, but then acting on that advice on Amazon and at sales, or asking a trusted authority "do these clothes convey the impression that I want?"
[And I'm pretty sure googling for "how to get cheaper clothing" finds way more options than what you listed...]
I think an article that can be summarized as "Clothing affects the way people perceive you. Dress for the perception you want, within your time/money allowances. Google and/or ask trusted friends for more specific advice as to which clothes achieve which impression." is a little bit... much for Less Wrong.
Thanks for the reminder! In honor of this, I donated to the "Against Malaria Foundation". Not all of us have the chance to save the world, but every human life saved is precious! :)
I guess there are my beliefs-which-predict-my-expectations and my aliefs-which-still-weird-me-out. In the sense of beliefs which predict my expectations, I would say the following about mathematics: as far as logic is concerned, I have seen (with my eyes, connected to neurons, and so on) the proof that from P&-P anything follows, and since I do want to distinguish "truth" from "falsehood", I view P&-P (unless I made a mistake in the proof of P&-P->Q, which I view as highly unlikely -- an easy million-to-one against) as false. Anything which leads me to P&-P, therefore, I see as false, conditional on the possibility that I made a mistake in the proof (or did not notice a mistake someone else made). Since I have a proof from "2+2=3" to "2+2=3 and 2+2!=3" (which is fairly simple, and which I checked multiple times), I view 2+2=3 as equally unlikely. That is surely entanglement with the world -- I manipulated symbols written by a physical pen on physical paper, and at each stage, the line following obeyed a relationship with the line before it. My belief that "there is some truth", I guess, can be called unconditional -- nothing I see will convince me otherwise. But I'm not even certain I can conceive of a world without truth, while I can conceive of a world, sadly, where there are mistakes in my proofs :)
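For reference, the proof of explosion I'm relying on is just the standard textbook derivation, sketched in natural deduction:

```latex
\begin{align*}
&1.\ P \land \neg P && \text{premise} \\
&2.\ P && \text{from 1, } \land\text{-elimination} \\
&3.\ \neg P && \text{from 1, } \land\text{-elimination} \\
&4.\ P \lor Q && \text{from 2, } \lor\text{-introduction} \\
&5.\ Q && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```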
So you don't consider mistakes in logical reasoning a problem because someone might point them out to you? What if it's an easy mistake to make, and a lot of other people make the same mistake? At this point, it seems like you're arguing about the definition of the words "problem with", not about states of the world. Can you clarify what disagreement you have about states of the world?