Comments

Comment by afeller08 on Propinquity Cities So Far · 2020-11-17T21:04:09.238Z · LW · GW

I haven't studied this in general, but I have read a decent amount about the history of a couple cities, and based on those examples, can say with confidence that no modern city comes remotely close to the density that people would choose absent regulations keeping density down.

Tokyo today is less densely populated per square meter of ground than Edo was at the end of the Edo period, and Edo at that time had no plumbing and basically no buildings taller than three stories. (I don't think there are historical examples of cities with no height restrictions and no density restrictions, because until 1885 nobody knew how to build a skyscraper, so height restrictions existed indirectly through limitations of engineering -- technically, they still do.)

All of the evidence I'm familiar with suggests that people would choose to be very densely concentrated if it wasn't for regulations limiting their density.

The favelas of Brazil are generally considered a stepping stone towards urban living by their residents. Most of their residents don't live there because they need to; they live there because they would prefer to leave the places they came from (generally the countryside). There's pretty strong evidence globally and historically that, when given the option, people deliberately choose urban poverty over rural poverty. People migrate from villages to slums, and they don't move back. This is happening in Brazil, Kenya, Tibet, and India today. It happened historically in the United States and the U.K. This exhausts my knowledge of the history of human migration patterns, but I assume that the cases I don't know anything about are roughly consistent with the places I do know something about.

Air pollution from residential density is unlikely ever to be self-limiting. Nineteenth-century London had far worse air pollution than any modern city, caused by coal-burning urban factories being everywhere, not to mention that everyone also burned coal for heat in the winter. (They lacked the technology to track air pollution back then, but it was bad enough to help hold life expectancy down to around 30, so pretty bad. Incidentally, high-polluting urban factories were priced out of existing in urban settings more than they were regulated out of existing in them.) Most cities also end up having a high percentage of their residents travel primarily by not-car, because traffic gets horrendous everywhere but the NIMBYest of cities. Outside the U.S., most cities are also designed around encouraging people to get around by not-car.

Asian countries generally permit much higher urban density than Western countries, and this seems to greatly increase the percentage of people who prefer to live in urban settings, and more or less prevent suburbs from developing. (I assume this happens because people are much less likely to be priced out of being able to live in a city, and that the preference for living outside of a city mainly comes from costs.) 

Population density and price per square foot of livable space are highly correlated. I strongly suspect that the density causes the higher prices; I'm pretty sure the higher prices don't cause the density.

By the way, Bloomberg News has a section called "CityLab" that is primarily focused on urban planning. I highly recommend it to anyone interested in the subject.

Comment by afeller08 on Open Thread, Apr. 27 - May 3, 2015 · 2015-04-28T07:02:21.498Z · LW · GW

If I were designing the experiment, I would have the control group play a different game rather than receive maths instruction.

You generally don't want test subjects to know whether they are in the control condition or not. So if you're going to use maths instruction as the control, you probably shouldn't tell them what the experiment is designed to test at all until you're debriefing at the end. If you tell the people you are recruiting that you are testing the effects of playing computer games on statistical reasoning, then the people in the control condition won't need to realize that what you're really testing is whether your RPG in particular helps people think about statistics. They can just play Half-Life 2 or whatever you pick for them to play for a few minutes, and then take your tests afterwards.

Comment by afeller08 on 16 types of useful predictions · 2015-04-28T06:29:13.219Z · LW · GW

I find that playing the piano is a particularly useful technique for gauging my emotions, when they are suppressed/muted. This works better when I'm just making stuff up by ear than it does when I'm playing something I know or reading music. (And learning to make stuff up is a lot easier than learning to read music if you don't already play.) Playing the piano does not help me feel the emotions any more strongly, but it does let me hear them -- I can tell that music is sad, happy, or angry regardless of its impact on my affect. Most people can.

Something that I don't do, but that I think would work (based partly on what Ariely says in The Upside of Irrationality, partly on what Norman says in Emotional Design, and partly on anecdotal experience), is to do something challenging/frustrating and see how long it takes you to give up or get angry. If you can do it for a while without getting frustrated, you're probably in a positive state of mind. If you give up feeling like it's futile, you're sad, and if you start feeling an impulse to break something, you're frustrated/angry. The less time it takes you to give up or get angry, the stronger that emotion is. The huge downside to this approach is that it exacerbates negative emotions (temporarily) in order to gauge what you were feeling and how strongly.

Comment by afeller08 on 16 types of useful predictions · 2015-04-28T05:48:56.889Z · LW · GW

The person proposing the bet is usually right.

This is a crucial observation if you are trying to use this technique to improve the calibration of your own accuracy! You can't just start making bets when no one you regularly associate with is challenging you to bets.

Several years ago, I started taking note of all of the times I disagreed with other people and looking up who was right, but initially, I only counted myself as having "disagreed with other people" if they said something I thought was wrong and I attempted to correct them. Soon after, I added in the cases where they corrected me and I argued back. During this period, I went from thinking I was about 90% accurate in my claims to believing I was way more accurate than that. I would go months without being wrong, and this was in college, so I was frequently getting into disagreements with people -- probably three a day, on average, during the school year. Then I started checking the times that other people corrected me just as much as I checked the times I corrected other people (counting even the times that I made no attempt to argue back), and my accuracy rate plummeted.

Another thing I would recommend to people starting out with this is to keep track of your record with individual people, not just your general overall record. My accuracy rate with a few people is way lower than my overall accuracy rate. My overall rate is higher than it should be because I know a few argumentative people who are frequently wrong. (This would probably change if we were actually betting money and only counting arguments in which those people were willing to bet, so your approach adjusts for this better than mine.) There are several people with whom I'm close to 50%, and two people for whom I have several data points and my accuracy is below 50%.
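As a rough sketch of that kind of record-keeping (the field names, example people, and numbers below are hypothetical illustrations, not anything from the comment above), you could log each disagreement with who it was against and whether you turned out to be right, then compute accuracy overall and per person:

```python
from collections import defaultdict

# Each record: who I disagreed with and whether I turned out to be right.
# Hypothetical sample data for illustration only.
disagreements = [
    {"person": "Alice", "i_was_right": True},
    {"person": "Alice", "i_was_right": False},
    {"person": "Bob",   "i_was_right": True},
    {"person": "Bob",   "i_was_right": True},
]

def accuracy(records):
    """Fraction of recorded disagreements where I turned out to be right."""
    return sum(r["i_was_right"] for r in records) / len(records)

# Group the same records by the other person to get per-person rates.
by_person = defaultdict(list)
for r in disagreements:
    by_person[r["person"]].append(r)

print(f"overall: {accuracy(disagreements):.0%}")
for person, records in sorted(by_person.items()):
    print(f"{person}: {accuracy(records):.0%} over {len(records)} disagreements")
```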

There's one other point I think somebody needs to make about calibration, and that's that 75% accuracy when you disagree with other people is not the same thing as 75% accuracy overall. 75% information fidelity is atrocious; 95% information fidelity is not much better. Human brains are very defective in a lot of ways, but they aren't that defective! Except at doing math. Brains are ridiculously bad at math relative to how easily machines can be built to be good at math. For most intents and purposes, 99% isn't a very high percentage. I am not a particularly good driver, but I haven't gotten into a collision with another vehicle in well over 1,000 times behind the wheel. Percentages tend to sit on an exponential scale (or, more accurately, a logistic curve). You don't have to be a particularly good driver to avoid getting into an accident 99.9% of the time you get behind the wheel, because that is just a few orders of magnitude of improvement relative to 50%.
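To make the "orders of magnitude" point concrete (the log-odds framing here is my gloss on the comment's point, not something it states), convert each probability to odds and take the base-10 logarithm:

```latex
\log_{10}\!\frac{p}{1-p}:\qquad
0.50 \mapsto 0,\quad
0.75 \mapsto \log_{10} 3 \approx 0.48,\quad
0.99 \mapsto \log_{10} 99 \approx 2.0,\quad
0.999 \mapsto \log_{10} 999 \approx 3.0
```

On this scale, avoiding an accident 99.9% of the time is only about three orders of magnitude (in odds) above a coin flip, which is the sense in which 99% "isn't a very high percentage."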

Information fidelity differs from information retention. Discarding 25% or 95% or more of collected information is reasonable; corrupting information at that rate is what I'm saying would be horrendous. (Discarding information conserves resources, whereas corrupting information does not... except to the extent that you would consider lossy -- as in "not lossless" -- compression to be corrupting information, but I would still consider that to be discarding information. Episodic memory is either very compressed or very corrupted, depending on what you think it should be.)

In my experience, people are actually more likely to be underconfident about factual information than they are to be overconfident, if you measure confidence on an absolute scale instead of a relative-to-other-people scale. My family goes to trivia night, and we almost always get at least as many correct as we expect to get correct, usually more. However, other teams typically score better than we expect them to score too, and we win the round less often than we expect to.

Think back to grade school, when you actually had fill-in-the-blank and multiple-choice questions on tests. I'm going to guess that you were probably an A student and got around 95% right on your tests... because a) that's about what I did and I tend to project, b) you're on LessWrong, so you were probably an A student, and c) you say you feel like you ought to be right about 95% of the time. I'm also going to guess (because I tend to project my experience onto other people) that you probably felt a lot less than 95% confident on average while you were taking the tests. There were more than a few tests I took in school where I walked out thinking, "I didn't know any of that; I'll probably still get a 70 or better, which would be horribly bad compared to what I usually do, but I really feel like I failed that"... and it was never 70. (Math was the one exception, in which I tended to be overconfident; I usually made more mistakes than I expected to make on my math tests.)

Where calibration really gets screwed up is when you deal with subjects that are way outside the domain of normal experience, especially if you know that you know more than your peer group about the domain. People are not good at thinking about abstract mathematics, artificial intelligence, physics, evolution, and other subjects that happen at a different scale from normal everyday life. When I was 17, I thought I understood quantum mechanics just because I'd read A Brief History of Time and The Universe in a Nutshell... Boy, was I wrong!

On LessWrong, we are usually discussing subjects that are way beyond the domain of normal human experience, so we tend to be overconfident in our understanding of these subjects... but part of the reason for this overconfidence is that we do tend to be correct about most of the things we encounter within the confines of routine experience.

Comment by afeller08 on 16 types of useful predictions · 2015-04-28T05:13:54.867Z · LW · GW

Precisely for this reason, there was a time when I wrote in Elverson pronouns (basically, Spivak pronouns) for gender-ambiguous cases. So, if I was writing about Bill Clinton, I would use "he," and if I was writing about Grace Hopper, I would use "she," but if I was writing about somebody/anybody in general, I would use "ey" instead. This allows one to easily compile the pronouns according to preference without mis-attributing pronouns to actual people... I've always planned on getting around to hosting my own blog running on my own code, which would include an option to let people set a cookie storing their gender preference so they could get "she by default," "he by default," "Spivak by default," or randomization between he and she -- with a gimmick option for switching between different sets of gender-neutral pronouns at random. The default default would be randomization between he and she. But I haven't gotten around to writing the website to host my stuff yet, and I just use unmodified Blogger, so for now I'm doing deliberate switching by hand as described above.

(I think I could write a script like that for Blogger too, but I haven't bothered looking into how to customize Blogger, because I keep planning to write my own website anyway; there are a lot of things I want to do differently, and that's not necessarily the one at the top of my list.)
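A minimal sketch of how that preference-driven substitution could work (the placeholder syntax, pronoun table, and function below are hypothetical illustrations, not a description of any existing Blogger feature or of the commenter's actual plan):

```python
import random

# Hypothetical pronoun sets; "spivak" holds the Elverson/Spivak forms mentioned above.
PRONOUNS = {
    "he":     {"subj": "he",  "obj": "him", "poss": "his"},
    "she":    {"subj": "she", "obj": "her", "poss": "her"},
    "spivak": {"subj": "ey",  "obj": "em",  "poss": "eir"},
}

def render(template: str, preference: str = "random") -> str:
    """Fill {subj}/{obj}/{poss} placeholders according to a reader preference.

    preference: "he", "she", "spivak", or "random" (the "default default":
    pick he or she at random, as described in the comment above).
    """
    if preference == "random":
        preference = random.choice(["he", "she"])
    return template.format(**PRONOUNS[preference])

# Usage: the preference would come from the reader's stored cookie.
text = "If {subj} disagrees, ask {obj} to state {poss} reasons."
print(render(text, preference="spivak"))  # If ey disagrees, ask em to state eir reasons.
```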

Comment by afeller08 on 16 types of useful predictions · 2015-04-28T04:55:58.075Z · LW · GW

More jarring than that is if one set of gender pronouns gets used predominantly in negative examples, and the other set gets used predominantly in positive examples.

I try to deliberately switch based on context. If I write an example of someone being wrong and then someone being right, I will stick with the same gender for both cases, and then switch to the other gender when I move to the next example of someone being wrong, right, or indifferent.

Occasionally, something will be so inherently gendered that I cannot use the non-default gender and feel reasonable doing it. In these cases, I actually don't think I should. (Triggers: sexual violence. I was recently writing about violence, including rape, and I don't think I could reasonably alternate pronouns for referring to the rapist because, while not all perpetrators are male, they are so overwhelmingly male that it would be unreasonable to use "she" in isolation. I mixed "he" with an occasional "he or she" for the extremely negative examples in those few paragraphs.)

Comment by afeller08 on Moral Anti-Epistemology · 2015-04-24T21:38:49.475Z · LW · GW

I changed my mind midway through writing this. Hopefully it still makes sense... I started out disagreeing with you based on the first two thoughts that came to mind, but I'm now beginning to think you may be right.

So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.

I.

This statement doesn't really fit with the philosophy of morality. (At least as I read it.)

Consequentialism distinguishes itself from other moral theories by emphasizing terminal values more than other approaches to morality do. A consequentialist can have "No murder" as a terminal value, but that's different from a deontologist believing that murder is wrong or a Virtue Ethicist believing that virtuous people don't commit murder. A true consequentialist seeking to minimize the amount of murder that happens would be willing to commit murder to prevent more murder, but neither a deontologist nor a virtue ethicist would.

Contractualism is a framework for thinking about morality that presupposes that people have terminal values and that their values sometimes conflict with each other's terminal values. It describes morality as a negotiated system of adopting or avoiding certain instrumental goals, implicitly agreed to by the people involved for their mutual benefit in attaining their terminal values. It says nothing about what kind of terminal values people should have.

II.

Discussions of morality focus on what people "should" do and what people "should" think, etc. The general idea of terminal values is that you have them and they don't change in response to other considerations. They're the fixed points that affect the way you think about what you want to accomplish with your instrumental goals. There's no point in discussing what kind of terminal values people "should" have. But in practice, people agree that there is a point to discussing what sorts of moral beliefs people should have.

III.

The psychological conditions that cause people to become immoral by most other people's standards have a lot to do with terminal values, but not anything to do with the kinds of terminal values that people talk about when they discuss morality.

Sociopaths are people who don't experience empathy or remorse. Psychopaths are people who don't experience empathy, remorse, or fear. Being able to feel fear is not the sort of thing that seems relevant to a discussion about morality... But that's not the same thing as saying that being able to feel fear is not relevant to a discussion about morality. Maybe it is.

Maybe what we mean by morality, is having the terminal values that arise from experiencing empathy, remorse, and fear the way most people experience these things in relation to the people they care about. That sounds like a really odd thing to say to me... but it also sounds pretty empirically accurate for nailing down what people typically mean when they talk about morality.

Comment by afeller08 on Moral Anti-Epistemology · 2015-04-24T19:41:17.090Z · LW · GW

Anti-epistemology is a more general model of what is going on in the world than rationalizations are,

Yes.

so it should all reduce to rationalizations in the end.

Unless there are anti-epistemologies that are not rationalizations.

The general concept of a taboo seems to me to be an example of a forceful anti-epistemology that is common in most moral ideologies and is different from rationalization. When something is tabooed, it is deemed wrong to do, wrong to discuss, and wrong to even think about. The tabooed thing is something that people deem wrong because they cannot think about whether it is wrong without in the process doing something "wrong," so there is no reason to suppose that they would find anything wrong with the idea if they were to think about it and consider whether the taboo fits with or runs against their moral sense.

A similar anti-epistemology is when people believe it is right to believe something is morally right... on up through all the meta-levels of beliefs about beliefs, so that they would already be committing the sin of doubt as soon as they begin to question whether they should believe that continuing to hold their moral beliefs is actually something they are morally obliged to do. (For ease of reference, I'll call this anti-epistemology "faith".)

One of the things that rationalization, taboos, and faith have in common is that they are sufficiently general modes of thought to be applied to "is" propositions as well as "ought" propositions, and when these modes of thought are applied to objective propositions whose truth-values can be measured, they behave like anti-epistemologies. So, in the absence of evidence to the contrary, we should presume that they behave as anti-epistemologies for morality, art criticism, and other subjects -- even though the existence of something stable and objective to be known in these subjects is highly questionable. The modes of thought I just mentioned are inherently flawed in themselves; they are not simply flawed ways of thinking about morality in particular.

If you are looking for bad patterns of thought that deal specifically with ethics and cannot be applied to other subjects about which truth can be more objectively measured, the best objection (I can think of) by which to call those modes of thought invalid is not to try to figure out why they are anti-epistemologies, but instead to reject them for their failure to put forward any objectively measurable claims. There are many more ways for a mode of thought to go wrong than to go right, so until some thought pattern has provided evidence of being useful for making accurate judgments about something, it should not be presumed to be a useful way to think about anything for which the accuracy of statements is difficult or impossible to judge.

Comment by afeller08 on Happiness and Goodness as Universal Terminal Virtues · 2015-04-24T03:59:26.592Z · LW · GW

Welcome to LessWrong! I wouldn't comment if I didn't like your post and think it was worth responding to, so please don't interpret my disagreement as negative feedback. I appreciate your post, and it got me thinking. That said, I disagree with you.

The real point I'm making here is that however we categorize personal happiness, goodness belongs in the same category, because in practice, all other goals seem to stem from one or both of these concepts.

Your claims are probably much closer to true for some people than they are for me, but they are far from accurate for characterizing me or the people who come most readily to mind for me.

Depending on what you mean by goals, either happiness doesn't really affect my goals, or the force of habit is one of the primary drivers of my goals. Happiness is a major influence on my ordinary behavior, but is seldom something that I think about very much when making long term plans. (I have thought about thinking about happiness in my long term plans, and decided against doing so because striving after personal happiness in my long term plans does not fit with my personal sense of identity even though it is reasonably consistent with my personal sense of ethics.) Like happiness/enjoyment, routine is a major driver of my everyday behavior, and while it is somewhat motivated by happiness, it comes more from conditioning, much of which was done to me by other people, and much of which I chose for myself. Most of the things I do are simply the things I do out of habit.

When I choose things for myself and make long term plans, virtue/goodness is something that I consider, but I also consider things that are far from being virtue/goodness as you used the term and as most other people use the term. The two things that immediately spring to mind as part of my considerations are my sense of identity/self-image and my desire to be significant.

I was an anglophile in my teenage years, and one of the lasting consequences of that phase of my life is that I Do Not Drink Coffee. This isn't because I don't think I should drink coffee. This isn't because I think drinking coffee would make me less happy. It is simply because drinking coffee is one of the things that I do not do. I drink tea. I would be less myself, from my own perspective, if I started drinking coffee than I am by continuing to not drink coffee and by sometimes drinking tea. Not drinking coffee is part of what it means to me to be me.

My dad is a lifelong Cubs fan. I have sometimes joked to him that one of the things he could do to immediately make his life happier is to quit being a Cubs fan and become a Yankees fan. My dad cares about sports. He would be happier if he was a Yankees fan but he is not a Yankees fan. (You could argue that this is loyalty, but I would disagree... My dad's from the Midwest but he lives on the East Coast now. When other people move from one part of the country to another and their sports allegiances change he doesn't find that surprising, upsetting, or in any way reprehensible. There are other aspects of life where he does believe people are morally obligated to be loyal, and he finds it reprehensible when other people violate family loyalty and other forms of loyalty that he believes are morally obligatory.)

In terms of strength of terminal values, a sense of personal identity is, in most of the cases that I can think of, stronger than a desire for happiness and weaker than a desire to be good. Sort of. Not really. That's just what I want to say and believe about myself but it's not true. It's easier for me to give an example having to do with sports than one having to do with tea. (Sorry, I grew up with them... and they spring to mind as vivid examples much more so than other subjects, at least for me.)

I'm a very fickle sports fan by most standards. I don't really have a particular sport I enjoy watching, and I don't really have a team that I cheer for, but every once in a while, I will decide to watch sports... usually a tournament. Then I'll look at a bunch of stats, read a bunch of commentary, pick whichever team I think deserves to win, and cheer for that team for that tournament. Once I pick a team, I can't change my mind until the tournament is over... It's not that I don't want to or think I shouldn't. It's that even when I think I ought to change my mind, I still keep cheering for the same team as before...

Sometimes, I don't realize that I'll be invited over to someone else's house for one of the games. Sometimes, when this happens, I'm cheering for a different team than everyone else, and I feel extremely silly for doing this and a little embarrassed about it because I'm not really a fan of that team. They're just the team I picked for this tournament. So I'll go over to someone's house, and I'll try to root for the same team as everyone else, and it just won't work. The home team is ahead, and I'll smile along with everyone else. I won't get upset that my team is losing. People won't realize that my team is losing; they'll just think I don't care that much about the game... but then, if my team starts to make a comeback, I suddenly get way more interested in the game. I'll start to reflexively move in certain ways. I'll pump my fist a little when they score. I'll try to keep my gestures and vocalization subtle and under control; I'm still embarrassed about rooting for that team... But I'm doing it anyways, because that's my team, at least for today. Then when the home team comes back again and wins it, I'm disappointed, and I'm even a little more embarrassed about rooting against them than I would have been if they'd lost. This wouldn't change even if I had some ethical reason for wanting the other team to win. If (after the tournament had begun and I'd picked what team I was cheering for) some wealthy donor announced that he was going to make some big gift to a charity that I believe in if and only if his team won, and his team didn't happen to be my team... I would start to feel like I should want his team to win. I know who I cheer for doesn't affect the outcome of the game, but I still feel like it would be more ethical to cheer for the team that would help this philanthropic cause if it won. I'd try to root for them just like I'd try to cheer for the home team if I got invited over to a friend's house to watch the game. But I wouldn't actually want that team to win. When the game started and the teams started pulling ahead of and falling behind each other as so often happens in games, my enthusiasm for the game would keep increasing as my team was pulling ahead and keep falling off again when they started losing ground. It's just what happens when I watch sports.

My sense of identity also affects my life choices and long-term plans. For example, many of my career choices have had as much to do with what roles I can see myself in as they have had to do with what I think would make me happy, what I do well, and what impact I want to have on the world. I think most people can identify with this particular feeling, and this comment is long enough already, so I won't expand on it for now...

By far the biggest motivator of my personal goals, however, is significance. I want to matter. I don't want to be evil, but mattering is more important to me than being good... The easiest way for me to explain my moral feelings about significance is to say that, in practice, I am far more of a deontologist than I am in theory. Karl Marx is an example of someone who matters but was not what I would call good. He dramatically impacted the world, and his net impact has probably been negative, but he didn't behave in any way that would lead me to consider him evil, so he's not evil. I would rather become someone who matters and whom I would also consider good. Norman Borlaug is a significant person whose contributions to the world are pretty much unambiguously good. (Though organic-food-movement people and other Luddites would erroneously disagree.) Bach, Picasso, and Nabokov are all examples of other people who are extremely significant without necessarily having done anything I would call good. They've had a lasting impact on this world.

I want to do that... I don't want to be the sort of person who would do that. I don't want to have the traits that allowed Bach to write the sort of music in his time that would be remembered in our time. I want to carve the words "Austin was here" so deep into the world that they can never be erased. (Metaphorically, of course.) I want to matter.

...and not just in that "everybody is important, everybody matters" sort of way...

I would much rather be happy, good, and significant than any two of the three. If I can only be two, I would want to be good and significant. And if I can only be one, I would want to be significant. I don't want to be evil... there are some things I wouldn't do even if doing them guaranteed that I would become significant. A few lines I would not cross: I wouldn't rape or torture anyone. I wouldn't murder someone I deemed to be innocent. But if the devil and souls were real, I might sell my soul.

Interestingly, the lines I wouldn't cross for the greater good are different from the lines I wouldn't cross to obtain significance. I would kill somebody I deemed to be innocent to save the lives of a hundred innocent people... but not to save just two or three innocent people. On the other hand, if the devil and souls were real and he came to me with the offer, I wouldn't sell my soul to save the lives of a hundred or even a thousand people I deemed to be innocent though I would seriously consider selling my soul to obtain significance. Whatever my values are, they are not well-ordered. (Which is not quite the same as saying they are illogical, though many would interpret it that way.)

Comment by afeller08 on Have you changed your mind recently? · 2015-02-08T03:43:31.669Z · LW · GW

I have been recently questioning how worthwhile it is to be perceived as smart. Since I have always wanted to be intelligent, having people affirm my intelligence has always made me feel validated, much more so than receiving other forms of compliments. Either in response to that form of approval or else in anticipation of receiving it, I have gone out of my way to present myself primarily as an intelligent person and to consider any other perceptions others may have of me as secondary to that one.

As I've begun to question whether this is a good image to promote, one of the things I've begun to think is that I was mistaken to seek to promote a single uniform image of myself at all. Instead, I should try to tailor it more to my specific context. When I've tried to think of how I should want to project myself in general, the best idea I have been able to come up with is that I would want people to find me interesting. But when I think about what would be a beneficial way to be perceived in various contexts, I quickly realize that there are context-specific labels that direct me towards how I should want other people to view me at various times. In the office, I should seek to be viewed as "professional." When I'm looking for dates, I want people to think I'm "sexy." When I comment on LW, I should probably seek to be perceived as "rational." This all feels very obvious in retrospect, but while I was still focused on trying to appear "smart," I was unable even to ask the right questions that would help me present myself to the world in a more desirable way.

Complicating this decision is the realization that I have to transition away from feeling successful about my self-image to feeling far less successful. A few months ago, I felt good about myself when my boss identified my intelligence as the greatest asset I brought to the company and also identified increasing my professionalism as the area where I needed to grow the most. Today, if I were to truly internalize the change that I have been making cognitively, I think I should regard the phrase "highly professional nitwit" as slightly more flattering than "unprofessional genius," in which case I need to change my view of the feedback I received. At the time, I felt like those two comments summed to feedback that was mostly positive. Emotionally, I still feel that way. But since this discussion was in a business context, and since what we were talking about had far less to do with what is actually true than with what my boss perceived to be true, I think I should view the feedback as indicating that I should make substantial adjustments to my behavior. E.g., I need to adjust my email response policy away from asking myself "Do I have anything worth saying to add to this conversation?" and more towards "Given that I don't have anything noteworthy to add to this conversation, is it more professional to acknowledge receipt of this particular email or to simply move on?" (I need to make dozens of small changes similar to that one, wherever I notice that some particular aspect of my behavior is based on my desire to appear intelligent.) Again, this feels very obvious now that I've begun to think about things in these terms, but I previously failed to even ask the necessary questions.

Comment by afeller08 on Science Isn't Strict Enough · 2012-11-01T01:59:31.202Z · LW · GW

Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment. If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer? 7.5%. You cannot say, "I believe she doesn't have breast cancer, because the experiment isn't definite enough." You cannot say, "I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate." 7.5% is the rational estimate given this evidence, not 7.4% or 7.6%. The laws of probability are laws.

I try to do the math when you pose a problem. I'm pretty sure in this case the rational estimate is 7.4%. If 1000 women get tested, you expect 8 of those women to be true positives and 100 to be false positives. 8/108 is .074074... (ellipsis for repeating, I don't know how to do a superscripted bar in a comment here). I have no particular objections to rounding for ease of communication, and would ordinarily consider this sort of correction to be an unnecessary nitpick, but in this case, I'm objecting to the statement that 7.4% is not the correct rational estimate given the evidence, not the statement that 7.5% is. If you happen to read this comment, you might want to change that.

Comment by afeller08 on To Spread Science, Keep It Secret · 2012-10-28T03:04:47.500Z · LW · GW

I don't know to what extent we can hack our own perceptions of scarcity by intentionally directing our thoughts, but it seems like it's something worth trying to do:

"Scientific information is widely available. As a result, people will pay less attention to it than they would if it was hidden. As a result, it's better hidden than if it were kept partially secret. This means that scientific information is very scarce, and almost nobody knows that it is scarce."

Is there a way to phrase the above statement so that it carries the same psychological weight as, "Only a few people realize this now, but there's about to be a beef shortage"?

Edit: (... and is that the whole point of this post along with the story after it?)

Comment by afeller08 on Causal Reference · 2012-10-25T07:21:23.813Z · LW · GW

You're right. My hypothesis is not really distinguishable from the single tier. I'm pretty sure the division I made was a vestige of the insanely complicated, hacked-up version of reality I constructed to believe in back when I devised a version of simulationism meant to let me accept the findings of modern science without abandoning my religious beliefs (back before I'd ever heard of rationalism or Bayes' theorem, when I was still asking the question "Does the evidence permit me to believe, and, if not, how can I re-construe it so that it does?" because that once made sense to me).

When I posted my question, the distinction between 'laws of physics' and 'everything else' was obvious to me. But obvious or not, the distinction is meaningless. Thanks for pointing that out.

Comment by afeller08 on Causal Reference · 2012-10-25T01:09:40.015Z · LW · GW

That would strongly indicate that something caused the zombies to write a program for generating simulations that was likely to create simulated shadow brains in most of the simulations. (The compiler's built-in prover for things like type checking was inefficient and left behind a lot of baggage that produced second-tier shadow brains in all but 2% of simulations.) It might cause the zombies to conclude that they probably had shadow brains and to start talking about the possibility of shadow brains, but it should be equally likely to do that whether the shadow brains were real or not. (Which means any zombie with a sound epistemology would give no more credence to the existence of shadow brains after the simulation caused other zombies to start talking about shadow brains than it would if the discussion of shadow brains had come from a random number generator producing a very large number, with that number then being interpreted, in some normal encoding, as a paper discussing shadow brains. Shadow brains in that world should be an idea analogous to Russell's teapot, astrology, or the invisible pink unicorn in our world.)

Now, if there were some outside universe capable of looking at all of the universes, seeing some universes with shadow brains and some without, and observing that in the universes with shadow brains the zombies were significantly more likely to produce simulations that created shadow brains -- and much more likely to create simulations that predicted shadow brains similar to their actual shadow brains -- then we would be back to seeing exactly what we see when philosophers talk about shadow brains directly: namely, the shadow brains are causing the zombies to imagine shadow brains, which means that the shadow brains aren't really shadow brains, because they are affecting the world (with probability 1).

Either the result of the simulations points to gross inefficiency somewhere (their simulations predicted something that their simulations shouldn't have been able to predict), or it points to the shadow brains not really being shadow brains, because they are causally impacting the world. (The former is slightly more plausible than philosophers postulating shadow brains correctly for no reason, only because we don't necessarily know that anything is driving the zombies to produce simulations efficiently; whereas we know that in our world brains typically produce non-gibberish because enormous selective pressures have caused brains to produce non-gibberish.)

Comment by afeller08 on Causal Reference · 2012-10-25T00:52:24.338Z · LW · GW

Still, we don't actually know the Real Rules are like that; and so it seems premature to assign a priori zero probability to hypotheses with multi-tiered causal universes.

Maybe I'm misunderstanding something. I've always supposed that we do live in a multi-tiered causal universe. It seems to me that the "laws of physics" are a first tier which affect everything in the second tier (the tier with all of the matter including us), but that there's nothing we can do here in the matter tier to affect the laws of physics. I've also always assumed that this was how practically everyone who uses the phrase 'laws of physics' uses it.

(I realize you were talking about lower tiers in the line that I quoted, and I certainly agree with the arguments and conclusions you made regarding lower tiers. I just found the remark surprising because I place a very high probability on the belief that we are living in a multi-tier causal universe, and I think that that assignment is highly consistent with everything you said.)

I don't know if I'm nitpicking or if I missed a subtlety somewhere. Either way, I found the rest of this article and this sequence persuasive and insightful. My previous definition of "'whether X is true' is meaningful" was "There is something I might desire to accomplish that I would approach differently if X were true than if X were false," and my justification for it was "Anything distinguishably true or false which my definition omits doesn't matter to me." Your definition and justification seem much more sound.

Comment by afeller08 on Things That Shouldn't Need Pointing Out · 2012-10-24T23:04:16.406Z · LW · GW

Given that I spend a lot of time programming computers and that I occasionally brainstorm my programs through flow-charts, I was shocked, upon realizing that flow-charts can easily be formalized as something Turing complete, by how long it took me to realize this. (Generalized: If I am able to regularly use a particular abstraction as a proxy for another abstraction, it makes sense to ask the question, "Are these two ideas equivalent?")

Comment by afeller08 on Rationality: Appreciating Cognitive Algorithms · 2012-10-12T12:26:26.742Z · LW · GW

I agree with Xachariah's view of semantics. I think that the first 'I believe' does imply a different meta level of belief (often associated with a different reason for believing). His example does a good job of showing how someone can drill down many levels, but the distinction in the first level might be made more clear by considering a more concretely defined belief:

"We're lost" -- "I'm you're jungle leader, and I don't have a clue where we are any more."

"I believe we're lost" -- "I'm not leading this expedition. I didn't expect to have a clue where we were going, but it doesn't seem to me like anyone else knows where we are going either."

--

"Sarah won state science fair her senior year of high school" -- "I attended the fair and witnessed her win it."

"I believe that Sarah won state science fair her senior year of high school" -- "She says she did, and she's the best experimentalist I've ever met."

"I believe that I believe that Sarah won state science fair her senior year of high school" -- "She says she did, and I don't believe for one second that she'd make that sort of thing up. That said, she's not, so far as I can tell, particularly good at science, and it shocks me that she might somehow have been able to win."

--

"Parachuting isn't all it's cracked up to be." -- "I've gone parachuting, and frankly, I've gotten bigger adrenaline rushes playing poker."

"I don't believe parachuting's all it's cracked up to be." -- "I haven't gone parachuting. There's no way I would spend $600 for a 4 minute experience when I can't imagine that it's enough fun to justify that."


Without the "I believe," what I tend to be saying is that I trust the map because I drew it, and I drew it carefully. With the "I believe," I tend to be saying that I trust this map because I trust its source, even though I didn't actually create it myself. In the case of the parachuting, I don't know where the map comes from; it's just the one I have.

Placing additional "I believe"s in front of a statement changes what part of the statement you have confidence in.

The statement 'I believe God exists' usually does mean that someone places confidence in eir community's ability to determine if God exists or not rather than placing confidence in the statement itself. Most of the religious people I know would say 'God exists' rather than 'I believe God exists' and most of them believe that they have directly experienced God in some way. However, most of them would say 'I believe the Bible is true' rather than 'the Bible is true' -- and when pressed for why they believe that, they tend to say something along the lines of "I cannot believe that God would allow his people to be generally wrong about something that important" or something else that asserts that their confidence is in their community's ability to determine that 'the Bible is true' rather than their confidence being in the Bible itself. I don't know if this is a very localized phenomenon or not since all of the people I've had this conversation with belong to the same community. It's how I would tend to use the word 'believe' too, but I grew up in this community, so I probably tend to use a lot of words the same way as the people in this community do.

In Xachariah's example, the certainty/uncertainty is being placed on the definition of "believe" at each step past the first one, so the way the statement changes is significantly different in the second and third applications of "I believe" than in the first. The science fair example applies the "I believe" in pretty much the same way twice.

When I say "Sarah won science fair," I'm claiming that all of the uncertainty lies in my ability to measure and accurately record the event. Her older sister is really good at science too; it's possible that I'm getting the two confused but I very strongly remember it being Sarah who won. On the other hand, I'm extremely confident that I wouldn't give myself the wrong map intentionally -- I have no reason to want to convince myself that Sarah is better at science than she actually is.

That source of uncertainty essentially vanishes when the source of my information becomes Sarah herself. I now have a new source of uncertainty though because she does have a reason to convince me that she is better at science than she actually is. However, I trust the map because it agrees with what I'd expect it to be. I'd still think she was telling the truth about this if she lied to me about other things.

In the third case, I'm once again extremely confident that Sarah won science fair. She told me she did, and she tells the truth. What she's told me does not at all agree with my expectations; I don't really place confidence in the map itself. I place a great deal of confidence in Sarah's ability to create an accurate map, and a great deal of confidence in her having given me an accurate map. The map seems preposterous to me, but I still think it's accurate, so when someone asks me if I believe that Sarah won science fair, I wince and say "I believe that I believe that Sarah won science fair," and everyone knows what I mean. My statement isn't really "Sarah won science fair." It's "Sarah doesn't lie. Sarah says she won science fair. Therefore, Sarah won science fair." If I later find out that Sarah isn't quite as honest as I think she is, this is the first thing she's told me that I'll stop believing. Unless that happens, I'll continue to believe that she won.