Comments

Comment by mattnewport on How I Lost 100 Pounds Using TDT · 2011-07-12T15:40:44.830Z · LW · GW

I simply say to them "Er, human technology has progressed to the point where I can have, say, a sweet breakfast without consuming any sugar, and I'm going to do so. Cheating has nothing to do with it."

It's true that artificial sweeteners mean you can get a sweet taste without consuming calories. Beware the conclusion that they therefore don't cause you to gain weight or have other negative health effects though. There's plenty of evidence to the contrary.

I agree that eating healthily doesn't mean having to deprive yourself of all delicious foods. Sadly artificial sweeteners seem to be quite problematic, though some types may be less bad than others.

Comment by mattnewport on Time and Effort Discounting · 2011-07-08T20:10:10.875Z · LW · GW

I'm not sure you need uncertainty to discount at all. In finance, exponential discounting comes from interest rates, which are predicated on an assumption of somewhat stable economic growth rather than derived from uncertainty.

As you point out, hyperbolic discounting can come from combining exponential discounting with an uncertain hazard rate. Many of the studies on hyperbolic discounting seem to assume they are measuring a utility function directly, when they may in fact be measuring the combination of a utility function with reasonable assumptions about the uncertainty of claiming a reward. It's not clear to me that they have actually shown that humans have time-inconsistent preferences, rather than just that, when answering these kinds of studies, people don't separate the utility they attach to a reward from their expectation of actually receiving it.
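
As a small worked illustration of that first point (assuming, purely for the sake of the example, that the unknown hazard rate $\lambda$ has an exponential prior with mean $k$): for any fixed $\lambda$ the chance of still being able to claim the reward at time $t$ is the exponential factor $e^{-\lambda t}$, but averaging over the uncertainty in $\lambda$ gives

$$\mathbb{E}\!\left[e^{-\lambda t}\right] \;=\; \int_0^{\infty} \frac{1}{k}\, e^{-\lambda/k}\, e^{-\lambda t}\, d\lambda \;=\; \frac{1}{1 + k t},$$

which is exactly a hyperbolic discount curve, even though no single fixed hazard rate produces anything other than exponential discounting.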

Comment by mattnewport on Time and Effort Discounting · 2011-07-08T02:01:45.749Z · LW · GW

Do any of the studies on hyperbolic discounting attempt to show that it is not just a consequence of combining uncertainty with something like a standard exponential discounting function? That's always seemed the most plausible explanation of hyperbolic discounting to me and it meshes with what seems to be going on when I introspect on these kinds of choices.

Most of the discussions of hyperbolic discounting I see don't even consider how increasing uncertainty for more distant rewards should factor into preferences. Ignoring uncertainty seems like it would be a sub-optimal strategy for agents making decisions in the real world.

Comment by mattnewport on New Year's Predictions Thread · 2011-01-04T14:01:26.832Z · LW · GW

There's room for debate whether we saw a true currency crisis in the Euro but 'this prediction has failed utterly' is overstating it. We saw unusually dramatic short term moves in the Euro in May and there was widespread talk about the future of the Euro being uncertain. Questions about the long term viability of the Euro continue to be raised.

I'd argue that charting any of the major currencies against gold indicates an ongoing loss of confidence in all of them - from this perspective the dollar and the euro have both declined in absolute value over the year while trading places in terms of relative value in response to changing perceptions of which one faces the biggest problems.

'Currency crisis' was in retrospect a somewhat ambiguous prediction to make, since there are no clear criteria for establishing what constitutes one. I'd argue that the euro underwent the beginnings of a currency crisis in May but that the unprecedented intervention by the ECB forestalled a full-blown one.

Comment by mattnewport on The Sacred Mundane · 2010-12-01T22:49:34.540Z · LW · GW

According to this article a sense of vibration and rapid acceleration of the body are fairly commonly reported (I don't recall experiencing these symptoms myself). That article and the Wikipedia entry both mention some of the mythology and folklore surrounding the experience from different cultures.

Comment by mattnewport on Dealing with the high quantity of scientific error in medicine · 2010-10-28T00:56:35.508Z · LW · GW

Paleo diets generally consider corn a grain so you might want to avoid that. Some paleo variants (like the one I'm currently following) are ok with cheese and yogurt in moderation (and butter).

Comment by mattnewport on Dealing with the high quantity of scientific error in medicine · 2010-10-27T01:01:30.528Z · LW · GW

I would guess this is not true in general, since many things do not want to be eaten and so evolve various defense mechanisms. In turn, the organisms that eat them may develop counter-measures that enable them to safely digest their meal despite those defenses, but this will depend on the complex evolutionary history of both organisms. Ruminants, for example, are adapted to quite a different diet than humans.

Comment by mattnewport on Dealing with the high quantity of scientific error in medicine · 2010-10-27T00:38:04.345Z · LW · GW

I wonder why an apple should be healthy. Wouldn't any animal be satisfied with a fruit that just had calories? Enough, in any case, to come back for more and scatter the seeds? Why should an apple -- or anything 'natural' -- be so especially healthy for humans?

You're missing the other side of the story. Humans evolved to obtain their nutritional needs from those foods that were available in the EEA and this effect is probably more significant than the selection pressure in the other direction (on fruits to be nutritionally beneficial to animals that eat them). Humans are adapted to a diet that includes things that were available to them during the long pre-agricultural evolutionary period.

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-15T19:08:49.940Z · LW · GW

You're probably right, but ironically I've ignored much of the standard advice on employment and it's worked out just fine, so this example doesn't resonate very well with me. I've never worn a suit to a job interview, for example.

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-15T19:07:15.845Z · LW · GW

I assumed he was saying something like "the majority of women prefer a man more 'masculine' than the median man". By analogy, if it is true that "the majority of men prefer a woman who is slimmer than the median woman" it should be obvious that being overweight will make it harder for a woman to find a match even if there are men who prefer less slim women. Saying "men prefer slim women" is a slightly sloppy generalization but not an unreasonable one in this example.

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-15T18:31:18.934Z · LW · GW

This confuses me, because it seems to imply that men need to believe that a simple personality heuristic can be applied to all or almost all women. Why is it an unacceptable answer that some women like one thing, and some like another?

The prevalence of different personality types in the population is very relevant here and you seem to be glossing over it. If the number of women attracted to your personality type is relatively low (and especially if it is low relative to the number of other men similar to you) it will still be an obstacle you need to overcome in finding a partner even if you believe that there are women out there who would be attracted to you. Internet dating has probably helped with this a bit by making it easier to find potential matches but it can't overcome seriously unfavourable relative numbers.

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-15T16:07:01.418Z · LW · GW

I think there's a bit more to it than just women overlooking a lack of values because of other attractive factors like confidence. There's some evidence that men with the 'dark triad' personality traits are more successful with women.

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-15T01:26:40.934Z · LW · GW

I wonder how many potential matches know enough maths to realize why he used powers of two?

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-14T01:00:45.516Z · LW · GW

A dealbreaker is something that on its own automatically rules someone out. A factor is something that swings the overall impression positively or negatively but is not on its own a deciding factor independent of other factors.

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-12T03:51:15.439Z · LW · GW

I agree the choices aren't ideal but I think "likes children", "dislikes children" and "doesn't want children" all match a search for "Doesn't have children" whereas leaving the question blank or answering that you have children means you won't show up in a search that specifies "Doesn't have children". It's a bit confusing and not a very logical setup but I think that's how it works.

Comment by mattnewport on Love and Rationality: Less Wrongers on OKCupid · 2010-10-11T22:59:16.574Z · LW · GW

I've left the "children" field blank, for example, because I don't want them now but might some day, so neither "wants" nor "doesn't want" is correct.

I think this might be a mistake. I usually specify "Doesn't have children" when I do a match search and I'd guess this is fairly common. If you leave this question blank I believe you won't show up for people who are filtering on that search criteria.

Comment by mattnewport on The Irrationality Game · 2010-10-06T17:40:01.748Z · LW · GW

Historically, global population increase has correlated pretty well with increases in measures of overall health, wealth and quality of life. From what empirical evidence do you derive your theory that zero or negative population growth would be better for these measures?

Comment by mattnewport on The Irrationality Game · 2010-10-05T20:22:31.202Z · LW · GW

I just don't see any practical examples of people successfully betting by doing calculations with probability numbers derived from their intuitive feelings of confidence that would go beyond what a mere verbal expression of these feelings would convey. Can you think of any?

I'd speculate that bookies and professional sports bettors are doing something like this. By bookies here I mean primarily the kind of individuals who stand with a chalkboard at race tracks rather than the large companies. They probably use some semi-rigorous / scientific techniques to analyze past form and then mix in a lot of intuition / expertise together with detailed domain-specific knowledge and 'insider' info (a particular horse or jockey has recently recovered from an illness or injury and so may perform worse than expected, etc.). They'll then integrate all of this information using an opaque mental process that isn't mathematically rigorous and derive a probability estimate, which will determine what odds they are willing to offer or accept.

I've read a fair bit of material by professional investors and macro hedge fund managers describing their thinking and how they make investment decisions. I think they are often doing something similar. Integrating information derived from rigorous analysis with more fuzzy / intuitive reasoning based on expertise, knowledge and experience and using it to derive probabilities for particular outcomes. They then seek out investments that currently appear to be mis-priced relative to the probabilities they've estimated, ideally with a fairly large margin of safety to allow for the imprecise and uncertain nature of their estimates.

It's entirely possible that this is not what's going on at all but it appears to me that something like this is a factor in the success of anyone who consistently profits from dealing with risk and uncertainty.

The problem with discussing investment strategies is that any non-trivial public information about this topic necessarily has to be bullshit, or at least drowned in bullshit to the point of being irrecoverable, since exclusive possession of correct information is a sure path to getting rich, but its effectiveness critically depends on exclusivity.

My experience leads me to believe that this is not entirely accurate. Investors are understandably reluctant to give away very specific, time-critical investment ideas, but they frequently share their thought processes for free and talk in general terms about their approaches. My impression is that they are no more obfuscatory or deliberately misleading than anyone else who talks about their success in any field.

In addition, hedge fund investor letters often share quite specific details of reasoning after the fact once profitable trades have been closed and these kinds of details are commonly elaborated in books and interviews once time-sensitive information has lost most of its value.

Either your "rationality" manifests itself only in irrelevant matters, or you have to ask yourself what is so special and exclusive about you that you're reaping practical success that eludes so many other people, and in such a way that they can't just copy your approach.

This seems to be taking the ethos of the EMH a little far. I comfortably attribute a significant portion of my academic and career success to being more intelligent and a clearer thinker than most people. Anyone here who through a sense of false modesty believes otherwise is probably deluding themselves.

Where your own individual judgment falls within this picture, you cannot know, unless you're one of these people with esoteric expertise.

This seems to be the main point of ongoing calibration exercises. If you have a track record of well calibrated predictions then you can gain some confidence that your own individual judgement is sound.

Overall I don't think we have a massive disagreement here. I agree with most of your reservations and I'm by no means certain that improving one's own calibration is feasible but I suspect that it might be and it seems sufficiently instrumentally useful that I'm interested in trying to improve my own.

Comment by mattnewport on The Irrationality Game · 2010-10-05T02:53:52.003Z · LW · GW

I expected the third to be higher than most Less Wrongers would estimate.

Comment by mattnewport on The Irrationality Game · 2010-10-05T00:06:34.376Z · LW · GW

In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense.

You still have to be able to translate your superior relevant knowledge into odds in order to set the terms of the bet however. Do you not believe that this is an ability that people have varying degrees of aptitude for?

Look at the stock market: it's pure gambling, unless you have insider knowledge or vastly higher expertise than the average investor.

Vastly higher expertise than the average investor would appear to include something like the ability in question - translating your beliefs about the future into a probability such that you can judge whether investments have positive expected value. If you accept that true alpha exists (and the evidence suggests that, though rare, a small percentage of the best investors do appear to have positive alpha), then what process do you believe those who possess it use to decide which investments are good and which bad?

What's your opinion on prediction markets? They seem to produce fairly good probability estimates so presumably the participants must be using some better-than-random process for arriving at numerical probability estimates for their predictions.

I'm not familiar with the details of this business, but from what I understand, bookmakers work in such a way that they're guaranteed to make a profit no matter what happens.

They certainly aim for a balanced book, but they wouldn't be very profitable if they were not reasonably competent at setting initial odds (and updating them in the light of new information). If the initial odds are wildly out of line with their customers' views then they won't be able to make a balanced book.
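
A toy example of the balanced-book mechanics (all numbers invented for illustration): suppose a bookmaker prices a two-horse race at decimal odds of 1/0.6 and 1/0.5, implying probabilities of 60% and 50% and so an overround of 110%. If total stakes $S$ arrive in proportion to those implied probabilities, the payout is the same whichever horse wins:

$$\frac{0.6\,S}{1.1}\times\frac{1}{0.6} \;=\; \frac{0.5\,S}{1.1}\times\frac{1}{0.5} \;=\; \frac{S}{1.1}\;\approx\;0.91\,S,$$

leaving the bookmaker roughly 9% of the money staked regardless of the result. That guarantee only holds if bets actually arrive in roughly those proportions, which is why odds badly out of line with customers' beliefs break the balanced book.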

Comment by mattnewport on The Irrationality Game · 2010-10-04T22:51:41.482Z · LW · GW

Incidentally, do you mean GDP per capita would decrease relative to more interventionist economies or in absolute terms? Since there is an overall increasing trend, in both more and less libertarian economies that would be very surprising to me.

I wondered about this as well. It seems an extremely strong and unlikely claim if it is intended to mean an absolute decrease in GDP per capita.

Comment by mattnewport on The Irrationality Game · 2010-10-04T22:50:08.066Z · LW · GW

Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds if offered a bet? If you're not in the habit of accepting bets, how do you think someone who does this for a living (a bookie for example) should go about deciding on what odds to assign to a given bet?

Comment by mattnewport on Why not be awful? · 2010-10-04T22:41:20.283Z · LW · GW

The two things that seem to work for me most of the time: "will I feel proud / good about myself for doing this?" or, if that fails, "would person X (whose opinion of me is generally important to me) be impressed or disgusted with this behaviour if they knew about it?". Essentially, "is this behaviour consistent with the kind of person I wish myself and (particular) others to perceive me to be?".

Comment by mattnewport on The Irrationality Game · 2010-10-04T04:53:52.693Z · LW · GW

I guess I'm playing the game right then :)

I'm curious, do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Comment by mattnewport on The Irrationality Game · 2010-10-04T04:32:14.212Z · LW · GW

I don't know whether ant colonies exhibit principal-agent problems (though I'd expect that they do to some degree) but I know there is evidence of nepotism in queen rearing in bee colonies where individuals are not all genetically identical (evidence of workers favouring the most closely related larvae when selecting larvae to feed royal jelly to create a queen).

The fact that ants from different colonies commonly exhibit aggression towards each other indicates limits to scaling such high levels of group cohesion. Though supercolonies do appear to exist they have not come to total dominance.

The largest and most complex examples of group coordination we know of are large human organizations and these show much greater levels of internal goal conflicts than much simpler and more spatially concentrated insect colonies.

Comment by mattnewport on The Irrationality Game · 2010-10-03T21:43:56.315Z · LW · GW

Stable equilibrium here does not refer to a property of a mind. It refers to a state of the universe. I've elaborated on this view a little here before but I can't track the comment down at the moment.

Essentially my reasoning is that in order to dominate the physical universe an AI will need to deal with fundamental physical restrictions such as the speed of light. This means it will have spatially distributed sub-agents pursuing sub-goals intended to further its own goals. In some cases these sub-goals may involve conflict with other agents (this would be particularly true during the initial effort to become a singleton).

Maintaining strict control over sub-agents imposes restrictions on the design and capabilities of sub-agents which means it is likely that they will be less effective at achieving their sub-goals than sub-agents without such design restrictions. Sub-agents with significant autonomy may pursue actions that conflict with the higher level goals of the singleton.

Human (and biological) history is full of examples of this essential conflict. In military scenarios for example there is a tradeoff between tight centralized control and combat effectiveness - units that have a degree of authority to take decisions in the field without the delays or overhead imposed by communication times are generally more effective than those with very limited freedom to act without direct orders.

Essentially I don't think a singleton AI can get away from the principal-agent problem. Variations on this essential conflict exist throughout the human and natural worlds and appear to me to be fundamental consequences of the nature of our universe.

Comment by mattnewport on The Irrationality Game · 2010-10-03T20:58:04.543Z · LW · GW

Two points that influence my thinking on that claim:

  1. Gains from trade have the potential to be greater with greater difference in values between the two trading agents.
  2. Destruction tends to be cheaper than creation. Intelligent agents that recognize this have an incentive to avoid violent conflict.

Comment by mattnewport on The Irrationality Game · 2010-10-03T20:44:50.107Z · LW · GW

I agree with most of what you're saying (in that comment and this one) but I still think that the ability to give well calibrated probability estimates for a particular prediction is instrumentally useful and that it is fairly likely that this is an ability that can be improved with practice. I don't take this to imply anything about humans performing actual Bayesian calculations either implicitly or explicitly.

Comment by mattnewport on The Irrationality Game · 2010-10-03T20:24:34.766Z · LW · GW

Are we only supposed to upvote this post if we think it is irrational?

Comment by mattnewport on The Irrationality Game · 2010-10-03T20:21:27.002Z · LW · GW

  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different to humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).

Comment by mattnewport on The Irrationality Game · 2010-10-03T20:02:50.611Z · LW · GW

Agree with 1 and 3; not sure exactly what you mean by 2.

Comment by mattnewport on The Irrationality Game · 2010-10-03T20:00:14.892Z · LW · GW

It seems plausible to me that routinely assigning numerical probabilities to predictions/beliefs that can be tested and tracking these over time to see how accurate your probabilities are (calibration) can lead to a better ability to reliably translate vague feelings of certainty into numerical probabilities.

There are practical benefits to developing this ability. I would speculate that successful bookies and professional sports bettors are better at this than average for example and that this is an ability they have developed through practice and experience. Anyone who has to make decisions under uncertainty seems like they could benefit from a well developed ability to assign well calibrated numerical probability estimates to vague feelings of certainty. Investors, managers, engineers and others who must deal with uncertainty on a regular basis would surely find this ability useful.

I think a certain degree of skepticism is justified regarding the utility of various specific methods for developing this ability (things like predictionbook.com don't yet have hard evidence for their effectiveness) but it certainly seems like it is a useful ability to have and so there are good reasons to experiment with various methods that promise to improve calibration.
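
As a minimal sketch of what this kind of tracking could look like (the predictions and confidence levels below are invented purely for illustration), the core of the exercise is just recording a stated probability alongside the eventual outcome and then comparing stated confidence with observed frequency:

```python
from collections import defaultdict

# Each record is (stated probability that the event happens, whether it happened).
# These example predictions are made up for illustration only.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.6, False), (0.6, True),
]

def calibration_report(records):
    """Compare each stated confidence level with the observed frequency of being right."""
    by_confidence = defaultdict(list)
    for prob, outcome in records:
        by_confidence[prob].append(outcome)
    for prob in sorted(by_confidence):
        outcomes = by_confidence[prob]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {prob:.0%}: {len(outcomes)} predictions, "
              f"observed frequency {observed:.0%}")

calibration_report(predictions)
```

Well-calibrated predictions would show observed frequencies close to the stated levels; systematic gaps (e.g. events you call at 90% only happening 60% of the time) are what practice is supposed to shrink.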

Comment by mattnewport on The Irrationality Game · 2010-10-03T18:36:52.339Z · LW · GW

I've seen so many decent people turn into bastards or otherwise abdicate moral responsibility when they found themselves at the helm of a company, no matter how noble their initial intentions.

Do you think this is different from the general 'power corrupts' tendency? The same thing seems to happen to politicians for example.

Comment by mattnewport on How do you organize your research? · 2010-10-01T00:51:10.839Z · LW · GW

Evernote recommendation seconded. It's a really neat tool (I particularly like the auto text recognition in images making them searchable).

Comment by mattnewport on Brain storm: What is the theory behind a good political mechanism? · 2010-09-30T06:14:11.092Z · LW · GW

I guess that until competitive government becomes really feasible on a mass scale, this thought is very theoretical.

One of the things I particularly like about the idea of competitive government is it gives you something practical to do now as an individual. Look around the world and consciously pick a country to live in based on the value offered by its government. Surprisingly few people do this but the few that do have been enough to give us the likes of Hong Kong, Singapore, Switzerland, Luxembourg, etc.

I think being an immigrant gives you a different perspective on things. I've spent most of my productive adult life in a country where I pay taxes and have no right to vote. This somehow makes the myth of democracy less potent for me.

Comment by mattnewport on Brain storm: What is the theory behind a good political mechanism? · 2010-09-30T06:09:18.662Z · LW · GW

I found the discussion between Moldbug and Robin Hanson interesting because, whilst Robin Hanson has lots of interesting ideas, he does not write terribly well. He communicates his ideas clearly but there is no style to his writing. Contrast Moldbug (or Eliezer), see the impact of interesting ideas expressed with eloquence, and you begin to appreciate the power of language.

I wonder if I give excessive weight to Unqualified Reservations because it has such greater facility with the English language than is typical of the blogosphere. Interesting and controversial ideas expressed with rhetorical flair seem to directly trigger the reward centres of my brain.

Comment by mattnewport on Brain storm: What is the theory behind a good political mechanism? · 2010-09-30T00:24:18.660Z · LW · GW

Is there somewhere where ideas like this are discussed intelligently?

I'm not aware of a single central hub for such discussion I'm afraid. There's academic work in the area of development economics which looks at countries around the world and tries to identify what traits of governmental institutions seem to correspond with economic growth and prosperity. This is where Paul Romer and his charter cities idea is coming from.

If you want some really out there but intelligent discussion of related ideas you might want to check out Unqualified Reservations. Maybe start with the gentle introduction series. Mencius Moldbug could be described as many things but concise is not one of them so you're looking at a fair bit of reading there.

Arnold Kling blogs on this topic a bit as well; he has a particular interest in the idea of 'unbundling' government services.

If the experiments in governance are atheoretical, then I'd expect most of them to be worse. Just as most random mutations in a complex organism are likely to be worse.

Think of competitive government as a meta-theory of political mechanisms in the same way a well functioning market economy represents a meta-theory of producing efficient organizations rather than a theory of how to run an efficient organization. The question is how to structure things in a way that there is an incentive for good governance. If you get the incentive structure right then good governance will tend to outcompete bad governance. The individual experiments would not be atheoretical but the structure under which they operate is intended to be agnostic about what the best approach will prove to be.

Many of the people you'll see talking about competitive government are libertarian leaning and so would have their own personal ideas about how to run a government but rather than privileging their own pet theories they want to put them to the test against other ideas about how to run things. A Thousand Nations emphasizes that traditional ideological opponents could in theory both get behind the idea of competitive government as it would give them the opportunity to go and test out their own utopian ideals without having to convince anyone else.

Experimentation has a cost, what is the expected benefit from experimenting with different forms of government. How is that expected benefit justified?

I don't see how this is any different in principle from the question of the value of experimentation and innovation in general. Many technologies ultimately prove to be market failures but I think the evidence is pretty compelling that economies that follow a free market model and 'waste' resources on ideas that don't pan out have a better track record of producing net benefits through innovation than economies that attempt to centrally plan innovation.

I just find it disheartening when people don't want to try applying their brains to the problem of at least narrowing down the space of how governments should be designed.

I don't believe advocates of competitive government are generally doing this. They just don't believe that their own ideas should be given special privileges over everyone else's.

Comment by mattnewport on Brain storm: What is the theory behind a good political mechanism? · 2010-09-29T21:22:06.728Z · LW · GW

Are you familiar with the background to patrissimo's comment? Competitive government is what he's getting at in the comment you linked.

Comment by mattnewport on Hypothetical - Moon Station Government · 2010-09-29T20:15:31.375Z · LW · GW

Physics provides certain tactical advantages to moon colonists. (Citing fictional evidence, I know, but as far as I can see the advantages are likely to be real.)

Comment by mattnewport on Rationality Power Tools · 2010-09-29T17:43:26.804Z · LW · GW

I would be happy to donate money for development.

There are a lot of tools out there for task / goal tracking. I'd suggest spending some time researching them before thinking about developing a new one. Beware of falling into the trap of chasing after the elusive perfect system rather than just getting into the habit of using something good enough.

I quite like Remember The Milk, but my main problem is still getting into the habit of using it consistently. It's pretty flexible in terms of attaching extra information to items. It's not hierarchical, but I suspect that's overrated anyway. It does support a very flexible tagging and 'smart lists' system which is better than a hierarchy in some ways.

Comment by mattnewport on Intelligence Amplification Open Thread · 2010-09-28T20:51:08.667Z · LW · GW

I just make a couple of fried eggs for breakfast usually. Takes less than 5 minutes and can be done in parallel with making my morning cup of tea. Advance preparation looks like overkill to me - why not just get up 3 minutes earlier?

Comment by mattnewport on Vote Qualifications, Not Issues · 2010-09-28T17:57:16.363Z · LW · GW

Are you claiming that this was actually the plan all along? That our infinitely wise and benevolent leaders decided to create a panic irrespective of the actual threat posed by H1N1 for the purposes of a realistic training exercise?

If this is not what you are suggesting are you saying that although in fact this panic was an example of general government incompetence in the field of risk management it purely coincidentally turned out to be exactly the optimal thing to do in retrospect?

Comment by mattnewport on Vote Qualifications, Not Issues · 2010-09-28T17:30:14.321Z · LW · GW

This looks like a fully general argument for panicking about anything.

Comment by mattnewport on Is Rationality Maximization of Expected Value? · 2010-09-28T16:13:03.595Z · LW · GW

There is nothing in what I wrote that implies people value their lives infinitely. People just need to value their lives highly enough such that flying on an airplane (with its probability of crashing) has a negative expected value.

Yes, that is the point.

Your claim that people flying on planes are engaging in an activity that has negative expected value flatly contradicts standard economic analysis, yet you provide no supporting evidence to justify such a wildly controversial position. The only way your claim could be true in general would be if humans placed infinite value on their own lives. Otherwise it depends on the details of why they are flying, what value they expect to gain if they arrive safely, and the actual probability of a fatal incident.

Since you didn't mention in your original post under what circumstances your claim holds true, you did imply that you were making a general claim, and thus further implied that people value their lives infinitely.
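
To make the arithmetic behind this concrete (the figures are rough, purely illustrative assumptions rather than actual aviation statistics): suppose a single flight carries about a $10^{-7}$ chance of a fatal crash and the traveller gains the equivalent of $500 from arriving. Then

$$\mathbb{E}[\text{value}] \;\approx\; (1 - 10^{-7})\times \$500 \;-\; 10^{-7}\times V_{\text{life}},$$

which is negative only if $V_{\text{life}}$ exceeds roughly $5 billion. A general claim that flying has negative expected value therefore requires attributing an effectively unbounded value to one's own life.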

Comment by mattnewport on Vote Qualifications, Not Issues · 2010-09-28T16:05:44.751Z · LW · GW

Population/natural resource exhaustion related crises are a bit iffy, because it is plainly obvious that if they remain exponentially growing forever, relative to linearly growing or constant resources (like room to live on), one or the other has got to give.

Obviously the people disputing the wrong predictions know this. Julian Simon was just as familiar with this trivial mathematical fact as Paul Ehrlich. The fact that this knowledge led Paul Ehrlich to make bad predictions indicates that his analysis was missing something that Julian Simon was considering. Often this missing something is a basic understanding of economics.

Comment by mattnewport on Vote Qualifications, Not Issues · 2010-09-28T16:02:06.179Z · LW · GW

Well, you also need to factor in the severity of the threat, as well as the risk of it happening.

Well obviously. I refer you to my previous comment. At this point our remaining disagreement on this issue is unlikely to be resolved without better data. Continuing to go back and forth repeating that I think there is a pattern of overestimation for certain types of risk and that you think the estimates are accurate is not going to resolve the question.

Comment by mattnewport on Is Rationality Maximization of Expected Value? · 2010-09-28T06:19:19.939Z · LW · GW

In fact, people take such gambles (with negative expectation but with high probability of winning) everyday.

They fly on airplanes and drive to work.

In our world people do not place infinite value on their own lives.

Comment by mattnewport on Is Rationality Maximization of Expected Value? · 2010-09-28T06:15:45.834Z · LW · GW

I think it's hard to enjoy gambling if you are sure you'll lose money, which is how I feel like. I may be over pessimistic.

Typical Mind Fallacy.

Comment by mattnewport on Is Rationality Maximization of Expected Value? · 2010-09-28T06:13:32.220Z · LW · GW

(1) You don't have to construe the gamble as some sort of coin flips. It could also be something like "the weather in Santa Clara, California in 20 September 2012 will be sunny" - i.e. a singular non-repeating event, in which case having 100 hundred people (as confused as me) will not help you.

A coin flip is not fundamentally less of a singular, non-repeating event than the weather at a specific location and time. There are no truly repeating events on a macro scale if you specify location and time. The relevant difference is how confident you can be that past events are good predictors of the probability of future events: pretty confident for a coin toss, less so for weather. Note, however, that if your probability estimates are sufficiently accurate / well-calibrated you can make money by betting on lots of dissimilar events. See for example how insurance companies, hedge funds, professional sports bettors, bookies and banks make much of their income.
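
A brief sketch of why the dissimilarity of the events doesn't matter (assuming, for illustration, independent bets and probability estimates that are accurate on average): if bet $i$ is priced so that your estimated edge $m_i = p_i a_i - (1 - p_i) b_i$ is positive (win $a_i$ with probability $p_i$, lose $b_i$ otherwise), then the total profit over $n$ such bets has expectation $\sum_i m_i$, and by the law of large numbers the realised average profit concentrates around the average edge as $n$ grows - nothing in that argument requires the events to resemble one another.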

(3) Besides, suppose you have a gamble Z with negative expectation with probability of a positive outcome 1-x, for a very small x. I claim that for small enough x, every one should take Z - despite the negative expectation.

'Small enough' here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be 'so small that it won't happen before the heat death of the universe' scale. I'm still not sure the argument works even in that case.

I believe there is a sense in which small probabilities can be said to also have an associated uncertainty not directly captured by the simple real number representing your best guess probability. I was involved in a discussion on this point here recently.

Comment by mattnewport on Open Thread, September, 2010-- part 2 · 2010-09-28T03:54:14.540Z · LW · GW

Some of the match questions are really poorly phrased.

This is because they are largely user submitted and not actively filtered by OkCupid staff.