Comments

Comment by michaelsullivan on Iterated Gambles and Expected Utility Theory · 2016-06-06T14:55:25.293Z · LW · GW

So one of the major reasons I've identified for why our gut feelings don't always match good expected utility models is that we don't live in a hypothetical universe. I typically use log utility of end-state wealth to judge bets where I am fairly confident of my probability distributions, as per Vaniver in another comment.

But there are reasons that even this doesn't really match with our gut.

Our "gut" has evolved to like truly sure things, and we have sayings like "a bird in the hand is worth two in the bush" partly because we are not very good at mapping probability distributions, and because we can't always trust everything we are told by outside parties.

When presented with a real-life Monty Hall-style bet like this, except in very strange and arbitrary circumstances, we usually have reason to be more confident of our probability map on the sure bet than on the unsure one.

If someone has the $240 in cash in their hand, and says that if you take option B they will hand it to you right now where you can see it, you can usually be pretty sure that taking option B gets you the money -- there is no way they can deny you the money without making it obvious that they have plainly and simply lied to you and are completely untrustworthy.

OTOH, if you take the uncertain option -- how sure can you really be that the game is fair? How will the chance be determined? The person setting up the game understands this better than you, and may know tricks they are not telling you. If the real chance is much lower than promised, how will you be able to tell? If they have no intention of paying you for a "win", how could you tell?

The more uncertainty is promised, the more uncertainty we will and should have about our trust and other unknown considerations. That's a general rule of real-life bets, summed up better than I ever could in this famous quote from Guys and Dolls:

"One of these days in your travels, a guy is going to show you a brand new deck of cards on which the seal is not yet broken. Then this guy is going to offer to bet you that he can make the jack of spades jump out of this brand new deck of cards and squirt cider in your ear. But, son, do not accept this bet, because as sure as you stand there, you’re going to wind up with an ear full of cider."

So for these reasons, in this gamble, where the difference in expected value is fairly small compared to the value of the sure win, I'd probably take the $240 -- even though a log expected utility curve says to take the risk at almost any reasonable level of rich-country wealth, unless you have a short-term liquidity crunch. The only situations under which I would even consider taking the bet are ones where I was very confident in my estimate of the probability distribution (we're at a casino poker table and I have calculated the odds myself, for example), and where I either already have nearly complete trust, or don't require significant trust in the other bettor/game master to make the numbers work.
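To make that concrete, here's a rough Python sketch of the log-utility comparison, assuming the standard framing of this gamble from the post (a sure $240 versus a 25% chance of $1,000); the wealth levels are just illustrative:

```python
import math

def sure_thing(wealth, prize=240):
    # Log utility of end-state wealth after taking the sure $240.
    return math.log(wealth + prize)

def gamble(wealth, prize=1000, p=0.25):
    # Expected log utility of the 25% shot at $1,000.
    return p * math.log(wealth + prize) + (1 - p) * math.log(wealth)

for wealth in [1_000, 5_000, 10_000, 50_000, 100_000]:
    choice = "gamble" if gamble(wealth) > sure_thing(wealth) else "sure $240"
    print(f"wealth ${wealth:>7,}: log utility prefers the {choice}")
```

The crossover sits somewhere under $10k of wealth, so for anyone at normal rich-country wealth levels log utility does say take the risk -- which is exactly why the trust issues above, not the utility curve, drive the gut answer.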

In the hypothetical where we can assume complete trust and knowledge of the probability distribution, then yes, I take the gamble. The reason my gut doesn't like this is that we almost never have that level of trust and knowledge in real life, except in artificial circumstances.

Comment by michaelsullivan on Iterated Gambles and Expected Utility Theory · 2016-06-06T14:29:35.350Z · LW · GW

Of course, but in relative terms he's still right; it's just easier to see when you are thinking from the point of view of the hungry hobo (or a peasant in the developing world).

From the point of view of a middle-class person in a rich country looking at hypothetical bets where the potential loss is usually tiny relative to our large net worth + human capital value of >$400-500k, of course we don't feel like we can mostly dismiss utility above a few hundred thousand dollars, because we're already there.

Consider a bet with the following characteristics: you are a programmer making $60k-ish a year, a couple of years out of school. You have a 90% probability of winning. If you win, you will win 10 million dollars in our existing world. If you lose (10%), you will be swapped into a parallel universe where your skills are completely worthless, you know no one, and you would essentially be in the position of the hungry hobo. You don't actually lose your brain, so you could potentially figure out how to make ends meet and even become wealthy in this new society, but you start with zero human capital -- you don't know how to get along in it any better than someone who was raised in a Mumbai slum by typical poor parents does in this world.

So do you take that bet? I certainly wouldn't.

Is there any amount of money we could put in the win column that would mean you take the bet?

When you start considering bets where a loss actually puts you in the hungry hobo position, it becomes clearer that the utility of money over a few hundred thousand dollars is pretty small beer compared to what's going on at the lower tiers of Maslow's hierarchy.

Which is another way of saying that pretty much everyone who can hold down a good job in the rich world has it really freaking good. The difference between $500k and $50 million (enough to live like an entertainer or big-time CEO without working), from the point of view of someone with very low human capital, looks a lot like famed academics having bitter arguments over who gets the slightly nicer office.

This also means that even log utility or log(log) utility isn't risk averse enough for most people when it comes to bets that put a large probability mass way above normal middle-class net worth + human capital values, together with any significant probability of dropping below rich-country, above-poverty net worth + human capital levels.
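Here's a rough sketch of why, using the bet above; the $500k starting figure comes from the earlier comment, and the $1k hungry-hobo valuation is just an illustrative assumption:

```python
import math

p_win   = 0.9
current = 500_000                 # net worth + human capital, per above
win     = current + 10_000_000    # the $10M prize
lose    = 1_000                   # hungry-hobo level -- an assumed stand-in

log_bet   = p_win * math.log(win) + (1 - p_win) * math.log(lose)
log_keep  = math.log(current)
llog_bet  = p_win * math.log(math.log(win)) + (1 - p_win) * math.log(math.log(lose))
llog_keep = math.log(math.log(current))

print(f"log:      bet {log_bet:.2f} vs keep {log_keep:.2f}")    # ~15.24 vs ~13.12
print(f"log(log): bet {llog_bet:.3f} vs keep {llog_keep:.3f}")  # ~2.698 vs ~2.574
```

Both curves say take the bet, even though most people (including me) would refuse it -- which is the sense in which neither is risk averse enough at these scales.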

Fortunately, for most of the bets we are actually offered in real life, linear is a good enough approximation for small ones, and log or log-log utility is a plenty good enough approximation for even the largest swings (like starting a startup vs. a salaried position), as long as we attach some value to directing wealth we would not consume, and there is a negligible added probability of the kind of losses that would take us completely out of our privileged status.

In most real life cases any problems with the model are overwhelmed by our uncertainties in mapping the probability distribution.

Comment by michaelsullivan on Saving for the long term · 2015-02-26T05:34:02.465Z · LW · GW

"I think you're downplaying the chances that a singularity does happen in my lifetime. 90% of experts seem to think it will."

I don't. (Edit: I meant this as "I don't think I am downplaying the chances", not "I don't think the singularity will happen")

It's true that I disagree with your experts here, and Lumifer speaks to some of my reasons. I even disagree with the LW consensus which is much more conservative than the one you quote.

That said, even taking your predictions for granted, there are still two huge concerns with the singularity retirement plan:

  1. Even given that it will occur in your/my lifetime, how do you know what it will look like and that it will lead to a retirement you are happy with even if you have no capital?

  2. If there is even a 5-10% chance that it doesn't happen, or doesn't provide what you want -- that is a fail when I am doing a retirement plan for most of my clients. I'm generally aiming for a 0+epsilon or at least <1% chance of failure if the client is able to follow the plan[*]. The only clients where building in a 10% chance of bust is ok are those who are in a real pickle, and there is no reasonable strategy to do better. Those clients' plans have to include downward adjustment of their goals if the initial trajectory is in or too close to the failure window.

[*] Obviously most of the true failure chance comes when the client is unable to follow the plan at some point. Financially, some of that can be insured against (health and disability, life insurance for dependent survivors) and some can't.
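For the curious, a failure probability like the one above is typically estimated by simulation. A minimal Monte Carlo sketch, with entirely made-up return and spending parameters (a real plan also models taxes, inflation, and changing spending):

```python
import random

def plan_fails(start=1_000_000, spend=40_000, years=30,
               mean=0.05, stdev=0.12):
    # One simulated retirement: random real returns, fixed real spending.
    balance = start
    for _ in range(years):
        balance = balance * (1 + random.gauss(mean, stdev)) - spend
        if balance <= 0:
            return True
    return False

trials = 100_000
failures = sum(plan_fails() for _ in range(trials))
print(f"estimated failure probability: {failures / trials:.1%}")
```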

Comment by michaelsullivan on Money threshold Trigger Action Patterns · 2015-02-26T05:07:17.252Z · LW · GW

Thanks for the links -- although I think some of the people in The Millionaire Next Door skirt closer to what the OP was referring to -- people who never spend money, not to retire early or do something interesting with their money, but just to hoard it.

I have known a few people who I considered pathological savers -- people who, like the fictional Scrooge, seem to save for the sake of saving, and do not ever enjoy the wealth they have created, nor turn it to a useful purpose in the world via large charitable donations. This is very rare in my experience, however. The only people I have known like this are in the generation that grew up during or shortly after the Great Depression.

Comment by michaelsullivan on Money threshold Trigger Action Patterns · 2015-02-26T05:01:52.841Z · LW · GW

Agreed that inflation adjustment is important -- it usually makes sense to annuitize a portion of your portfolio to reduce longevity and market risk. The ballpark I was using is based on a 1% per year increase. Hedging more against inflation with a higher escalator or a CPI adjustment would be more expensive; not adjusting at all would be less.

On housing -- it doesn't always make the most sense from a financial standpoint to pay off your mortgage. If you do, then on the one hand that's less money you need for living expenses; on the other hand, it's net worth tied up in home equity -- it tends to be close to a wash in terms of the net worth required to retire at various points. In the current low-mortgage-rate environment, many people would need more net worth to support their expenses with a paid-off house than without.

Comment by michaelsullivan on Saving for the long term · 2015-02-26T04:44:48.106Z · LW · GW

To answer your specific question, there are a bunch of potential alternatives.

You can use a Roth IRA to have access at least to your contributions without penalty, and have tax deferral and tax free earnings.

You should probably put enough to get the full match into your 401k no matter what, as long as you expect to become vested for the company contribution, since taking that out early and paying penalties is still a win versus forgoing the free money your employer is offering.
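A sketch of that arithmetic with assumed (illustrative) rates -- a 50% match, a 25% marginal tax rate, and the standard 10% early-withdrawal penalty, ignoring growth:

```python
contribution = 1_000    # pre-tax dollars into the 401k
match_rate   = 0.50     # assumed employer match
tax_rate     = 0.25     # assumed marginal rate at withdrawal
penalty      = 0.10     # early-withdrawal penalty

in_plan   = contribution * (1 + match_rate)
early_out = in_plan * (1 - tax_rate - penalty)  # taxed and penalized
as_wages  = contribution * (1 - tax_rate)       # skip the 401k entirely

print(f"withdraw early: ${early_out:,.0f} vs never contribute: ${as_wages:,.0f}")
# ~$975 vs ~$750: the match more than pays for the penalty, before any growth.
```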

You can invest in plain old retail investment accounts. You will pay tax every year, but it's possible to minimize the tax hit with tax-efficient investing strategies (low turnover, paying attention to specific gains and losses -- or use tax efficient mutual funds).

You can get some of the tax advantages of retirement savings without the restriction on when you can use the money by using permanent life insurance as a savings vehicle. This is very popular with wealthy clients who want to save more than Roth and 401k limits allow and who have high marginal tax rates. Drawback: to the extent you don't otherwise need life insurance, or fit well with this kind of plan, it may not make sense versus eating the taxes in a retail (non-qualified) investment account. Look at your individual case carefully. (Disclaimer: I make money from selling life insurance.)

The possible implied alternative of not saving at all seems foolish unless you have a very good use for the money now. If living cheap is so easy, why not do it now? You can always spend your money later. If you want to start startups, having some money saved for that is a good idea. If you want to give away a lot, the world will not lose very much by you giving it away, plus reasonable investment earnings, later rather than right now. Saving money retains flexibility; there is no reason it must be permanently earmarked for retirement.

As a planner, I would say that most people I run into, if they save enough, are saving too much in retirement focused vehicles, and not enough elsewhere. I see a lot of people in their 40s and 50s who have 5-15x salary in their 401k/403b, but a barely sufficient (or not even) emergency fund, and essentially zero other assets they can use without penalty before age 59.5. My general recommendation is to have a good portion of your savings outside the retirement 59.5 gate if possible without losing matches or giving up too much tax efficiency.

Comment by michaelsullivan on Saving for the long term · 2015-02-26T04:18:36.312Z · LW · GW

In a relatively healthy economy, to a first approximation, the amount of money you make in the medium and long term approximates how much good you are doing. As a liberal, I'll be the first to say that this benchmark has a lot of flaws. But in general, if you cannot find people willing to pay for, or donate to support, what you are doing without your having to live on ramen forever, there has to be some question about whether what you are doing provides value to the world comparable to a standard job in tech, finance, sales, or a professional discipline -- as long as you are moderately careful about who you work for in the latter cases.

Comment by michaelsullivan on Saving for the long term · 2015-02-25T04:11:35.899Z · LW · GW

Outside view of your 1, 2, 3 and 4: most people end up in trajectory number 4, so thinking this is the least likely scenario needs some really good evidence.

In particular, let's look at 1: how do you plan around an event that has a reasonable probability of not happening in your lifetime, and about which you know essentially nothing? (If we could predict well what will happen on the other side, it wouldn't be anything like a singularity.)

Who is to say that a singularity results in happiness for everyone -- even a positive one? From the standpoint of someone sitting in the 18th century, flipping into life today would be like a singularity -- even poor people have luxuries not dreamed of 200 years ago. That said, it's a lot more pleasant to have money or marketable skills than not, even in the cushy rich world. Try being actually poor in the US to see what it's like. See how well you can live on $800/month (a typical very small social security benefit) with no savings and no family support.

For 3: from your perspective right now, there doesn't seem to be any reason to stop working. Past a certain relatively young age, however, anyone who is not good at selling themselves, developing a network, or established as a well-known expert in their field will find themselves at a huge disadvantage in the job market, and may no longer be able to get interesting jobs for good pay. At that point, someone who followed the moustache plan in their 20s and 30s doesn't really care -- they can try as many new startups as they want and give the finger to the "normal" job market, never worrying about whether they will have enough to eat and live in a comfortable home.

If you have never saved any money, you may end up forced into plan 4 in your 40s, often at jobs that are uninteresting and do not pay very well. I've seen it happen to a lot of smart people.

Do I think it's worth trying startups young? Yes. Realistically, a few failures in a row will probably sap your will to start a new one in any case, so there isn't that much risk of aging past easily-hirable territory; I recommend looking to do it early on.

That said, whenever you are making good money, some long term savings is a good idea -- money gives you future flexibility, and the hedonic hit you take to save 15-20% of your money when you are making a solid middle class income is barely noticeable.

Comment by michaelsullivan on Money threshold Trigger Action Patterns · 2015-02-24T19:45:27.277Z · LW · GW

I think it is partly about mixup, and partly because many people don't think clearly about their financial planning until forced to. If someone who makes $100k+ and spends most of it wants to retire in the same style they are used to living in, they may well need $2-4M to do so comfortably and safely if retiring early.

Social security is progressive; the max you can get as a single person is around $42,000/year. To get that, you must work for 35 years at a high income level and wait to draw your check until age 70. Then you still need to produce another $58,000 somehow from your own assets, which at current recommended withdrawal rates requires almost exactly $2M to do while maintaining your wealth. Now, you could purchase an annuity for much less -- at current life-income annuity rates, a bit less than $1M would provide $58,000/year to a 70-year-old -- but few people are comfortable dumping all their money into such vehicles.

So you only need about $900K, but what if you want to retire at 65, or 60, or 55? Then you need to take less SS, or live off assets until age 70, or maybe you can't take it at all yet and must live off assets alone. Whether you need $2M or $4M or more depends on when you retire, and how much risk of breaking your plan you are willing to take.
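The arithmetic, as a quick sketch; the 2.9% figure is roughly what "current recommended withdrawal rates" works out to, and the annuity pricing factor is an illustrative assumption:

```python
spend         = 100_000   # desired retirement income
social_sec    = 42_000    # max single benefit, claimed at 70, per above
withdraw_rate = 0.029     # roughly the current recommended safe rate
annuity_cost_per_dollar = 17   # assumed $ of premium per $1/yr of income at 70

gap = spend - social_sec
print(f"income gap: ${gap:,}/yr")
print(f"nest egg at {withdraw_rate:.1%} withdrawal: ${gap / withdraw_rate:,.0f}")  # ~$2.0M
print(f"life annuity covering the gap: ${gap * annuity_cost_per_dollar:,.0f}")     # ~$1.0M
```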

It's my job to model this for people. Most are surprised that they don't need 2 million or more, because their needs are more modest than the above, and they don't plan to retire very early. That said, it's different for everyone, and when younger people talk about stopping or reevaluating their career because they have enough money to retire, they usually mean in middle age, not at normal retirement ages. At 40, unless your lifestyle is very frugal by the standards of people who are able to save $2-4M in that time frame (generally $100k+ earners), you probably do need that much to retire comfortably.

It also depends a lot on how you feel about your work. Most higher income earners have found a niche where they feel reasonably good about what they do (it's hard to create a lot of value when you feel like a cog or a moocher), and enjoy at least big parts of their jobs. In that case, why would you retire before you had plenty? OTOH, if you are burned out and it's a struggle to go to work every day, you might be willing to live on a lot less if you knew you could quit now, or soon.

Note: I know very few people who actually live like they are poor today in order to have great wealth tomorrow. Those who are very frugal while working, either are hugely committed to earning to give, or intend to retire or do something risky or different at a very young age, and don't ever intend to not live relatively frugally. Certainly they intend to either give away or enjoy the fruits of their industry long before a typical retirement age.

Comment by michaelsullivan on Money threshold Trigger Action Patterns · 2015-02-24T04:19:03.484Z · LW · GW

I think the biggest thing people who haven't thought about this deeply miss is how large the potential liability exposure is if you don't carry property and casualty insurance. As your wealth rises, and the financial hit from losing your house becomes small enough that you could realistically self-insure (say net worth 10-20x home value), it starts to be pretty much mandatory to carry some kind of umbrella policy to insure against crazy liabilities, and nobody will sell you an umbrella if you don't also have house/auto/etc. insurance. Like all insurance, this is -EV, but it's so cheap compared to the potential loss that it's generally crazy to go without it. The wealth threshold at which it could plausibly make sense to self-insure entirely is in the super-rich range: probably around $100M US.

While you are probably subsidizing some dumb-asses to a degree, the bulk of your property risk is due to things out of your control like severe weather.

What most people should do, once they have a solid emergency fund, is take a much higher than normal deductible on their auto and home/renters insurance. $5,000-10,000 deductibles will save a lot of money, but still keep you insured against catastrophic loss. The threshold for this is when you have a comfortable emergency fund; I'd suggest a deductible equal to what you could save again in 6-12 months of belt-tightening without affecting your longer-term financial planning. Health insurance is about the same, except most people are now forced to take a large deductible whether or not they can afford one.
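A quick sketch of why the high deductible wins on average; the premium savings and claim frequency here are assumptions, not quotes:

```python
premium_saved    = 600            # assumed annual savings from the higher deductible
claim_prob       = 0.05           # assumed chance of a claim in a given year
extra_deductible = 5_000 - 500    # extra out-of-pocket if a claim happens

expected_extra = claim_prob * extra_deductible
print(f"premium saved ${premium_saved}/yr vs expected extra cost ${expected_extra:,.0f}/yr")
# +EV whenever saved premium > claim_prob * extra_deductible, which it
# usually is, since the first dollars of coverage carry the heaviest loading.
```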

Comment by michaelsullivan on Money threshold Trigger Action Patterns · 2015-02-22T06:54:40.510Z · LW · GW

On the personal assistant, I think the 3%-of-wealth figure will not transfer simply to different people.

For many people, the value of a personal assistant is that they can accomplish so much more with their own time. I know a number of people who have taken this approach and report that it was an investment that paid off financially for them.

If you think of it as a pure cost, then yes, you would try to pay $30k-ish and not be interested until you had a very large income.

For those people I know who actually use this, they employ people who are quite skilled and may command 50-60k/year or more, and who produce economic value in excess of their paycheck.

The key determinant seems to be the point at which your marginal ability to earn more money per hour from time saved is about 2-3 times what you have to pay your PA per hour. If you are in the right kind of job (sales, business owner), the threshold is probably somewhere around 150k/year. If you are in the wrong kind of job, it probably never makes sense until you are wildly rich.

A driver works similarly, but again, the threshold is much lower if you need to drive around but can profitably use time in the car to accomplish work that pays you more than you are paying your driver.
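A back-of-envelope version of that threshold, using the 2-3x rule from above and an assumed ~2,000 working hours per year:

```python
pa_hourly = 30        # ~$60k/yr skilled assistant, per above
multiple  = 2.5       # middle of the 2-3x rule of thumb

breakeven_hourly = pa_hourly * multiple
print(f"worth it once marginal time is ~${breakeven_hourly:.0f}/hr, "
      f"i.e. ~${breakeven_hourly * 2_000:,.0f}/yr")   # ~$75/hr, ~$150k/yr
```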

Comment by michaelsullivan on Money threshold Trigger Action Patterns · 2015-02-22T06:35:41.819Z · LW · GW

So I agree 100% with 1 and 3, primarily because the profit margins on those insurances are huge, and the losses are so small.

Renters insurance and homeowners insurance, on the other hand, are quite inexpensive relative to what they cover, and the typical loss rates for insurers are a high percentage of premiums + float; what you are paying in premiums beyond your expected loss rate is very small, but it reduces the potential volatility of your wealth dramatically.

I guess it depends on what you mean by "rich". If you mean merely "financially independent", without wealth far beyond your lifestyle requirements, I'd still generally decide to carry home/renters/health insurance, and most wealthy people do. Note that these cover more than simply your stuff/home; they also have liability clauses that protect you from various claims, including personal injury, which can be very expensive and have little or nothing to do with your residence. If you have wealth, it's actually a good idea to carry higher-limit car insurance and a personal umbrella to protect your legal liability exposure.

I used to analyze insurance using pure linear EV with a catastrophic-loss check, i.e. it's always better to self-insure, as long as the worst-case scenario isn't a financial catastrophe.

Now I think of it more like portfolio balance. It makes sense to do things which give up a little bit of expectation in order to reduce the overall volatility of your net worth. Having exposure to a huge risk like your home being destroyed and you having to rebuild it adds a lot of volatility. And you can insure against it for a very small amount relative to your exposure. Also note that the actual linear -EV from buying most common insurance is a relatively small percentage of the premium cost. For typical home/auto/life/health insurance, the expected loss rate is 80-90% of the premiums.

Compare to electronics insurance or travel insurance, or credit card life insurance, where you are typically paying 5-10 (sometimes 100) times the actual expected loss rate.
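One way to see the difference is the ratio of premium to expected payout. Illustrative numbers only, using the loss rates mentioned above:

```python
def premium_multiple(premium, expected_loss):
    # Dollars of premium paid per dollar of expected payout.
    return premium / expected_loss

print(f"homeowners:       {premium_multiple(1000, 850):.1f}x expected loss")
print(f"electronics plan: {premium_multiple(100, 12):.1f}x expected loss")
# ~1.2x vs ~8x: the first buys cheap volatility reduction, the second
# mostly buys the seller a profit margin.
```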

I'm not sure what you mean by cryonics insurance, but if you mean life insurance to fund a cryonics contract, I don't see how you can avoid it until you have enough assets to cover the cost. I can see possibly recommending term + aggressive savings over various kinds of permanent life insurance, but there are some significant tax advantages and creditor protections to permanent life insurance that may tip the scale.

Disclaimer: I am licensed to sell life and health insurance in MI and CT, but nothing said here should be construed as a particular recommendation of any kind of insurance -- everyone's individual needs are different.

Comment by michaelsullivan on Money threshold Trigger Action Patterns · 2015-02-22T06:03:25.135Z · LW · GW

The biggest problem I have with outsourcing housecleaning is that it is not only fairly expensive, but also very hard to find someone who does a good job.

We currently pay $90 every two weeks for a cleaner who comes and does about 2-3 hours worth of work. It is 2-3 hours worth of work that my wife or I could do about as fast if we chose to, and either one of us would generally do as good or better a job.

It's still probably worth it, because most of the time we didn't have a cleaner, we didn't choose to do it, even though it made us happier to have a cleaner house. We absolutely limit their tasks to the things we are less likely to do regularly, or that are physically hard on our bodies (floors, showers, toilet -- both of us have back problems). Overall the house is cleaner, and in fact we are motivated to do certain things (pick up, organize, clear dishes in the drainer, etc.) in order to have the house ready for the cleaner.

I think the point at which it makes sense to outsource this is when you are making around $30-40 per hour for your time.

Comment by michaelsullivan on Low Hanging fruit for buying a better life · 2015-01-12T20:37:10.394Z · LW · GW

Look to see if there are food or cooking clubs in your area -- a lot of times members will have informational classes or get-togethers.

I also had a great experience taking some classes in Turkish cooking at a Turkish cultural center where I used to live. Here's a link if you live near West Haven, CT:

http://turkishculturalcenterct.com/turkish-cooking-classes-go-ahead-full-speed/

I grabbed a 3 year old item because that's me rolling out some bread dough in the picture, but they still do these.

If you live anywhere near a decent-sized city or college town, there's a good chance that a search for "cooking classes" will turn up something good.

Comment by michaelsullivan on Low Hanging fruit for buying a better life · 2015-01-12T20:24:14.876Z · LW · GW

Honestly, most kitchens do not need more than 4 knives. I own and use more, but I cook a lot and have very good knife skills. I can do almost anything I need with a single large knife (ideally a santoku, but a chef's knife or Chinese cleaver would do OK as well). One serrated knife for bread.

The most important thing is that whatever knife you use is good enough to hold an edge, and kept sharp. Have your knives professionally sharpened at least once a year (or learn how to do it yourself) and use a steel to hone them once a week or before/after any hard use (1/2 hr+ of prep chopping). It's also worth some time learning proper knife technique.

All that is much more important than having more than two knives, as long as your two knives are good choices. When I vacation in cottages with a kitchen, or when I visit relatives that I know do not maintain sharp knives -- if I will be cooking, I make it a point to pack my own knives (I bought a chef's knife caddy from a local culinary school for this purpose). And I am a massively nazi-ish light packer, typically packing for a week+ trip in a single carry-on bag (including my knives). That's how important this is to me. That said, I love cooking, and tend to do a lot even on vacation.

I think this principle generalizes. Tools are a nice force multiplier. For anything that you love to do, or need do frequently, having good tools that will last a long time is generally a hugely efficient upgrade in your QOL.

It can, of course, be taken too far. Upgrading everyday-use tools to the cheapest professional grade is a very good use of money. Upgrading to the best possible, or upgrading things you rarely use, is generally not.

Comment by michaelsullivan on 2013 Survey Results · 2014-01-25T03:38:32.223Z · LW · GW

On Milky Way vs. observable universe, I would expect a very high correlation between the results for different galaxies. So simple multiplication is misleading.

That said, even with a very high correlation, anything over 1% for the Milky Way should get you to 99%+ for the universe.

I admit that I did not seriously consider the number of galaxies in the universe, or realize off the cuff that it was that high, and so did not give it enough consideration. I estimated a fairly high number for the Milky Way but gave only 95% to the universe, which was clearly a mistake.
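The independence case is easy to sketch (assuming a rough 10^11 galaxies in the observable universe), and it shows why the correlation between galaxies is the whole question:

```python
import math

def p_anywhere(p_per_galaxy, n_galaxies=1e11):
    # P(at least one galaxy has it), treating galaxies as independent.
    return -math.expm1(n_galaxies * math.log1p(-p_per_galaxy))

for p in [0.01, 1e-6, 1e-10]:
    print(f"p per galaxy {p:g} -> universe-wide {p_anywhere(p):.6f}")
# Under independence, even one-in-ten-billion per galaxy gives ~0.99995
# for the universe; only strong correlation can keep the answer low.
```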

Comment by michaelsullivan on 2013 Survey Results · 2014-01-22T18:14:23.000Z · LW · GW

It seems that very few people considered the bad nanotech scenario obviously impossible, merely less likely to cause a near extinction event than uFAI.

Comment by michaelsullivan on 2013 Survey Results · 2014-01-22T17:31:59.650Z · LW · GW

Don't most people who report IQ scores do the same thing if they have taken multiple tests?

Comment by michaelsullivan on 2013 Survey Results · 2014-01-22T17:29:19.800Z · LW · GW

Some of us took the SAT before 1995, so it's hard to disentangle those scores. A pre-1995 1474 would be at the 99.9x percentile, in line with an IQ score around 150-155. If you really want to compare, you should probably assume anyone age 38 or older took the old test and use the recentering adjustment for them.
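For reference, the percentile-to-IQ conversion under a normal distribution with mean 100 and SD 15 (a sketch; real IQ test norms differ in the extreme tail):

```python
from statistics import NormalDist

def iq_from_percentile(pct, mean=100, sd=15):
    return mean + sd * NormalDist().inv_cdf(pct)

for pct in [0.999, 0.9995, 0.9999]:
    print(f"{pct:.2%} percentile -> IQ ~{iq_from_percentile(pct):.0f}")
# 99.90% -> ~146, 99.95% -> ~149, 99.99% -> ~156: the "99.9x" band
# covers roughly the 150-155 figure above.
```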

I'm also not sure how well the SAT distinguishes at the high end. It's apparently good enough for some high IQ societies, who are willing to use the tests for certification. I was shown my results and I had about 25 points off perfect per question marked wrong. So the distinction between 1475 and 1600 on my test would probably be about 5 total questions. I don't remember any questions that required reasoning I considered difficult at the time. The difference between my score and one 100 points above or below might say as much about diligence or proofreading as intelligence.

Admittedly, the variance due to non-g factors should mostly cancel in a population the size of this survey, and is likely to be a feature of almost any IQ test.

That said, the 1995 score adjustment would have to be taken into account before using it as a proxy for IQ.

Comment by michaelsullivan on 2013 Less Wrong Census/Survey · 2013-11-23T04:01:42.139Z · LW · GW

Taken.

Comment by michaelsullivan on What Can We Learn About Human Psychology from Christian Apologetics? · 2013-10-31T20:20:06.327Z · LW · GW

That scenario assumes a kind of religion that is more directly in opposition to science than is typical outside of conservative evangelicals. Admittedly, that's a large faction with political power, but they aren't even a majority of Christians, let alone theists.

Comment by michaelsullivan on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-06T18:26:38.387Z · LW · GW

That's probably true in many cases, but the "mugger" scenario is really designed to test our limits. If 3^^^3 doesn't work, then probably 3^^^^3 will. To be logically coherent, there has to be some crossover point, where the mugger provides exactly enough evidence to decide that yes, it's worth paying the $5, despite our astoundingly low priors.

The proposed priors have one of two problems:

  1. You can get mugged too easily: your mugger simply needs to be sophisticated enough to pick a number high enough to overwhelm your prior.

  2. We've got a prior that is highly resistant to mugging but, unfortunately, also resistant to being convinced by evidence. If there is any positive probability that we really could encounter a matrix lord able to do what they claim, who would offer some kind of Pascal's-mugging-like deal, then there should be some amount of evidence that would convince us to take the deal. We would like the amount of necessary evidence to be within the bounds of what our brains can receive and update on in a lifetime, but that is not necessarily the case with the priors which we know will be able to avoid specious muggings.

I'm not actually certain that a prior has to exist which doesn't have one of these two problems.

I also agree with Eliezer's general principle that when we see convincing evidence of things that we previously considered effectively impossible (a prior of 10^-googol or such), we need to update the whole map on which that prior was based, not just the specific point. When you watch a person turn into a small cat, either your own sense data or pretty much your whole map of how things work must come into question. You can't just say "Oh, people can turn into cats." and move on as if that doesn't affect almost everything you previously thought you knew about how the world worked.

It's much more likely, based on what I know right now, that I am having an unusually convincing dream or hallucination than that people can turn into cats. And if I manage to collect enough evidence to actually make my probability of "people can turn into cats" higher than "my sensory data is not reliable", then the whole framework of physics, chemistry, biology, and basic experience which caused me to assign such a low probability to "people can turn into cats" in the first place has to be reconsidered.

Comment by michaelsullivan on The Power of Reinforcement · 2012-06-27T17:11:12.289Z · LW · GW

Of course it is not our business to determine those boundaries in someone else's relationship.

Yet my reaction to the behavior described is very largely determined by what I imagine as the relationship context. The reason I did not have your reaction to this story is that I implicitly assumed there was no boundary the husband had set about whether clothes end up in the hamper by his hands.

I was somewhat troubled by the story, and the conversation in this subthread has clarified why -- the relationship context is crucial to determining the ethics of the behavior, and the ethical line or the necessary context was not discussed seriously in the article. While I find it unlikely that this particular example was crossing a line in their relationship, similar strategies could easily be used in an attempt to cross explicit or implicit boundaries in a way I would find abhorrent.

There is one point on which I am not clear whether we are drawing the line in the same place.

In the absence of any prior negotiation one way or another, do you consider the wife's behavior unethical? That seemed to be what you suggested with your initial comment, that it would only be acceptable in the context of a prior explicit agreement.

I think I fall on the side of thinking it is sometimes acceptable in some possible middle cases, but I'm not completely comfortable with my decision yet and would be interested in hearing arguments on either side.

I am clear (and think you will agree) that it is ok to use this strategy to reinforce a previous agreement, and NOT ok to use it to break/bend/adjust a previous agreement. It is the situation with no prior agreement that I am interested in.

To describe it semi-formally:

Party A wants to use positive reinforcement on Party B in order to get them to do X.

Middle cases I consider important (aside from there being some explicit agreement/boundary):

Party B has given some indication (but not an explicit statement/agreement) that doing X would be acceptable or desirable in principle --- PR OK

Party B has given some indication (not an explicit statement/agreement) that doing X would be undesirable in principle --- PR NOT OK

Party B has given no indication one way or another --- ??

In this last case, are social expectations relevant? In the particular case of clothes in hamper, there are clear social expectations that most people normatively desire clothes in hamper. Perhaps our difference lies in whether we consider social expectations a relevant part of the context.

My tentative line is that where no indication has been given, reinforcing social expectations is acceptable, and violating social expectations is at least dubious and probably not OK without discussion.

If social expectations matter, then questions about which social circle is relevant come into play. If party A and party B would agree about which social expectation is relevant, then that is the correct one.

The interesting subcase would be where the relevant social expectations are different for party A and for Party B. My current position is that party A's best information about what party B would choose as a relevant set of social expectations should determine the ethics.

Comment by michaelsullivan on The Power of Reinforcement · 2012-06-27T13:08:42.323Z · LW · GW

For my part, I didn't experience the positive reinforcement description in the article as being about subverting negotiated boundaries, but rather about changing what seem likely to be unthinking habitual behaviors that the person is barely aware of.

I don't know of anyone that I wish to be associated with who specifically desires to leave dirty clothes on the floor instead of in the hamper, it's just something that is easy to do without thinking unless and until you are in the habit of doing something differently.

If the husband in question had actually negotiated a boundary about being able to leave his clothes on the floor, or even expressed reflective hesitancy about using the hamper as a theoretically desired or acceptable action, then I would agree that the author's behavior was highly unethical, and as the husband, if I became aware of it, I would have a problem.

A more typical scenario is one in which the husband would reflectively endorse putting dirty clothes in the hamper on principle, but has a previously developed habit of leaving clothes on the floor and does not judge it important enough to do the hard mental work of changing the habit. Positive reinforcement in this scenario basically represents the wife attempting to do a big portion of the work required to change the habit in the hopes it will get him over this threshold.

In this case, I am having trouble imagining a situation in which one would have reflective desire not to use an existing hamper for dirty clothes.

Comment by michaelsullivan on Defecting by Accident - A Flaw Common to Analytical People · 2012-04-25T15:45:35.391Z · LW · GW

It's funny, I don't remember seeing this post initially. I just followed a link from a more recent discussion post. Just yesterday I had the experience of reading a comment I posted on a popular blog and realizing that I was being a jerk in precisely this way. I only wish I could have edited it after I caught myself, but posting an apologetic followup was helpful anyway.

I learned this general principle a long, long time ago, and it has made a huge difference in the way people respond to me.

That said, to this day, I haven't been able to fully ingrain the habit. When I don't think about my presentation, it's very easy to fall into the habit of being brusque with corrections and arguments where there is no need to be combative.

Comment by michaelsullivan on How can we get more and better LW contrarians? · 2012-04-20T19:22:44.172Z · LW · GW

After a long hiatus from deep involvement in comment threads here -- I actually can't tell if this is serious, or a brilliant mockery of Eliezer's decisions around creating AGI [*]

Comment by michaelsullivan on Fallacies as weak Bayesian evidence · 2012-03-14T13:23:30.945Z · LW · GW

The circular argument about electrons sounds like something a poor science teacher or textbook writer would say. One who didn't understand much about physics or chemistry but was good enough at guessing the teacher's password to acquire a credential.

It glosses over all the physics and chemistry that went into specifying what bits of thing-space are clumped into the identifier "electron", and why physicists who searched for them believed that items in that thing space would leave certain kinds of tracks in a cloud chamber under various conditions. There was a lot of evidence based on many real experiments about electricity that led them to the implicit conditional probability estimates which make that inference legitimate.

The argument itself provides no evidence whatsoever, and encountering sentences like that in science literature is possibly the most frustrating thing about learning settled science for an aspiring rationalist. It simply assumes (and hides!) the science we are supposed to learn, and thus merely gives us another password to guess.

Comment by michaelsullivan on Fallacies as weak Bayesian evidence · 2012-03-14T13:11:04.731Z · LW · GW

The conditional probabilities are doing a lot of work here, and it seems that in many cases our estimates of them are strongly dependent on our priors.

What are our estimates for P(S|A) or P(S|notA), and how do we work them out? Clearly P(S|A) is high, since "The Bible is the word of God" directly implies that the Bible exists, so it is at least possible to observe. If our prior for A is very low, then our estimate of P(S|notA) must also be high, given that we do in fact observe the Bible (or we must separately have a well-founded explanation of the truth of S despite its low probability).

Since having P(S|A) = P(S|notA) in your formula cancels the right side out to 1/1, P(S|H) = P(S|notH). We find that as S weakens as evidence for or against A, so does S as evidence for or against H, by this argument.

So the problem with the circular argument is apparent in Bayesian terms. In the absence of some information that is outside the circular argument, the lower the prior probability, the weaker the argument. That's not the way an evidential argument is supposed to work.
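In odds form the cancellation is one line; a sketch:

```python
def posterior_odds(prior_odds, p_s_given_h, p_s_given_not_h):
    # Bayes in odds form: posterior odds = prior odds * likelihood ratio.
    return prior_odds * (p_s_given_h / p_s_given_not_h)

# Equal conditionals => likelihood ratio of 1 => no update, whatever the prior:
print(posterior_odds(prior_odds=0.001, p_s_given_h=0.99, p_s_given_not_h=0.99))
# 0.001, unchanged. Any movement has to come from outside the circle.
```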

Even in the case where our prior is higher, the argument isn't actually doing any work, it is what our prior does to our estimate of those conditionals that makes the likelihood ratio higher. If we've estimated those conditionals in a way which causes a fully circular argument to move the estimate away from our prior, then we have to be doing something wrong, because we don't have any new information.

If we had independent estimates of those various conditionals, then we would be able to make a non-circular argument. OTOH, we can make a circular argument for anything, no matter what is going on in reality; that's why a circular argument is a true and complete fallacy: it provides no evidence whatsoever for or against a premise.

Comment by michaelsullivan on Spend Money on Ergonomics · 2012-01-11T18:35:53.309Z · LW · GW

I bought one for work 6-7 years ago when they were in fashion, and used it for a short while, but found that what it did to my knees was worse than what regular chairs do to my back.

Ball chairs get very uncomfortable in the butt if I sit in them too long, but otherwise have no drawbacks.

Comment by michaelsullivan on More art, less stink: Taking the PU out of PUA · 2011-12-09T18:09:47.448Z · LW · GW

The biggest problem with what I've seen of PUA and PUA converts is that it is very hard to distinguish these two effects.

Your typical shy, poor dude doesn't actually approach women with an actual attempt very often. Sometimes it almost never happens.

Suppose the successful PUA can pick up 2-3% of intentional targets. They are probably targeting people every time they are in a social situation that involves meeting new people. Perhaps this involves dozens of contacts a week, or even hundreds if they are the sort who is looking for a constant stream of one-nighters.

On the other hand, your typical poor dude may only try 1-2 intentional targets a month, if that. I was never a PUA. I developed enough social skills on my own to make a marked difference in my outlook a few years before Lewis Depayne showed up on usenet pushing Ross Jeffries stuff, which was laughable.

But I was definitely a poor dude before then. I attended a college for two years with 70% women, that a friend of mine described in retrospect as a "pussy paradise" without ever having any kind of romantic or sexual relationship. In retrospect, some of the rare targets of my attention were begging me to make a move in ways that I failed to notice. But in two years, I probably made actual attempts to hookup or date at most 9-10 women/girls, and in none of those cases did I ever make a move that demanded either rejection or acceptance. Because I was so, so sure that I would be rejected that I couldn't face the prospect. Is it any surprise that my success rate was 0%?

Even after my awakening, I maintained a relatively low frequency of attempts, but my ratio of hookups to serious attempts is far better than 3%, more like 50-60%.

My going hypothesis is that the mere act of getting guys to specifically approach women they are attracted to, then attempt to seduce those who inspire their further interest and verify their success, is enough to turn the average loser into someone who will be reasonably successful with women.

I didn't actually need any dark arts to go from a big 'loser' to somebody who, in the right social context (not a typical bar scene), has around a 50/50 shot to hook up with almost anybody who is looking and interests me. I just had to realize that sex is not something women have and men want to take from them, and that I am not hideous and unattractive.

Now, I've come to realize that I'm probably more attractive than average, naturally, and it was my combination of weak social skills and brutal social experience of growing up that warped my mental map about this until I was in my mid-20s. I don't actually believe that most guys would have the results that I do. But I'm hardly some kind of Super-Adonis. I'm fat, and don't pay a whole lot of attention to my appearance beyond being clean (tend to wear non-descript preppy business casual nearly everywhere I go because it's comfortable). I'm pretty sure I'd get negative numbers on Roissy's stupid SMV test.

Comment by michaelsullivan on Living Metaphorically · 2011-12-08T18:36:02.445Z · LW · GW

I'll take a stab at an explanation for the first, which will also shed some light on why I lean toward suspecting the second, but I'm not familiar enough with current academic philosophy to make such a conclusion in general.

The main thing that math has going for it is a language that is very different from ordinary natural languages. Yes, terms from various natural languages are borrowed, and often given very specific mathematical definitions that don't (can't if they are to be precise) correspond exactly to ordinary senses of the terms. But the general language contains many obvious markers that say "this is not an ordinary english(or whatever) sentence" even when a mathematical proof contains english sentences.

On the other hand, a philosophical treatise reads like a book -- a regular book in an ordinary natural language, a language we are accustomed to understanding in ways that include letting ambiguity and metaphor give it extra depths of meaning.

Natural language just doesn't map to formalism well at all. Trying to discuss anything purely formal without using a very specific language which contains big bold markers of rigor and formalism (as math does) is very likely to lead to a bunch of category errors and other subtle reasoning problems.

Comment by michaelsullivan on 2011 Survey Results · 2011-12-05T20:29:52.759Z · LW · GW

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)
Some revealed religion is true: 3.40, (0, 0, .15)

This result is not exactly surprising to me, but it is odd by my reading of the questions. It may seem at first glance like a conjunction fallacy to rate the second question's probability much higher than the first (which I did). But in fact the god question, like the supernatural question, referred to a very specific thing ("ontologically basic mental entities"), while the "some revealed religion is more or less true" question was utterly vague about how to define either "revealed religion" or "more or less true".

As I remarked in comments on the survey, depending on my assumptions about what those two things mean, my potential answers ranged from epsilon to 100-epsilon. A bit of clarity would be useful here.

Also, given the large number of hard atheists on LW, it might be interesting to look at finer grained data for the 25+% of survey respondents who did not answer '0' for all three "religion" questions.

Comment by michaelsullivan on 2011 Survey Results · 2011-12-05T20:29:42.505Z · LW · GW

"Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at 5% level; could be a fluke)."

It might be a fluke, but like one other respondent who talked about this and got many upvotes, it could be that community veterans were more skeptical of the many many things that have to go right for your scenario to happen, even if we generally believe that cryonics is scientifically feasible and worth working on.

When you say "the average person cryonically frozen today will at some point be awakened", that means not only that the general idea is workable, but that we are currently using an acceptable method of preserving tissues, and that a large portion of current arrangements will continue to preserve those bodies/tissues until post singularity, however long that takes, and that whatever singularity happens will result in people willing to expend resources fulfullling those contracts (so FAI must beat uFAI). Add all that up, and it can easily make for a pretty small probability, even if you do "believe in cryonics" in the sense of thinking that it is potentially sound tech.

My interpretation of this result (with low confidence, as 'fluke' is also an excellent explanation) is that community veterans are better at working with probabilities based on complex conjunctions, and better at seeing the complexity of conjunctions based on written descriptions.
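A toy version of that conjunction, with made-up stand-in probabilities rather than anyone's actual estimates:

```python
steps = {
    "cryonics is workable in principle":                  0.7,
    "today's preservation methods are good enough":       0.5,
    "the body stays preserved until revival is possible": 0.5,
    "post-singularity actors honor the contracts":        0.5,
}

p = 1.0
for step, prob in steps.items():
    p *= prob
    print(f"after '{step}': {p:.3f}")
# Ends near 0.09 -- "believing in cryonics" and giving the average
# patient a <10% chance of revival are entirely compatible.
```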

Comment by michaelsullivan on 2011 Survey Results · 2011-12-05T20:09:28.323Z · LW · GW

The phrasing of the question was quite specific: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?"

If I estimate a very small probability of either FAI or UFAI before 2100, then I'm not likely to choose UFAI as "most likely to wipe out 90% of humanity before 2100" if I think there's a solid chance for something else to do so.

Consider that I interpreted the singularity question to mean "if you think there is any real chance of a singularity, then in the case that the singularity happens, give the year by which you think it has 50% probability." and answered with 2350, while thinking that the singularity had less than a 50% probability of happening at all.

Yes, Yvain did say to leave it blank if you don't think there will be a singularity. Given the huge uncertainty involved in anyone's prediction of the singularity or any question related to it, I took "don't believe it will happen" to mean that my estimated chance was low enough to not be worth reasoning about the case where it does happen, rather than that my estimate was below 50%.

Comment by michaelsullivan on 2011 Survey Results · 2011-12-05T19:28:42.819Z · LW · GW

I would interpret "the latest possible date a prediction can come true and still remain in the lifetime of the person making it", "lifetime" would be the longest typical lifetime, rather than an actuarial average. So -- we know lots of people who live to 95, so that seems like it's within our possible lifetime. I certainly could live to 95, even if it's less than a 50/50 shot.

One other bit -- the average life expectancy is for the entire population, but the average life expectancy of white, college educated persons earning (or expected to earn) a first or second quintile income is quite a bit higher, and a very high proportion of LWers fall into that demographic. I took a quick actuarial survey a few months back that suggested my life expectancy given my family age/medical history, demographics, etc. was to reach 92 (I'm currently 43).

Comment by michaelsullivan on 2011 Less Wrong Census / Survey · 2011-11-14T16:11:14.193Z · LW · GW

Perhaps I should have entered "mu".

Comment by michaelsullivan on 2011 Less Wrong Census / Survey · 2011-11-10T20:52:23.724Z · LW · GW

If you look at the results of the last survey, that's exactly what happened, and the mean was far higher than the median (which was reported along with the standard deviation). I agree, it would have been a big improvement to specify which sense was meant.

Also, answering with the year Y such that P(singularity by Y | singularity happens) = 50% would be the best way to get a distribution of answers on when it is expected. So that's what I did. If you interpret the question the other way, then anyone with a 30-49.9999% chance of no singularity has to put a date that is quite far from where most of their probability mass for when it occurs lies.

Suppose I believe that there is a 0.03% probability of a singularity in each of the next 1000 years, decaying by 1/2 every thousand years after that. That puts my total singularity probability in the 52% range, with about half of my probability mass concentrated in the next 1000 years. But to answer this question literally, the date I'd have to give would be around 7000 AD, even though I would think it about as likely to happen by 3011 AD as after 3011 AD.
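That hazard model is easy to write down; note that with compounding the same numbers give a total closer to 45% than 52%, though the shape of the conclusion is identical:

```python
def p_by(year, base_rate=0.0003, halving_years=1000):
    # P(singularity by `year`) with a hazard rate that halves each millennium.
    survival = 1.0
    for y in range(year):
        survival *= 1 - base_rate * 0.5 ** (y // halving_years)
    return 1 - survival

print(f"by 1,000 years:  {p_by(1_000):.2f}")   # ~0.26
print(f"by 10,000 years: {p_by(10_000):.2f}")  # ~0.45 -- essentially the total
# The unconditional 50% date is reached only far out in the tail (if at
# all), even though the conditional median sits around the first millennium.
```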

Comment by michaelsullivan on 2011 Less Wrong Census / Survey · 2011-11-09T21:59:32.038Z · LW · GW

I took the survey, but didn't read anything after "Click Here to take the survey" in this post until afterwards.

So my apologies for being extremely program-hostile in my answers (explicitly saying "epsilon" instead of 0, for instance, and giving a range for IQ since I had multiple tests). Perhaps I should retake it and ask you to throw out the original.

I did have one other large problem: I wasn't really clear on the religion question. When you say "more or less right", are you talking about cosmology, moral philosophy, historical accuracy? Do you consider the ancient texts, the historical traditions, or what the most rational (or most extreme) modern adherents tend to believe and practice? If ancient texts and historical traditions, judging relative to their context, or relative to what is known now? My judgement of the probability would vary anywhere from epsilon to 100-epsilon depending on the standard chosen, so it was very hard to pick a number. I ended up going with what I considered the Less Wrong convention and chose to judge religions under the harshest reasonable terms, which resulted in a low number, but not epsilon (I considered judging ancient texts, or the most reactionary believers, by modern standards to be unreasonably strict).

Comment by michaelsullivan on Polyhacking · 2011-08-30T18:54:05.376Z · LW · GW

Even if you do have half your face burned off a la Two-Face in the Batman series, being visibly smart and funny will boost your apparent prettiness by quite a lot.

I find that most people have some things attractive about them. If they are interesting and kindly disposed toward me, it is not hard to focus on the attractive features, and blur out the less attractive features. It works very much like the affective death spiral, but with no real negative consequences.

Once you find enough things attractive about someone, you enter the spiral, and you begin to notice the very attractive square line of Harvey's non-burned jaw, and just don't even notice the scary skeletor burn face anymore, or you might even find little parts of it that start to look interesting to you.

Well, this all assumes a counter-fictional Harvey that doesn't go fully dark-side, or recovers at some point to something like his former moral and mental self.

Comment by michaelsullivan on Polyhacking · 2011-08-30T15:46:05.526Z · LW · GW

The answer I would make to "why?" (but have never had to, as women tend to be much less clueless than men about dating) would be something like: "Because it seemed as though you were the sort of person who would feel entitled to ask me why, instead of merely accepting my answer."

It's none of someone's business why unless you choose to volunteer that information, and needing to know why you've just been turned down is a massive low-self-perceived-status signal.

The only exception to that rule would be someone you already have a deep and long-standing relationship with (just not a sexual or romantic one). Such a person might be justified in starting a "why" conversation as your friend. But even that is dicey, and the sort of conversation that could destroy the friendship, as it can so easily ride the knife edge of trying to make you defend your answer, or guilt you into changing it if you can't convince them that it is both reasonable and not a negative judgement of them.

Comment by michaelsullivan on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-19T22:21:54.496Z · LW · GW

It's still a pretty significant worry. If you know that some fiscal quarter or year will be used to qualify you for something important, it is often possible to arrange for key revenue and expenses to move across the period boundaries to suit what you wish to portray in your report.

Comment by michaelsullivan on St. Petersburg Mugging Implies You Have Bounded Utility · 2011-06-09T19:34:57.431Z · LW · GW

Are you certain that the likelihood of all your claims being true is not inversely proportional to the size of the change in the universe you are claiming to be able to effect?

Almost anyone can reasonably claim to be a utility-generating god for small values of n, for some set of common utility functions (and we don't even have to give up our god-like ability). That is how most of us are able to find gainful employment.

The implausible claim is the ability to generate universe changes of arbitrary utility value.

My proposal is that any claim of utility-generating ability is plausible in inverse proportion to the size of the effect one claims to be able to produce. If I say I can produce a delta-U of ~$1,000, that is somewhat plausible. If I say I can produce a delta-U of ~$1,000,000, that might be plausible for some very high-skill people, or given a long time to do it, but for a random person with little time it's extremely implausible. If I claim to be able to produce a delta-U of ~(some amount of wealth > world GDP), that's exceedingly implausible no matter who I am.

And of course, in order for your mugging to work, you would need to be able to produce unbounded utility. Your claim to unbounded utility generation is unboundedly implausible.
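To make the inverse-proportion rule concrete (a toy formalization of my own, not anything from the original post): if the prior that someone can deliver utility u is at most c/u for some constant c, then the expected gain from paying up is bounded by c no matter how large a u they name:

```
# Toy check of the inverse-proportion rule. The constant c is an assumed
# plausibility scale, purely for illustration.

c = 0.001
for u in [1e3, 1e6, 1e12, 1e100]:
    prior = min(1.0, c / u)          # plausibility shrinks as the claim grows
    print(f"claimed delta-U = {u:.0e}: expected gain <= {prior * u}")
# Every line prints 0.001: the claimed stakes cancel against the prior,
# so the mugging never gets off the ground.
```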

Admittedly, this is somewhat unsatisfactory, as it effectively treats the unbounded implausibility of a classic omnipotent God figure as an axiom. But this is essentially the same trick as using a Bayesian Occam's Razor to demonstrate atheism. If you aren't happy with this line of reasoning, then I can't see how you'd be happy with Occam's Razor as an axiom, nor how you could legitimately claim that there's a solid rational case for hard atheism.

Comment by michaelsullivan on Suffering as attention-allocational conflict · 2011-05-20T12:41:53.108Z · LW · GW

I disagree that a vote down fulfills this function. A vote down does not say "I disagree"; it says "I want to see less of some feature of this comment/article in the Less Wrong stream".

Sometimes that's because I disagree strongly enough to consider it foolishness and not worth discussion. But most of the time, a vote down is for other reasons. I do find that I am much more likely to vote down comments that I disagree with, and I suspect this is true for most/all Less Wrongers. But that's because I am more likely to be looking harder for problems in posts I disagree with due to all the various biases in my thinking. Disagreement alone is insufficient reason for a vote down from me, and I hope that is true for almost everyone here.

Comment by michaelsullivan on New Haven / Southern Connecticut Meetup, Wednesday Apr. 27th 6 PM · 2011-04-27T17:51:37.939Z · LW · GW

In fact, I probably can swing by for a short time; I'll just have to take off at 6:45, and may not get there by 6, but I'll give it a shot.

Comment by michaelsullivan on Zut Allais! · 2011-04-26T17:48:01.725Z · LW · GW

"What the coherence proofs for expected utility show, and the point of the Allais paradox, is that the invariant measure of distance between probabilities for this purpose is the usual measure between 0 and 1. That is, the distance between ~0 and 0.01, or 0.33 and 0.34, or 0.99 and ~1, are all the same distance."

That holds in this example. If the difference had been between .99 and 1, rather than between 33/34 and 1, then under normal utility-of-money functions it would be reasonable to prefer A in the one case and B in the other, and that pattern of preferences can't be exploited by the money pump you chose. The ratios of the probabilities are what matter here: 33/34 to 1 is the same ratio as .33 to .34, but .99 to 1 is not.

So it turns out that log probabilities are the right measure here as well: if the difference in the logs of the probabilities is the same, then the bet is essentially the same.
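A quick numeric check of the ratio point (my own sketch; the log utility function and the baseline wealth are assumptions, since neither is specified above). Scaling both probabilities by the same constant scales the expected-utility difference by that constant, so it can never flip the comparison:

```
import math

WEALTH = 100_000   # assumed baseline wealth, purely illustrative

def eu(p, prize):
    """Expected log-utility of a p chance of winning `prize`."""
    return p * math.log(WEALTH + prize) + (1 - p) * math.log(WEALTH)

# Pair 1: a sure $24,000 vs a 33/34 chance of $27,000.
print(eu(1.0, 24_000) > eu(33/34, 27_000))    # False: take the gamble
# Pair 2: the same gambles with both probabilities scaled by 0.34.
print(eu(0.34, 24_000) > eu(0.33, 27_000))    # False again: same ordering
```

A .99-vs-1 pair has a different probability ratio, so expected utility does not force it to be ranked the same way as either of these.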

Comment by michaelsullivan on Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011) · 2011-04-25T21:37:42.363Z · LW · GW

Whatever length they choose, it will be too long or too short for many (if not most) purposes and people.

1 week is already a long time to devote if you are focused on a career; I couldn't realistically do even the mini-camp without a lot of advance planning. OTOH, if you are still in school or have an academic job that gives you summers off, then 10 weeks is equivalent to a summer internship or fellowship. I'm not sure what the middle option would be, except maybe 2 weeks: harder, but still doable for those who don't have summers off. Once you go longer than 2 weeks, it's almost impossible for anyone with serious family or career commitments outside education/academe, so you might as well do the whole summer if it makes sense.

Comment by michaelsullivan on New Haven / Southern Connecticut Meetup, Wednesday Apr. 27th 6 PM · 2011-04-25T21:10:56.507Z · LW · GW

I'd be interested but can't make it. I'm rehearsing that night for a concert on the 30th with New Haven Oratorio.

Comment by michaelsullivan on Epistle to the New York Less Wrongians · 2011-04-24T01:07:13.855Z · LW · GW

The disadvantage of excluding women (or men) is far too large. Just like any other rationalists, they have information, experience, and perspective that is valuable. And more so than rationalists of your own gender, they can share near insight about issues personal to women and far insight into issues personal to men, insight that is extremely rare to find in a man or a group of men. There is a whole realm of gender-related affective death spirals that are terribly easy to fall into in gender-segregated groups. This applies to almost any other significant culture gap as well (black/white, rich/poor, urban/rural, etc.).

There's a common error described in some places as "privilege blindness", referring to how easy it is for those who are privileged in some way to go through life completely oblivious to how the world works for those who do not share that good fortune. This is a classic example of an affective death spiral, and it will be a huge potential pitfall for any all-male or even mostly-male group.

It might make sense for larger groups with plenty of both genders to have some separate meetings, and that's a worthy experiment. But keeping apart indefinitely seems extremely unwise.

Parent upvoted even though I disagree strongly, because this is an issue worth discussing and bringing in empirical data.

Comment by michaelsullivan on Learned Blankness · 2011-04-19T17:37:25.444Z · LW · GW

It looks like that formula is a lot like cutting the ends off the roast.

The answer to "who cares?" is most likely "some 1930s era engineer/scientist who has a great set of log tables available but no computer or calculator".

I am just young enough that by the time I understood what logarithms were, one could buy a basic scientific calculator for what a middle class family would trivially spend on their geeky kid. I remember finding an old engineer's handbook of my dad's, with tables and tables of logarithms and various probability-distribution values. It was like a great musty treasure trove of magical numbers, and I tried to figure out what they all meant.

I don't know where that ended up, but I still have his slide rule.

Of course, even in that day it would have made more sense to share both formulas, or simply to teach all students enough math to do what Gray does above and figure out for themselves how to calculate the model-enlightening formula with log tables, since they'd need that skill for a million other things in that environment.

Comment by michaelsullivan on Verifying Rationality via RationalPoker.com · 2011-04-13T23:19:32.162Z · LW · GW

"If you improve your rationality and knowledge of basic probability to the point where it exceeds that of the average at the table you are playing at, you will (on average) make money."

Only if you are playing in an unraked home game.

In venues where you play for significant amounts of money against strangers, there will be a house guaranteeing the fairness of the game, providing insurance against theft, and so on, and the house collects a lot of money for this service relative to what a good player can expect to win. Unless you play nosebleed stakes (where the house can make plenty by taking a very small percentage of each pot), the rake will turn somewhat-above-average players into losers, and below-average players into big losers.

In a typical low-limit game, the very best players will net on average about as much as the house is taking from each player. So you have to be about halfway from average to the best to break even.
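To put rough numbers on that (a toy model; every figure here is an assumption of mine, not from the comment above):

```
rake_per_hand = 4       # assumed average house take per hand, in dollars
hands_per_hour = 30     # assumed pace of a live low-limit game
players = 9

# Pre-rake, poker is zero-sum: gross winrates at the table sum to zero,
# but every seat pays roughly this much to the house each hour.
rake_share = rake_per_hand * hands_per_hour / players     # ~$13/hr

# Per the estimate above, the best players net about one rake-share,
# i.e. they gross about two rake-shares before the house's cut.
best_gross = 2 * rake_share
break_even_gross = rake_share     # gross minus rake-share equals zero

print(f"gross ${break_even_gross:.2f}/hr just to break even, which is "
      f"{break_even_gross / best_gross:.0%} of the way from average to best")
```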