Why bitcoin? 2015-04-02T01:24:09.789Z


Comment by Grant on Making Sense of Coronavirus Stats · 2020-02-28T00:52:22.896Z · LW · GW

I've also been following COVID-19 for investment reasons. Every study I've read of the disease indicates it is extremely contagious relative to the flu. This recent retrospective study indicates that prior to Feb 5th, R0 in China was between 4.7 and 6.6, with a doubling time of 2.4 days.
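To put that doubling time in perspective, here's a toy projection (illustrative numbers only; real epidemics don't sustain constant exponential growth, and the starting count is hypothetical):

```python
# Illustrative only: project case counts from a fixed doubling time,
# assuming constant exponential growth.
doubling_time_days = 2.4
daily_growth = 2 ** (1 / doubling_time_days)  # ~1.33x per day

cases = 100  # hypothetical starting count
for day in (7, 14, 28):
    projected = cases * daily_growth ** day
    print(f"day {day}: ~{projected:,.0f} cases")
```

At a 2.4-day doubling time, a hundred cases becomes several hundred in a week and hundreds of thousands in a month, which is why the containment measures matter so much to any forecast.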

Since then, however, China has made herculean efforts to stop the spread of the disease, and R0 has certainly plummeted. So I'm not sure what to think. I would imagine officially reported numbers from any country are going to be limited by testing. How many people going to the doctor with flu-like symptoms get a COVID-19 test? It sounds like no one except Korea and maybe China is testing for community-acquired CoV.

What I believe is that if other countries do not take similar measures to China, this thing is going to rapidly spread. From an investment perspective this has created the perfect setup for a short: if a sizable portion of the world's population gets infected, the global economy will greatly suffer. If countries take China-esque measures, the global economy will also suffer.

The rosiest outcome I can imagine is that warm weather halts the spread of the disease, and then we get a vaccine ready by the time fall rolls around. It's worth noting I've been the "Chicken Little" among my investing peers.

One point of optimism is everyone on the Diamond Princess was tested, and "only" 5% of passengers required serious medical care. This number is quite a bit lower than the 10-15% figure I often see thrown around.

Can you link to your source on cremation houses not being able to keep up? That's something I hadn't heard before. Thanks.

Comment by Grant on Why bitcoin? · 2015-04-02T16:50:58.031Z · LW · GW

It looks like other blockchain technologies (altcoins) have been the victim of 51% attacks, so I'm going to read up on their repercussions. I wonder if they were carried out by bitcoiners who don't like competition?

It occurs to me that little can probably be done to stop attacks on distributed systems by large actors with non-monetary goals. If people are willing to throw a lot of resources into destroying a fledgling technology, they will probably succeed.

I do have an idea for a distributed public ledger in which attacks are possible but always negative-sum. I have little experience with cryptography, so it's probably rubbish. If it doesn't look terrible I will probably post it here for comment.

Comment by Grant on Why bitcoin? · 2015-04-02T02:13:22.263Z · LW · GW

Thanks, I did not know about

On the 51% attacks, I was specifically thinking of state actors. However, mightn't any eventuality which leads to a lot of Bitcoiners who aren't enthusiasts, or who have ulterior motives (Bond villains?), be an issue? The BC community is currently probably mostly BC enthusiasts, i.e. people who aren't just in it for the money.

You're right that "wasted" was a poor term; "inefficient" would be better.

Comment by Grant on Stupid Questions March 2015 · 2015-03-27T17:55:53.907Z · LW · GW

Why does anyone think Bitcoin is going to work when its users aren't mostly Bitcoin enthusiasts?

I'm specifically referring to the incentives of 51% attacks. The returns on mining seem to increase as computing power eclipses 50%, creating an economy of scale in mining and incentivizing attacks.
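The discontinuity at 50% is visible in Nakamoto's own catch-up calculation from the Bitcoin whitepaper: an attacker with hashpower fraction q beats an honest chain that is z blocks ahead with probability (q/p)^z, and with certainty once q ≥ p. A minimal sketch (z = 6 is just the conventional confirmation depth):

```python
# Probability an attacker controlling fraction q of hashpower ever
# overtakes an honest chain z blocks ahead (Bitcoin whitepaper, sec. 11).
def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q  # honest hashpower fraction
    if q >= p:
        return 1.0  # a majority attacker succeeds with certainty
    return (q / p) ** z

for q in (0.10, 0.30, 0.45, 0.51):
    print(f"q={q}: {catch_up_probability(q, z=6):.6f}")
```

Success probability is negligible at 10% hashpower, noticeable at 45%, and jumps to 1 past the majority line, which is exactly the cliff that makes scale so attractive.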

Comment by Grant on Rationality Quotes Thread March 2015 · 2015-03-26T06:57:22.080Z · LW · GW

The information in a science textbook is (or should be) considered scientific because of the processes used to vet it. Absent this process, it's just conjecture.

I often wonder if this position is unpopular because of its implications for economics and climatology.

Comment by Grant on Rationality Quotes Thread March 2015 · 2015-03-26T06:36:51.078Z · LW · GW

Macroeconomics? Sure, it's highly politicized, so in many cases I'll agree with that. But microeconomics is in many ways the study of how to rationally deal with scarcity. IMO, traditional micro assuming homo economicus is actually more interesting (and useful, outside of politics) than the behavioral stuff for this reason.

Comment by Grant on Stupid Questions March 2015 · 2015-03-04T07:42:31.950Z · LW · GW

Is there overwhelming evidence on the safety (not efficacy) of vaccines somewhere, and I've just missed it?

Comment by Grant on [QUESTION]: What are your views on climate change, and how did you form them? · 2014-12-06T18:50:04.500Z · LW · GW

I used to just trust the word of the experts, because I am not an expert and had no incentive to become one. I didn't have a lot of faith in such a politicized science, but reasoned it was probably better than anything I could come up with. I trusted the IPCC reports, but after reading about Climategate thought they were exaggerated a bit as a means to gain political power.

Recently I've started to consider investing in alternative energy. Given that the case for most alternative energy (especially after the fracking and shale oil revolutions) rests on AGW being a serious problem, I thought it deserved a real look.

I was appalled to see a non-experimental science declare something as complex as the Earth's climate "settled", and to see even the scientists degenerate into name-calling ("deniers" and "alarmists"). The issue was even more politicized than I first thought. I was unable to find real public debate between skeptics and supporters, but did come to understand that the disagreement is over feedbacks, specifically the change in water vapor, which responds to an increase in temperature from CO2 emissions.

I was most appalled at the lack of reporting on the global warming pause. I did not find a single supporting scientist seriously reconsidering his views in light of the pause. One would think admissions such as "our models were clearly wrong, but AGW may still be a problem because the heat is probably going into the oceans/arctic/whatever" would be commonplace.

To me the "science" of climatology seems very similar to economics: experiments are impossible and it is very politicized. I enjoy and respect economics greatly, but recognize that progress in it has been very slow relative to the harder, experimental sciences.

As a result my faith in climatologists is at an all-time low. If I had to guess, I'd take a shot in the dark and say the feedbacks are being exaggerated by the IPCC, and warming will continue at about 0.10 degrees C per decade. Actually I'd say this whole experience has lowered my faith in politics and made me more libertarian as a result. I am used to people doing and believing disgusting things for power, but something about perverting science especially offends.

So I came here to read some (hopefully) more rational reports on AGW.

I would really like to see a study of the Earth's energy budget - can't we measure radiation lost to space with satellites? Everything else seems rather immaterial (unless the nuclear energy output of the Earth's core varies significantly).

I just read the OP's articles. There is a supporting argument in them which makes me think my sensitivity estimates are too low: the idea that sensitivity can be estimated from any source of forcing (of which there are many), and not just CO2. This would seem to suggest that we have more evidence on climate sensitivity (even if it's all proxy records) than I'd have first guessed. However, unless proxy records can cover all significant forcings, I would doubt their usefulness. Do we have a proxy record of cloud cover, for example?

Comment by Grant on Stranger Than History · 2014-03-26T08:21:58.662Z · LW · GW

Agreed. Powerful people (especially politicians) seem to hold plenty of irrational beliefs. Of course we can't really tell the difference between lying about irrational beliefs and hypocrisy, if there's a meaningful difference for the outside observer at all.

Comment by Grant on Rationality Quotes October 2013 · 2013-11-29T02:53:28.055Z · LW · GW

The quote refers to the (end) market and users, not the internal workings of a software development firm.

Comment by Grant on Wait vs Interrupt Culture · 2013-11-28T21:37:50.566Z · LW · GW

Networking protocols face similar challenges. I wonder if there's a rationality of conversation hidden somewhere in here?
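One concrete parallel (the analogy to conversation is mine): Ethernet's classic answer to two parties "talking" at once is truncated binary exponential backoff, where each repeat offender waits a randomly chosen, exponentially growing interval before trying again:

```python
import random

# Sketch of CSMA/CD-style truncated binary exponential backoff: after the
# n-th collision, a sender waits a random number of slots in [0, 2^n - 1],
# so parties that keep interrupting each other quickly spread out in time.
def backoff_slots(collisions: int, rng: random.Random) -> int:
    n = min(collisions, 10)  # classic Ethernet caps the exponent at 10
    return rng.randrange(2 ** n)

rng = random.Random(0)
print([backoff_slots(c, rng) for c in range(5)])
```

Interrupt culture works the same way informally: whoever backs off for less time "gets the floor", and repeated collisions push everyone toward waiting longer.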

Comment by Grant on On Walmart, And Who Bears Responsibility For the Poor · 2013-11-28T21:25:49.953Z · LW · GW

This has always been my experience shopping at Florida Walmarts: the employees are horrible. Perhaps they could be making more money with a higher minimum wage, better unionizing or what have you, but I have always viewed Walmart's ability to make their employees productive as some sort of miracle of capitalism.

I can't think of another chain business I've experienced with the same or lower caliber of employee.

Comment by Grant on Should We Tell People That Giving Makes Them Happier? · 2013-09-06T03:14:39.483Z · LW · GW

I haven't found that to be the case with personal gifts either. I spend a lot of time trying to pick out good gifts, and generally seem to fail. It just seems so very much easier for someone to pick out something they enjoy for themselves than it is for someone else to do it. I find most gifts given to me undesirable, but still have to look happy and grateful to receive them. The majority of the time I'd rather not have gotten or given any gifts at all.

I keep trying to get friends and family to forgo the normal gift-giving holidays in favor of giving to charity, with limited success.

Comment by Grant on Rationality Quotes September 2013 · 2013-09-04T02:08:12.716Z · LW · GW

True. Some sources indicate that some Japanese cities were left intact precisely so the American military could test the effects of a nuke!

Comment by Grant on Rationality Quotes September 2013 · 2013-09-04T01:56:42.029Z · LW · GW

That is no reason to drop the bomb on a city though; there are plenty of non-living targets that can be blown up to demonstrate destructive power. I suppose doing so wouldn't signal the will to use the atomic bomb, but in a time when hundreds of thousands died in air raids I would think such a thing would be assumed.

I suppose this highlights the fundamental problem of the era: the assumption that targeting civilians with bombs was the best course of action.

Comment by Grant on Rationality Quotes September 2013 · 2013-09-03T23:24:17.643Z · LW · GW

If the bombing of Nagasaki contributed more to the end of the war than the bombing of Tokyo, then we could easily say it was morally superior. That is not to say there weren't better options of course.

Comment by Grant on Rationality Quotes September 2013 · 2013-09-03T23:21:22.810Z · LW · GW

Consider what "the cold war" might have been like if we hadn't had nuclear weapons. It probably would have been less cold. Come to think of it, cold wars are the best kind of wars. We could use more of them.

Yes, nukes have done terrible things, could have done far worse, and still might. However, since their invention conventional weapons have still killed far, far more people. We've seen plenty of chances for countries to use nukes where they've not, so I think it's safe to say the existence of nukes isn't on average more dangerous than the existence of other weapons. The danger in them seems to come from the existential risk, which is not present when using conventional weapons.

Comment by Grant on Rationality Quotes September 2013 · 2013-09-02T22:16:08.632Z · LW · GW

True, but it's not clear morals have saved us from this. Many of our morals emphasize loyalty to our in-groups (e.g. the USA) over our out-groups (e.g. the USSR), with less than ideal results. I think if I replaced "morality" with "benevolence" I'd find the quote more correct. I likely read it too literally.

Though the rest of it still doesn't make any sense to me.

Comment by Grant on Rationality Quotes September 2013 · 2013-09-02T21:50:50.496Z · LW · GW

These (nebulous) assertions seem unlikely on many levels. Psychopaths have few morals but continue to exist. I have no idea what "inner balance" even is.

He may be asserting that morals are necessary for the existence of humanity as a whole, in which case I'd point to many animals with few morals who continue to exist just fine.

Comment by Grant on How to avoid dying in a car crash · 2013-08-08T11:03:20.243Z · LW · GW

We're required to wear helmets, Nomex suits, gloves, socks and shoes (lots of fun in 90°F+ weather), head and neck restraint devices, and 5- or 6-point harnesses. However, keep in mind race cars do not have airbags, while it's becoming more and more common for passenger cars to have airbags galore. With airbags, the benefits of a helmet are much reduced.

Comment by Grant on How to avoid dying in a car crash · 2013-08-08T10:52:03.553Z · LW · GW

As an amateur race car driver, I've got a few things to add here.

There's one very important tip I've never seen driver's ed courses mention concerning rain driving: the available traction on wet pavement varies wildly depending on the surface. Rougher surfaces tend to offer more grip; some feel nearly as good as driving in the dry. Smoother surfaces tend to offer less; some (the worst blacktop parking lots) feel as bad as driving on ice. Any paint (such as painted-on brick strips at some intersections) is going to be very slick, as is most concrete (as it's generally smooth, though rougher concrete like that found on runways has lots of grip). Between different types of wet asphalt, the difference in grip of my race car (on street tires) can range from around 1.0 g of maximum lateral acceleration down to as low as 0.65.

Metal drawbridges are also extremely slick in the wet, to the point where a strong wind can blow a car into other lanes.

So unless you're familiar with the surface you're driving on, do not take anything for granted in the wet. On poor surfaces even a little bit of water can massively increase stopping distances. Unfortunately, you can't count on newer construction being better here, as the slickest interstate I've encountered was relatively new (if you can read a sign from its reflection off the wet surface, the road probably sucks).
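A rough physics sketch of what that grip range means for stopping distance (numbers illustrative; this ignores reaction time, ABS behavior, and load transfer):

```python
# Idealized stopping distance from maximum grip: d = v^2 / (2 * mu * g).
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh: float, mu: float) -> float:
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v ** 2 / (2 * mu * G)

for mu in (1.0, 0.65):
    print(f"mu={mu}: {stopping_distance_m(100, mu):.0f} m from 100 km/h")
```

Dropping from 1.0 to 0.65 friction stretches an ideal stop from 100 km/h from roughly 39 m to over 60 m, before you even add reaction time.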

I regard tips on how to drive (at night? during the rain? at what speed?) as being largely dependent on environment and visibility. You always need to be prepared to react to something as soon as you can see it. Rain, night time and curvy roads keep you from seeing things as quickly, and mean you need to be more conservative. Every time you drive faster than you can react to unseen dangers you're rolling the dice. Always drive within your visibility. Sounds like common sense, but it doesn't seem to be commonly followed.

Aside from working headlights, tires are the #1 accident-avoidance device on the car. Almost all cars on the road have brakes powerful enough to lock the tires up, meaning stopping distances are a function of available grip. They may look like simple blocks of inflated rubber, but tires are extremely complex and not at all created equal. The best tire for a vehicle is going to vary with wheel size, ambient temperatures and budget, and you definitely don't always get what you pay for here.

All other things equal, more tread depth = more hydroplaning resistance. Bald tires can grip just fine in the wet provided there is no standing water, but this is generally not recommended for obvious reasons.

Some people say tire inflation pressures are critical. You definitely don't want them more than 5 or so psi from ideal, but I've done a lot of testing here and not generally found pressures to make a measurable difference in overall grip when they're kept within reason. Lower pressures feel "sloppier" but still grip, while high pressures feel "crisper" and probably save you some gas. A severely under-inflated tire can overheat and de-laminate just driving in a straight line, and no you won't always notice this until the tread is already coming off. Tire pressure monitors are really great safety devices and I wish I had them on my race car.

Here's an anecdote where tires saved the day: I was driving on the interstate and came upon a block of traffic. In front of me was a Toyota, and I slowed to match its speed. Less than a minute later the Toyota veers off the road and his right front tire hits a concrete construction barrier. The tire climbs up this barrier and flips the car onto its roof, landing in my lane. I was blocked in by traffic and had no other choice than to slam on my brakes and hope. The impact with the barrier slowed the car very quickly, to the point where I came within a few feet of hitting it. Once I matched its speed it skidded away from me as roofs obviously don't slow cars down very well.

I was in a sports car equipped with aerodynamic downforce and road-legal racing tires. Had I been on economy tires, I certainly would have hit the car with significant force. Had I been in an SUV, I likely would have run it over. As it was, the driver crawled out of the car shaken and bleeding, but largely alright. He didn't remember what caused the incident. As it was in the afternoon, I suspect he was distracted, dropped a tire off the road, and the pavement height change pulled on the steering and sucked the car into the barrier.

In hindsight I shouldn't have been following so closely, though I was maintaining more distance than others in the block. I admit it never went through my mind that the car in front of me might veer off into a concrete wall and be deflected back into my lane.

So that's what I've learned: tires are very important, and rain needs respect if it's to be handled safely.

Comment by Grant on Rationality Quotes August 2013 · 2013-08-07T22:08:22.066Z · LW · GW

A more direct approach might be: "no patches which frobnicate a beezlebib will be accepted".

There are many FOSS projects that don't use Linus's style and do work well. What's so special about Linux?

I would say the size (in terms of SLOC count), scope (everything from TVs to supercomputers), lack of an equivalent substitute (MySQL or Postgres? Apache or Nginx? Linux or... BSD?), importance of correctness (it's the kernel, stupid), and commercial involvement (Google, Oracle, etc.) make it very different from most FOSS projects. Mostly I'd say the size, complexity and very low tolerance for bugs.

I have no idea if Linus's attitude is helpful or not. I tend to think he could do better with more direct, polite approaches like the above, but I don't hold that belief very strongly.

Comment by Grant on Rationality Quotes August 2013 · 2013-08-06T20:54:32.180Z · LW · GW

Certainly he and his team are less likely to accept patches from people who they've had trouble with in the past? And people who have trouble getting patches accepted (for whatever reason) are probably not going to be paid to continue doing kernel development?

It would surprise me if he's never outright banned anyone.

Thanks for the correction, edited my comment above.

Comment by Grant on Rationality Quotes August 2013 · 2013-08-06T20:27:41.990Z · LW · GW

Which means that anyone who doesn't like his style is free to leave at any time without any consequences in the sense of salary, health insurance, etc. The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.

As of 2012-04-16, 75% of kernel development is paid. I would assume those developers would find their jobs in jeopardy if Linus removed them from development.

Comment by Grant on Rationality Quotes August 2013 · 2013-08-06T17:35:33.114Z · LW · GW

It does assume that asset bubbles are made up of bad investments which are costly to undo. While this insight may have been originally Austrian, I didn't think it was at all contentious. The dot-com bubble is a clearer example, as the housing bubble was both an asset bubble and banking failure (and many of the dot-com investments were just off-the-wall crazy).

As Vernon Smith showed, asset bubbles happen even with derivatives whose value is objective (and without central banks). It's hard for me to see the bust as the problem in those cases.

Would a Keynesian say that any economic downturn can be averted in the face of any and all bad investments?

Comment by Grant on Making Rationality General-Interest · 2013-08-06T16:49:50.677Z · LW · GW

From the articles linked from Welcome to Less Wrong:


The title is descriptive and the text is short and to the point. Empirical support is present and clearly stated. Of course it could be shortened quite a bit more without losing any information, but I don't find it excessively verbose.


It's a long post, not trivial to follow, and while reading it's not clear how the effort will pay off. Perhaps this is evidence of a short attention span, but I've generally found that most concepts can be expressed succinctly. It might also be a habit of my profession that I try to make my writing as terse and general as possible.

I suspect status and article length are highly correlated (e.g., people read autobiographies of famous people), and so longer writings might be ways to signal status.

I can produce more examples, but the above two are archetypal for me.

3) Well, I don't know what I don't know ;) But to list a few things:

  • Pros and cons of frequentist vs. Bayesian approaches. Everything I read here seems pro-Bayesian, but other (statistics) sites I look at promote a mix of the two approaches.
  • Why so little discussion of mechanisms which improve the rationality of group action and decision-making? Is that topic too close to the mind-killer, or have I missed those articles?
  • I find appeals to rationality during strictly normative argument irrational, because people don't seem to adopt ethics on the basis of rationality or consistency. Thus I'm confused by the frequency of ethical discussions here. Am I missing something about ethics and rationality? Or just wrong? Something on a general rationalist approach to ethics would be helpful to me.

Comment by Grant on Rationality Quotes August 2013 · 2013-08-06T16:42:26.216Z · LW · GW

I think a better term might be 'meritocratic', and not 'democratic'. Unless mathematicians vote on mathematics?

Comment by Grant on Rationality Quotes August 2013 · 2013-08-06T06:54:59.094Z · LW · GW

Ditto, and downvoting b1shop's response since the quote did not mention any particular economic theory. Busts caused by widespread bad investments aren't necessarily the problem; the widespread bad investments are. Blaming the bust in these cases may be shooting the messenger.

That's not to say all busts are largely caused by widespread bad investments, or anything about why these bad investments happen. It is, however, very clear in hindsight that many boom-phase investments are crazy.

Comment by Grant on Why I'm Skeptical About Unproven Causes (And You Should Be Too) · 2013-08-06T05:02:02.902Z · LW · GW

I'm sure the use of prediction markets to predict existential threats is difficult, but it seems like you could at least use them to predict the emergence of AI. I'd be surprised if this wasn't discussed here at some point.

It seems to me that while prediction markets may not need funding from a technical perspective, public and especially political opinion on them does need some nudging. I don't think I'm entering mind-killing territory by suggesting it'd be good if politics didn't get in their way so much. I'm certainly no expert, but long-running markets where investors would expect interest paid would face all sorts of (US) regulatory hurdles beyond those facing normal markets. It's very expensive just to find out what regulations you'll run afoul of, as US financial regulation was obviously not created with prediction markets that could last decades in mind (or prediction markets at all, for that matter).

Comment by Grant on Why I'm Skeptical About Unproven Causes (And You Should Be Too) · 2013-08-05T08:33:26.890Z · LW · GW

If existential risks are hugely important but we suck at predicting them, why not invest in schemes which improve our predictions?

Comment by Grant on Rationality Quotes August 2013 · 2013-08-05T07:56:18.314Z · LW · GW

To me, "filled with falsehoods and errors" translates into more falsehoods than "some". Though I agree it's not a very good quote within the context of LW.

Comment by Grant on Rationality Quotes August 2013 · 2013-08-05T03:41:42.170Z · LW · GW

All true, but there are many booms which seem to produce crazy investments; the dot-com boom is the most obvious recent example. You don't need to accept ABCT to accept this, and I'd guess most people who do notice this don't accept ABCT.

Comment by Grant on Why Eat Less Meat? · 2013-08-05T03:19:04.646Z · LW · GW

"Only" was a gross exaggeration. I'm not sure why I typed it.

I think my examples are pretty typical though. Charitable people get lobbied by people who want charity. This occurs with both personal and extended charity. In my case it gets me bugged into spending more time on other people's technical problems (e.g. open-source software projects) than I'd like.

I haven't contributed to many charities, but the ones I have seem to have put me on mailing and call lists. I also once contributed to a political candidate for his anti-war stance, and have been rewarded with political spam ever since. I'm not into politics at all, so it's rather unwelcome.

Comment by Grant on Making Rationality General-Interest · 2013-07-24T23:45:31.693Z · LW · GW

INTP male programmer here. I've never posted an article and rarely comment.

One thing which keeps me from doing so is, actually, HPMoR and EY's posts and sequences. They're all really long, and seem to be considered required reading. I know it's EY's style; he seems to prefer narratives. Unfortunately, I don't have a lot of time to read all that, and much prefer a terser Hansonian style.

A shorter "getting started" guide would help me. Would it help others?

Comment by Grant on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T23:19:10.351Z · LW · GW

I'm not very well informed on this topic, but isn't something like that always going to be the case in a society with a safety net? e.g., if we make sure everyone has at least $25k to live on, anyone making $8k a year isn't going to be any worse off than someone making $25k.
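A toy sketch of that abstraction (the $25k floor is the hypothetical from above; real programs phase out gradually rather than topping up to a hard floor):

```python
# Toy benefits "cliff": with a guaranteed minimum income of $25k,
# every dollar earned below the floor is implicitly taxed at 100%,
# since it displaces a dollar of benefits.
FLOOR = 25_000  # hypothetical guaranteed minimum, dollars/year

def net_income(earnings: int) -> int:
    return max(earnings, FLOOR)

for earnings in (0, 8_000, 25_000, 30_000):
    print(f"earn ${earnings:,} -> keep ${net_income(earnings):,}")
```

Under this stylized floor, earning $8k leaves you exactly as well off as earning nothing, which is the work-incentive problem in its starkest form.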

Of course I'm not sure how well America's arcane maze of benefits, tax deductions and whatnot fit into this simple abstraction.

Comment by Grant on Why Eat Less Meat? · 2013-07-24T23:09:49.316Z · LW · GW

Good article, thanks. The author does say the taste was quite different from chicken; you just can't tell when it's in a burrito, as the chicken is mostly used for texture. The producer's website is here.

Another idea, with potentially better returns than the above: invest in faux-meat producers. There appear to be plenty of them.

Comment by Grant on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T22:49:19.906Z · LW · GW

Roughly half of Americans don't owe anything to the IRS each year. Pre-recession I believe this figure was about 40%. They of course pay other taxes, such as payroll (social security, medicare, which most people consider taxes), state sales tax, property taxes, etc. It'd be nice if they at least didn't have to file tax returns.

Comment by Grant on Why Eat Less Meat? · 2013-07-24T20:06:40.454Z · LW · GW

Idea: if you're very interested in promoting veganism or vegetarianism, help make it taste better, or invest in or donate to those who are helping make it taste better. As my other much-downvoted comment showed, I am very skeptical that appeals to altruism will have nearly as much of an effect as appeals to self-interest, especially outside of this community. I believe most people eat meat because it just tastes better than their alternatives.

Grown crops are far more efficient to produce than livestock, so there are plenty of other good reasons to transition away from the use of livestock in agriculture. If steak were made to "grow on trees", why pay all that extra for the real thing? If you lower the cost of vegetarianism by improving taste, more people will adopt it. If they don't adopt it they'll still be more likely to forgo meats for vegetarian dishes if those dishes taste better.

In the case of low-quality meats (e.g. McDonalds) the taste bar isn't even set very high.

When I first decided to be a vegetarian, I simply switched from tasty hamburgers to tasty veggieburgers and there was no problem at all.

I think your sample size might have led you astray here. My personal experience is exactly the opposite. That said, I looked for studies of meat vs. faux-meat taste and didn't find anything. I wonder if a love of meat over alternatives is innate or learned, and if there exist vegetarian recipes which really do taste as good as the real thing.

Comment by Grant on Why Eat Less Meat? · 2013-07-24T16:54:02.675Z · LW · GW

Thank you for the explanation. I was trying to play the devil's advocate a bit and I didn't think my comment would be well-received. I'm glad to have gotten a thoughtful reply.

Thinking about it some more, I was not meaning to anthropomorphize evolution, just point out homo-hypocritus. On any particular value of a person's, we have:

  • What they tell people about it.
  • How they act on it.
  • How they feel about it.

I feel bad about a lot of suffering (mostly that closest to me, of course). However, it's not clear to me that what I feel is any more "me" than what I do or what I say.

Most everyone (except psychopaths) feels bad about suffering, and tells their friends the same, but they don't do much about it unless it's close to their personal experience. Evolution programmed us to be hypocritical. However, in this context it's not clear to me why we'd choose to act on our feelings instead of feeling like our actions (stop caring about distant non-cute animals), or why we'd choose to stop being hypocritical at all. We have lots of examples throughout history of large groups of people ceasing to care about the suffering of certain groups, often due to social pressures. I think the tide can swing both ways here.

So I have trouble seeing how these movements would work without social pressures and appeals to self-interest. I guess there's already a lot of pro-altruism social pressure on LW?

Edit: as a personal example, I feel more altruistic than I act, and act more altruistic than I let on to others. I do this because I've only gotten disutility from being seen as a nice guy, and have refrained from a lot of overt altruism because of this. I think I'd need a change in micro-culture to change my behavior here; appeals to logic aren't going to sway me.

Comment by Grant on Why Eat Less Meat? · 2013-07-24T05:52:57.202Z · LW · GW

I admit to being perplexed by this and some other pro-altruism posts on LW. If we're trying to be rationalists, shouldn't we come out and say: "I don't often care about other's suffering, especially of those people I don't know personally, but I do try and signal that I care because this signaling benefits me. Sometimes this signaling benefits others too, which is nice".

I agree everyone likely benefits from a society structured to reward altruism. We all might be in need of altruism one day. But there seems to be a disconnect between the prose of articles like this one and what I thought was the general rationalist belief that altruism in extended societies largely exists for signaling reasons.

Also, the benefits of altruism seem significantly less substantial when the targets are animals. Outside of personal experience animals are just unable to return any favors. If I save the lives of some children in Africa, I can hope those people contribute to the global economy and help make the world a better place for my children. Unfortunately the same cannot be said about my food.

I realize the article starts with the conditional statement "if one cares about suffering", so my comments above aren't really a critique. A more direct critique would be "who really cares about suffering?". If we only care about signaling altruism then I think we should just come out and say that.

I like animals and have owned many pets, but I do not care about the suffering of animals far outside my personal experience. If I was surrounded by people who cared about such things then I likely would learn to as well; to do otherwise would signal barbarism. I might also learn to care if I was interested in signaling moral superiority over my peers.

Comment by Grant on Singleton: the risks and benefits of one world governments · 2013-07-08T08:11:53.192Z · LW · GW

It probably wouldn't stop political competition, but it very well may slow competition in political systems. If there was one world democratic or republican government, would it let something like futarchy develop? That isn't to say that futarchy would have an easy time coming into being anyway, but it seems like it might be harder under a single world government.

More generally, how often does political innovation occur without violence, or the threat of it? It took violence in the case of the American and French revolutions. Reforms in the UK seem to have come at sword-point, though the monarchy was never completely overthrown.

It seems to me that peaceful political innovation requires some sort of peaceful succession process, which is not currently supported by any laws or norms I'm aware of.

Comment by Grant on Helpless Individuals · 2009-03-31T06:02:33.019Z · LW · GW

I'm not sure I'm following the logic here. The failure of science to raise money via voluntary means is evidence that it is too much of a non-ancestral problem?

Well, I'll agree that if we somehow had science as it exists now for a few hundred generations, we'd probably be better at funding it. But that's true of anything. Standard economics predicts that funding large-scale public goods is difficult via voluntary means, and public choice explains why it's difficult for governments too. If you believe Coase, this difficulty is a feature, not a bug, because it takes transaction (i.e., organizational) costs into account.

Of course, it shouldn't come as any surprise to anyone that a scientist is complaining that people don't fund enough science ;) To be honest, I don't know where I could donate money to science to make a difference. It's very hard for non-scientists to judge the feasibility of scientific projects. So much of science seems to be a complete and utter waste of smart people and resources.

One thing we can do is promote the use of prizes over normal funding.

Comment by Grant on Informers and Persuaders · 2009-02-11T11:04:52.000Z · LW · GW


there will be those who write with an utterly pure and virtuous love of the truthfinding process; they desire solely to give people more unfiltered evidence and to see evidence correctly added up, without a shred of attachment to their or anyone else's theory.
They're implicitly attached to the theory that this process really does find the truth, and they may be attached to the idea that it is the best or one of the best processes for doing so. On a slightly more abstract level, is there a difference between Informers and Persuaders?

For example, a Keynesian and an Austrian economist may not be at all attached to their theories, but are attached to their very different truth-seeking methodologies (though perhaps the term 'truth-seeking' is giving them too much credit).

As an aside, I find Robin's posts to be much easier to understand and follow than Eliezer's, and I don't think this has anything to do with the complexity of the arguments. Robin's style seems to just be simpler and more concise, making it easier for me to spot the premises and logic of his arguments. I think this is a benefit of more formal types of arguments in general, at least to my brain.

Comment by Grant on True Ending: Sacrificial Fire (7/8) · 2009-02-05T21:55:09.000Z · LW · GW


I'm not entirely sure how "they are offended by helpless victims being forced to suffer against their will and want to remove that" translates into "the SHs aren't nice in any sense of the word".
They aren't offended by suffering, but the expression of it. They don't even understand human brains, and can't exchange experiences with them via sex, so how could they? Maybe the SHs are able to survive and thrive without processing certain stimuli as being undesirable, but they never made an argument that humans could.

Comment by Grant on True Ending: Sacrificial Fire (7/8) · 2009-02-05T20:32:19.000Z · LW · GW

Eliezer, thanks. I mostly read OB for the bias posts and don't enjoy narratives or stories, but this one was excellent.

Tyrrell, we aren't told how many humans exist. There could be 15 trillion, so the death of one system may not even equal the number of people who would commit suicide if the SHs had their way.

I don't find the SHs to be "nice" in any sense of the word. In my reading, they aren't interested in making humans happy. They can't be - they don't even understand the human brain. I think they are a biological version of Eliezer's smiley face maximizers. They are offended by mankind's expression of pain (its a negative externality to them) and want to remove what offends them. I don't think any interstellar civilization would be very successful if they did not learn to ignore or deal with non-physical negative externalities from other races (which would, unfortunately, include baby-eating).

The SH did not even seem to consider the most obvious option (to me, at least) which is to trade and exchange cultures the normal way. Many humans would undoubtedly be drawn in to the SH way of life. I suppose their advanced technology makes the cost of using force relatively low, so this option seemed unacceptable. Still, I wonder why Akon didn't propose it (or did he)?

Comment by Grant on Interpersonal Entanglement · 2009-01-20T17:35:56.000Z · LW · GW

I'm mostly with Kaj; I don't see the problem. Designing a companion seems like it will often be a superior strategy than trying to acquire one largely through trial-and-error with existing people. Why would anyone want to "catgirl" when they could make a human who was perfectly suited for them?

If anything, I think problems may come from women, who will find themselves no longer able to acquire resources by virtue of their attractiveness. Of course, if we have enough technology to create companions, we could probably easily modify women (or men) to be as attractive as the "catgirls", and maybe make women on equal footing with men in the engineering department (so they don't suffer economically).

Comment by Grant on Dunbar's Function · 2008-12-31T06:03:38.000Z · LW · GW

But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band, because it's harder to talk to the tribal chief or (if that fails) leave unpleasant restrictions and start your own country. There is an opportunity for progress here.
I strongly disagree with this statement. A tribal tyrant likely has much greater effect on someone's personal life than a president or legislator. Its probably harder to start your own country today, but its not harder to leave your country (tribe) to join another. I'd bet its a lot easier, actually. In modern times, people are parts of many different hierarchies, each of which directly impacts their person life. If they don't like one hierarchy, then can leave it. A tribe of 50 is like a small high school; you can't avoid the bullies, and those on the bottom of the totem pole often just stay there. In the real world, freedom of association combined with modern technologies mean the oppressed can often simply avoid oppressors (or the poor can avoid associating with the rich, the dumb with the smart, etc).

I think you're stretch evolutionary psych a bit too far. Yes people spend a lot of time arguing about how to fix the world, as if they could, but doing so is a signal of intelligence, loyalty to certain groups (i.e., liberals over conservatives), and probably other things I'm not thinking of. If people actually argued politics because their stone-age minds told them it was important, they'd do so more seriously. Instead, political decision-making is a mockery of science and truth (e.g., Robin Hanson and Bryan Caplan's critiques of democracies).

In other words, I think the greater freedom of association and better communication and transportation technologies have reduced negative hierarchical externalities. If people cared so much about relative income, they'd take the $100k over the $50k, and simply find new friends that weren't making more money than them. Of course, discussing money is impolite partially because it creates these externalities (though we do need to distinguish between stated preferences and revealed preferences; TGGP's link is excellent). So I don't see how our stone-age brains are all that handicapped here. We aren't living in tribal bands, where we need personal relationships for reliable trade. Losers in one hierarchy can simply leave it and join another (e.g., the school nerd playing WoW over varsity sports). Our institutions and technologies have evolved to deal with hierarchical issues.

This is getting rather off-topic, but there is an excellent EconTalk podcast where Russ Roberts blows some holes in inequality externality arguments (specifically how they can exist when most people don't know their neighbor's income with any sort of accuracy?).

Comment by Grant on Can't Unbirth a Child · 2008-12-29T04:22:33.000Z · LW · GW


Understood; though I'd call fraud coercion, the use of the word is a side-issue here. However, an AI improving humans could have an equally clear view of what not to mess with: their current goal system. Indeed, I think if we saw specialized AIs that improved other AIs, we'd see something like this anyway. The improved AI would not agree to be altered unless doing so furthered its goals; i.e. the improving was unlikely to alter its goal system.

Comment by Grant on Can't Unbirth a Child · 2008-12-29T03:34:48.000Z · LW · GW

Nick, thats why I said non-coercively (though looking back on it, that may be a hard thing to define for a super-intelligence that could easily trick humans into becoming schizophrenic geniuses). But isn't that a problem with any self-modifying AI? The directive "make yourself more intelligent" relies on definitions of intelligence, sanity, etc. I don't see why it would be any more likely to screw up human intelligence than its own.

If the survival of the human race is one's goal, I wouldn't think keeping us at our current level of intelligence is even an option.

Comment by Grant on Can't Unbirth a Child · 2008-12-29T02:33:55.000Z · LW · GW

I'm not sure I understand how sentience has anything to do with anything (even if we knew what it was). I'm sentient, but cows would continue to taste yummy if I thought they were sentient (I'm not saying I'd still eat them, of course).

Anyways, why not build an AI who's goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.