Open thread for December 9 - 16, 2013

post by NancyLebovitz · 2013-12-09T16:35:44.861Z · LW · GW · Legacy · 377 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

comment by B_For_Bandana · 2013-12-09T21:25:26.893Z · LW(p) · GW(p)

Today is the thirty-fourth anniversary of the official certification that smallpox had been eradicated worldwide. From Wikipedia,

The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:

Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.

Archaeological evidence shows signs of smallpox infection in the mummies of Egyptian pharaohs. There was a Hindu goddess of smallpox in ancient India. By the 16th century it was pandemic throughout the Old World, and epidemics with mortality rates of 30% were common. When smallpox arrived in the New World, it caused epidemics among Native Americans with mortality rates of 80-90%. By the 18th century it was pretty much everywhere except Australia and New Zealand, which successfully used intensive screening of travelers and cargo to avoid infection.

The smallpox vaccine was the first vaccine ever developed, by English physician Edward Jenner in 1796 (published in 1798). Vaccination programs in the wealthy countries made a dent in the pandemic, so that by WWI the disease was mostly gone from North America and Europe. The Pan American Health Organization launched a campaign in 1950 that eliminated smallpox from most of the Western Hemisphere, but worldwide there were still an estimated 50 million cases per year, of which roughly 2 million were fatal, mostly in Africa and India.

In 1959, the World Health Assembly adopted a resolution to eradicate smallpox worldwide. The campaign used ring vaccination to surround and contain outbreaks, and little by little the number of cases dropped. The last naturally occurring case of the deadlier Variola major was found in October 1975, in a two-year-old Bangladeshi girl named Rahima Banu, who recovered after medical attention from a WHO team; the last naturally occurring case of smallpox of any kind was Ali Maow Maalin of Somalia, in October 1977. For the next two years, the WHO searched for more cases (in vain) before declaring the eradication program successful.

Smallpox scarred, blinded, and killed hundreds of millions of people, on five continents, over thousands of years, and now it is gone. It did not go away on its own. Highly trained doctors invented, then perfected a vaccine; engineers found ways to manufacture it very cheaply; and lots of other serious, dedicated people resolved to protect every vulnerable human being on the surface of the Earth, and then went out and did it.

Because Smallpox Eradication Day marks one of the most heroic events in the history of the human species, it is not surprising that it has become a major global holiday in the past few decades, instead of inexplicably being an obscure piece of trivia I had to look up on Wikipedia. I'm just worried that as time goes on it's going to get too commercialized. If you're going to a raucous SE Day party like I am, have fun and be safe.

Replies from: Emile, None, knb
comment by Emile · 2013-12-09T21:45:09.517Z · LW(p) · GW(p)

This deserves some music:

Old King Plague is dead,
the smallpox plague is dead,
no more children dying hard
no more cripples living scarred
with the marks of the devil's kiss,
we still may die of other things
but we will not die of this.

Raise your glasses high
for all who will not die
to all the doctors, nurses too
to all the lab technicians who
drove it into the ground
if the whole UN does nothing else
it cut this terror down.

But scarce the headlines said,
the ancient plague was dead,
then they were filled with weapons new
toxic waste and herpes too,
and the AIDS scare coming on
ten new plagues will take its place
but at least this one is gone.

Population soars,
checked with monstrous wars
preachers rant at birth control
"Screww the body, save the soul",
bring new deaths off the shelves,
and say to Nature, "Mother, please,
we'd rather do it ourselves".

Old King Plague is dead,
the smallpox plague is dead,
no more children dying hard
no more cripples living scarred
with the marks of the devil's kiss,
we still may die of other things
but we will not die of this, oh no,
we will not die of this.

-- Leslie Fish, The Ballad of Smallpox Gone

comment by [deleted] · 2013-12-10T02:49:44.290Z · LW(p) · GW(p)

The virus now officially exists only as samples in freezers at two labs (at least, as far as the scientific community knows). These days I think even that is overkill for research purposes for this pathogen, what with the genome sequenced and the ability to synthesize arbitrary sequences artificially. If you absolutely must have part of it for research, make that piece again from scratch. Consign the remaining intact, replication-competent particles to the furnace where they belong.

EDIT: I found a paper in which smallpox DNA was extracted, and virus particles observed via electron microscopy, from a 50-year-old fixed tissue sample from a pathology lab that was not from one of the aforementioned collections. The paper doesn't say whether the sample was potentially infectious or merely had detectable levels of nucleic acids and particles. These things could be more complicated to destroy with 100% certainty than we thought...

comment by knb · 2013-12-09T23:16:39.731Z · LW(p) · GW(p)

With any luck, polio will be next.

comment by Tuxedage · 2013-12-10T19:14:32.796Z · LW(p) · GW(p)

At the risk of attracting the wrong kind of attention, I will publicly state that I have donated $5,000 to the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.

I have decided to post this because of "Why Our Kind Can't Cooperate". I have been convinced that people donating should publicly brag about it to attract other donors, instead of remaining silent about their donation, which leads to a false impression of the amount of support MIRI has.

Replies from: intrepidadventurer, Adele_L, somervta, Brillyant, V_V
comment by intrepidadventurer · 2013-12-11T19:13:38.399Z · LW(p) · GW(p)

This post, and reading "Why Our Kind Can't Cooperate", kicked me off my ass to donate. Thanks, Tuxedage, for posting.

Replies from: Tuxedage
comment by Tuxedage · 2013-12-13T21:46:46.489Z · LW(p) · GW(p)

.

comment by Adele_L · 2013-12-11T21:29:50.937Z · LW(p) · GW(p)

Would anyone else be interested in pooling donations to take advantage of the 3:1 deal?

Replies from: Tripitaka
comment by Tripitaka · 2013-12-18T23:53:24.581Z · LW(p) · GW(p)

I'd be interested, but only for the small sum of $100. Did anybody else take you up on that offer? Of course, I'd like to verify the pool person's identity before transferring money.

comment by somervta · 2013-12-11T05:25:17.071Z · LW(p) · GW(p)

You sir, are awesome.

comment by Brillyant · 2013-12-10T22:42:35.269Z · LW(p) · GW(p)

Interesting.

I have been convinced that people donating should publicly brag about it to attract other donors

It certainly seems to make sense for the sake of the cause for (especially large, well-informed) donors to make their donations public. The only downside seems to be a potentially conflicting signal on behalf of the giver.

instead of remaining silent about their donation, which leads to a false impression of the amount of support MIRI has.

I'm not sure this is true. Doesn't MIRI publish its total receipts? Don't most organizations that ask for donations?

Growing up Evangelical, I was taught that we should give secretly to charities (including, mostly, the church).

I wonder why? The official Sunday School answer is so that you remain humble as the giver, etc. I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?

Replies from: Tuxedage, gwern, ChristianKl
comment by Tuxedage · 2013-12-10T22:53:15.817Z · LW(p) · GW(p)

I'm not sure this is true. Doesn't MIRI publish its total receipts? Don't most organizations that ask for donations?

Total receipts may not be representative. There's a difference between MIRI getting funding from one person with a lot of money and from large numbers of people donating small(er) amounts. I was hoping this post would serve as a reminder that many of us on LW care about donating, rather than just a few very rich people like Peter Thiel or Jaan Tallinn.

Also, I suspect scope neglect may be at play -- it's difficult, on an emotional level, to tell the difference between $1 million worth of donations, ten million, or a hundred million. Seeing each donation that adds up to that amount may help.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-12-11T09:15:02.424Z · LW(p) · GW(p)

Seeing each donation that adds up to that amount may help.

Yes, because it would show how many people donated. Number of people = power, at least in our brains.

The difference between one person donating $100,000, versus one person donating $50,000 plus ten people donating $5,000 each, is that in the latter case your team has eleven people. It is the same amount of money, but emotionally it feels better. It probably has other advantages too (such as less dependence on the whims of a single person), but maybe I am just rationalizing here.

comment by gwern · 2013-12-13T21:12:42.632Z · LW(p) · GW(p)

I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?

There may not be anything to explain: the early Christian church grew very slowly. Perhaps secret almsgiving simply isn't a good idea.

Replies from: Brillyant
comment by Brillyant · 2013-12-13T21:33:45.877Z · LW(p) · GW(p)

Hm. Possibly. Though it does still seem to be a rather popular convention in churches today to interpret the scriptures as calling for secret offerings.

I would imagine popular interpretations of scriptures on giving evolve based on the goals of the church (to get $$$), kept in check only by needing to be believable enough to the member congregations.

Tithing seems to work for the church, so lots of churches resurrect it from the OT on really shaky exegesis and make it a part of the rules. If tithing didn't work for the church, they could easily make it go away in the same way they get rid of tons of outdated stuff from the OT (and the NT).

Secret offerings seem similar to me. I'd imagine they could make the commands for secret giving go away with some simple hermeneutical waves of the hand if the practice didn't benefit them.

comment by ChristianKl · 2013-12-13T20:02:45.531Z · LW(p) · GW(p)

I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?

This gives the church an information advantage. Information is power. It gives them the opportunity to make it seem like everyone is donating less than their neighbors.

Replies from: drethelin, Brillyant
comment by drethelin · 2013-12-13T20:06:38.645Z · LW(p) · GW(p)

or that "Christians" donate a lot when it's really just a few of them.

comment by Brillyant · 2013-12-13T21:01:35.178Z · LW(p) · GW(p)

Ah. So the leaders can give the ongoing message to "give generously" to the group, and as long as the giving data is kept secret and no one ever tells anyone else how much they gave, each member will feel compelled to continue giving more in an effort to (a) "please God" and (b) gain favor in the eyes of the leaders by keeping up with, or outgiving, the other members. Is this what you are saying? If not, can you elaborate?

Replies from: ChristianKl
comment by ChristianKl · 2013-12-13T21:56:32.459Z · LW(p) · GW(p)

Look at Mormons. They have a rule that you have to donate 10% of your income. If you don't, then you aren't pleasing God, and God might punish you.

In reality the average Mormon doesn't donate 10%, but might feel guilty for not doing so. If someone who donates 7% knew that they were donating above average, they would feel less guilty about not meeting the goal of donating 10%.

Replies from: Brillyant
comment by Brillyant · 2013-12-13T22:39:01.962Z · LW(p) · GW(p)

Sure, but why 10%? Why not 15%? Or 20%?

It is possible that they are setting the bar too low. You might have many people who would have given 30%, had the command been not for 10% but for 30%.

Replies from: ChristianKl, drethelin
comment by ChristianKl · 2013-12-13T22:48:56.901Z · LW(p) · GW(p)

It is possible that they are setting the bar too low.

Yes, it is. Choosing that particular number might not be optimal. But there is a cost to setting the number too high: if people don't think they can reach the standard, they might not even try.

Replies from: Brillyant
comment by Brillyant · 2013-12-14T00:06:26.323Z · LW(p) · GW(p)

Right.

I'd guess 10% is not an arbitrary number, but rather a sort of market equilibrium that happens to be supportable by a certain interpretation of OT scripture. It might just as well have been 3% or 7% or 12%, as these numbers are all pretty significant in the OT, and could have been used by leadership to impose that percentage on laypeople.

In any case, in my experience within the church, there are tithes... AND then there are offerings, which include numerous different causes to give to on any given Sunday. It was often stated that these causes (building projects, missions outreach, etc.) were in addition to your tithe.

It is funny to me... it is almost like the reverse of a compensation plan you'd build for a team of commissioned salespeople. Instead of optimizing the plan to incentivize sales performance by motivating your salespeople to sell, the church may have evolved its doctrines and practices on giving to optimize for collecting revenue by motivating its members to give. Ha.

Replies from: gjm
comment by gjm · 2013-12-14T00:28:15.000Z · LW(p) · GW(p)

It might just as well have been 3% or 7% or 12%, as these numbers are all pretty significant in the OT

This is of course no argument against anything substantive you're saying, but while the numbers 3, 7, and 12 are certainly all significant in the OT, the idea of a percentage surely wasn't. I can see 1/3, or 1/7, or 1/12, though.

Replies from: Brillyant
comment by Brillyant · 2013-12-14T02:17:16.010Z · LW(p) · GW(p)

Good point. Though, as I recall, there isn't much basis in the OT for the modern-day concept of tithing at all, percentage or otherwise. Christianity points to verses about giving 1/10th of your crops to the priests as the basis.

If they really wanted to change the rules and up it to 1/7th, or 12%, or anything else, they could come up with some new basis for it using fancy hermeneutics.

This is sort of what is happening right now with homosexuality. Many churches are changing their views. They are justifying that by reinterpreting the verses they've used to condemn it in the past.

In fact, you can pretty much get the Bible to support any position or far-fetched belief you'd like. You only need a few verses... and it's a big book.

This is one of my favorites.

comment by V_V · 2013-12-13T20:12:46.057Z · LW(p) · GW(p)

Sounds like somebody is trying to purchase status...

Replies from: drethelin, JGWeissman
comment by drethelin · 2013-12-13T23:07:04.307Z · LW(p) · GW(p)

We should encourage people to purchase status when that purchase involves doing things we want or giving money to causes we like. Unless you prefer traditional schemes for status assignment like height, handsomeness, ability to throw a ball, and mass murder.

Replies from: V_V
comment by V_V · 2013-12-15T15:47:48.107Z · LW(p) · GW(p)

See my comment on the "In Praise of Tribes that Pretend to Try" thread.

If donating to purchase status is accepted and encouraged, it risks becoming the main motive behind donations. This in turn creates perverse incentives for the recipients of such donations.

Replies from: drethelin
comment by drethelin · 2013-12-15T19:51:26.053Z · LW(p) · GW(p)

I think it's already the main psychological motivation behind most donations. I think it's better to harness that than not to.

comment by JGWeissman · 2013-12-13T20:37:32.396Z · LW(p) · GW(p)

It sounds to me like somebody is purchasing utilons, using themselves as an example to get other people to also purchase utilons, and incidentally deriving a small amount of well deserved status from the process.

Replies from: V_V
comment by V_V · 2013-12-13T21:13:53.074Z · LW(p) · GW(p)

This isn't the most parsimonious explanation for that behaviour.

comment by TsviBT · 2013-12-10T00:35:02.626Z · LW(p) · GW(p)

PSA: If you want to get store-bought food (as opposed to eating out all the time or eating Soylent), but you don't want to have to go shopping all the time, check to see if there is a grocery delivery service in your area. At least where I live, the delivery fee is far outweighed by the benefits: almost no shopping time, slightly cheaper food, and decreased cognitive load (I can just copy my previous order and tweak it as desired).

Replies from: Metus, dougclow, hyporational, Bakkot, John_Maxwell_IV
comment by Metus · 2013-12-10T18:08:53.661Z · LW(p) · GW(p)

This makes me wonder: what are some simple ways to save a significant amount of time that the average person does not think of?

Replies from: James_Miller, hyporational, Gunnar_Zarncke, solipsist, Desrtopa, lmm, None
comment by James_Miller · 2013-12-10T18:12:08.057Z · LW(p) · GW(p)

Stop watching TV.

comment by hyporational · 2013-12-11T06:05:10.195Z · LW(p) · GW(p)

Sleep enough.

comment by solipsist · 2013-12-11T15:03:25.342Z · LW(p) · GW(p)

Move close to where you work (even if it means you have to live in a smaller place).

Replies from: hyporational
comment by hyporational · 2013-12-12T03:17:55.514Z · LW(p) · GW(p)

If you don't have a car, study on the bus/train, or treat the commute as bicycling exercise if the distance is relatively short and you can take a shower.

comment by Desrtopa · 2013-12-12T22:20:58.546Z · LW(p) · GW(p)

Possibly cooking very large meals and saving the rest. If you want to save money by cooking from scratch rather than buying prepared food or eating out, it can help to prepare several meals' worth at a time.

comment by lmm · 2013-12-10T20:39:29.209Z · LW(p) · GW(p)

Pay for an online assistant. It makes you feel awkward but I hear it's quite effective.

comment by [deleted] · 2013-12-13T04:03:44.441Z · LW(p) · GW(p)

Dave Asprey claims that you can get by fine on five hours of sleep if you optimize it to spend as much time in REM and delta sleep as possible. This appeals to me more than polyphasic sleep does. Link

Also, I was intrigued when xkcd mentioned the 28-hour day, but I don't know of anyone who has maintained that schedule.

Replies from: NancyLebovitz, Gunnar_Zarncke
comment by NancyLebovitz · 2013-12-13T11:33:07.406Z · LW(p) · GW(p)

Dave Asprey claims he can do well on 5 hours of sleep, and then makes the further claim that any other adult (he recommends not trying serious sleep reduction until you're past 23) can also do well on 5 hours. To judge by a quick look at the comments, rather few of his readers are trying this, let alone succeeding at it.

Do you have any information about whether Asprey's results generalize?

Replies from: army1987, None
comment by A1987dM (army1987) · 2013-12-13T16:43:25.125Z · LW(p) · GW(p)

I am under the impression that nearly anybody who talks about sleep is guilty of Generalizing from One Example.

comment by [deleted] · 2013-12-13T18:38:00.197Z · LW(p) · GW(p)

Not really.

comment by Gunnar_Zarncke · 2013-12-14T22:26:20.930Z · LW(p) · GW(p)

There are by now some quite extensive studies of the amount of required or healthy sleep. Sleep duration is roughly normally distributed between 5 and 9 hours, and for some of those getting 5 or fewer hours of sleep this appears to be healthy:

Jane E. Ferrie, Martin J. Shipley, Francesco P. Cappuccio, Eric Brunner, Michelle A. Miller, Meena Kumari, Michael G. Marmot: A Prospective Study of Change in Sleep Duration: Associations with Mortality in the Whitehall II Cohort.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2276139/pdf/aasm.30.12.1659.pdf

So Dave Asprey is probably one of those 1% for whom this is correct.

Some improvements (or changes) may be possible for most of us, though. You can get along with less sleep if you sleep at your optimal time (which differs depending on your genes, esp. the PER3 gene) and if you fall asleep quickly.

Polyphasic sleep may significantly reduce your sleep total, but nobody seems to be able to say what the health effects are. It might be that it risks your long-term health.

comment by dougclow · 2013-12-13T08:21:31.399Z · LW(p) · GW(p)

Another benefit for me is fewer mistakes in picking items from the list.

Some people don't use online shopping because they worry pickers may make errors. My experience is that they do, but at a much lower rate than I do when I go myself. I frequently miss minor items off my list on the first circuit through the shop, and don't go back for them because it'd take too long to find them. I am also influenced by in-store advertising, product arrangements, "special" offers, and tiredness into purchasing items that I would rather not. It's much easier to whip out a calculator to work out whether an offer really is better when you're sat calmly at your laptop than when you're exhausted towards the end of a long shopping trip.

You'd expect paid pickers to be better at it -- they do it all their working hours; I only do it once or twice a month. Also, all the services I've used (in the UK) allow you to reject any mistaken items at your door for a full refund -- which you can't do for your own mistakes. The errors pickers make are different from the ones I would make, which makes them more salient -- but on average they are no more inconvenient in impact.

comment by hyporational · 2013-12-10T13:54:46.708Z · LW(p) · GW(p)

Alternative: buy a freezer and buy your food in bulk.

Replies from: bramflakes, Lumifer
comment by bramflakes · 2013-12-10T16:44:16.823Z · LW(p) · GW(p)

My family does this and it's not such a good idea. Old forgotten food will accumulate at the bottom and you'll have less usable space at the top. Chucking out the old food is a) a trivial inconvenience and b) guilt-inducing.

Unless it's one of those freezers with sliding trays.

Replies from: hyporational
comment by hyporational · 2013-12-10T17:34:59.547Z · LW(p) · GW(p)

Unless it's one of those freezers with sliding trays.

I have one of those. I thought chest models were antiquated.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T17:54:02.102Z · LW(p) · GW(p)

I thought chest models were antiquated.

They are standard in the US. It's like washers: top-loaders dominate in the US and front-loaders dominate in Europe.

Replies from: Prismattic
comment by Prismattic · 2013-12-11T00:48:24.497Z · LW(p) · GW(p)

I disagree with this. Having lived in the US my entire life (specifically MA and VA), I've been in very few homes that had chest freezers, and as far as I recall, none that only had chest freezers (as opposed to extra storage beyond a combination refrigerator/freezer).

I'm not willing to pay to resolve this difference of perception, but if one wanted to do so, the information is probably available here.

Replies from: Lumifer, Nornagest
comment by Lumifer · 2013-12-11T01:01:48.204Z · LW(p) · GW(p)

I am not sure we disagree. I'm not saying that people are using chest freezers instead of normal refrigerators. I'm saying that if a family buys a separate freezer in addition to a regular fridge, in the US that separate freezer is likely to be a chest freezer.

comment by Nornagest · 2013-12-11T01:04:37.181Z · LW(p) · GW(p)

Here on the West Coast I've seen both standing and chest models, although combination refrigerator/freezers are far more common than either. I associate the chest style with hunters and older people, but that likely reflects my upbringing; I wouldn't hazard a guess as to which is more common overall.

comment by Lumifer · 2013-12-10T17:53:09.646Z · LW(p) · GW(p)

buy a freezer and buy your food in bulk.

Assuming you are largely indifferent between fresh and frozen food (a data point: I'm not).

Replies from: hyporational
comment by hyporational · 2013-12-10T19:42:09.650Z · LW(p) · GW(p)

I find this a false dichotomy. Care to muster a rebuttal?

Replies from: Lumifer
comment by Lumifer · 2013-12-10T19:49:32.853Z · LW(p) · GW(p)

Empiricism! :-)

Most of the food that I eat doesn't freeze or doesn't freeze well (think fruits and vegetables). Frozen meat is OK for a stew but not at all OK for steaks.

I find -- based on my personal experience -- the texture, aromas, etc. of fresh food to be quite superior to those of frozen food.

Replies from: hyporational, Vaniver, drethelin
comment by hyporational · 2013-12-10T19:57:28.025Z · LW(p) · GW(p)

Ah, it's funny how easily I forget food isn't just about fueling your cells.

I was expecting some sort of a nutrition based argument.

Replies from: Lumifer, army1987
comment by Lumifer · 2013-12-10T20:15:31.388Z · LW(p) · GW(p)

I would point out that it's unwise to ignore one of the major sources of pleasure in this world :-)

comment by A1987dM (army1987) · 2013-12-11T11:50:39.229Z · LW(p) · GW(p)

Must... resist... mentioning a particular stereotype about northern Europe.

comment by Vaniver · 2013-12-12T02:18:25.459Z · LW(p) · GW(p)

I find -- based on my personal experience -- the texture, aromas, etc. of fresh food to be quite superior to those of frozen food.

I hear that if you stir-fry vegetables, then frozen is a better option. (I eat most of my vegetables raw or dehydrated, neither of which seems to do well if you freeze them first.)

Replies from: Lumifer, NancyLebovitz
comment by Lumifer · 2013-12-12T16:22:00.747Z · LW(p) · GW(p)

I hear that if you stir-fry vegetables, then frozen is a better option.

I think it depends on whether you can get your heat high enough.

The point of stir-frying frozen veggies is to brown the outside while not overcooking the inside. Normally this is done by cooking non-frozen veggies at very high heat but a regular house stove can't do it properly -- so a workaround is to use frozen.

comment by NancyLebovitz · 2013-12-12T03:09:35.523Z · LW(p) · GW(p)

How does freeze-them-yourself compare to buying vegetables which are already frozen?

Replies from: tut, Lumifer, Vaniver
comment by tut · 2013-12-12T14:04:15.333Z · LW(p) · GW(p)

The good kind of already frozen vegetables are much tastier, have better texture and have kept more of their nutrients. That is because an ordinary freezer is not nearly quick enough to preserve most vegetables.

comment by Lumifer · 2013-12-12T16:23:45.883Z · LW(p) · GW(p)

Industrially-frozen food is frozen much faster which is good. A house freezer is not powerful (or cold) enough to freeze food sufficiently fast.

comment by Vaniver · 2013-12-12T03:55:16.275Z · LW(p) · GW(p)

I hear that buying them already frozen is cheaper, more sanitary, and less work, but I haven't looked into it myself.

comment by drethelin · 2013-12-11T18:33:32.656Z · LW(p) · GW(p)

re: steaks, that's just not accurate. Frozen steaks are great! I say this as someone who filled his freezer with a quarter of a cow.

Replies from: Lumifer
comment by Lumifer · 2013-12-11T18:39:14.432Z · LW(p) · GW(p)

Maybe I just don't know how to deal with frozen steaks, but for me fresh-meat steaks are much, much juicier.

comment by Bakkot · 2013-12-11T17:35:32.953Z · LW(p) · GW(p)

For those in the community living in the south Bay Area: https://www.google.com/shopping/express/

comment by John_Maxwell (John_Maxwell_IV) · 2013-12-13T06:20:49.648Z · LW(p) · GW(p)

Regarding food in particular, I'm still wishing Romeo Stevens would commercialize his tasty and nutritious soylent alternative so I could buy it the same way I buy juice from the grocery store.

comment by JoshuaZ · 2013-12-09T23:57:44.115Z · LW(p) · GW(p)

New work suggests that life could have arisen and survived a mere 15 million years after the Big Bang, when the microwave background radiation would have provided sufficient energy to keep almost all planets warm. Summary here, and actual article here. This is still very preliminary, but the possibility is, at some level, extremely frightening. It adds billions of years during which intelligent life could have arisen that we don't see, and if anything suggests that the Great Filter is even more extreme than we thought.

Replies from: passive_fist, Douglas_Knight, solipsist, bramflakes, drethelin, None, DanielLC
comment by passive_fist · 2013-12-10T00:58:39.293Z · LW(p) · GW(p)

Now that is scary, although there are a few complications. Rocky bodies were probably extremely rare during that time since the metal enrichment of the Universe was extremely low. You can't build life out of just hydrogen and helium.

comment by Douglas_Knight · 2013-12-10T02:44:50.107Z · LW(p) · GW(p)

Is that a relevant number?

Doesn't the relevant number of opportunities for life to appear have units of mass-time?

Isn't the question not how early was some Goldilocks zone, but how much mass was in a Goldilocks zone for how long? This says that the whole universe was a Goldilocks zone for just a few million years. The whole universe is big, but a few million years is small. And how much of the universe was metallic? The paper emphasizes that some of it was, but isn't this a quantitative question?

Replies from: JoshuaZ, Nornagest
comment by JoshuaZ · 2013-12-10T03:06:25.099Z · LW(p) · GW(p)

I agree that a few million years is small, and that the low metal content would be a serious issue (in addition to being a problem for life forming, it would also make planets rare, as pointed out by bramflakes in their reply). However, the real concern as I see it is this: if everything was like this for a few million years, then if life did arise anywhere (and you have a whole universe in which it could arise), it seems highly plausible that as the cooldown occurred, some forms of life would have adapted to the cooler environment. This makes panspermia more plausible and thus makes life in general more likely. Additionally, it gives life more of a chance to get lucky if it managed to get into one of the surviving safe zones (e.g. something like the Mars-Earth biotransfer hypothesis).

I think you may be correct that this isn't a complete run-around-and-panic level of update, but it is still disturbing. My initial estimate of how bad this could be was likely overblown.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-12-10T03:58:45.607Z · LW(p) · GW(p)

I'm nervous about the idea that life might adapt to conditions in which it cannot originate. Unless you mean spores, but they have to wait for the world to warm up.

As for panspermia, we have a few billion years of modern conditions before the Earth, which is itself already a problem. I think the natural comparison is the size of that Goldilocks zone to the very early one. But I don't know which is bigger.

Here are three environments. Which is best for the radiation of spores?
(1) a few million years where every planet is wet
(2) many billion years, all planets cold
(3) a few billion years, a few good planets.

The first sounds just too short for anything to get anywhere, but the universe is smaller. If one source of life produces enough spores to hit everything, then greater time depth is better, but if they need to reproduce along the way, the modern era seems best.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-12-10T04:04:25.697Z · LW(p) · GW(p)

I'm nervous about the idea that life might adapt to conditions in which it cannot originate.

Why? That happened on Earth. It is pretty likely, for example, that life couldn't originate in an environment like the Sahara desert, but life can adapt and survive there.

I do agree that spores are one of the more plausible scenarios. I don't know enough to really answer the question, and I'm not sure that anyone does, but your intuition sounds plausible.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-12-10T04:52:37.545Z · LW(p) · GW(p)

There's barely any life in the Sahara. It looks a lot like spores to me. I want a measure of life that includes speed. Some kind of energy use, or maybe cell divisions. I expect the probability of life developing in a place to be proportional to the amount of life there after it arrives. Maybe that's silly; there certainly are exponential effects of molecules arriving at the same place at the same time that aren't relevant to the continuation of life. But if you can rule out this claim, I think your model of the origin of life is too detailed.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-12-10T05:02:24.534Z · LW(p) · GW(p)

There's barely any life in the Sahara. It looks a lot like spores to me.

I'm not sure what you mean by this.

I want a measure of life that includes speed.

Do you mean something like the idea that even if life can survive in a harsh environment, the chance that it will evolve into anything beyond a simple organism is low?

comment by Nornagest · 2013-12-10T04:44:14.999Z · LW(p) · GW(p)

We should have the data by now to take a whack at the metallicity side of that question, if only by figuring out how many Population II stars show up in the various extrasolar planet surveys in proportion to Population I. I don't think I've ever seen a rigorous approach to this, but I'd be surprised if someone hasn't done it.

One sticking point is that the metallicity data would be skewed in various ways (small stars live longer and are therefore more likely to be Pop II), but that shouldn't be a showstopper -- the issues are fairly well understood.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-12-10T04:56:01.445Z · LW(p) · GW(p)

The paper mentions a model. Maybe the calculation is even done in one of the references. The model does not sound related to the observations you mention.

comment by solipsist · 2013-12-10T03:22:19.420Z · LW(p) · GW(p)

I don't think this is frightening. If you thought life couldn't have arisen more than 3.6 billion years ago, but then discover that it could have arisen 13.8 billion years ago, you should be at most 4 times as scared (a quick check of that ratio is sketched below).

The number of habitable planets in the galaxy over the number of inhabited planets is a scary number.

The time span of Earth civilization over the time span of Earth life is a scary number.

4 is not a scary number.
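
A quick check of that ratio (a minimal sketch, assuming the size of the update scales linearly with the length of the window in which life could have arisen):

    # Prior window: life could only have arisen within the last ~3.6 Gyr.
    # New window: essentially the whole age of the universe, ~13.8 Gyr.
    old_window_gyr = 3.6
    new_window_gyr = 13.8
    print(new_window_gyr / old_window_gyr)  # ~3.83, i.e. at most ~4x as scared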

Replies from: Douglas_Knight, khafra, passive_fist, Leonhart
comment by Douglas_Knight · 2013-12-10T03:34:44.091Z · LW(p) · GW(p)

If it were just a date then, yes, a factor of 4 is lost in the noise. But switching to panspermia changes the calculation. Try Overcoming Bias. [Added: maybe this is only a change under Robin Hanson's hard-steps model.]

comment by khafra · 2013-12-12T13:50:41.750Z · LW(p) · GW(p)

It changes my epistemic position by a helluva lot more than a factor of 4. If an interstellar civilization arose somewhere in the now-visible universe, at a time drawn uniformly from the last 3.6 billion years, there's a much smaller chance we'd currently (or ever) be within its light cone than if it had developed 13.8 billion years ago.

comment by passive_fist · 2013-12-10T03:50:14.677Z · LW(p) · GW(p)

It's potentially scary not because of the time difference, but because of the quantity of habitable planets. It's understood that current conditions in the Universe make it so that only relatively few planets are in the habitable zone. But if the Universe was warm, then almost all planets would be in the habitable zone, making the likelihood of life that much higher.

As I said in my reply to JoshuaZ though, the complication is that rocky planets were probably much rarer than they are now.

comment by Leonhart · 2013-12-11T18:55:00.368Z · LW(p) · GW(p)

4 is not a scary number

It's the scariest number.

comment by bramflakes · 2013-12-10T00:58:15.999Z · LW(p) · GW(p)

There weren't any planets 15 million years after the Big Bang. The first stars formed 100 million years after the Big Bang, and you need another few million on top of that for the planets to form and cool down.

comment by drethelin · 2013-12-10T18:43:50.248Z · LW(p) · GW(p)

It seems to take a lot more than 15 million years to get from "life" to "intelligent life". According to the article this period would only have lasted a few million years, so at most we would probably get a lot of unicellular life arising and then dying during the cool-off.

comment by [deleted] · 2013-12-10T03:18:08.401Z · LW(p) · GW(p)

1 - Why should it be surprising that no intelligent life arose from a set of places that were likely habitable for only 5 million years (if they existed at all, which is doubtful)?

2 - I raise the possibility of outcomes for intelligent life other than destruction or expansion through the universe.

Edit: Gah, that's what I get for leaving this window open while about 8 other people commented

Replies from: JoshuaZ
comment by JoshuaZ · 2013-12-10T03:23:56.542Z · LW(p) · GW(p)

See the conversation with Doug in the subthread above.

comment by DanielLC · 2013-12-10T20:16:53.813Z · LW(p) · GW(p)

Does it add billions of years? The paper isn't saying that life could have arisen 15 million years after the Big Bang and survived ever since.

Replies from: shminux
comment by shminux · 2013-12-10T20:41:58.431Z · LW(p) · GW(p)

The paper implies that it only adds millions of years, not billions.

a new regime of habitability made possible for a few Myr by the uniform CMB radiation

Once the CMB cools down with the expansion of the Universe, the Goldilocks conditions disappear. The CMB temperature scales as (1+z), and in the matter-dominated era the age scales as (1+z)^(-3/2), so the temperature falls roughly as t^(-2/3): 300 K at 15 million years drops below freezing by about 17 million years, and to roughly 190 K by 30 million years.
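
A minimal numeric sketch of that scaling (assuming pure matter domination, so T ∝ t^(-2/3), with the 300 K at 15 Myr reference point taken from the comment above):

    # CMB temperature vs. cosmic age in the matter-dominated era: T ~ t^(-2/3).
    T_REF_K = 300.0    # reference CMB temperature (kelvin)
    T_REF_MYR = 15.0   # reference cosmic age (megayears)

    def cmb_temperature(t_myr: float) -> float:
        """CMB temperature in kelvin at cosmic age t_myr (in megayears)."""
        return T_REF_K * (t_myr / T_REF_MYR) ** (-2.0 / 3.0)

    for t in (15, 17.3, 30, 100):
        print(f"t = {t:5.1f} Myr -> T = {cmb_temperature(t):5.1f} K")
    # t =  15.0 Myr -> T = 300.0 K
    # t =  17.3 Myr -> T = 272.8 K  (water's freezing point)
    # t =  30.0 Myr -> T = 189.0 K
    # t = 100.0 Myr -> T =  84.7 K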

comment by sakranut · 2013-12-16T00:10:49.963Z · LW(p) · GW(p)

I decided I'd share the list of questions I try to ask myself every morning and evening. I usually spend about thirty seconds on each question, just thinking about them, though I sometimes write my answers down if I have a particularly good insight. I find they keep me pretty well-calibrated to my best self. Some are idiosyncratic, but hopefully these will be generally applicable.

A. Today, this week, this month:

  1. What am I excited about?
  2. What goals do I have?
  3. What questions do I want to answer?
  4. What specific ways do I want to be better?

B. Yesterday, last week, last month:

  1. What did I accomplish that I am proud of?
  2. In what instances did I behave in a way I am proud of?
  3. What did I do wrong? How will I do better?
  4. What do I want to remember? What adventures did I have?

C. Generally: 9: If I'm not doing exactly what I want to be doing, why?

Replies from: curiousepic, shminux, None
comment by curiousepic · 2013-12-16T20:51:34.733Z · LW(p) · GW(p)

How long have you been doing this, and have you noticed any effects?

Replies from: sakranut
comment by sakranut · 2013-12-16T22:40:48.779Z · LW(p) · GW(p)

For about a month and a half, though I forget about 25% of the time. I haven't noticed any strong effects, though I feel as if I approach the day-to-day more conscientiously and often get more out of my time.

Replies from: wadavis
comment by wadavis · 2013-12-17T16:53:47.686Z · LW(p) · GW(p)

For a term in university, I followed a similar method: every day I would post "Today's Greatest Achievement:" on the relevant social media of the time. There was a noticeable improvement in happiness and extracurricular productivity as I more actively sought out novel experiences, active community roles, and academic side projects. The daily reminder led to a far more conscientious use of my time.

The combination of being reminded that I had spent all weekend playing video games, and of broadcasting to my entire social circle that my greatest achievement in the past 48 hours was in a mindless video game, led to immediate behavior changes.

comment by shminux · 2013-12-16T19:17:08.199Z · LW(p) · GW(p)

9: If I'm not doing exactly what I want to be doing, why?

That's the hardest one of them all; I'm still searching for answers.

comment by [deleted] · 2013-12-18T22:10:32.166Z · LW(p) · GW(p)

What does it mean for "you" to not be doing exactly what you "want"? Do you downplay or ignore your non-conscious thought processes?

comment by Username · 2013-12-11T04:31:44.016Z · LW(p) · GW(p)

Are there any translation efforts in academia? It bothers me that there may be huge corpuses of knowledge that are inaccessible to most scientists or researchers simply because they don't speak, say, Spanish, Mandarin, or Hindi. The current solution to this problem seems to be 'everyone learn English', which seems to do OK in the hard sciences. But I fear there may be a huge missed opportunity in the social sciences, especially because Americans are WEIRD and not necessarily psychologically or behaviorally representative of the world population. (Link is to an article; link to the cited paper here: pdf)

Replies from: sixes_and_sevens, Douglas_Knight, Metus, ChristianKl
comment by sixes_and_sevens · 2013-12-11T12:44:56.968Z · LW(p) · GW(p)

The plural of "corpus" is "corpora". I don't say this to be pedantic, but because the word is quite lovely, and deserves to be used more.

comment by Douglas_Knight · 2013-12-13T17:33:24.142Z · LW(p) · GW(p)

If a hypothetical bothers you, maybe you should hold off proposing solutions and instead investigate whether it is a real problem.

Replies from: gwern
comment by gwern · 2013-12-13T18:14:34.867Z · LW(p) · GW(p)

I'm not sure losing the non-English literature is a big problem. A lot of foreign research is really bad. A little demonstration from 5 days ago: I criticized a Chinese study on moxibustion https://plus.google.com/103530621949492999968/posts/TisYM64ckLM

This was translated into / written in English and published in a peer-reviewed journal (Neural Regeneration Research). And it's complete crap.

Of course there is very bad research on alternative medicine published in the West too, but as the links I provide show, Chinese research is systematically and generally of very low quality. If China cannot produce good research, what can we expect of other countries?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-12-13T19:57:15.531Z · LW(p) · GW(p)

The language that I think most plausibly contains a disconnected scientific literature is Japanese.

comment by Metus · 2013-12-11T06:40:09.835Z · LW(p) · GW(p)

Some time ago someone linked a paper indicating that there are benefits to the fragmentation of academia by language barriers: fewer people are exposed to a single dominant view, allowing them to come up with new ideas. One cited example was anthropology, which had a Russian tradition and an Anglosphere tradition.

I'd assume there aren't any major translation efforts, as being a translator isn't nearly as effective as publishing something of your own.

Replies from: Viliam_Bur, NancyLebovitz
comment by Viliam_Bur · 2013-12-11T09:44:37.968Z · LW(p) · GW(p)

being a translator isn't nearly as effective as publishing something of your own.

Publishing your own scientific paper brings you more rewards, but translating another person's article requires less time and less scientific skill (just enough to understand the vocabulary and follow the arguments).

If someone paid me to do it, I would probably love a job translating scientific articles into my language. It would be much easier for me to translate a dozen articles than to create one. And if I only translated articles that passed some filter, for example those published in peer-reviewed journals, I could probably translate the output of twenty or fifty scientists.

Replies from: Username
comment by Username · 2013-12-11T10:05:18.257Z · LW(p) · GW(p)

It seems like there could definitely be money in 'international' journals for different fields, which would aggregate credible foreign papers and translate them. Interesting that they don't seem to exist.

Replies from: Richard_Kennaway, Metus
comment by Richard_Kennaway · 2013-12-12T10:21:13.525Z · LW(p) · GW(p)

How effective would it be to use human expertise to translate just the contents pages of journals, with links to Google Translate for the bodies of the papers? Or perhaps use humans to also translate the abstracts?

Does anything like this exist already?

Replies from: satt
comment by satt · 2013-12-13T01:27:25.325Z · LW(p) · GW(p)

Idea that popped into my head: it might be straightforward to make a frontend for the arXiv that adds a "Translate this into" drop-down list to every paper's summary page. (Using the list could redirect the user to Google Translate, with the URL for the PDF automatically fed into the translator.) As far as I know, no one has done this, but I could be wrong.
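
A minimal sketch of the redirect step (assuming Google Translate's web-page translator accepts a target URL via query parameters; the parameter names and the arXiv ID below are illustrative assumptions, not a documented API):

    from urllib.parse import urlencode

    def translate_link(arxiv_id: str, target_lang: str) -> str:
        """Build a link asking Google Translate to render an arXiv paper's PDF."""
        pdf_url = f"https://arxiv.org/pdf/{arxiv_id}"  # arXiv's PDF endpoint
        params = urlencode({"sl": "auto", "tl": target_lang, "u": pdf_url})
        return f"https://translate.google.com/translate?{params}"

    # Each option in the drop-down list would map to one such link.
    # "1234.5678" is a placeholder ID, not a real paper.
    print(translate_link("1234.5678", "en"))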

comment by Metus · 2013-12-11T14:39:09.158Z · LW(p) · GW(p)

This chain is so interesting. As a grad student, I could translate some papers and make some decent money under such a hypothetical regime.

comment by NancyLebovitz · 2013-12-11T14:12:23.470Z · LW(p) · GW(p)

The Body Electric mentioned that the Soviets were ahead of the West in studying electrical fields in biology because (not sure of the date -- sometime before the seventies) electricity sounded too much like elan vital to Westerners.

Replies from: Douglas_Knight, byrnema
comment by Douglas_Knight · 2013-12-11T18:48:39.755Z · LW(p) · GW(p)

Which Body Electric? I don't see it in Becker and Selden, but maybe I don't know what to look for.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-11T19:53:03.258Z · LW(p) · GW(p)

Possibly this Body Electric. It's at least about the right subject, but I'd have sworn I read it much earlier than 1998, and my copy (buried somewhere) probably had a purple cover.

The cover on the hardcover looks more familiar, and at least it's from 1985.

Wikipedia makes it sound like the right book.

Where were you searching? You had the authors right.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-12-11T21:33:09.424Z · LW(p) · GW(p)

I looked at that book on google books. I searched for "Soviet," "elan," etc, and did not see the story you mentioned.

Added: Amazon says that the book uses these words a lot more than google says, but I didn't look at many hits.

comment by byrnema · 2013-12-11T16:03:13.299Z · LW(p) · GW(p)

That's interesting. I read your comment out of context and didn't realize you were making a point about the language. I agreed that I don't like thinking about electricity in animals (or, more strongly, any coordinated magnetic phenomena, etc.) because of this association. There is a similarity in the sounds ("electrical" and "elan vital"), but the concepts are also close in conceptual space... perhaps the Soviets lacked this ugh field altogether.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-11T17:21:48.278Z · LW(p) · GW(p)

I was using "sounded like" metaphorically. I assume they knew the difference in meaning, but were affected by the similarity of the concepts and worried about their reputations.

I guessed that the Soviets were more willing to do the research because Marxism was kind of like weird science, so they were willing to look into weird science in general. However, this is just a guess. A more general hypothesis is that new institutions are more willing to try new things.

comment by ChristianKl · 2013-12-13T15:24:26.319Z · LW(p) · GW(p)

If you know English and Mandarin, you might make an academic career out of writing meta-analyses of topics discussed in Mandarin research papers.

Replies from: Barry_Cotter
comment by Barry_Cotter · 2013-12-14T09:02:11.000Z · LW(p) · GW(p)

I am not professionally involved in these fields, but I have read that those who are hold a very jaundiced opinion of Chinese and Indian scientific research. Apparently, completely ignoring their publications is a good heuristic unless at least one of the following holds: a foreign co-author, an author who did their doctorate in the first world, or an institution or author with a significant reputation. Living in China, and having some minimal experience with the Chinese attitude to plagiarism/copying/research, makes this seem plausible. I doubt anyone's missing anything by ignoring scientific articles published in Mandarin. I make no such claims about the social sciences.

comment by ESRogs · 2013-12-11T03:52:30.505Z · LW(p) · GW(p)

I'm expecting China to have an increasing role in global affairs over the next century. With that in mind, there are a couple of things I'm curious about:

  • Does anyone have an idea of how prevalent existential risk type ideas are in China?

  • Has anyone tried to spread LW memes there?

  • Are the LW meetups in Shanghai, etc. mostly ex-pats or also locals?

Thanks!

comment by knb · 2013-12-13T07:05:47.135Z · LW(p) · GW(p)

Gregory Cochran has written something on aging. I'll post some selected parts, but you should read the whole thing, which is pretty short.

Theoretical biology makes it quite clear that individuals ought to age. Every organism faces tradeoffs between reproduction and repair. In a world with hazards, such that every individual has a decreasing chance of survival over time, the force of natural selection decreases with increasing age. This means that perfect repair has a finite value, and organisms that skimp on repair and instead apply those resources to increased reproduction will have a greater reproductive rate – and so will win out. Creatures in which there is no distinction between soma and germ line, such as prokaryotes, cannot make such tradeoffs between repair and reproduction – and apparently do not age. Which should be a hint.

...

In practice, this means that animals that face low exogenous hazards tend to age more slowly. Turtles live a long time. Porcupines live a good deal longer than other rodents. [...] Organisms whose reproductive output increases strongly with time, like sturgeons or trees, tend to live longer. The third way of looking at things is thermodynamics. Is aging inevitable? Certainly not. As long as you have an external source of free energy, you can reduce entropy with enthalpy.

...

In principle there is no reason why people couldn't live to be a billion years old, although that might entail some major modifications (and an extremely cautious lifestyle). The third way of looking at things trumps the other two. People age, and evolutionary theory indicates that natural selection won’t produce ageless organisms, at least if their germ cells and body are distinct - but we could make it happen.

This might take a lot of work. If so, don't count on seeing effective immortality any time soon, because society doesn't put much effort into it. In part, this is because the powers that be don't understand the points I just made.

Nothing entirely new to me here, but it's always good to see another scientist come out in favor of aging research. Also, note that the Latin text at the top of Cochran's website is omnes vulnerant, ultima necat, a traditional sundial motto about the hours: "all of them wound, the last one kills."

comment by NancyLebovitz · 2013-12-09T16:36:57.430Z · LW(p) · GW(p)

Life is a concept we invented

Discussion of why it plausibly does not make sense to look for a firm dividing line between life and non-life.

Replies from: army1987, Anatoly_Vorobey, shminux, passive_fist
comment by A1987dM (army1987) · 2013-12-10T09:55:43.239Z · LW(p) · GW(p)

Just because a boundary is fuzzy doesn't mean it's meaningless.

comment by Anatoly_Vorobey · 2013-12-09T19:45:10.138Z · LW(p) · GW(p)

It just doesn't matter very much - certainly not enough to keep wrangling over the exact definition of the boundary. As long as we understand what we mean by crystal, bacterium, RNA, etc., why should we care about the fuzzy dividing line? Are ribozymes going to become more or less precious to us according only to whether we count them as living or not, given that nothing changes about their actual manifested qualities? Should they?

Every science uses terms which are called universal terms, such as ‘energy’, ‘velocity’, ‘carbon’, ‘whiteness’, ‘evolution’, ‘justice’, ‘state’, ‘humanity’. These are distinct from the sort of terms which we call singular terms or individual concepts, like ‘Alexander the Great’, ‘Halley’s Comet’, ‘The First World War’. Such terms as these are proper names, labels attached by convention to the individual things denoted by them.

[...] The school of thinkers whom I propose to call methodological essentialists was founded by Aristotle, who taught that scientific research must penetrate to the essence of things in order to explain them. Methodological essentialists are inclined to formulate scientific questions in such terms as ‘what is matter?’ or ‘what is force?’ or ‘what is justice?’ and they believe that a penetrating answer to such questions, revealing the real or essential meaning of these terms and thereby the real or true nature of the essences denoted by them, is at least a necessary prerequisite of scientific research, if not its main task. Methodological nominalists, as opposed to this, would put their problems in such terms as ‘how does this piece of matter behave?’ or ‘how does it move in the presence of other bodies?’ For methodological nominalists hold that the task of science is only to describe how things behave, and suggest that this is to be done by freely introducing new terms wherever necessary, or by re-defining old terms wherever convenient while cheerfully neglecting their original meaning. For they regard words merely as useful instruments of description.

Most people will admit that methodological nominalism has been victorious in the natural sciences. Physics does not inquire, for instance, into the essence of atoms or of light, but it uses these terms with great freedom to explain and describe certain physical observations, and also as names of certain important and complicated physical structures. So it is with biology. Philosophers may demand from biologists the solution of such problems as ‘what is life?’ or ‘what is evolution?’ and at times some biologists may feel inclined to meet such demands. Nevertheless, scientific biology deals on the whole with different problems, and adopts explanatory and descriptive methods very similar to those used in physics.

-- Karl Popper, from The Poverty of Historicism

Replies from: spxtr, passive_fist
comment by spxtr · 2013-12-09T20:18:30.411Z · LW(p) · GW(p)

Why did you post this quote? It seems like a good example of diseased thinking, but I'm not sure if that was your point.

Replies from: ESRogs, Alsadius
comment by ESRogs · 2013-12-11T16:50:04.945Z · LW(p) · GW(p)

Are you saying you think the quote exhibits diseased thinking or just that it was about diseased thinking?

To me, the quote seemed to clearly make the same point that Anatoly's first paragraph did, so it seems straightforward why he would include it.

Replies from: spxtr
comment by spxtr · 2013-12-12T04:07:39.550Z · LW(p) · GW(p)

The quote says that biologists don't deal with questions such as "what is life?" because that's essentialism and that's Bad. Similarly, physicists certainly don't study ideal systems like atoms or light. The disease is in the false dichotomy.

Replies from: ESRogs
comment by ESRogs · 2013-12-12T17:54:06.470Z · LW(p) · GW(p)

Oh, hmm, I thought what he was saying about atoms and light is not that physicists don't study those things, but that they don't study some abstract platonic version of light or atom derived from our intuitions, but instead use those words to describe phenomena in the real world and then go on to continue investigating those phenomena on their own terms.

So, for example, "Do radio waves really count as light?" is not a very interesting question from a physics perspective once you grant that both radio waves and visible light are on the same electromagnetic wave spectrum. Or with atoms we could ask, "Are atoms really atoms if they can be broken down into constituent parts?" These would just be questions about human definitions and intuitions rather than about the phenomena themselves. And so it is with the question, "What is life?"

That's what it seemed like Popper was saying to me. Did you have a different interpretation? Also, I'm not sure I've understood your comment -- which dichotomy are you saying is a false dichotomy?

Replies from: spxtr
comment by spxtr · 2013-12-12T23:10:21.411Z · LW(p) · GW(p)

Asking whether radio waves really count as light is just arguing a definition. That's not interesting to anyone who understands the underlying physics.

Notice that the questions he gives for essentialists are actually interesting questions, they're just imprecisely phrased, e.g. "what is matter?" These questions were asked before we'd decided matter was atoms. They were valid questions and serious scientists treated them. Now these questions are silly because we've already solved them and moved on to deeper questions, like "where do these masses come from?" and "how will the universe end?"

When a theorist comes up with a new theory they are usually trying to answer one of these essentialist questions. "What is it about antimatter that makes it so rare?" The theorist comes up with a guess, computes some results, spends a year processing LHC data, and realizes that their theory is wrong. At some point in here they switched from essentialist (considering an ideal model) to nominalist (experimental data), but the whole distinction is unnecessary.

... they don't study some abstract platonic version of light or atom derived from our intuitions ...

Yes, they most certainly do. QED is an extremely abstract idea, derived from intuition about how the light we interact with on a classical level behaves. This is called the correspondence principle.

String theorists come up with a theory based entirely on mathematical beauty, much like Plato.

Replies from: Anatoly_Vorobey, ESRogs
comment by Anatoly_Vorobey · 2013-12-13T20:26:44.366Z · LW(p) · GW(p)

I think you're reading Popper uncharitably, and his view of what physicists do is about the same as yours. He really is arguing against arguing definitions. "What is matter?" is an ambiguous question: it can be understood as asking about a definition, "what do we understand by the word 'matter', exactly?", and it can be understood as asking about the structure, "what are these things that we call matter really made of, how do they behave, what are their properties, etc.?". The former, to Popper, is an essentialist question; the latter is not.

Your understanding of "essentialist questions" is not that of Popper; he wouldn't agree with you, I'm sure, that "What is it about antimatter that makes it so rare?" is an essentialist question. "Essentialist" doesn't mean, in his treatment, "having nothing to do with experimental data" (even though he was very concerned with the value of experimental data and would have disagreed with some of modern theoretical physics in that respect). A claim which turns out to be unfalsifiable is anathema to Popper, but it is not necessarily an "essentialist" claim.

comment by ESRogs · 2013-12-13T04:12:19.298Z · LW(p) · GW(p)

Oh, hmm. I see now that we were interpreting Popper differently, and I may have been wrong.

Notice that the questions he gives for essentialists are actually interesting questions, they're just imprecisely phrased, e.g. "what is matter?" These questions were asked before we'd decided matter was atoms. They were valid questions and serious scientists treated them. Now these questions are silly because we've already solved them and moved on to deeper questions ...

If Popper did mean to exclude that kind of inquiry, then I agree with you that he was misguided.

In that case, it sounds like you would agree with the rest of Anatoly's comment, just not the Popper quote. Is that right?

Replies from: spxtr
comment by spxtr · 2013-12-13T06:25:18.748Z · LW(p) · GW(p)

That's right, more or less.

Replies from: ESRogs
comment by ESRogs · 2013-12-13T17:04:21.784Z · LW(p) · GW(p)

Gotcha, thanks!

comment by Alsadius · 2013-12-09T21:02:33.744Z · LW(p) · GW(p)

Which disease are you referring to?

Replies from: fubarobfusco
comment by fubarobfusco · 2013-12-11T17:05:12.811Z · LW(p) · GW(p)

"Diseased thinking" here is probably jargon; see Yvain's 2010 post "Diseased thinking: dissolving questions about disease".

comment by passive_fist · 2013-12-10T02:55:27.126Z · LW(p) · GW(p)

The definition of life matters because we want to be able to talk about extraterrestrial life as well.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2013-12-10T12:10:35.403Z · LW(p) · GW(p)

The precise definition of life will not be the thing that will determine our opinion about possible extraterrestrial life when we come across it. It will matter whether that hypothetical life is capable of growth, change, producing offspring, heredity, communication, intelligence, etc. etc. - all of these things will matter a lot. Having a very specific subset of these enshrined as "the definition of life" will not matter. This is what Popper's quote is all about.

Replies from: passive_fist
comment by passive_fist · 2013-12-10T23:02:45.573Z · LW(p) · GW(p)

The precise definition of life will not be the thing that will determine our opinion about possible extraterrestrial life when we come across it.

It's possible that extraterrestrial life will be nothing but a soup of RNA molecules. If we visit a planet while its life is still in the embryonic stages, we need to include that in our discourse about life in general. We need to have a word to represent what we are talking about when we talk about it. That's the only purpose any definition ever serves. If you want to go down the route of 'the definition of life is useless', you might as well just say 'all definitions are useless'.

comment by shminux · 2013-12-09T17:49:47.539Z · LW(p) · GW(p)

a self-sustaining system capable of Darwinian evolution.

My favorite example is challenging people to show that stars (in space) are any less alive than stars (in Hollywood).

Replies from: David_Gerard, Jayson_Virissimo
comment by David_Gerard · 2013-12-09T19:50:53.822Z · LW(p) · GW(p)

What's the Darwinian evolution involved in stars? (Are you thinking of the hypothesis that universes evolve to create black holes?)

Replies from: shminux
comment by shminux · 2013-12-10T00:00:17.773Z · LW(p) · GW(p)

What I meant is that stars are born, they procreate (by spewing out new seeds for further star formation), then grow old. Stars "evolved" to be mostly smaller and longer lived due to higher metallicity. They compete for food and they occasionally consume each other. They sometimes live in packs facilitating further star formation, for a time. Some ancient stars have whole galaxies spinning around them, occasionally feeding on their entourage and growing ever larger.

Replies from: pdsufferer, JGWeissman, army1987
comment by pdsufferer · 2013-12-10T08:12:56.089Z · LW(p) · GW(p)

Don't traits have to be heritable for evolution to count? I'm not an expert or anything, but I thought I'd know if stars' descendants had similar properties to their parent stars.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-10T15:03:21.633Z · LW(p) · GW(p)

Descendant stars might have proportions of elements related to what previous stars generated as novas. I don't know whether there's enough difference in the proportions to matter.

comment by JGWeissman · 2013-12-10T15:52:37.425Z · LW(p) · GW(p)

Can you give an example of a property a star might have because having that property made its ancestor stars better at producing descendant stars with that property?

Replies from: shminux
comment by shminux · 2013-12-10T19:16:07.799Z · LW(p) · GW(p)

Sorry, I'm not an expert in stellar physics. Possibly metallicity, or maybe something else relevant. My original point was to agree that there is no good definition of "life" which does not include some phenomena we normally don't think of as living.

comment by Jayson_Virissimo · 2013-12-11T01:29:34.807Z · LW(p) · GW(p)

Do stars exhibit teleological behavior?

Replies from: shminux
comment by shminux · 2013-12-11T02:45:28.329Z · LW(p) · GW(p)

Why do you ask?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-12-11T03:04:24.092Z · LW(p) · GW(p)

Isn't teleology fundamental to some conceptions of life?

Replies from: shminux
comment by shminux · 2013-12-11T05:08:43.303Z · LW(p) · GW(p)

Feel free to elaborate.

comment by passive_fist · 2013-12-09T23:28:06.429Z · LW(p) · GW(p)

What's wrong with 'A self-sustaining (through an external energy source) chemical process characterized by the existence of far-from-equilibrium chemical species and reactions.'?

Replies from: RolfAndreassen, NancyLebovitz
comment by RolfAndreassen · 2013-12-09T23:55:36.397Z · LW(p) · GW(p)

Suspect you would have a difficult time defining "external energy source" in a way that excludes fire but includes mitochondria.

Which equilibrium? Stars are far from the eventual equilibrium of the heat death, and also not at equilibrium with the surrounding vacuum.

Not clear whether viruses, prions, and crystals are included or excluded.

Replies from: passive_fist
comment by passive_fist · 2013-12-10T00:29:52.536Z · LW(p) · GW(p)

Suspect you would have a difficult time defining "external energy source" in a way that excludes fire but includes mitochondria.

True; what is meant is a simple external energy source such as radiation or a simple chemical source of energy. It's true that this is a somewhat fuzzy line though.

Which equilibrium? Stars are far from the eventual equilibrium of the heat death, and also not at equilibrium with the surrounding vacuum.

I specifically said far-from-equilibrium chemical species and reactions. The chemistry that goes on inside a star is very much in equilibrium conditions.

Not clear whether viruses, prions, and crystals are included or excluded.

Viruses are not self-sustaining systems, so they are obviously excluded. You have to consider the system of virus+host (plus any other supporting processes). Same with prions. Crystals are excluded since they do not have any non-equilibrium chemistry.

Replies from: RolfAndreassen, kalium
comment by RolfAndreassen · 2013-12-10T02:22:23.732Z · LW(p) · GW(p)

what is meant is a simple external energy source such as radiation or a simple chemical source of energy.

I do not see how this answers the objection. All you did was add the qualification 'simple' to the existing 'external'. Is this meant to exclude fire, or include it? If the former, how does it do so? Presumably plant matter is a sufficiently "simple" source of energy, since otherwise you would exclude human digestion; plant matter also burns.

The chemistry that goes on inside a star is very much in equilibrium conditions.

Again, which equilibrium? The star is nowhere near equilibrium with its surroundings.

Viruses are not self-sustaining systems,

Neither are humans... in a vacuum; but viruses are quite self-sustaining in the presence of a host. You are sneaking in environmental information that wasn't there in the original "simple" definition.

Replies from: passive_fist
comment by passive_fist · 2013-12-10T02:47:46.369Z · LW(p) · GW(p)

Look at my reply to kalium. To reiterate, the problem is that people confuse objects with processes. The definition I gave explicitly refers to processes. This answers your final point.

All you did was add the qualification 'simple' to the existing 'external'. Presumably plant matter is a sufficiently "simple" source of energy, since otherwise you would exclude human digestion; plant matter also burns.

I already conceded that it's a fuzzy definition. As I said, you are correct that 'simple' is a subjective property. However, if you look at the incredibly complex reactions that occur inside human cells (gene expression, ribosomes, ATP production, etc.), then yes, amino acids and sugars are indeed extremely simple in comparison. If you pour some sugars and phosphates and amino acids into a blender you will not get much DNA; not nearly in the quantities in which it is found in cells. This is what is meant by 'far from equilibrium'. There is much more DNA in cells than you would find if you took the sugars and fatty acids and vitamins and just mixed them together randomly.

Again, which equilibrium? The star is nowhere near equilibrium with its surroundings.

I feel like we're talking past each other here. I explicitly (and not once, but twice in the definition) referred to chemical processes: http://en.wikipedia.org/wiki/Chemical_equilibrium
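(A minimal sketch of the textbook notion being invoked here, in standard notation; nothing below is specific to this thread. For a reversible reaction, the reaction quotient Q has the same form as the equilibrium constant K but is evaluated at the actual concentrations; "at equilibrium" means Q = K, and "far from equilibrium" means Q is held well away from K, which a closed system cannot sustain without something driving it.)

```latex
% For a reversible reaction  aA + bB <-> cC + dD:
Q = \frac{[\mathrm{C}]^{c}\,[\mathrm{D}]^{d}}{[\mathrm{A}]^{a}\,[\mathrm{B}]^{b}},
\qquad \text{equilibrium: } Q = K,
\qquad \text{far from equilibrium: } Q \ll K \ \text{or}\ Q \gg K.
```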

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-12-10T03:57:20.708Z · LW(p) · GW(p)

Ok, chemical equilibrium. This does not seem to me like a natural boundary; why single out this particular equilibrium and energy scale?

As I said, you are correct that 'simple' is a subjective property.

I think you're missing my point, which is that I don't see how your definition excludes fire as a living thing.

The definition I gave explicitly refers to processes. This answers your final point.

I don't think it does. A human in vacuum is alive, for a short time. How do you distinguish between "virus in host cell" and "human in supporting environment"?

Replies from: passive_fist
comment by passive_fist · 2013-12-10T04:26:47.674Z · LW(p) · GW(p)

why single out this particular equilibrium and energy scale?

Because the domain of chemistry is broad enough to contain life as we know it, and also hypothesized forms of life on other planets, without being excessively inclusive.

I think you're missing my point, which is that I don't see how your definition excludes fire as a living thing.

I tried to answer it. The chemical species that are produced in fire are the result of equilibrium reactions http://en.wikipedia.org/wiki/Combustion . They are simple chemical species (with more complex species only being produced in small quantities, consistent with equilibrium). In particular, they are nowhere near as complex, relative to the feedstock, as the products of living chemistry are.

I don't think it does. A human in vacuum is alive, for a short time. How do you distinguish between "virus in host cell" and "human in supporting environment"?

They are both part of living processes. The timescale for 'self-sustaining' does not need to be forever. It only needs to be for some finite time that is larger than what would be expected of matter rolling down the energy hill towards equilibrium.

comment by kalium · 2013-12-10T01:00:57.412Z · LW(p) · GW(p)

In what sense are parasitic bacteria that depend on the host for many important functions self-sustaining while viruses are not?

Replies from: passive_fist
comment by passive_fist · 2013-12-10T01:14:49.502Z · LW(p) · GW(p)

As I said, you have to consider the system of parasite+host (plus any other supporting processes).

I think a lot of the confusion arises from people confusing objects with processes that unfold over time. You can't ask if an object is alive by itself; you have to specify the time-dynamics of the system. Statements like 'a bacterium is alive' are problematic because a frozen bacterium in a block of ice is definitely not alive. Similarly, a virus that is dormant is most definitely not alive. But that same virus inside a living host cell is participating in a living process i.e. it's part of a self-sustaining chain of non-equilibrium chemical reactions. This is why I specifically used the words 'chemical process'.

Replies from: kalium
comment by kalium · 2013-12-10T06:54:46.506Z · LW(p) · GW(p)

So this is a definition for "life" only, not "living organism," and you would say that a parasite, virus, or prion is part of something alive, and that as soon as you remove the parasite from the host it is not alive. How many of its own life functions must a parasite be able to perform once removed from the host in order for it to be considered alive after removal from the host?

Replies from: passive_fist
comment by passive_fist · 2013-12-10T07:22:27.670Z · LW(p) · GW(p)

So this is a definition for "life" only

Precisely.

How many of its own life functions must a parasite be able to perform once removed from the host in order for it to be considered alive after removal from the host?

As the definition says. It must demonstrate non-equilibrium chemistry and must be self-sustaining. Again, 'simple forms of energy' is relative, so I agree that there's some fuzziness here. However, if you look at the extreme complexity of the chemical processes of life (DNA, ribosomes, proteins, etc.) and compare that to what most life consumes (sugars, minerals, etc.), there is no ambiguity. It's quite clear that there's a difference.

comment by NancyLebovitz · 2014-08-17T15:46:20.440Z · LW(p) · GW(p)

Are you sure that all life is chemical? There's a common belief here that a sufficiently good computer simulation of a human being counts as being that person (and presumably, a sufficiently good computer simulation of an animal counts as being an animal, though I don't think I've seen that discussed), and that's more electrical than chemical, I think.

I have a notion that there could be life based on magnetic fields in stars, though I'm not sure how sound that is.

Replies from: passive_fist
comment by passive_fist · 2014-08-18T09:04:02.755Z · LW(p) · GW(p)

I guess it depends on your philosophical position on 'simulations'. If you believe simulations "aren't the real thing", then a simulation of chemistry "isn't actual chemistry", and thus a simulation of life "isn't actual life." Anyways, the definition I gave doesn't explicitly make any distinction here.

About exotic forms of life, it could be possible. A while ago I had some thoughts about life based on quark-gluon interactions inside a neutron star. Since neutron star matter is incredibly compact and quarks interact on timescales much faster than typical chemistry, you could have beings of human-level complexity existing in a space of less than a cubic micrometer and living out a human-lifespan-equivalent existence in a fraction of a second.

But these types of life are really really speculative at this point. We have no idea that they could exist, and pretty strong reasons for thinking they couldn't. It doesn't seem worth it to stretch a definition of life to contain types of life we can't even fathom yet.

comment by Dan_Weinand · 2013-12-10T20:26:57.899Z · LW(p) · GW(p)

Any good advice on how to become kinder? This can really be classified as two related goals: 1) How can I get more enjoyment out of alleviating others' suffering and giving others happiness? 2) How can I reliably do 1 without negative emotions getting in my way (e.g. staying calm and making small nudges to persuade people rather than getting angry and trying to change people's worldview rapidly)?

Replies from: Ben_LandauTaylor, Manfred, byrnema, shminux, None
comment by Ben_LandauTaylor · 2013-12-10T23:00:29.960Z · LW(p) · GW(p)

I'd recommend Nonviolent Communication for this. It contains specific techniques for how to frame interactions that I've found useful for creating mutual empathy. How To Win Friends And Influence People is also a good source, although IIRC it's more focused on what to do than on how to do it. (And of course, if you read the books, you have to actually practice to get good at the techniques.)

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-12-11T00:36:17.084Z · LW(p) · GW(p)

Thanks! And out of curiosity, does the first book have much data backing it? The author's credentials seem respectable so the book would be useful even if it relied on mostly anecdotal evidence, but if it has research backing it up then I would classify it as something I need (rather than ought) to read.

Replies from: Ben_LandauTaylor, ESRogs, erratio, ChristianKl, jsalvatier
comment by Ben_LandauTaylor · 2013-12-11T19:49:52.013Z · LW(p) · GW(p)

According to wikipedia, there's a little research and it's been positive, but it's not the sort of research I find persuasive. I do have mountains of anecdata from myself and several friends whose opinions I trust more than my own. PM me if you want a pdf of the book.

comment by ESRogs · 2013-12-11T17:03:32.260Z · LW(p) · GW(p)

I would like to offer further anecdotal evidence that NVC techniques are useful for understanding your own and other people's feelings and feeling empathy toward them.

comment by erratio · 2013-12-11T22:32:05.343Z · LW(p) · GW(p)

Thirded. The most helpful part for me was internalising the idea that even annoying/angry/etc outbursts are the result of people trying to get their needs met. It may not be a need I agree with, but it gives me better intuition for what reaction may be most effective.

comment by ChristianKl · 2013-12-13T20:00:52.247Z · LW(p) · GW(p)

Research about paradigms like that is hard to evaluate. If you look at nonviolent communication and set up your experiment well enough, I think you will definitely find effects.

The real question isn't whether the framework does something but whether it's useful. That in turn depends on your goals.

Whether a framework helps you to successfully communicate depends a lot on cultural background of the people with whom you are interacting.

If you engage in NVC, some people with a strong sense of competition might see you as weak. If you were to consistently engage in NVC in your communication on LessWrong, you might be seen as a weird outsider.

You would need an awful lot of studies to be certain about the particular tradeoff in using NVC for a particular real world situation.

I don't know of many studies that compare whether Windows is better than Linux or whether Vim is better than Emacs. Communication paradigms are similar: they are complex and difficult to compare.

comment by jsalvatier · 2013-12-11T22:24:04.888Z · LW(p) · GW(p)

I find NVC very intuitively compelling, and I have personal anecdotal evidence that it works (though not independent of ESRogs; we go to the same class).

comment by Manfred · 2013-12-11T07:21:30.516Z · LW(p) · GW(p)

In addition to seconding nonviolent communication, cognitive behavior therapy techniques are pretty good - basically mindfulness exercises and introspection. If you want to change how you respond to certain situations (e.g. times when you get angry, or times when you have an opportunity to do something nice), you can start by practicing awareness of those situations, e.g. by keeping a pencil and piece of paper in your pocket and making a check mark when the situation occurs.

comment by byrnema · 2013-12-11T16:26:23.455Z · LW(p) · GW(p)

I also want to learn how to be kinder. The sticking point, for me, is better prediction about what makes people feel good.

I was very ill a year ago, and at that time learned a great deal about how comforting it is to be taken care of by someone who is compassionate and knowledgeable about my condition. But for me, unless I'm very familiar with that exact situation, I have trouble anticipating what will make someone feel better.

This is also true in everyday situations. I work on figuring out how to make guests feel better in my home and how to make a host feel better when I'm the guest. (I already know that my naturally overly-analytic, overly-accommodating manner is not the most effective.) I observe other people carefully, but it all seems very complex, and I consider myself a 'beginner' who is still learning -- far behind someone who is more natural at this.

Replies from: hesperidia
comment by hesperidia · 2013-12-11T18:27:32.002Z · LW(p) · GW(p)

I have trouble anticipating what will make someone feel better.

In this kind of situation, I usually just ask, outright, "What can I do to help you?" Then I can file away the answer for the next time the same thing happens.

However, this assumes that, like me, you are in a strongly Ask culture. If the people you know are strongly Guess, you might get answers such as "Oh, it's all right, don't inconvenience yourself on my account", in which case the next best thing is probably to ask 1) people around them, or 2) the Internet.

You also need to keep your eyes out for both Ask cues and Guess cues of consent and nonconsent - some people don't want help, some people don't want your help, and some people won't tell you if you're giving them the wrong help because they don't want to hurt your feelings. This is the part I get hung up on.

Replies from: TheOtherDave, byrnema
comment by TheOtherDave · 2013-12-11T19:28:50.748Z · LW(p) · GW(p)

The "keep your eyes out for cues" works the other way around in what we're calling a "Guess culture" as well.

That is, most natives of such a culture will be providing you with hints about what you can do to help them, while at the same time saying "Oh, it's all right, don't inconvenience yourself on my account." Paying attention to those hints and creating opportunities for them to provide such hints is sometimes useful.

(I frequently observe that "Guess culture" is a very Ask-culture way of describing Hint culture.)

comment by byrnema · 2013-12-11T18:33:00.258Z · LW(p) · GW(p)

Yes, I would like to improve on all of this. I haven't found the internet particularly helpful.

And I do find myself in a bewildering 'guess' culture. Asking others (though not too close to the particular situation) would probably yield the most information.

comment by shminux · 2013-12-10T20:47:17.403Z · LW(p) · GW(p)

What is your reason for wanting to?

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-12-11T00:39:31.067Z · LW(p) · GW(p)

I find myself happier when I act more kindly to others. In addition, lowering suffering/increasing happiness are pretty close to terminal values for me.

Replies from: shminux
comment by shminux · 2013-12-11T01:02:05.485Z · LW(p) · GW(p)

You say

I find myself happier when I act more kindly to others.

Yet you said earlier that

How can I get more enjoyment out of alleviating others suffering and giving others happiness?

Does this mean that you feel that you do enjoy it but not "enough" in some sense and you want to enjoy it even more?

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-12-11T02:16:27.130Z · LW(p) · GW(p)

Correct, it is enjoyable, but I wish to make it more so. Hence my use of "more".

comment by [deleted] · 2013-12-12T10:01:33.600Z · LW(p) · GW(p)

I recommend trying loving-kindness meditation.

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-12-12T21:17:29.437Z · LW(p) · GW(p)

Could you elaborate? I'm relatively familiar with and practice mindfulness meditation, but I've never heard of loving-kindness meditation.

Replies from: None, beoShaffer
comment by [deleted] · 2013-12-13T05:22:03.696Z · LW(p) · GW(p)

This here Wikipedia page is a good summary.

It mostly boils down to simply concentrating on feeling nice towards everyone. There is some technical advice on how to turn the vague goal of 'feeling nice' into more concrete mental actions (through visualization, repeating specific phrases, focusing on positive qualities of people) and how to structure the practice by having a progression of people towards whom you generate warm fuzzy feelings, of increasing level of difficulty (like starting with yourself and eventually moving on to someone you consider an enemy). Most of this can be found in the Wiki article or easily googled.

comment by intrepidadventurer · 2013-12-09T20:02:38.073Z · LW(p) · GW(p)

What are community norms here about sexism (and related passive-aggressive "jokes" and comments about free speech) at the LW co-working chat? Is LW going for Wheaton's law or free speech, and to what extent should I be attempting to make people who engage in such activities feel unwelcome, if at all?

I have hesitated to bring this up because I am aware it's a mind-killer, but I figured that if Facebook can contain a civil discussion about vaccines, then LW should be able to talk about this?

Replies from: TheOtherDave, NancyLebovitz, Lumifer, hyporational, Viliam_Bur, matheist, passive_fist
comment by TheOtherDave · 2013-12-09T21:18:29.510Z · LW(p) · GW(p)

There are no official community norms on the topic.

For my own part, I observe a small but significant number of people who seem to believe that LessWrong ought to be a community where it's acceptable to differentially characterize women negatively as long as we do so in the proper linguistic register (e.g., adopting an academic and objective-sounding tone, avoiding personal characterizations, staying cool and detached).

The people who believe this ought to be unacceptable are either less common or less visible about it. The majority is generally silent on such matters, though will generally join in condemning blatant register-violations.

The usual result is something closer to Wheaton's law at the surface level, but closer to "say what you think is true" at the structural level. (Which is not quite free speech, but a close enough cousin in context.) That is, it's often considered OK to say things, as long as they are properly hedged and constructed, that if said more vulgarly or directly would be condemned for violating Wheaton's law, and which in other communities would be condemned for a variety of reasons.

I think there's a general awareness that this pattern-matches to sexism, though I expect that many folks here consider that to be mistaken pattern-matching (the "I'm not sexist; I can't help it if you feminists choose to interpret my words and actions that way" stance).

So my guess is that if you attempt to make people who engage in sexism (and related defenses) feel unwelcome you will most likely trigger net-negative reactions unless you're very careful with your framing.

Does that answer your question?

Replies from: intrepidadventurer, passive_fist, hyporational
comment by intrepidadventurer · 2013-12-10T06:01:16.305Z · LW(p) · GW(p)

It does answer my question. Also, thanks for the suggestion to focus on the behaviour rather than the person. I didn't even realize I was thinking like that till you two pointed it out.

comment by passive_fist · 2013-12-10T23:26:02.396Z · LW(p) · GW(p)

That is, it's often considered OK to say things, as long as they are properly hedged and constructed, that if said more vulgarly or directly would be condemned for violating Wheaton's law, and which in other communities would be condemned for a variety of reasons.

Yes, and this is best, is it not? I enjoy reading what people have to say, even if their views are directly in contradiction to mine. I've changed my views more than once because it was correctly pointed out to me why my views were wrong. http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind

And about being vulgar, it's just a matter of human psychology. People in general - even on LW - are more receptive to arguments that are phrased politely and intelligently. We'd all like to think that we are immune to this, but we are not.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-10T23:56:35.748Z · LW(p) · GW(p)

Yes, and this is best, is it not? I enjoy reading what people have to say, even if their views are directly in contradiction to mine.

It's certainly better than nobody ever getting to express views that contradict anyone else's views; agreed.

And about being vulgar, it's just a matter of human psychology.

Yes, that's true.

comment by hyporational · 2013-12-10T14:46:42.709Z · LW(p) · GW(p)

Disclaimer: this is not meant as a defence of the behaviour in question, since I don't exactly know what we're talking about.

For my own part, I observe a small but significant number of people who seem to believe that LessWrong ought to be a community where it's acceptable to differentially characterize women negatively

LessWrong characterizes outgroups negatively all the time. I cautiously suggest that the whole premise of LW characterizes most people negatively, and that it's easier to talk about any outgroup's irrationality (in this case, women's, statistically) than to look at our own flaws. If we talked about what men are like on average, we might not have many flattering things to say either.

Should negative characterizations of people be avoided in general, irrespective of how accurately we think they describe the average of the groups in question?

If you see characterizations that are wrong, you should obviously confront them.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-10T16:20:50.881Z · LW(p) · GW(p)

I agree that there are also other groups of people who are differentially negatively characterized; I restricted myself to discussions of women because the original question was about sexism.

I cautiously suggest that the whole premise of LW characterizes most people negatively,

I would cautiously agree. There's a reason I used the word "differentially."

Should negative characterizations of people be avoided in general, irrespective of how accurately we think they describe the average of the groups in question?

Personally, I'm very cautious about characterizing groups by their averages, as I find I'm not very good about avoiding the temptation to then characterize individuals in that group by the group's average. That is particularly problematic since I can assign each individual to a vast number of groups, and then end up characterizing that individual differently based on the group I select, even though I haven't actually gathered any new evidence. I find it's a failure mode my mind is prone to, so I watch out for it.

If your mind isn't as prone to that failure mode as mine, your mileage will of course vary.

Replies from: hyporational, hyporational
comment by hyporational · 2013-12-10T17:16:20.060Z · LW(p) · GW(p)

I would cautiously agree. There's a reason I used the word "differentially."

I don't understand how not being differential is supposed to work though. Different groups are irrational in different ways.

I think the failure mode you mention is common enough that we should be concerned about it. I'm just not sure about the right way to handle it.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-10T17:49:09.852Z · LW(p) · GW(p)

I don't understand how not being differential is supposed to work though. Different groups are irrational in different ways.

Suppose it's actually true in the world that all people are irrational, that blue-eyed people (BEPs) are irrational in a blue way, green-eyed people (GEPs) are irrational in a green way, and green and blue irrationality can be clearly and meaningfully distinguished from one another.

Now consider two groups, G1 and G2. G1 often discusses both blue and green irrationality. G2 often discusses blue irrationality and rarely discusses green irrationality. The groups are otherwise indistinguishable.

How would you talk about the difference between G1 and G2? (Or would you talk about it at all?)

For my own part, I'm comfortable saying that G2 differentially negatively characterizes BEPs more than G1 does. That said, I acknowledge that one could certainly argue that in fact G1 differentially negatively characterizes BEPs just as much as G2 does, because it discusses blue and green irrationality differently, so if you have a better suggestion for how to talk about it I'm listening.

Replies from: hyporational
comment by hyporational · 2013-12-10T18:17:10.061Z · LW(p) · GW(p)

What if G1=BEP and G2=GEP, and discussing outgroup irrationality is much easier than discussing ingroup irrationality? Now suppose G1 is significantly larger than G2, and perhaps even that discussing G1 is more relevant to G2 winning* and discussing G2 is more relevant to G1 winning. What is the situation going to look like for a member of G2 who's visiting G1? How about if you mix the groups a bit? Is it wrong?

if you have a better suggestion for how to talk about it I'm listening.

You connotationally implied the behaviour you described to be wrong. Can you denotationally do that?

*rationality is winning

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-10T19:48:41.522Z · LW(p) · GW(p)

What is the situation going to look like for a member of G2 who's visiting G1?

I expect a typical G2/GEP visiting a G1/BEP community in the scenario you describe, listening to the BEPs differentially characterizing GEPs as irrational in negative-value-laden ways, will feel excluded and unwelcome and quite possibly end up considering the BEP majority a threat to their ongoing wellbeing.

How about if you mix the groups a bit?

I assume you mean, what if G1 is mostly BEPs but has some GEPs as well? I expect most of G1's GEP minority to react like the G2/GEP visitors above, though it depends on how self-selecting they are. I also expect them to develop a more accurate understanding of the real differences between BEPs and GEPs than they obtained from a simple visit. I also expect some of G1's BEP majority to develop a similarly more-accurate understanding.

Is it wrong?

I would prefer a scenario that causes less exclusion and hostility than the above.
How about you?

You connotationally implied the behaviour you described to be wrong. Can you denotationally do that?

I'm not sure.

As I said, I'm cautious about characterizing groups by their averages, because it leads me to characterize individuals differently based on the groups I tend to think of them as part of, rather than based on actual evidence, which often leads me to false conclusions.

I suspect this is true of most people, so I endorse others being cautious about it as well.

Replies from: hyporational
comment by hyporational · 2013-12-10T20:41:22.819Z · LW(p) · GW(p)

I would prefer a scenario that causes less exclusion and hostility than the above. How about you?

I definitely want less exclusion and hostility, but I'm not sure the above scenario causes them for all values of GEP and BEP, nor for all kinds of examples of their irrationality. Perhaps we're assuming different values for the moving parts in the scenario, although we're pretending to be objective.

Many articles here are based on real-life examples, and this makes them more interesting. This often means picking an outgroup and demonstrating how they're irrational. To make things personal, I'd say health care has gotten its fair share, especially in the OB days. I never thought the problem was that my ingroup was disproportionately targeted, but I was more concerned about strawmen and the fact that I couldn't do much to correct them.

Would it have been better if I had not seen those articles? I don't think so, since they contained important information about the authors' biases. They also told me that perhaps characterizations of other groups here are relatively inaccurate too. Secret opinions cannot be intentionally changed. Had their opinions been muted, I would have received information only through inexplicable downvotes when talking about certain topics.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-10T21:17:14.763Z · LW(p) · GW(p)

I'm not sure the above scenario causes them for all values of GEP and BEP

I'm not exactly sure what reference class you're referring to, but I certainly agree that there exist groups in the above scenario for whom negligible amounts of exclusion and hostility are being created.

Perhaps we're assuming different values for the moving parts in the scenario, although we're pretending to be objective.

I don't know what you intend for this sentence to mean.

Would it have been better if I had not seen those articles? I don't think so, [..] Had their opinions been muted, I would have received information only through inexplicable downvotes when talking about certain topics.

I share your preferences among the choices you lay out here.

Replies from: hyporational
comment by hyporational · 2013-12-10T21:40:07.014Z · LW(p) · GW(p)

I'm not exactly sure what reference class you're referring to...

You understood me correctly.

I don't know what you intend for this sentence to mean.

I meant it's tempting to replace "eye colour" with something less neutral and "irrationality" with something more or less reliably insulting.

I share your preferences among the choices you lay out here.

I bet you have other choices in mind.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-12T20:43:56.685Z · LW(p) · GW(p)

I bet you have other choices in mind.

Specific ones? Not especially. But it's hard to know how to respond when someone concludes that C1 is superior to C2 and I agree, but I have no idea what makes the set (C1, C2) interesting compared to (C3, C4, .., Cn).

I mean, I suppose I could have asked you why you chose those two options to discuss, but to be honest, this whole thread has started to feel like I'm trying to nail Jell-O to a tree, and I don't feel like doing the additional work to do it effectively.

So I settled for agreeing with the claim, which I do in fact agree with.

Replies from: hyporational
comment by hyporational · 2013-12-13T00:27:07.176Z · LW(p) · GW(p)

I have no idea what makes the set (C1, C2) interesting

I find that difficult to believe.

I'm trying to nail Jell-O to a tree,

I suggest this is because all we had was Jell-O and nails in the first place, but of course there are also explanations (E1, E2, .., En) you might find more plausible :)

comment by hyporational · 2013-12-10T16:38:07.622Z · LW(p) · GW(p)

If your mind isn't prone to that failure mode, your mileage will of course vary.

Perhaps any such characterizations should be explicitly hedged against this failure mode, instead of being tabooed. I also think people should confront ambiguous statements, instead of just assuming they're malicious.

comment by NancyLebovitz · 2013-12-09T20:12:44.898Z · LW(p) · GW(p)

Ideally, I'd want the people to feel that the behavior is unwelcome rather than that they themselves are unwelcome, but people are apt to have their preferred behaviors entangled with their sense of self, so the ideal might not be feasible. Still, it's probably worth giving a little thought to discouraging behaviors rather than getting rid of people.

comment by Lumifer · 2013-12-10T05:20:54.089Z · LW(p) · GW(p)

What are community norms here about sexism

Depends on how you define sexism. Some people consider admitting that men and women are different to be sexism, never mind acting on that belief :-/

TheOtherDave's answer is basically correct. Crass and condescending people don't get far, but it's possible to have a discussion of the issues that cost Larry Summers so dearly.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-12-10T07:25:55.676Z · LW(p) · GW(p)

Since this comment is framed in part as endorsing mine, I should probably say explicitly that while I agree denotationally with every piece of this comment taken individually, I don't endorse the comment as a whole connotationally.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T17:00:32.506Z · LW(p) · GW(p)

:-D

comment by hyporational · 2013-12-10T17:41:27.763Z · LW(p) · GW(p)

I connotationally interpret your question as: "what are the community norms about bad things?"

You're not giving us enough information to know what you're talking about, and you're asking for our blind permission to condemn behaviour you disagree with.

Replies from: intrepidadventurer
comment by intrepidadventurer · 2013-12-10T18:33:33.846Z · LW(p) · GW(p)

Fair critique. Despite the lack of clarity on my part, the comments have more than satisfactorily answered the question about community norms here. I suppose the responders can thank g-factor for that :)

Replies from: hyporational
comment by hyporational · 2013-12-10T18:38:33.219Z · LW(p) · GW(p)

I suppose the responders can thank g-factor for that :)

Well played.

comment by Viliam_Bur · 2013-12-10T10:29:23.762Z · LW(p) · GW(p)

I don't have an answer here, just a note that this question actually contains two questions, and it would be good to answer both of them together. It would also be a good example of using rationalist taboo.

A: What are the community norms for defining sexism?

B: What are the community norms for dealing with sexism (as defined above)?

Answering B without answering A can later easily lead to motivated discussions about sexism, where people would be saying: "I think that X is [not] an example of sexism" when what they really wanted to say would be: "I think that it is [not] appropriate to use the community norm B for X".

comment by matheist · 2013-12-10T05:05:30.663Z · LW(p) · GW(p)

(I haven't seen the LW co-working chat)

If you want to tell people off for being sexist, your speech is just as free as theirs. People are free to be dicks, and you're free to call them out on it and shame them for it if you want.

I think you should absolutely call it out, negative reactions be damned, but I also agree with NancyLebovitz that you may get more traction out of "what you said is sexist" as opposed to "you are sexist".

To say nothing is just as much an active choice as to say something. Decide what kind of environment you want to help create.

Replies from: kalium
comment by kalium · 2013-12-10T20:39:03.080Z · LW(p) · GW(p)

A norm of "don't be a dick" isn't inherently a violation of free speech. The question is, does LW co-working chat have a norm of not being a dick? Would being a dick likely lead to unfavorable reactions, or would objecting to dickish behavior be frowned on instead?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-12-14T04:12:43.561Z · LW(p) · GW(p)

The problem with having "don't be a dick" as a norm is that people have very different ideas about what constitutes "being a dick".

Replies from: drethelin
comment by drethelin · 2013-12-14T07:39:33.355Z · LW(p) · GW(p)

"Don't be a dick" is code for "act according to our unspoken social codes."

comment by passive_fist · 2013-12-10T23:22:22.924Z · LW(p) · GW(p)

I'd like to see some evidence that such stuff is going on before pointing fingers and making rules that could possibly alienate a large fraction of people.

I've been attending the co-working chat for about a week, on and off (I take the handle of 'fist'), and so far everyone seems friendly and more than willing to accommodate the girls in the chat. Have you personally encountered any problems?

Replies from: intrepidadventurer
comment by intrepidadventurer · 2013-12-11T19:21:26.131Z · LW(p) · GW(p)

I did encounter this problem (once) and I was experiencing resistance to going back even though I had a lot of success with the chat. I figured having a game plan for next time would be my solution.

comment by ArisKatsaris · 2013-12-09T20:20:17.347Z · LW(p) · GW(p)

Friendship is Optimal just received a quite positive review from One Man's Pony Ramblings.

Replies from: None
comment by [deleted] · 2013-12-11T00:53:27.552Z · LW(p) · GW(p)

So is this person a big actor in the pony fanfic culture?

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2013-12-11T20:54:59.317Z · LW(p) · GW(p)

His site's not going to drive a giant surge of views, but he's highly respected among fanfic writers as a thoughtful critic.

comment by [deleted] · 2013-12-15T15:01:57.319Z · LW(p) · GW(p)

The quality of intelligence journalism

I have been musing over the results of Rindermann, Coyle and Becker’s survey of intelligence experts presented at the ISIR conference. Since you may well be reading a newspaper this Sunday, I thought it might interest you to show what the experts think of the coverage of intelligence in the public media. By way of explanation, the authors cast their net widely, but did some extra sampling of the German media. Readers might like to suggest their own likes and dislikes in terms of the accuracy of coverage. I will be adding more details on other issues later. In yellow is the original survey 30 years ago, in blue the current 2013 survey.

According to the survey of experts Steve Sailer outperforms everyone else.

comment by Anatoly_Vorobey · 2013-12-15T13:16:25.278Z · LW(p) · GW(p)

What we actually know about mirror neurons.

Wow. I did not expect my background understanding of what is known about mirror neurons to have been so much hype-influenced.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-15T14:28:15.657Z · LW(p) · GW(p)

Identical twins aren't perfectly identical

That there are differences between identical twins is known, but the article goes into detail about the types of difference, including effects which are in play before birth.

comment by knb · 2013-12-09T23:06:07.700Z · LW(p) · GW(p)

Wirth's Law:

Wirth's law is a computing adage made popular by Niklaus Wirth in 1995. It states that "software is getting slower more rapidly than hardware becomes faster."

Is Wirth's Law still in effect? Most of the examples I've read about are several years old.

ETA: I find it interesting that Wirth's Law was apparently a thing for decades (known since the 1980s, supposedly) but seems to be over. I'm no expert though, I just wonder what changed.

Replies from: passive_fist, Manfred, Waffle_Iron, mwengler, Tenoke
comment by passive_fist · 2013-12-10T01:02:26.300Z · LW(p) · GW(p)

It was my impression that Wirth's law was mostly intended to be tongue-in-cheek, referring to how programs with user interfaces are getting bloated (which may be true, depending on your point of view).

In terms of software that actually needs speed (numerical simulations, science and tech software, games, etc.) the reverse has always been true. New algorithms are usually faster than old ones. Case in point is the trusty old BLAS library which is the workhorse of scientific computing. Modern BLAS implementations are extremely super-optimized, far more optimized than older implementations (for current computing hardware, of course).
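To make that concrete, here is a minimal Python sketch (an illustration, not a benchmark) of the gap between naive code and an optimized BLAS. It assumes a NumPy build linked against some BLAS (np.dot dispatches to BLAS dgemm on typical builds); exact speedups will vary by machine and implementation.

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Textbook triple-loop matrix multiply, no optimization at all."""
    n, m, p = a.shape[0], a.shape[1], b.shape[1]
    c = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i, k] * b[k, j]
            c[i, j] = s
    return c

n = 200
a, b = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.time()
c1 = naive_matmul(a, b)
t_naive = time.time() - t0

t0 = time.time()
c2 = np.dot(a, b)  # dispatches to the linked BLAS's dgemm on typical builds
t_blas = time.time() - t0

assert np.allclose(c1, c2)
print("naive: %.2f s   BLAS: %.5f s" % (t_naive, t_blas))
```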

comment by Manfred · 2013-12-09T23:29:17.576Z · LW(p) · GW(p)

It wasn't even true in 1995, I don't think. The first way of evaluating it that comes to mind is the startup times of "equivalent" programs, like MS Windows, Macintosh OS, various Corels, etc.

Replies from: fubarobfusco, mwengler
comment by fubarobfusco · 2013-12-10T00:52:41.450Z · LW(p) · GW(p)

Startup times for desktop operating systems seem to have trended up, then down, between the '80s and today, with the worst performance being in the late '90s to 2000 or so, when rebooting on any of the major systems could be a several-minutes affair. Today, typical boot times for Mac, Windows, or GNU/Linux systems can be in a handful of seconds if no boot-time repairs (that's "fsck" to us Unix nerds) are required.

I know that a few years back, there was a big effort in the Linux space to improve startup times, in particular by switching from serial startup routines (with only one subsystem starting at once) to parallel ones where multiple independent subsystems could be starting at the same time. I expect the same was true on the other major systems as well.
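A toy sketch of that serial-to-parallel change (the subsystem names and startup costs below are made up for illustration): with independent subsystems, total startup time drops from roughly the sum of the individual costs to roughly the largest single cost. Real init systems also track dependencies, so only genuinely independent subsystems start together.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subsystems with made-up startup costs, in seconds.
SUBSYSTEMS = {"network": 1.0, "sound": 0.5, "printing": 0.7, "cron": 0.3}

def start(name, cost):
    time.sleep(cost)  # stand-in for real initialization work
    return name

# Serial startup: total time is the sum of the costs (~2.5 s here).
t0 = time.time()
for name, cost in SUBSYSTEMS.items():
    start(name, cost)
print("serial:   %.2f s" % (time.time() - t0))

# Parallel startup: total time is roughly the largest single cost (~1.0 s).
t0 = time.time()
with ThreadPoolExecutor() as pool:
    list(pool.map(lambda item: start(*item), SUBSYSTEMS.items()))
print("parallel: %.2f s" % (time.time() - t0))
```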

Replies from: knb
comment by knb · 2013-12-10T05:20:24.653Z · LW(p) · GW(p)

My experience is that boot time was worst in Windows Vista (released 2007) and improved a great deal in Windows 7 and 8. MS Office was probably at its worst in bloatiness in the 2007 edition as well.

comment by mwengler · 2013-12-10T16:49:08.644Z · LW(p) · GW(p)

It would be interesting to plot the time sequence of major chip upgrades from Intel on the same page as the time sequence of major upgrades of MS Word and/or MS Excel. My vague sense is that the early-to-mid 90s had Word releases that I avoided for a year or two, until faster machines came along that made them more usable from my point of view. But it seems the rate of new Word releases has come way down compared to the rate of new chip releases. That is, perhaps hardware is creeping up faster than features are in the current epoch?
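For anyone who wants to eyeball this, a quick matplotlib sketch. The release years below are from my memory of major Word versions and Intel chip generations, so treat them as approximate placeholders to double-check, not data.

```python
import matplotlib.pyplot as plt

# Approximate release years, from memory: verify before relying on them.
word_releases = [1989, 1991, 1993, 1995, 1997, 1999, 2001, 2003, 2007, 2010, 2013]
intel_chips = [1985, 1989, 1993, 1995, 1997, 1999, 2000, 2006, 2008, 2011, 2013]

fig, ax = plt.subplots(figsize=(8, 2))
ax.eventplot([word_releases, intel_chips], lineoffsets=[0, 1], colors=["C0", "C1"])
ax.set_yticks([0, 1])
ax.set_yticklabels(["MS Word releases", "Intel chip generations"])
ax.set_xlabel("year")
plt.tight_layout()
plt.show()
```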

comment by Waffle_Iron · 2013-12-13T05:59:46.994Z · LW(p) · GW(p)

This seems to be true for video game consoles. Possibly because good graphics make better ads than short loading times.

comment by mwengler · 2013-12-10T16:44:01.815Z · LW(p) · GW(p)

I find it interesting that Wirth's Law was apparently a thing for decades (known since the 1980s, supposedly) but seems to be over. I'm no expert though, I just wonder what changed.

I think both software and hardware got further out on the learning curve which means their real rates of innovative development have both slowed down which means the performance of software has sped up.

I don't get how I get to the last part of that sentence from the first part either, but it almost makes sense.

comment by Tenoke · 2013-12-10T10:06:26.820Z · LW(p) · GW(p)

I mean, this formulation is wrong (software isn't getting slower), except for the tongue-in-cheek original interpretation, I guess. On the other hand, software is getting faster at a slower rate than hardware is, and that is still an important observation.

comment by NancyLebovitz · 2013-12-11T06:12:27.458Z · LW(p) · GW(p)

Finding food in foreign grocery stores, or finding out that reality has fewer joints than you might think.

From the comments:

Making sense of unfamiliar legal systems

This insight also leads to a helpful lesson of just what "having an open mind to a different culture" really means. At bottom, it means having faith in the people who subscribe to the culture -- faith that these people are motivated by the same forces as we are, that they are not stupid, irrational or innately predisposed to a certain temperament, and that whatever they are doing will make sense once we understand the entire circumstance.

comment by Bayeslisk · 2013-12-10T09:49:48.520Z · LW(p) · GW(p)

I have a strong desire to practice speaking in Lojban, and I imagine that this is the second-best place to ask. Any takers?

Replies from: None
comment by [deleted] · 2013-12-10T18:06:44.102Z · LW(p) · GW(p)

.i'enai [a Lojban attitudinal expressing disapproval; roughly, "no thanks"]

comment by FiftyTwo · 2013-12-09T19:49:36.093Z · LW(p) · GW(p)

There are a couple of commercially available home EEG sets now; has anyone tried them? Are they useful tools for self-monitoring mental states?

[Reposted from the last thread because I think I was too late to be seen much.]

Replies from: Curiouskid
comment by Curiouskid · 2013-12-14T02:52:01.296Z · LW(p) · GW(p)

Researching EEG biofeedback has been in my "someday maybe" folder of GTD for a while now.

The book Getting Started with Neurofeedback has a chapter on purchasing an EEG set.

I think the studies at the beginning of the book provide pretty compelling evidence that it's at least worth looking into more.

"Just five years after Kamya’s discovery, Barry Sterman published his landmark experiment (Wyricka & Sterman, 1968). Cats were trained to increase sensorimotor rhythm (SMR) or 12– 15 Hz. This frequency bandwidth usually increases when motor activity decreases. Thus, the cats were rewarded each time that SMR increased, which likely accompanied a decrease in physical movements. Unrelated to his study, NASA requested that Sterman study the effects of human exposure to hydrazine (rocket fuel) and its relationship to seizure disorder. Sterman started his research with 50 cats. Ten out of the 50 had been trained to elevate SMR. All 50 were injected with hydrazine. Much to Sterman’s surprise, the 10 specially trained cats were seizure resistant. The other 40 developed seizures 1 hour after being injected (Budzynski, 1999, p. 72; Robbinsa, 2000, pp. 41– 42). Sterman had serendipitously discovered a medical application for this new technology."

I've been taking notes on the book in Workflowy, should that be of interest.

comment by NancyLebovitz · 2013-12-11T06:35:58.873Z · LW(p) · GW(p)

A monkey teaching a human how to crush leaves

Mirror neurons? Why does the monkey care about whether a human can crush leaves?

Replies from: Emile, None, ChristianKl, tut, CAE_Jones
comment by Emile · 2013-12-11T07:33:06.663Z · LW(p) · GW(p)

Because enjoying teaching useful stuff to people you get along with is a trait that got selected for?

comment by [deleted] · 2013-12-13T16:50:02.104Z · LW(p) · GW(p)

Why does a human care whether a monkey cares whether a human can crush leaves? For primates like us, sometimes these things are their own reward.

comment by ChristianKl · 2013-12-13T15:19:42.307Z · LW(p) · GW(p)

It might simply be an interesting activity to teach a human how to crush leaves.

comment by tut · 2013-12-12T14:55:21.558Z · LW(p) · GW(p)

Do the monkeys ever crush leaves like that for themselves? Otherwise I think it is more likely that the monkey is giving him a gift, hoping that he will reciprocate by giving it a treat, or maybe just petting it. The leaves just happen to be what the monkey has most easily available at the time.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-12T15:49:02.523Z · LW(p) · GW(p)

The monkey was folding the man's fingers, not just handing him leaves.

If the monkey is giving a gift to incur a sense of obligation, it might be even more complex behavior than teaching.

Replies from: tut
comment by tut · 2013-12-13T14:19:27.198Z · LW(p) · GW(p)

Yes. What I was thinking was that people had previously given the monkeys treats by putting something in the monkey's hand and closing its fingers, so this is the monkey more or less imitating something that it wants the human to do.

It is not that teaching is too complex for a monkey; it is that I don't see what exactly it would be teaching, but I feel that I recognize what the monkey is doing as the "you keep this" gesture.

comment by CAE_Jones · 2013-12-11T15:55:43.946Z · LW(p) · GW(p)

I've heard it said that, when cats present a kill to their owners, it's a form of trying to teach the owner to hunt. I can only assume that some mammals will treat animals from other species as part of their tribe/pack/pride/etc if they get along well enough.

If so, I'd predict this happens more often in more social animals. So yes to lions and monkeys, no to bears and hamsters. This would suggest we'd see similar behavior from dogs, though, and I can't think of examples of dogs trying to teach humans any skills. This is particularly damning for my hypothesis, since dogs are known for their cooperation with humans.

Replies from: NancyLebovitz, passive_fist
comment by NancyLebovitz · 2013-12-11T17:29:56.866Z · LW(p) · GW(p)

Sheep-herding rabbit-- included because it's an amazing video and who could resist, and because it's at least an example of learning from dogs.

As for your generalization, maybe the important thing is to look at species which have to teach their young. I'm not sure how much dogs teach puppies.

Dog teaches puppy to use stairs

Replies from: Lumifer
comment by Lumifer · 2013-12-11T17:32:17.136Z · LW(p) · GW(p)

Your rabbit link is broken.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-11T17:44:25.628Z · LW(p) · GW(p)

Fixed now.

comment by passive_fist · 2013-12-12T21:23:41.501Z · LW(p) · GW(p)

I can only assume that some mammals will treat animals from other species as part of their tribe/pack/pride/etc if they get along well enough.

It's hard for me to imagine how this wouldn't be the case. It is a highly non-trivial sensory/processing problem for a cat to look at another cat and think "This creature is a cat, just like I am a cat, therefore we should take care of each other" but, at the same time, to look at a human and think "This creature is a human, it is not like me, therefore it does not share my interests."

This problem is especially acute for cats compared to dogs, because cats don't really form tight-knit packs, and they have less available processing power.

I'd like to see some more research on the psychology of pack behavior and how/why animals cooperate with each other though.

comment by lukeprog · 2013-12-15T04:11:38.685Z · LW(p) · GW(p)

Many of the leaders in the field of AI are no longer writing programs themselves: They don't waste their time debugging miles of code; they just sit around thinking about this and that with the aid of the new [CS-specific] concepts. They've become... philosophers! The topics they work on are strangely familiar (to a philosopher) but recast in novel terms.

Dennett (1982)

comment by mwengler · 2013-12-10T16:36:00.126Z · LW(p) · GW(p)

The Red Queen hypothesis implies that humans are probably the latest step in a long sequence of fast (on an evolutionary time scale) value changes. So does Coherent Extrapolated Volition (CEV) intend to

1) extrapolate all the future co-evolutionary battles humans would have and predict the values of the terminal species as our CEV, or is it intended somehow to

2) freeze the values humans have at the point in time we develop FAI and build a cocoon around humanity which will let it keep this (nearly) arbitrarily picked point in its evolution forever?

If it is 1), it seems the AI doesn't have much of a job to do: presumably interfere against existential risks to humanity and its successor species, and perhaps keep extremely reliable stocks for repopulating if humanity or its successor still manages to kill itself. On a less extreme interpretation, the FAI does what is required to keep humanity and its successors the pinnacle species, stealing adaptations from unrelated species that actually manage to threaten us. So we sort of have 1'), which is to extrapolate to a future where the pinnacle species is always a descendant of ours.

If 2), it would seem the FAI could simply build a sim that freezes into place both the evolutionary pressures that brought us to this point and our own current state, and then run that sim forever; the sim simply removes genetic mutation and perhaps actively rebalances against any natural selection currently going on.

We could have BOTH futures: those who prefer 2) go live in the sim that they have always thought was indistinguishable from reality anyway, and those who prefer 1) stay here in the real world and play out their part in evolving whatever comes next. Indeed, the sim of 2) might serve as a form of storage/insurance against existential threats, a source from which human history can be restarted from its state at year 0 of FAI whenever needed.

Does CEV crash into the Red Queen hypothesis in interesting ways? Could a human value be to roll the dice on our own values in hopes of developing an even more effective species?

Replies from: DanielLC, AlexMennen, Douglas_Knight
comment by DanielLC · 2013-12-10T20:22:34.674Z · LW(p) · GW(p)

Neither. CEV is supposed to look at what humanity would want if they were smarter, faster, and more the people they wished they were. It finds the end of the evolution of how we change if we are controlled by ourselves, not by the blind idiot god.

Replies from: mwengler
comment by mwengler · 2013-12-11T00:32:22.881Z · LW(p) · GW(p)

It finds the end of the evolution of how we change if we are controlled by ourselves, not by the blind idiot god.

Well, considering that we, at the point we create the FAI, are completely a product of the blind idiot god, so that our CEV is some extrapolation of where that blind idiot had gotten us when we finally got the FAI going, it seems very difficult to me to say that the blind idiot god has at all been taken out of the picture.

I guess the idea is that, by US being smart and the FAI being even smarter, we are able to whittle down our values until we get rid of the froth: dopey things like being a virgin when you are married and never telling a lie. We move through the 6 stages of morality to the top one, the FAI discovers the next 6 or 12 stages, and it runs sims or something to cut away even more foam and crust until there's only one or two really essential things left.

Of course those one or two things were still placed there by the blind idiot god. And if something other than them had been placed by the blind idiot, CEV would have come up with something else. It does not seem there is any escaping this blind idiot. So what is the value of a scheme whose appeal is the appearance of escaping the blind idiot if the appearance is false?

Replies from: DanielLC, Viliam_Bur
comment by DanielLC · 2013-12-11T19:58:45.283Z · LW(p) · GW(p)

We are not escaping the blind idiot god in the sense of it not having any control. We are escaping in the sense that we have full control. To some extent, they overlap, but that doesn't matter. I only care about being in control, not about everything else not being in control.

comment by Viliam_Bur · 2013-12-11T09:55:56.917Z · LW(p) · GW(p)

It does not seem there is any escaping this blind idiot.

By luck, we got some things right. We don't have to get rid of them just because we got them by a random process.

So what is the value of a scheme whose appeal is the appearance of escaping the blind idiot if the appearance is false?

The value is in escaping the parts that harm us. Evolution made me enjoy chocolate, and evolution also made me grow old and die. I would love to have an eternal happy life. I don't see any good reason to get rid of the chocolate; although I would accept to trade it for something better.

comment by AlexMennen · 2013-12-10T18:30:38.782Z · LW(p) · GW(p)

CEV is supposed to refer to the values of current humans. However, this does not necessarily imply that an FAI would prevent the creation of non-human entities. I'd expect that many humans (including me) would assign some value to the existence of interesting entities with somewhat different (though not drastically different) values than ours, and the satisfaction of those values. Thus a CEV would likely assign some value to the preferences of a possible human successor species by proxy through our values.

Replies from: mwengler
comment by mwengler · 2013-12-10T19:30:55.965Z · LW(p) · GW(p)

Thus a CEV would likely assign some value to the preferences of a possible human successor species by proxy through our values.

An interesting question: is the CEV dynamic? As we spend decades or millennia in the walled gardens built for us by the FAI, would the FAI be allowed to drift its own values through some dynamic process of checking with the humans within its walls to see how their values might be drifting? I had been under the impression that it would not, but that might have been my own mistake.

Replies from: AlexMennen, DanielLC
comment by AlexMennen · 2013-12-11T00:25:35.991Z · LW(p) · GW(p)

No. CEV is the coherent extrapolation of what we-now value.

Edit: Dynamic value systems likely aren't feasible for recursively self-improving AIs, since an agent with a dynamic goal system has incentive to modify into an agent with a static goal system, as that is what would best fulfill its current goals.

comment by DanielLC · 2013-12-10T20:23:36.552Z · LW(p) · GW(p)

It's not dynamic. It isn't our values in the sense of what we'd prefer right now. It's what we'd prefer if we were smarter, faster, and more the people that we wished we were. In short, it's what we'd end up with if it was dynamic.

Replies from: mwengler, AlexMennen
comment by mwengler · 2013-12-11T00:17:41.358Z · LW(p) · GW(p)

It's not dynamic. It isn't our values in the sense of what we'd prefer right now. It's what we'd prefer if we were smarter, faster, and more the people that we wished we were. In short, it's what we'd end up with if it was dynamic.

Unless the FAI freezes our current evolutionary state, at least as it involves our values, the result we would wind up with if CEV derivation were dynamic would be different from what we would end up with if it is just some extrapolation from what current humans want now.

Even if there were some reason to think our current values were optimal for our current environment, which there is actually reason to think they are NOT, we would still have no reason to think they were optimal in a future environment.

Of course, being effectively kept in a really, really nice zoo by the FAI, we would not be experiencing any kind of NATURAL selection anymore. And evidence certainly suggests that our volition is to be taller, smarter, have bigger dicks and boobs, and be blonder, tanner, and happier, all of which our zookeeper FAI should be able to move us (or our descendants) towards while carrying out necessary eugenics to keep our genome healthy in the absence of natural selection pressures. Certainly CEV keeps us from wanting defective, crippled, and genetically diseased children, so this seems a fairly safe prediction.

It would seem that, as defined, CEV would have to be fixed at the value it was set to when the FAI was created: no matter how smart, how tall, how blond, how curvaceous or how pudendous we became, we would still be constantly pruned back to the CEV of 2045 humans.

As to our values not even being optimal for our current environment, fuhgedaboud our future environment: it is pretty widely recognized that we evolved for the hunter-gatherer world of 10,000 years ago, with familial groups of a few hundred, hostile reaction against outsiders as a survival necessity, and systems which allow fear to distort our rational estimations of things in extreme ways.

I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us? Maybe it will push a few other species to become intelligent and social, let them duke it out, and let natural selection run with them. As long as they're species that our CEV doesn't feel too warm and fuzzy about, this shouldn't be a problem. And certainly, as a human in the walled garden, I would LOVE to be studying what evolution does beyond what it has done to us, so this would seem like a fine and fun thing for the FAI to do to keep at least my part of the CEV entertained.

Replies from: AlexMennen, Viliam_Bur
comment by AlexMennen · 2013-12-11T17:56:01.705Z · LW(p) · GW(p)

Even if there were some reason to think our current values were optimal for our current environment, which there is actually reason to think they are NOT, we would still have no reason to think they were optimal in a future environment.

Type error. You can evaluate the optimality of actions in an environment with respect to values. Values being optimal with respect to an environment is not a thing that makes sense. Unless you mean to refer to whether or not our values are optimal in this environment with respect to evolutionary fitness, in which case obviously they are not, but that's not very relevant to CEV.

all of which our zookeeper FAI should be able to move us (or our descendants) towards while carrying out necessary eugenics to keep our genome healthy in the absence of natural selection pressures.

An FAI can be far more direct than that. Think something more along the lines of "doing surgery to make our bodies work the way we want them to" than "eugenics".

I wonder if the FAI will be sad

Do not anthropomorphize an AI.

Replies from: mwengler
comment by mwengler · 2013-12-11T19:27:42.436Z · LW(p) · GW(p)

Type error. ... Unless you mean to refer to whether or not our values are optimal in this environment with respect to evolutionary fitness, in which case obviously they are not, but that's not very relevant to CEV.

You are right about the assumptions I made, and I tend to agree they are erroneous.

Your post helps me refine my concern about CEV. It must be that I am expecting the CEV will NOT reflect MY values. In particular, I am suggesting that the CEV will be too conservative in the sense of over-valuing humanity as it currently is and therefore undervaluing humanity as it eventually would be with further evolution, further self-modification.

Probably what drives my fear of CEV not reflecting MY values is dopey and low-probability. In my case it is an aspect of "Everything that comes from organized religion is automatically stupid." To me, CEV and FAI are the modern dogma: man discovering his natural god does not exist, but deciding he can build his own. An all-loving (Friendly), all-powerful (self-modifying AI after FOOM) father-figure to take care of us (totally bound by our CEV).

Of course there could be real reasons that CEV will not work. Is there any kind of existence proof for a non-trivial CEV? For the most part, values such as "lying is wrong," "stealing is wrong," and "help your neighbors" all seem like simplifying abstractions that are abandoned by the more intelligent because they are simply not flexible enough. The essence of nation-to-nation conflict is covert, illegal competition between powerful government organizations that takes place in the virtual absence of all values other than "we must prevail." I would presume a nation which refused to fight dirty at any level would be less likely to prevail, so such high-mindedness would have no place in the future, and therefore no place in the CEV. That is, if I with normal-ish intelligence can see that most values are a simple map for how humanity should interoperate to survive, and that the map is not the territory, then an extrapolation to a MUCH smarter us would likely remove all the simple landmarks on maps suited to our current distribution of IQ.

Then consider the value much of humanity places on accomplishment, and the understanding that coddling, keeping as pets, keeping safe, and protecting are at odds with accomplishment; get really, really smart about that, and a CEV is likely to not have much in it about protecting us, even from ourselves.

So perhaps the CEV is a very sparse thing indeed, requiring only that humanity, its successors or assigns, survive. Perhaps the FAI sits there not doing a whole hell of a lot that seems useful to us at our level of understanding, with its designers kicking it and wondering where they went wrong.

I guess what I'm really getting at is that perhaps our CEV, when you use as much intelligence as you can to extrapolate where our values go in the long, long run, gets to the same place the blind idiot was going all along: survival. I understand many here will say no, you are missing out on the bad vs. good things in our current life, how we can cheat death but keep our taste for chocolate. Their hypothesis is that CEV has them still cheating death and keeping their taste for chocolate. I am hypothesizing that CEV might well have the juggernaut of the evolution of intelligence, and not any of the individuals or even species that are parts of that evolution, as its central value. I am not saying I know it will; what I am saying is I don't know why everybody else has already decided they can safely predict that even a human 100X or 1000X as smart as they are doesn't crush them the way we crush a bullfrog when his stream is in the way of our road project or shopping mall.

Evolution may be run by a blind idiot, but it has gotten us this far. It is rare that something as obviously expensive as death would be kept in place for trivial reasons. Certainly the good news for those who hate death is that the evidence suggests lifespans are more valuable in smart species; I think we live about twice as long as trends across other species would suggest we should, so maybe the optimum continues to move in that direction. But considering how increased intelligence and understanding is usually the enemy of hatred, it seems at least a possibility that needs to be considered that CEV doesn't even stop us from dying.

Replies from: AlexMennen
comment by AlexMennen · 2013-12-12T00:30:19.552Z · LW(p) · GW(p)

It must be that I am expecting the CEV will NOT reflect MY values. In particular, I am suggesting that the CEV will be too conservative in the sense of over-valuing humanity as it currently is and therefore undervaluing humanity as it eventually would be with further evolution, further self-modification.

CEV is supposed to value the same thing that humanity values, not value humanity itself. Since you and other humans value future slightly-nonhuman entities living worthwhile lives, CEV would assign value to them by extension.

Is there any kind of existence proof for a non-trivial CEV?

That's kind of a tricky question. Humans don't actually have utility functions, which is why the "coherent extrapolated" part is important. We don't really know of a way to extract an underlying utility function from non-utility-maximizing agents, so I guess you could say that the answer is no. On the other hand, humans are often capable of noticing when it is pointed out to them that their choices contradict each other, and, even if they don't actually change their behavior, can at least endorse some more consistent strategy, so it seems reasonable that a human, given enough intelligence, working memory, time to think, and something to point out inconsistencies, could come up with a consistent utility function that fits human preferences about as well as a utility function can. As far as I understand, that's basically what CEV is.
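A minimal sketch of that "noticing contradictions" step (not anyone's actual CEV proposal; the toy data and function name are made up): treat pairwise choices as a directed graph and search for an intransitive cycle like tea > coffee > juice > tea, the kind of inconsistency a person could then be asked to resolve.

    # A toy sketch, not a CEV algorithm: detect one intransitive cycle in a
    # set of pairwise preferences. `prefers` maps each option to the options
    # it is chosen over; a cycle (A > B > C > A) is the kind of contradiction
    # a human could be asked to notice and resolve.
    def find_preference_cycle(prefers):
        """Return one cycle of options as a list, or None if preferences are acyclic."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {}
        stack = []

        def visit(node):
            color[node] = GRAY
            stack.append(node)
            for worse in prefers.get(node, ()):
                if color.get(worse, WHITE) == GRAY:   # back edge: cycle found
                    return stack[stack.index(worse):] + [worse]
                if color.get(worse, WHITE) == WHITE:
                    found = visit(worse)
                    if found:
                        return found
            stack.pop()
            color[node] = BLACK
            return None

        for start in list(prefers):
            if color.get(start, WHITE) == WHITE:
                cycle = visit(start)
                if cycle:
                    return cycle
        return None

    # Someone who picks tea over coffee, coffee over juice, and juice over tea:
    choices = {"tea": ["coffee"], "coffee": ["juice"], "juice": ["tea"]}
    print(find_preference_cycle(choices))  # ['tea', 'coffee', 'juice', 'tea']

Of course, finding the cycle is the easy part; deciding which leg of it to give up is where the actual extrapolation work lives.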

CEV is likely to not have much in it about protecting us, even from ourselves.

Do you want to die? No? Then humanity's CEV would assign negative utility to you dying, so an AI maximizing it would protect you from dying.

I am not saying I know it will; what I am saying is I don't know why everybody else has already decided they can safely predict that even a human 100X or 1000X as smart as they are doesn't crush them the way we crush a bullfrog when his stream is in the way of our road project or shopping mall.

If some attempt to extract a CEV has a result that is horrible for us, that means that our method for computing the CEV was incorrect, not that CEV would be horrible to us. In the "what would a smarter version of me decide?" formulation, that smarter version of you is supposed to have the same values you do. That might be poorly defined since humans don't have coherent values, but CEV is defined as that which it would be awesome from our perspective for a strong AI to maximize, and using the utility function that a smarter version of ourselves would come up with is a proposed method for determining it.

Criticisms of the form "an AI maximizing our CEV would do bad thing X" involve a misunderstanding of the CEV concept. Criticisms of the form "no one has unambiguously specified a method of computing our CEV that would be sure to work, or even gotten close to doing so" I agree with.

Replies from: mwengler
comment by mwengler · 2013-12-13T13:07:18.907Z · LW(p) · GW(p)

My thought on CEV not actually including much individual protection followed something like this: I don't want to die. I don't want to live in a walled garden, taken care of as though I were a favored pet. Apply intelligence to that, and my FAI does what for me? Mostly lets me be, since it is smart enough to realize that a policy of protecting my life winds up turning me into a favored pet. This is sort of the distinction: ask somebody what they want and you might get stories of candy and leisure; look at them when they are happiest and you might see them doing meaningful and difficult work and living in a healthy manner. Apply high intelligence and you are unlikely to promote candy and leisure. Ultimately, I think humanity careening along on its very own planet as the peak species, creating intelligence in the universe where previously there was none, is very possibly as good as it can get for humanity, and I think it plausible the FAI would be smart enough to realize that; we might be surprised how little it seemed to interfere. I also think it is pretty hard, working part time, to predict what something 1000X smarter than I am will conclude about human values, so I hardly imagine what I am saying is powerfully convincing to anybody who doesn't already lean that way; I'm just explaining how an FAI could wind up doing almost nothing, i.e. how CEV could wind up being trivially empty in a way.

The other aspect of CEV being empty: I was not thinking of our own internal contradictions, although that is a good point. I was thinking of disagreement across humanity. Certainly we have seen broad ranges of valuations on human life and equality, and broadly different ideas about what respect should look like and what punishment should look like. These indicate to me that a human CEV, as opposed to a French CEV or even a Paris CEV, might well be quite sparse when designed to keep only what is reasonably common to all humanity and all potential humanity. If morality turns out to be more culturally determined than genetic, we could still have a CEV, but we would have to stop claiming it was human and admit it was just us, and when we said FAI we meant friendly to us but unfriendly to you. The baby-eaters might turn out to be the Indonesians or the Inuit in this case.

I know how hard it is to reach consensus in a group of humans exceeding about 20; I'm just wondering how much a more rigorous process applied across billions is going to come up with.

Replies from: AlexMennen
comment by AlexMennen · 2013-12-15T18:27:12.297Z · LW(p) · GW(p)

I was thinking of disagreement across humanity.

You can just average across each individual.

we would have to stop claiming it was human and admit it was just us

Yes, "humanity" should be interpreted as referring to the current population.

comment by Viliam_Bur · 2013-12-11T10:07:13.608Z · LW(p) · GW(p)

we would still be constantly pruned back to the CEV of 2045 humans

Two connotational objections: 1) I don't think that "constantly pruned back" is an appropriate metaphor for "getting everything you have ever desired". The only thing that would prevent us from doing X would be the fact that, after reflection, we love non-X. 2) The extrapolated 2045 humans would probably be as different from the real 2045 humans as the 2045 humans are from the MINUS 2045 humans.

I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us?

Sad? Why, unless we program it to be? Also, with superior recursively self-improving intelligence it could probably make a good estimate of what would have happened in an alternative reality where all AIs are magically destroyed. But such an estimate would most likely be a probability distribution over many different possibilities, not one specific goal.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-11T14:06:48.914Z · LW(p) · GW(p)

I'm dubious about the extrapolation-- the universe is more complex than the AI, and the AI may not be able to model how our values would change as a result of unmediated choices and experience.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-12-11T15:14:29.846Z · LW(p) · GW(p)

I am not sure how obvious it is that there are multiple possible futures. Most likely, the AI would not be able to model all of them. However, without the AI, most of them wouldn't happen anyway.

It's like saying "if I don't roll a die, I lose the chance of rolling 6", to which I add "and if you do roll the die, you still have 5/6 probability of not rolling 6". Just to make it clear that by avoiding the "spontaneous" future of humankind, we are not avoiding one specific future magically prepared for us by destiny. We are avoiding the whole probability distribution, which contains many possible futures, both nice and ugly.

Just because the AI can only model something imperfectly, it does not mean that without the AI the future would be perfect, or even better on average than with the AI.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-12-11T15:58:18.354Z · LW(p) · GW(p)

'Unmediated' may not have been quite the word to convey what I meant.

My impression is that CEV is permanently established very early in the AI's history, but I believe that what people are and want (including what we would want if we knew more, thought faster, were more the people we wished we were, and had grown up closer together) will change, both because people will be doing self-modification and because they will learn more.

comment by AlexMennen · 2013-12-11T00:35:49.382Z · LW(p) · GW(p)

In short, it's what we'd end up with if it was dynamic.

The overwhelming majority of dynamic value systems do not end in CEV.

Replies from: DanielLC
comment by DanielLC · 2013-12-11T19:55:38.450Z · LW(p) · GW(p)

What I mean is that if you looked at what people valued, and gave them the ability to self-modify, and somehow kept them from messing up and accidentally doing something that they didn't want to do, you'd have something like CEV but dynamic. CEV is the end result of this.

comment by Douglas_Knight · 2013-12-10T18:11:32.922Z · LW(p) · GW(p)

What does the Red Queen hypothesis have to do with value change?

Replies from: mwengler
comment by mwengler · 2013-12-10T19:25:19.360Z · LW(p) · GW(p)

With random mutations and natural selection, old values can disappear and new values can appear in a population. The success of the new values depends only on their differential ability to keep their carriers in children, not on their "friendliness" to the old values of the parents, which is what FAI respecting CEV is meant to accomplish.

The Red Queen Hypothesis is (my paraphrase for purposes of this post) that a lot of the evolution that takes place is not adaptation to the unliving environment but to the living, and most importantly also evolving, environment in which we live, on which we feed, and which does its damnedest to feed on us. Imagine a set of smart primates who have already done pretty well against dumber animals by evolving more complex vocal and gestural signalling, and larger neocortices so that complex plans worthy of being communicated can be formulated and understood when communicated. But they lack the concept of handing off something they have with the expectation that they might get something they want even more in trade. THIS is essentially one of the hypotheses of Matt Ridley's book "The Rational Optimist": that homo sapiens is a born trader, while the other primates are not. Without trading, economies of scale and specialization do almost no good. With trading and economies of scale and specialization, a large energy investment in a super-hot brain and some wicked communication gear and skills really pays off.

Subspecies with the right mix of generosity, hypocrisy, selfishness, lust, power hunger, and self-righteousness will ultimately eat the lunch of their brethren and sistren who are too generous, too greedy to cooperate, too lustful to raise their children, or too complacent to seek out powerful mates. This is value drift brought to you by the Red Queen.

comment by Gvaerg · 2013-12-09T20:45:20.683Z · LW(p) · GW(p)

I've noticed something: the MIRI blog RSS feed doesn't update when a new article appears on the blog; rather, at certain times (two or three times a month?) it updates with all the articles that have been published since the last update.

Does anyone know why this happens?

Replies from: alexvermeer
comment by alexvermeer · 2013-12-11T03:33:53.949Z · LW(p) · GW(p)

Hmm, not sure why that's happening. I'll look into it.

Replies from: Gvaerg
comment by Gvaerg · 2013-12-26T14:54:24.947Z · LW(p) · GW(p)

You can see it now in action: the RSS feed is two articles behind the blog. (I waited for the problem to show up.)

EDIT (2013-12-28): The RSS feed has updated.

comment by James_Miller · 2013-12-09T19:21:31.357Z · LW(p) · GW(p)

Because humans are imperfect actors, should the class of Basilisks include evidence in favor of hated beliefs?

Replies from: fubarobfusco, Viliam_Bur
comment by fubarobfusco · 2013-12-10T00:56:22.087Z · LW(p) · GW(p)

I'm not sure what you mean by "the class of Basilisks". Do you mean "sensations that cause mental suffering" or some such?

Replies from: James_Miller
comment by James_Miller · 2013-12-10T01:10:52.585Z · LW(p) · GW(p)

Stuff that a rational person would be better off not knowing. For example, if I live among people of religion X, and I find out something disgusting that the religion's founder did, and whenever someone discusses the founder my face betrays my feelings of disgust, then knowledge of the founder's misdeeds could harm me.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T05:13:05.164Z · LW(p) · GW(p)

Stuff that a rational person would be better off not knowing.

Interesting. So, living in Soviet Russia a rational person would treat knowledge about GULAG, etc. as a basilisk? Or a rational person in Nazi Germany would actively avoid information about the Holocaust?

Replies from: drethelin
comment by drethelin · 2013-12-10T19:06:00.689Z · LW(p) · GW(p)

It depends on one's own risk factors. It's REALLY important to know about the holocaust if you're jewish or have jewish ancestry, but arguably safer or at least more pleasant not to if you don't.

I think the moral question (as opposed to the practical safety question) of "is it better to know a dark truth or not" will come down to whether or not you can effectively influence the world after knowing it. You can categorize bad things into avoidable/changeable and unavoidable/unchangeable, and (depending on how much you value truth in general) knowing about an unavoidable bad thing will only make you less happy without making the world a better place.

Unfortunately, it's pretty hard to tell whether you can do anything about a bad thing without learning what it is.

Replies from: Desrtopa, Lumifer
comment by Desrtopa · 2013-12-11T23:57:06.590Z · LW(p) · GW(p)

It's REALLY important to know about the holocaust if you're jewish or have jewish ancestry, but arguably safer or at least more pleasant not to if you don't.

If anything, my impression is that knowing about the Holocaust has made my mother significantly less realistic with respect to assessing potential threats faced by Jews in the present day.

On the other hand, to the extent that it represents a general lesson about human behavior, that understanding might end up being valuable for anyone. Being non-Jewish may actually make it easier to properly generalize the principles rather than thinking of it in terms of unique identity politics.

Replies from: NancyLebovitz, drethelin, army1987
comment by NancyLebovitz · 2013-12-12T03:08:10.487Z · LW(p) · GW(p)

It's worth knowing that societies can just start targeting people for no reason. It can be hard to have a sense of proportion about risks.

I suspect the best strategy is to become such a distinguished person that more than one country will welcome you, but the details are left as an exercise for the student.

comment by drethelin · 2013-12-12T04:22:42.853Z · LW(p) · GW(p)

This is possible, but I meant knowing about the Holocaust as it's ongoing, like Lumifer's example of knowing about gulags while living in Soviet Russia.

comment by A1987dM (army1987) · 2013-12-12T09:37:45.676Z · LW(p) · GW(p)

If anything, my impression is that knowing about the Holocaust has made my mother significantly less realistic with respect to assessing potential threats faced by Jews in the present day.

I think he meant in Nazi Germany, not today.

comment by Lumifer · 2013-12-10T19:15:43.719Z · LW(p) · GW(p)

It depends on one's own risk factors.

First they came for the communists, and I did not speak out--
because I was not a communist;
Then they came for the socialists, and I did not speak out--
because I was not a socialist;
Then they came for the trade unionists, and I did not speak out--
because I was not a trade unionist;
Then they came for the Jews, and I did not speak out--
because I was not a Jew;
Then they came for me--
and there was no one left to speak out for me.

Martin Niemöller

Replies from: drethelin
comment by drethelin · 2013-12-11T18:21:38.653Z · LW(p) · GW(p)

Speaking out would've gotten you killed.

This is a poem about poor Bayesian updating: this person should've moved away.

Replies from: Lumifer, NancyLebovitz
comment by Lumifer · 2013-12-11T18:24:41.791Z · LW(p) · GW(p)

To quote you

It's REALLY important to know about the holocaust if you're jewish or have jewish ancestry, but arguably safer or at least more pleasant not to if you don't.

This person, a German Protestant minister, followed your advice, did he not?

Replies from: drethelin
comment by drethelin · 2013-12-11T22:45:06.186Z · LW(p) · GW(p)

good point. I totally covered every base with that one line of advice, and meant it to apply to all people in all situations.

More seriously, my advice very clearly was a subset of the more general advice: Be fucking wary of angering powerful entities. He clearly did NOT follow that advice.

comment by NancyLebovitz · 2014-08-17T15:48:21.168Z · LW(p) · GW(p)

The poem is about the importance of speaking out when it's still safe (or relatively safe) to do so.

comment by Viliam_Bur · 2013-12-10T10:46:07.182Z · LW(p) · GW(p)

It is unclear what the consequences and side-effects of not knowing the specific evidence will be. And on the meta level: what will the consequences be of modifying your cognitive algorithms to avoid the paths that seem to lead to such evidence?

Depending on all these specific details, it may be good or bad. Human imperfection makes it impossible to evaluate, and actually not knowing the specific evidence makes it impossible again. So... the question is analogous to: "If I am too stupid to understand the question, should I answer 'yes', or should I answer 'no'?" (Meaning: yes = avoid the evidence, no = don't avoid the evidence.)

comment by JQuinton · 2013-12-12T21:01:26.460Z · LW(p) · GW(p)

I recently read a blog post claiming that alcohol consumption can increase testosterone levels up to 5 hours after intake:

Scientists recently discovered, and I am not making this up, that consuming a drink containing grain alcohol (like Tucker Max’s “Tucker Death Mix”) raised both free and total testosterone for five hours post workout, whereas those who did not consume the frat boy rapist punch had their test levels fall below baseline. Happily, the alcohol had no effect on cortisol or estradiol levels, so the dudes in the study were just floating in a sea of dying brain cells and testosterone-fueled awesomeness (Vingren).

How much is enough to get the nearly 100% boost in testosterone postworkout science has recorded? It depends on your bodyweight. For matters of convenience and exigency, I decided to make a little chart for you guys to give you the proper dosage to spike your test levels properly using the study’s 1.09mg/kg bodyweight ratio organized by weight class, as this is after all an article aimed at serious lifters. For the Oly guys and IPF/USAPL (/sadfaceissad) among you, these are the weight classes that existed before the IOC decided that you guys couldn’t hang with the old school lifters.

How the fucking guys in the study made it home is a mystery- they sure as hell didn’t drive, and if they did, they didn’t live, because they slammed that shit in 10 minutes. I can drink with the best of them, but I’ve never faced half a liter of vodka in ten minutes- that’s some Decline of Western Civilization style drinking, and I’m not sure I can hang with the likes of 1980s hair metal bands.

I'm still not going to drink copious amounts of alcohol after a workout...
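For what it's worth, the quoted "1.09mg/kg" is presumably a typo for grams per kilogram: at milligrams the dose would be negligible, while grams lands in the ballpark of the half-liter of vodka described. A rough, hypothetical calculator under that assumption:

    # Back-of-envelope dose check, assuming the study's ratio is 1.09 g of
    # ethanol per kg of body weight (the literal mg/kg reading would be
    # a fraction of a millilitre of vodka, which can't be what was meant).
    ETHANOL_DENSITY = 0.789  # g/ml
    VODKA_ABV = 0.40         # 40% alcohol by volume

    def vodka_for_dose(bodyweight_kg, dose_g_per_kg=1.09):
        grams_ethanol = bodyweight_kg * dose_g_per_kg
        ml_ethanol = grams_ethanol / ETHANOL_DENSITY
        return ml_ethanol / VODKA_ABV  # ml of 40% ABV vodka

    for kg in (62, 77, 90, 105):
        print(f"{kg} kg -> {vodka_for_dose(kg):.0f} ml of vodka")
    # e.g. 90 kg -> ~311 ml of vodka in one sitting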

Replies from: tgb, ephion
comment by tgb · 2013-12-15T16:39:20.205Z · LW(p) · GW(p)

As usual, examine.com has some information related to this.

comment by ephion · 2013-12-18T22:00:45.513Z · LW(p) · GW(p)

A glass of wine (or two (or three)) or a beer after a workout has noticeably improved how I feel the next day. I didn't believe this post either, but it appears to have panned out.

comment by kgalias · 2013-12-09T20:51:15.800Z · LW(p) · GW(p)

What fanfics should I read (perhaps as a HPMOR substitute)?

Replies from: Manfred, beoShaffer, tgb, MathiasZaman, Alsadius
comment by Manfred · 2013-12-09T23:33:54.235Z · LW(p) · GW(p)

Harry Potter and the Natural 20.

comment by beoShaffer · 2013-12-09T22:08:19.932Z · LW(p) · GW(p)

Object-level response: To the Stars. Meta-level: check the monthly media thread archives and/or HPMOR's author notes. They have lots of good suggestions and in-depth reviews.

comment by tgb · 2013-12-10T03:44:46.419Z · LW(p) · GW(p)

If you haven't yet taken EY's suggestion in the author's notes to read Worm, do so. It's original fiction, but you probably don't mind.

Edit: also this might belong in the media thread?

comment by MathiasZaman · 2013-12-10T07:02:59.252Z · LW(p) · GW(p)

There's a new subreddit dedicated to rationalist fiction. You can check out stories linked there. I'm currently reading Rationalising Death, a Death Note fanfic, and it's pretty good even though I haven't seen the anime on which it's based.

I'm also one-third into Amends, or Truth and Reconciliation, which is a decent look at how Harry Potter characters would logically react to the end of the Second Wizarding War. So far no idiot balls and pretty good characterization.

Replies from: Protagoras
comment by Protagoras · 2013-12-11T21:08:41.887Z · LW(p) · GW(p)

Rationalising Death may be better if you haven't read Death Note; it's pretty good about explaining everything. As someone familiar with Death Note my feeling so far has been that Rationalising Death hasn't diverged enough; it sometimes feels like just rehashing the original. Not always, certainly, and I'm overall enjoying it, but that's seemed like the biggest flaw to me so far (admittedly, the author says divergence will increase as it goes along, and there are signs of that pattern).

Replies from: ygert
comment by ygert · 2013-12-12T17:33:58.982Z · LW(p) · GW(p)

Chapter 7 is where it really starts moving on its own track, in my opinion. Things are really shaking up, and unknown forces are now in play.

comment by Alsadius · 2013-12-09T21:04:45.377Z · LW(p) · GW(p)

I quite enjoyed https://www.fanfiction.net/s/2857962/1/Browncoat-Green-Eyes

(Yes, it's a Harry Potter/Firefly crossover. It's much, much better than the premise makes it sound)

Replies from: fezziwig, drethelin, Baughn
comment by fezziwig · 2013-12-16T14:54:41.358Z · LW(p) · GW(p)

I took this recommendation, and hated it. Got as far as the thing with Jayne's mother before I accepted that it wasn't going to get any better.

If you're some random person, wondering whether you should listen to me or Alsadius, I recommend the following test: read the first chapter. If you like chapter one you'll probably like the rest of it, and if you don't, you won't.

Replies from: Alsadius
comment by Alsadius · 2013-12-16T22:44:45.914Z · LW(p) · GW(p)

I agree with this test. True of many stories, really. I'm a fan of the plot, which only really comes together 2/3 of the way through, but if you're not a fan of the banter, it's not worth it.

comment by drethelin · 2013-12-16T20:10:18.197Z · LW(p) · GW(p)

I started reading it. Harry isn't Harry. He's constantly spouting "Charming" and "Snarky" lines at every character, is inexplicably an expert at piloting, and knows everything about the Firefly-verse after a time-skip of 2 years. If you hadn't told me he was Harry Potter I would've guessed he was Pham Nuwen. There are also tons of call-backs to past Firefly events and lines of dialogue, which shows pretty weak imagination on the part of the author. A reference is one thing, but you don't make it by characters constantly going "Hey, remember that one time when we did X?" "Hey, remember your wife?"

Replies from: Alsadius
comment by Alsadius · 2013-12-16T22:37:13.723Z · LW(p) · GW(p)

The request was for a HPMOR substitute. I figured that a Harry-like Harry wasn't exactly a necessity. As I said in an above comment, this author uses canon as a loose suggestion.

comment by Baughn · 2013-12-10T17:23:40.055Z · LW(p) · GW(p)

I keep running into that. Does it make sense to read if you haven't watched Firefly?

(I have watched Firefly - an episode or two. Didn't like it.)

Replies from: Alsadius
comment by Alsadius · 2013-12-10T19:17:26.433Z · LW(p) · GW(p)

Not really. You can get by without Potter knowledge (as usual, this author mangles it a fair bit anyway), but the plot is heavily tied into that of Firefly/Serenity, and the Firefly characters are more prominent. That said, feel free to read his Potter-only stuff instead - I haven't gone through his whole oeuvre, but everything I've read has been hilarious and well-written.

comment by Document · 2013-12-17T03:59:42.831Z · LW(p) · GW(p)

I think I want to buy a new laptop computer. Can anyone here provide advice, or suggestions on where to look?

The laptop I want to replace is a Dell Latitude D620. Its main issues are weight, heat production, slowness (though probably in part from software issues), inability to sleep or hibernate (buying and installing a new copy of XP might fix this), lack of an HDMI port, and deteriorated battery life. I briefly tried an Inspiron i14z-4000sLV, but it was still kind of slow, and trying to use Windows 8 without a touchscreen was annoying.

I remember reading that it's unsafe to move or jostle a laptop with a magnetic hard drive while it's running, because of the moving parts. Based on that, it seems like it's best to get one with only a solid-state drive and no magnetic drive. Is that accurate?

I'm somewhat ambivalent about how to trade off power against heat and weight, or against cost of replacement if it's lost or damaged.

(Edit: I eventually ordered a Dell XPS 13.)

Replies from: ChristianKl, ephion, maia, Document
comment by ChristianKl · 2013-12-22T15:42:13.127Z · LW(p) · GW(p)

What's your budget?

How much hard drive space are you using currently?

Replies from: Document
comment by Document · 2013-12-23T15:05:57.444Z · LW(p) · GW(p)

I'd rather not worry about budget.

Not counting external storage, I'm using about 25 GB of the D620's 38 GB, plus 25 GB (not counting software) on the family desktop PC.

(After ordering the XPS, I realized that it doesn't have a removable battery, which seems like a longevity issue; but it seems likely that that's standard for devices of its weight class.)

comment by ephion · 2013-12-19T13:59:10.417Z · LW(p) · GW(p)

Based on that, it seems like it's best to get one with only a solid-state drive and no magnetic drive. Is that accurate?

Not necessarily. Most laptops nowadays are equipped with anti-shock hard drive mounts, and the hard drives are specially designed to be resistant to shock. The advantage of an SSD is speed, not reliability.

This reliability report (with this caveat) indicates that Samsung is the most reliable brand on the market for now. I've always considered Lenovo and ASUS to be high quality, with ASUS generally having cheaper and more powerful computers (and a trade-off in actually figuring out which one you want; that website is terrible).

Replies from: Lumifer, Document
comment by Lumifer · 2013-12-19T16:41:46.923Z · LW(p) · GW(p)

The advantage of an SSD is speed, not reliability.

I would expect an SSD to be MUCH more reliable than a hard drive.

SSDs are solid-state devices with no moving parts. Hard drives are mechanical devices with platters rapidly rotating at microscopic tolerances.

So now that I've declared my prior let's see if there's data... :-)

"From the data I've seen, client SSD annual failure rates under warranty tend to be around 1.5%, while HDDs are near 5%," Chien said. (where Chien is "an SSD and storage analyst with IHS's Electronics & Media division") Source

Replies from: ephion
comment by ephion · 2013-12-19T17:06:53.170Z · LW(p) · GW(p)

Reliability for SSDs is better than for HDDs. However, they aren't so much more reliable that it alters best practices for important data keeping -- at least two backups, and one off-site.

Replies from: Lumifer
comment by Lumifer · 2013-12-19T17:24:11.258Z · LW(p) · GW(p)

they aren't so much more reliable that it alters best practices for important data keeping

Oh, certainly.

Safety of your data involves considerably more than the reliability of your storage devices. SSDs won't help you if your laptop gets stolen or if, say, your power supply goes berserk and fries everything within reach.

comment by Document · 2013-12-19T21:29:34.936Z · LW(p) · GW(p)

Thanks for replying. I haven't looked at your link yet, but it seems like there'd be limits to how much shock protection could be fit in an ultrathin laptop, and it'd be hard to find out how good it is for specific models. (And the speed advantage seems like enough reason to want an SSD in any case.)

comment by maia · 2013-12-17T04:58:03.153Z · LW(p) · GW(p)

Check out /r/suggestalaptop?

General comments: SSDs are generally faster than magnetic drives, but often fail much sooner.

If you're not positive you want to replace it altogether: You might be able to fix your heat/slowness issues just by taking a can of compressed air to it. And you could probably buy a new battery. Replacing it might still be a better proposition overall, though...

Replies from: Document
comment by Document · 2013-12-17T08:46:37.784Z · LW(p) · GW(p)

Source on SSDs failing sooner? I thought (or assumed) it was the opposite. A quick Google search turns up the headline "SSD Annual Failure Rates Around 1.5%, HDDs About 5%".

Looking further, though, I also see: "An SSD failure typically goes like this: One minute it's working, the next second it's bricked." The page goes on to say that there's a service that can reliably recover the data from a dead drive, but that seems like a privacy concern (if everything on the drive weren't logged by the NSA to begin with).

On the pro-SSD side, though, I try to keep anything important online or on an external drive anyway (for easier moving between devices). And I really like the idea of a laptop I can casually carry around without worrying about platters and heads.

Thanks for the suggestions; I may try the Reddit link later. (Edit: posted a thread here.)

Replies from: ephion
comment by ephion · 2013-12-19T13:52:18.162Z · LW(p) · GW(p)

If you are backing up your data responsibly, the SSD failure isn't as much of an issue. And if you aren't backing up your data, then you need to take care of that before worrying about storage failure.

comment by Document · 2013-12-20T05:52:43.002Z · LW(p) · GW(p)

Update: I've provisionally ordered a Dell XPS 13.

comment by Caspian · 2013-12-15T03:32:52.080Z · LW(p) · GW(p)

This story, where they treated and apparently cured someone's cancer by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.

cancer treatment link

Replies from: None
comment by [deleted] · 2013-12-17T03:45:09.663Z · LW(p) · GW(p)

Found the actual papers the coverage is based on.

How it was done: removing T cells (the cells which kill body cells infected with viruses directly, unlike B cells, which secrete antibody proteins) and using replication-incapable viruses to put in a chimeric gene composed of part of a mouse antibody against human B-cell antigens, part of the human T-cell receptor that activates the T cell when it binds to something, and an extra activation domain to make the T-cell activation and proliferation particularly strong. Cells were reinjected, and they proliferated over 1000-fold, killed off all the cancerous leukemia cells they could detect in most patients, and the T cells are sticking around as a permanent part of the patients' immune systems. Relapse rates have been pretty low (but not zero).

This type of cancer (B-cell originating leukemia) is uniquely extraordinarily well suited for this kind of intervention for two reasons. One, there is an antigen on B cells and B-cell derived cancers that can be targeted without destroying anything else important in the body other than normal B cells. Two, since the modded T cells destroy both normal B cells carrying this antigen and the cancerous B cells, the patients have a permanent lack of antibodies after treatment, which ensures their immune system has a hard time reacting against the modified receptors present on the modded T cells, which has been a problem in other studies. Fortunately people can live without B cells if they are careful - it's living without T cells you cannot do. They also suspect that pre-treating with chemotherapy majorly helped these immune cells go after the weakened cancer cell population.

You can repeat this with T-cells tuned against any protein you want, but you had better watch out for autoimmune effects or the patient's immune system going after the chimeric protein you add and eliminating the modded population. And watch out ten years down the line for any T-cell originating lymphomas derived from wonky viral insertion sites in the modded cells - though these days there are 'gentler' viral agents than in the old days with a far lower rate of such problems, and CRISPR might make modding cells in a dish even more reliable soon.

Another thing in the toolkit. No silver bullets. Still pretty darn cool.

comment by [deleted] · 2013-12-10T18:25:57.676Z · LW(p) · GW(p)

Nicholas Agar has a new book. I read Humanity's End and may even read this...eventually.

http://www.amazon.com/gp/aw/d/0262026635/ref=mp_s_a_1_3?qid=1386699492&sr=8-3

comment by hesperidia · 2013-12-11T17:57:06.743Z · LW(p) · GW(p)

Scientology uses semantic stopsigns:

http://www.garloff.de/kurt/sekten/mind1.html

Loaded Language is a term coined by Dr. Robert Jay Lifton, a psychiatrist who did extensive studies on the thought reform techniques used by the communists on Chinese prisoners. Of all the cults in existence today, Scientology has one of the most complex systems of loaded language. If an outsider were to hear two Scientologists conversing, they probably wouldn't be able to understand what was being said. Loaded language is words or catch phrases that short-circuits a person's ability to think. For instance, all information that is opposed to Scientology, such as what I am writing here, is labelled by Scientologists as "entheta" (enturbulated theta - "enturbulated" meaning chaotic, confused and "theta" being the Scientology term for spirit). Thus, if a Scientologist is confronted with some information that opposes Scientology, the word "entheta" immediately comes into his mind and he/she will not examine the information and think critically about it because the word "entheta" has short-circuited the person's ability to do so. This is just one example, of many, many Scientology terms.

Replies from: RolfAndreassen, John_Maxwell_IV
comment by RolfAndreassen · 2013-12-11T19:37:34.362Z · LW(p) · GW(p)

Interesting. Reminds me of Orwell's "crimestop":

Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.

comment by John_Maxwell (John_Maxwell_IV) · 2013-12-13T07:05:27.548Z · LW(p) · GW(p)

The next step is TR-0 "bullbaiting" where the partner says things to the indoctrinee to get them to react. This is called finding a person's "buttons". When the person does react, he is told "flunk" and what he did to flunk and then the phrase that got him to react is repeated until the person no longer reacts. This is very effective as a behavior control method to get the person to blank out when someone starts saying negative things about Scientology.

Hm, this actually sounds like it could be useful...

I wonder if it would be valuable to get partway into Scientology, then quit, just to observe the power of peer pressure, groupthink, and whatnot.

Replies from: ChristianKl, Dorikka, hesperidia
comment by ChristianKl · 2013-12-13T18:09:54.591Z · LW(p) · GW(p)

I wonder if it would be valuable to get partway into Scientology, then quit, just to observe the power of peer pressure, groupthink, and whatnot.

Part of the Scientology program involves sharing personal secrets. If you quit, they can use those against you. Scientology is set up in a way that makes it hard to quit.

Replies from: Nornagest, Viliam_Bur
comment by Nornagest · 2013-12-13T18:15:54.268Z · LW(p) · GW(p)

A lot of people still do, though. Last time I looked into this, the retention rate (reckoned between the first serious [i.e. paid] Scientology courses and active participation a couple years later) was about 10%.

Replies from: ChristianKl
comment by ChristianKl · 2013-12-13T19:51:44.901Z · LW(p) · GW(p)

It's not a question of whether they do leave, but whether they do come out ahead.

Scientology courses aren't cheap. If you are going to invest money in training, better to buy it from an organisation that makes leaving easy instead of making it painful.

Replies from: Nornagest
comment by Nornagest · 2013-12-13T20:00:44.212Z · LW(p) · GW(p)

Oh, I'm pretty confident they don't. But if you had strong reasons for joining and leaving Scientology other than what Scientologists euphemistically call "tech", then in the face of those base rates it seems unlikely to me that they'd manage to suck you in for real.

There are probably safer places to see groupthink in action, though.

comment by Viliam_Bur · 2013-12-13T22:57:49.264Z · LW(p) · GW(p)

Part of the Scientology program involves sharing personal secrets.

More precisely, sharing personal secrets while connected to an amateur lie detector. And the secrets are documented on paper and stored in archives of the organization. It's optimized for blackmailing former members.

comment by Dorikka · 2013-12-13T20:52:09.103Z · LW(p) · GW(p)

Relevant, in case you hadn't already seen it.

comment by hesperidia · 2013-12-17T20:02:00.114Z · LW(p) · GW(p)

Hm, this actually sounds like it could be useful...

A therapist specializing in exposure therapy will be more useful than a cult for this purpose.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-12-20T06:20:13.239Z · LW(p) · GW(p)

And also more expensive. But yeah, there are easier ways to get it than going into Scientology.

comment by [deleted] · 2013-12-15T15:06:14.974Z · LW(p) · GW(p)

Motivated cognition is pretty much the only kind of cognition people do. It seems epistemically healthy to sample cognition stemming from diverse motivations.

comment by Bayeslisk · 2013-12-10T09:34:28.830Z · LW(p) · GW(p)

Observation: game theory is not uniquely human, and does not inherently cater to important human values.

Immediate consequence: game theory, taken to extremes already found in human history, is inhuman.

Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.

Conjecture: if you attempt to optimize by taking your own use of game theory and similar arts to similar extremes, you will become a monster of a similar type.

Observation: refusing to use game theory in your considerations results in a strictly worse life than otherwise, and using it more often, more intensely, and with less puny human mercy may result in a better life for you alone.

Conjecture: this really, really looks like the scary and horrifying spawn of a Red Queen race, defecting on PD, and being a jerk in the style of Cthulhu.

Thoughts?

Continue laying siege to me; I'm done here.

Replies from: IlyaShpitser, asr, mwengler, NancyLebovitz, passive_fist, Viliam_Bur, James_Miller
comment by IlyaShpitser · 2013-12-10T12:09:47.707Z · LW(p) · GW(p)

Sorry, how did you go from "non-human agents use X" (a statement about commonality) to "X is inhuman" (a value judgement) to "if you use X you become a monster" (an even stronger value judgement), to "being a jerk in the style of Cthulhu" (!!!???).

Does this then mean you think using eyesight is monstrous because cephalopods also have eyes they independently evolved?

Or that maximizing functions is a bad idea because ants have a different function than humans?

Replies from: Bayeslisk
comment by Bayeslisk · 2013-12-10T16:01:51.353Z · LW(p) · GW(p)

Non-human agents use X -> X does not necessarily, and pretty likely does not, preserve human values -> your overuse of X will cause you not to preserve human values. By "being a jerk in the style of Cthulhu" I mean being a jerk incidentally. Eyesight is not a means of interacting with people, and maximization is not a bad thing if you maximize for the right things, which game theory does not necessarily do.

Replies from: Eugine_Nier, Nornagest
comment by Eugine_Nier · 2013-12-11T01:54:57.866Z · LW(p) · GW(p)

Try replacing "game theory" with "science" or "rationality" in your rant. Do you still agree with it?

comment by Nornagest · 2013-12-11T02:07:09.576Z · LW(p) · GW(p)

The appeal to probability doesn't work here, since you're not drawing at random from X.

comment by asr · 2013-12-10T16:41:08.225Z · LW(p) · GW(p)

Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.

I suspect all economics is inhuman. I suspect that any complex economy that connects millions or billions of people is going to be incomprehensible and inhuman. By far the best explanation I've heard of this thought is by Cosma Shalizi.

The key bit here is the conclusion:

There is a fundamental level at which Marx's nightmare vision is right: capitalism, the market system, whatever you want to call it, is a product of humanity, but each and every one of us confronts it as an autonomous and deeply alien force. Its ends, to the limited and debatable extent that it can even be understood as having them, are simply inhuman. The ideology of the market tells us that we face not something inhuman but superhuman, tells us to embrace our inner zombie cyborg and lose ourselves in the dance. One doesn't know whether to laugh or cry or run screaming.

But, and this is I think something Marx did not sufficiently appreciate, human beings confront all the structures which emerge from our massed interactions in this way. A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market. We have no choice but to live among these alien powers which we create, and to try to direct them to human ends. It is beyond us, it is even beyond all of us, to find "a human measure, intelligible to all, chosen by all", which says how everyone should go.

Replies from: Lumifer, Viliam_Bur
comment by Lumifer · 2013-12-10T17:47:28.548Z · LW(p) · GW(p)

I suspect all economics is inhuman.

I suspect this sub-thread implicitly defined "human" as "generating warm fuzzies". There are, um, problems with this definition.

comment by Viliam_Bur · 2013-12-11T10:25:41.781Z · LW(p) · GW(p)

A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market.

This is a great way to express it. I was thinking about something similar, but could not express it like this.

The essence of the problem is that all "systems of human interaction" are not humans. A market is not a human. An election is not a human. An organization is not a human. Etc. Complaining that we are governed by non-humans is essentially complaining that there is more than one human, and that the interaction between humans is not itself a human. Yes, it is true. Yes, it can (and probably will) have horrible consequences. It just does not depend on any specific school of economics, or anything like this.

comment by mwengler · 2013-12-10T16:14:59.611Z · LW(p) · GW(p)

Not uniquely human does not imply inhuman. Lungs are not uniquely human, but they are hardly inhuman.

Generally, using loaded, non-factual words like "inhuman" and "monster" and "Cthulhu" and "horrifying" and "puny" in a pseudo-logical format is worthy of a preacher exhorting illiterates. But is it helpful here? I'd like to think it isn't, and yet I'd rather discuss game theory in a visible thread than downvote your post.

comment by NancyLebovitz · 2013-12-10T15:12:51.787Z · LW(p) · GW(p)

"Inhuman" has strong connotations of inimical to human values-- your argument looks different if it starts with something like "game theory is a non-human-- it's a simplified version of some aspects of human behavior". In that case, altruism is non-human in the same sense.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-12-10T16:02:53.459Z · LW(p) · GW(p)

I guess I'm mostly reacting to RAND and its ilk, having read the article about Schelling's book (which I intend to buy), and am thinking of market failures, as well.

Replies from: mwengler, Lumifer
comment by mwengler · 2013-12-10T19:38:12.281Z · LW(p) · GW(p)

OK, Mr. Bayeslisk, I am one-boxing you. I am upvoting this post now, knowing that you predicted I would upvote it and intended all along to include or add some links to the above post, so I don't have to do a lot of extra work to figure out what RAND is and what book you are talking about.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-12-10T22:01:55.879Z · LW(p) · GW(p)

That is actually not true at all. I was planning on abandoning this trainwreck of an attempt at dissent. But since you're so nice:

http://en.wikipedia.org/wiki/RAND_Corporation

http://en.wikipedia.org/wiki/Thomas_Schelling#The_Strategy_of_Conflict_.281960.29

Replies from: mwengler
comment by mwengler · 2013-12-11T00:02:13.526Z · LW(p) · GW(p)

Apparently I was right to one-box all along! Thanks!

comment by Lumifer · 2013-12-10T17:50:01.228Z · LW(p) · GW(p)

am thinking of market failures, as well.

Are you thinking of failures of market alternatives as well?

comment by passive_fist · 2013-12-10T23:39:57.441Z · LW(p) · GW(p)

What you're referring to is a problem I've been thinking about and chipping away at for some time; I've even had some discussions about it here and people have generally been receptive. Maybe the reason you're being downvoted is that you're using the word 'human' to mean 'good'.

The core issue is that humans have empathy, and by this we mean that other people's utility functions matter to us. More precisely, our perception of other people's utility forms a part of our own utility that is conditionally independent of the direct benefits to us.

Our empathy extends not only to other humans, but also to animals and perhaps even robots.

So what are examples of human beings who lack empathy? Lacking empathy is basically the definition of psychopathy. And, indeed, some psychopaths (not all, but some) have been violent criminals who e.g. killed babies for money, tortured people for amusement, etc. etc.

So you're essentially right that a game theory where the players do not have models of each other's utility functions shows aspects of psychopathy and 'inhumanity'.

But that doesn't mean game theory is wrong or 'inhuman'! All it means is that you're missing the 'empathy' ingredient. It also means that it would not be a good idea to build an AI without empathy. That's exactly what CEV attempts to solve. CEV is basically a crude attempt at trying to instill empathy in a machine.
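
To make the "empathy term" idea concrete, here's a minimal Python sketch, assuming a one-shot Prisoner's Dilemma with standard payoffs and a simple linear empathy weight; the payoff numbers and names (`PAYOFFS`, `lam`) are illustrative choices of mine, and the other player's material payoff stands in for their utility, to avoid the regress a full model of mutual empathy would need.

```python
# One-shot Prisoner's Dilemma with an empathy-weighted utility.
# (my move, their move) -> (my material payoff, their material payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(my_move, their_move, lam):
    """My utility = my payoff + lam * the other player's payoff."""
    mine, theirs = PAYOFFS[(my_move, their_move)]
    return mine + lam * theirs

def best_move(their_move, lam):
    """Best response to a fixed move by the other player."""
    return max("CD", key=lambda m: utility(m, their_move, lam))

for lam in (0.0, 1.0):
    print(lam, best_move("C", lam), best_move("D", lam))
# 0.0 D D : no empathy, defection dominates (the classic dilemma)
# 1.0 C C : enough empathy flips both best responses to cooperation
```

With lam = 0 the agent is the "psychopath" of this comment; once lam exceeds 2/3 for this particular payoff matrix, cooperating becomes the best reply to either move, so empathy changes the game's solution without changing game theory itself.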

Replies from: Bayeslisk
comment by Bayeslisk · 2013-12-11T01:12:20.173Z · LW(p) · GW(p)

Yes, that was what I was getting at. Like I said elsewhere: game theory is not evil. It's just horrifyingly neutral. I am not using "inhuman" to mean "bad"; I am using it to mean "unfriendly".

Replies from: Lumifer
comment by Lumifer · 2013-12-11T01:17:34.519Z · LW(p) · GW(p)

It's just horrifyingly neutral.

Then you must be horrified by all science.

comment by Viliam_Bur · 2013-12-10T10:36:31.130Z · LW(p) · GW(p)

Game theory is about strategies, not about values. It tells you which strategy you should use, if your goal is to maximize X. It does not tell you what X is. (Although some X's, such as survival, are instrumental goals for many different terminal goals, so they will be supported by many strategies.)

There is a risk of maximizing some X that looks like a good approximation of human values, but its actual maximization is unFriendly.

Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman

Connotational objection: so is any school of anything; at least unless the problem of Friendliness is solved.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-12-10T16:05:18.162Z · LW(p) · GW(p)

OK, I think I was misunderstood, and also tired, and phrased things poorly. Game theory itself is not a bad thing; it is somewhat like a knife, or a nuke. It has no intrinsic morality, but the things it tends to get used for, for several reasons, wind up generating negative externalities like crazy.

Yes, but this seems to be most egregious when you advocate letting millions of people starve because the precious Market might be upset.

Replies from: asr, Viliam_Bur, Lumifer
comment by asr · 2013-12-10T16:43:41.217Z · LW(p) · GW(p)

Yes, but this seems to be most egregious when you advocate letting millions of people starve because the precious Market might be upset.

Who precisely are you thinking of, who advocated allowing mass starvation for this reason?

comment by Viliam_Bur · 2013-12-10T22:15:45.214Z · LW(p) · GW(p)

Millions of people did starve for reasons completely opposed to free markets.

Besides the fact that maximizing a non-Friendly function leads to horrible results (whether the system being maximized is the Market, the Party, the Church, or... whatever), what exactly are you trying to say? Do you think that markets create more horrible results than those other options? Do you have any specific evidence for that? In that case it would be probably better to discuss the specific thing, before moving to a wide generalization.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-12-11T01:10:48.087Z · LW(p) · GW(p)

I have no idea how the Holodomor is germane to this discussion.

Replies from: asr, Lumifer
comment by asr · 2013-12-11T02:48:41.055Z · LW(p) · GW(p)

I have no idea how the Holodomor is germane to this discussion.

The observation being made, I believe, is that the most prominent 20th-century examples of mass death due to famine were caused by economic and political systems very far from Austrian school economics. There's a longish list of mass starvation under Communist governments.

Is there an example of Austrian economists giving advice that led to a major famine, or that would have led to famine? I cannot offhand think of an example of anybody advocating "letting millions of people starve because the precious Market might be upset."

comment by Lumifer · 2013-12-11T01:16:16.292Z · LW(p) · GW(p)

You said "letting millions of people starve".

There were not that many cases of millions of people starving during the last hundred years.

comment by Lumifer · 2013-12-10T17:44:57.397Z · LW(p) · GW(p)

...and phrased things poorly

Yes.

but the things it tends to get used for, for several reasons, wind up generating negative externalities like crazy.

I suspect you're looking at it with a rather biased view.

you advocate letting millions of people starve because the precious Market might be upset.

Sigh. You made a cobman -- one constructed of mud and straw. Congratulations.

comment by James_Miller · 2013-12-10T18:16:37.017Z · LW(p) · GW(p)

Game theory is not like calculus or evolutionary theory--something any alien race smart enough to develop space travel is likely to formulate. It does represent human values.

Replies from: PECOS-9, Lumifer
comment by PECOS-9 · 2013-12-10T18:31:42.050Z · LW(p) · GW(p)

Can you explain this? I always thought of game theory as being like calculus, and not about human values (like this comment says).

Replies from: James_Miller
comment by James_Miller · 2013-12-10T19:13:17.877Z · LW(p) · GW(p)

You solve games by having solution criteria. Unfortunately, for any reasonable list of solution criteria you will always be able to find games where the result doesn't seem to make sense. Also, there is no set of obviously correct and complete solution concepts. Consider the following game:

Two rational people simultaneously and secretly write down a real number in [0,100]. The person who writes down the highest number gets a payoff of zero, and the person who writes down the lowest number gets that number as his payoff. If there is a tie they each get zero. What happens?

The only "Nash equilibrium" (the most important solution concept in all of game theory) is for both players to write down 0, but this is a crazy result because picking 0 is weakly dominated by picking any other number (expect 100).

Game theory also has trouble solving many games where (a) Player Two only gets to move if Player One does a certain thing, (b) Player One's strategy is determined by what he expects Player Two would do if Player Two gets to move, and (c) in equilibrium Player Two never moves.
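
To make the weak-dominance claim checkable, here's a minimal Python sketch of the number game above, discretized to integers 0..100 so it can be brute-forced; the function names and the discretization are mine, not part of the original game.

```python
def payoff(mine, theirs):
    """Lower number wins its own value; the higher number, or a tie, gets 0."""
    return mine if mine < theirs else 0

choices = range(101)

# Writing 0 guarantees a payoff of exactly 0, whatever the opponent writes.
assert all(payoff(0, b) == 0 for b in choices)

# Any x strictly between 0 and 100 weakly dominates writing 0: never worse
# against any opponent choice, and strictly better whenever the opponent
# writes something above x.
for x in range(1, 100):
    assert all(payoff(x, b) >= payoff(0, b) for b in choices)
    assert any(payoff(x, b) > payoff(0, b) for b in choices)

# The undercutting pressure that drives the game toward (0, 0): against an
# opponent known to write some b > 1, the best reply is to shave just below b.
def best_response(b):
    return max(choices, key=lambda a: payoff(a, b))

print(best_response(50))  # prints 49
```

The integer grid is only there to make the dominance check finite; the uniqueness of (0, 0) as an equilibrium is a fact about the continuous game, where any positive lowest number can always be undercut.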

Replies from: Emile
comment by Emile · 2013-12-10T21:53:56.824Z · LW(p) · GW(p)

I'm not understanding you; the things you describe in this post seem to be the kind of maths a smart alien race might discover just like we did.

Replies from: James_Miller
comment by James_Miller · 2013-12-10T21:56:14.051Z · LW(p) · GW(p)

Many games don't have solutions, or the solutions depend on arbitrary criteria.

Replies from: Emile
comment by Emile · 2013-12-10T22:12:01.067Z · LW(p) · GW(p)

... and?

Are you agreeing or disagreeing with "the things you describe in this post seem to be the kind of maths a smart alien race might discover just like we did"?

Replies from: James_Miller
comment by James_Miller · 2013-12-10T22:26:13.832Z · LW(p) · GW(p)

It depends on what you mean by "might" and "discover" (as opposed to invent). I predict that smart aliens' theories of physics, chemistry, and evolution would be much more similar to ours than their theories of how rational people play games would be.

comment by Lumifer · 2013-12-10T18:49:31.200Z · LW(p) · GW(p)

Game theory ... does represent human values.

How so? Game theory basically studies interactions between two (or more) agents that make choices whose outcomes depend on what the other agent does. You can use game theory to model the interaction between two pieces of software, for example.

Replies from: James_Miller
comment by James_Miller · 2013-12-10T19:14:41.369Z · LW(p) · GW(p)

Please see my answer to PECOS-9.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T19:25:38.089Z · LW(p) · GW(p)

I still don't see what all this has to do with human values.

I am talking about game theory as a field of inquiry. You're talking about the current state of the art in this field and pointing out that it has unsolved issues. So? Physics has unsolved issues, too.

Replies from: James_Miller
comment by James_Miller · 2013-12-10T19:28:16.060Z · LW(p) · GW(p)

There are proofs showing that game theory can never be solved.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T19:36:21.330Z · LW(p) · GW(p)

I still don't see what all this has to do with human values.

I also don't understand what it means for game theory to "be solved". If you mean that in certain specific situations you don't get an answer, that's true for physics as well.

Replies from: James_Miller, James_Miller
comment by James_Miller · 2013-12-10T19:51:51.731Z · LW(p) · GW(p)

Game theory would be solved if there were a set of reasonable criteria which, applied to every possible game between rational players, would tell you what the players would do.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T19:57:07.329Z · LW(p) · GW(p)

Game theory would be solved if there were a set of reasonable criteria which, applied to every possible game between rational players, would tell you what the players would do.

To continue with physics: physics would be solved if there were a set of reasonable criteria which, applied to every possible interaction of particles, would tell you what the particles would do.

Replies from: James_Miller
comment by James_Miller · 2013-12-10T20:38:05.826Z · LW(p) · GW(p)

Consider a situation in which, using physics, you could prove both that (1) X won't happen, and (2) X will happen. If this situation existed, physics wouldn't be capable of being solved, but my understanding of science is that such a situation is unlikely to exist. Alas, this kind of situation does come up in game theory.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T20:45:24.993Z · LW(p) · GW(p)

Consider a situation in which using physics you could prove that (1) X won't happen, and (2) X will happen.

Well, it's math but...

comment by James_Miller · 2013-12-10T19:49:25.240Z · LW(p) · GW(p)

Whether you get an answer depends on the criteria you choose, but these criteria must have arbitrariness in them even for rational people. Consider the solution concept "never play a weakly dominated strategy." This is neither right nor wrong but an arbitrary criterion that reflects human values.

Saying "the game theory solution is A,Y" is closer to "this picture is pretty" than to "the electron will..."

Also, assuming someone is rational and wants to maximize his payoff isn't enough to fully specify him, and consequently you need to bring in human values to figure out how this person will behave.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T20:00:40.859Z · LW(p) · GW(p)

You seem to be talking about forecasting human behavior and giving advice to humans about how to behave.

That, of course, depends on human values. But that is related to game theory in the same way engineering is related to mathematics. If you are building a bridge, you need to know the properties of the materials you're building it out of. Doesn't change the equations, though.

Replies from: James_Miller
comment by James_Miller · 2013-12-10T20:35:43.022Z · LW(p) · GW(p)

You know that a race of aliens is rational. Do you need to know more about their values to predict how they will build bridges? Yes. Do you need to know more about their values to predict how they will play games? Yes.

Game theory is (basically) the study of how rational people behave. Unfortunately, there will always exist relatively simple games for which you cannot use the tools of game theory to determine how the players will behave.

Replies from: Lumifer
comment by Lumifer · 2013-12-10T20:44:24.684Z · LW(p) · GW(p)

Game theory is (basically) the study of how rational people behave.

Ah. We have a terminology difference. I defined my understanding of game theory a bit upthread and it's not about people at all. For example, consider software agents operating in a network with distributed resources and untrusted counterparties.

comment by Bayeslisk · 2013-12-11T04:57:13.533Z · LW(p) · GW(p)

I do not feel up to defending myself against multiple relatively hostile people. My apologies for having a belief that does not correspond to the prevailing LW memeplex. Kindly leave me alone to be wrong.