Open Thread, March 1-15, 2013

post by Jayson_Virissimo · 2013-03-01T12:00:44.477Z · score: 3 (4 votes) · LW · GW · Legacy · 241 comments

If it's worth saying, but not worth its own post, even in Discussion, it goes here.


Comments sorted by top scores.

comment by lukeprog · 2013-03-06T19:52:53.532Z · score: 22 (22 votes) · LW · GW

Why am I not signed up for cryonics?

Here's my model.

In most futures, everyone is simply dead.

There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.

What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.

I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.

I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.

Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.

This does, however, put me into disagreement with both Robin Hanson ("More likely than not, most folks who die today didn't have to die!") and Eliezer Yudkowsky ("Not signing up for cryonics [says that] you've stopped believing that human life, and your own life, is something of value").

comment by Elithrion · 2013-03-07T00:07:10.125Z · score: 7 (7 votes) · LW · GW

So are you saying the P(worse-than-death|revived) and the P(better-than-death|revived) probabilities are of similar magnitude? I'm having trouble imagining that. In my mind, you are most likely to be revived because the reviver feels some sort of moral obligation towards you, so the future in which this happens should, on the whole, be pretty decent. If it's a future of eternal torture, it seems much less likely that something in it will care enough to revive some cryonics patients when it could, for example, design and make a person optimised for experiencing the maximal possible amount of misery. Or, to put it differently, the very fact that something wants to revive you suggests that it cares about a very narrow set of objectives, and if it cares about that set of objectives, it's likely because they were put there with the aim of achieving a "good" outcome.

(As an aside, I'm not very averse to "worse-than-death" outcomes, so my doubts definitely do arise partially from that, but at the same time I think they are reasonable in their own right.)

comment by lukeprog · 2013-03-07T00:56:57.752Z · score: 2 (2 votes) · LW · GW

So are you saying the P(worse-than-death|revived) and the P(better-than-death|revived) probabilities are of similar magnitude?

Yes. Like, maybe the latter probability is only 10 or 100 times greater than the former probability.

comment by CarlShulman · 2013-08-14T01:27:45.173Z · score: 3 (3 votes) · LW · GW

This seems like a strangely strong aversion to bad outcomes to me. Are you taking into account that the ratio between the goodness of the best possible experiences and the badness of the worst possible experiences (per second, and per year) should be much closer to 1:1 than the ratio of the most intense per-second experiences we observe today, for reasons discussed in this post?

comment by Pablo_Stafforini · 2015-01-15T05:50:46.025Z · score: 2 (2 votes) · LW · GW

Why should we consider possible rather than actual experiences in this context? It seems that cryonics patients who are successfully revived will retain their original reward circuitry, so I don't see why we should expect their best possible experiences to be as good as their worst possible experiences are bad, given that this is not the case for current humans.

comment by CarlShulman · 2015-01-16T02:28:17.200Z · score: 2 (2 votes) · LW · GW

For some of the same reasons depressed people take drugs to elevate their mood.

comment by lukeprog · 2013-08-14T03:44:08.869Z · score: 1 (1 votes) · LW · GW

I like that post very much. I'm trying to make such an update, but it's hard to tell how much I should adjust from my intuitive impressions.

comment by hairyfigment · 2013-03-16T06:54:56.708Z · score: 1 (1 votes) · LW · GW

OK, what? When you say "worse-than-death", are you including Friendship is Optimal?

What about a variant of Hanson's future where:

  • versions of you repeatedly come into existence, do unfulfilling work for a while, and cease to exist
  • no version of you contacts any of the others
  • none of these future-selves directly contribute to changing this situation, but
  • your memories do make it into a mind that can act more freely than most or all of us today, and
  • the experiences of people like your other selves influence the values of this mind, and
  • the world stops using unhappy versions of you.

(Edited for fatigue.)

comment by lukeprog · 2013-08-12T18:59:49.118Z · score: 1 (1 votes) · LW · GW

I haven't read Friendship is Optimal, because I find it difficult to enjoy reading fiction in general.

Not sure how I feel about the described Hansonian future, actually.

comment by Synaptic · 2015-02-21T23:02:00.211Z · score: 2 (2 votes) · LW · GW

I responded to this as a post here: http://lesswrong.com/r/discussion/lw/lrf/can_we_decrease_the_risk_of_worsethandeath/

comment by MugaSofer · 2013-09-04T20:17:42.911Z · score: 1 (1 votes) · LW · GW

This does, however, put me into disagreement with both Robin Hanson ("More likely than not, most folks who die today didn't have to die!") and Eliezer Yudkowsky ("Not signing up for cryonics [says that] you've stopped believing that human life, and your own life, is something of value").

I ... don't think it does, actually. Well, the bit about "most possible futures are empty" does put you in conflict with Robin Hanson ("More likely than not, most folks who die today didn't have to die!"), I guess, but the actual thesis seems to fall into the category of Eliezer Yudkowsky's "you've stopped believing that human life, and your own life, is something of value" (after a certain point in history.)

comment by ModusPonies · 2013-03-06T23:08:06.686Z · score: 1 (5 votes) · LW · GW

I'm not very averse to death.

Whoa. What? I notice that I am confused. Requesting additional information.

Most of the time, if I read something like that, I'd assume it was merely false—empty posturing from someone who didn't understand the implications of what they were writing. In this case, though... everything else I've seen you write is coherent and precise. I'm inclined to believe your words literally, in which case either A) I'm missing some sort of context or qualifiers or B) you really ought to see a therapist or something.

Do you mean you're not averse to death decades from now? Does that feel different from the possibility of getting hit by a bus next week?

(Only tangentially related, but I'm curious: what's your order of magnitude probability estimate that cryonics would actually work?)

comment by ArisKatsaris · 2013-03-06T23:41:52.648Z · score: 16 (16 votes) · LW · GW

you really ought to see a therapist or something.

No, I'm sorry, but there are simply many atheists who really aren't that scared of non-existence. We don't seek it out, and we do prefer the continuation of our lives and their many joys, but dying doesn't scare the hell out of us either.

This, in me at least, has nothing to do with depression or anything that requires therapy. I'm not suicidal in the least, even though I'd be scared of being trapped in an SF-style dystopia that didn't allow me to commit suicide.

comment by [deleted] · 2013-03-07T15:05:25.511Z · score: 2 (2 votes) · LW · GW

What's that quote that says something to the nature of "I didn't exist for billions of years before I was born, and it didn't bother me one bit" ?

comment by tut · 2013-03-07T17:31:27.725Z · score: 6 (6 votes) · LW · GW

“I do not fear death. I had been dead for billions and billions of years before I was born, and had not suffered the slightest inconvenience from it.” ― Mark Twain

comment by MugaSofer · 2013-09-04T20:19:47.354Z · score: -2 (4 votes) · LW · GW

No, I'm sorry, but there are simply many atheists who really aren't that scared of non-existence.

The difference being that those are biased, whereas lukeprog would be expected to see through once the true rejection was addressed, which it has been.

I assume. I am not any of the participants in this conversation.

comment by lukeprog · 2013-03-07T00:53:24.254Z · score: 2 (2 votes) · LW · GW

Whoa. What?

Sorry, I just meant that I seem to be less averse to death than other people. I'd be very sad to die and not have the chance to achieve my goals, but I'm not as terrified of death as many people seem to be. I've clarified the original comment.

comment by James_Miller · 2013-12-24T17:51:15.398Z · score: 0 (0 votes) · LW · GW

In most futures, everyone is simply dead.

If there is a high probability of these bad futures happening before you retire, this belief reduces the cost of cryonics to you, in terms of the opportunity cost of putting that money into retirement accounts instead.

In the really bad futures you probably don't experience extra suffering if you sign up for cryonics because all possible types of human minds get simulated.

comment by CellBioGuy · 2013-03-01T12:43:51.694Z · score: 17 (17 votes) · LW · GW

A new comet from the Oort cloud, >10 km wide, has been discovered that will make a flyby of Mars in October 2014. The current orbit is rather uncertain, but it will probably pass within 100,000 km, and the maximum-likelihood close approach is ~35,000 km. There is a tiny but non-negligible chance this thing could actually hit the red planet, in which case we would get to witness an event on the same order of magnitude as the K-T event that killed off the non-avian dinosaurs! (And lose everything we have on the surface of the planet and in orbit.)

I, for one, hope it hits. That would not be a once-in-a-lifetime opportunity. That would be a ONCE IN THE HISTORY OF HOMINID LIFE opportunity! We would get to observe a large impact on a terrestrial body as it happened, and watch the aftermath play out for decades!

As is, the most likely situation though is one in which we get to closely sample and observe the comet with everything we have in orbit around Mars. The orbit will be nailed down better in a few months when the comet comes out from the other side of the sun.

And to quote myself towards the end of the last open thread:

I don't know if this has been brought up around here before, but the B612 Foundation is planning to launch an infrared space telescope into a Venus-like orbit around 2017. It will be able to detect nearly every Earth-crossing rock larger than 150 meters wide, and a significant fraction of those down to 30-ish meters. Infrared optics looking outwards make it much easier to see the warm rocks against the black of space without interference from the sun, and would quickly increase the number of known near-Earth objects by two orders of magnitude. This is exactly the mission I've been wishing for, and occasionally agitating for NASA to get off their behinds and do, for five years. They've got a contract with Ball Aerospace to build the spacecraft and plan to launch on a Falcon 9 rocket. And they accept donations.

comment by gwern · 2013-03-01T17:29:31.668Z · score: 10 (10 votes) · LW · GW

I saw a mention of that elsewhere, but I didn't realize that the core had a lower bound of 10km. Wow. I really hope it impacts too; we saw some chatter about the need for a space guard with a dinky little thing hitting Chelyabinsk, but imagine the effect of watching a dinosaur-killer hit Mars!

comment by CellBioGuy · 2013-03-01T22:08:06.638Z · score: 1 (1 votes) · LW · GW

For future reference, the JPL small body database entry on the comet:

http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=C%2F2013%20A1;orb=1;cov=0;log=0;cad=1;rad=0#cad

Different sources seem to have different orbital calculations; this one indicates a most likely close approach of ~100,000 kilometers, with the uncertainty wide enough to include a close approach of 0 km.

If nothing else, we very well may get pictures from the surface rovers of the head of a comet literally filling the sky.

comment by Thomas · 2013-03-02T11:53:09.621Z · score: 2 (2 votes) · LW · GW

I am flabbergasted, I have no explanation for this situation.

If this comet is really that big and its flyby orbit is roughly as described, how frequent are such passes? If one occurs every thousand years, there have been about 60,000 of them since the K-T event. How come we have had only one collision of this magnitude?

Maybe they are less frequent. But then how lucky are we to witness one of them right now? Too lucky, I guess.

On the other hand, if they were quite common, it looks like we have been too lucky to have had no major collision of that kind relatively recently.

Maybe I am missing something odd, like an unexpected gravitational or other effect that makes an actual collision much more difficult. Something along those lines, which would make sense, but only after careful consideration.

Maybe a planet like Mars or Earth somehow repels comets, or dodges them? Some weird effect like that?

comment by NancyLebovitz · 2013-03-02T14:34:37.641Z · score: 5 (5 votes) · LW · GW

I recommend Taleb's The Black Swan. The major premise is that people tend to underestimate the likelihood of weird events. It's not that they can predict any particular weird event; it's about the overall likelihood of weird events with large consequences.

comment by CellBioGuy · 2013-03-02T15:32:20.766Z · score: 2 (2 votes) · LW · GW

Another way of stating it in this circumstance: there are so many different things that we would consider ourselves lucky to see, or that we would notice as unusual, that even if the probability of any one of them is low, the probability that we see something isn't that low.

I second the book recommendation by the way.

comment by Thomas · 2015-05-29T08:02:33.172Z · score: 0 (0 votes) · LW · GW

Flabbergasted no more! There was no collision, of course.

Should have known it, immediately!

comment by CellBioGuy · 2013-03-02T15:25:40.677Z · score: 0 (0 votes) · LW · GW

If you are randomly shooting a rock through the solar system, "close approach of Mars within 100,000 km" is about 870 times as likely as "hitting Mars". That brings a once-in-100-million-years event (really roughly guessing based on what I know of Earth's geological history) down to the order of once in a hundred thousand years, and the proper reference class of things we would consider ourselves this lucky to see is probably more like "close approach of a large comet to a terrestrial body" rather than singling out Mars in particular. I don't know enough about the distribution of comet orbital energies (how likely parabolic orbits are to pass closer to or further from the center of the solar system) to compare the odds of close passes by the different terrestrial planets in their different orbits.
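[Editor's note: the "870 times as likely" figure above is just a ratio of cross-sectional areas. A quick check of the arithmetic, supplying Mars's equatorial radius (~3,396 km, not stated in the comment):

```python
# A rock shot randomly past a planet "passes within distance d of its center"
# if it crosses a disc of radius d, versus the planet's own disc of radius R,
# so the relative likelihood is the area ratio (d / R)^2.
MARS_RADIUS_KM = 3396   # equatorial radius of Mars (assumed figure)
approach_km = 100_000   # close-approach distance quoted in the comment

ratio = (approach_km / MARS_RADIUS_KM) ** 2
print(round(ratio))  # prints 867, i.e. roughly the "870 times" quoted above
```

The same area-ratio argument is what scales the once-in-100-million-years impact rate down to a once-in-a-hundred-thousand-years close-approach rate.]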

The gravity of a planet actually slightly increases the fraction of randomly-shot-past-them objects that hit them over just sweeping out their surface area through space, but for something with a relative velocity of 55 km/s (!) that effect is tiny.

comment by NancyLebovitz · 2013-03-02T15:40:17.745Z · score: 2 (2 votes) · LW · GW

Should we bring Shoemaker-Levy into this discussion?

comment by Thomas · 2013-03-02T18:16:09.130Z · score: 0 (0 votes) · LW · GW

If so, we are indeed very lucky to observe an event, which happens every 100 000 years or so.

OTOH, I've concluded that it is in fact less likely for a planet to be hit by a random comet than for a big massless balloon of the same size to be hit by the same comet.

Why is that? Roughly speaking, if the comet is heading toward some future geometric meeting point, the planet's own gravity will accelerate it, so the comet arrives too early and flies by. Only a very narrow set of circumstances produces an actual collision.

A bit counterintuitive, but it would explain why we have so few actual collisions despite the heavy traffic. Collisions do happen, but less often than random chance would suggest. Gravity mostly protects us.

comment by gwern · 2013-03-12T20:43:01.479Z · score: 15 (15 votes) · LW · GW

Zeo Inc is almost certainly shutting down.

Zeo users should assume the worst and take action accordingly:

  1. Update your sleep data and then export all your sleep data from the Zeo website as a CSV (the bar on the right hand side, in tiny grey text)
  2. Upgrade your Zeo with the new firmware if you have not already done so, so it will store unencrypted data which can be accessed without the Zeo website.
  3. Depending on how long you plan to use your Zeo, you may want to buy replacement headbands (~$15 each, I think you can get a year's use out of them). Amazon still stocks the original bedside unit's replacement headbands and the cellphone/mobile unit replacement headbands but who knows how many they still have?

I'm sad that they're closing down. I've run so many experiments with my Zeo, and there don't seem to be any successor devices on the horizon: all the other sleep devices I've read of are lame accelerometer-based gizmos.

comment by skjonas · 2013-03-13T16:53:28.735Z · score: 1 (1 votes) · LW · GW

I'm sad about this as well. The Zeo has been the only QS thing that I've been able to get my girlfriend to use, and it has increased her understanding of her sleep patterns dramatically.

I now look back with a twinge of anger at all the times someone told me that they track their stages of sleep too, but with their iPhone app, and "it was only a dollar."

And to be clear, you can only upgrade the firmware on the Zeo bedside unit, right?

comment by gwern · 2013-03-13T17:27:58.799Z · score: 0 (0 votes) · LW · GW

And to be clear, you can only upgrade the firmware on the Zeo bedside unit, right?

I don't know anything about the mobile unit.

comment by Qiaochu_Yuan · 2013-03-13T01:02:46.207Z · score: 1 (1 votes) · LW · GW

What about aspiring Zeo users? Is it too late to get in on this?

comment by gwern · 2013-03-13T01:09:03.244Z · score: 2 (2 votes) · LW · GW

Depends. If you know that it's shutting down, are willing to handle the data exporting yourself, and also are willing to possibly pay rising costs for a Zeo unit and replacement headbands...

I know I don't intend to stop (already bought another 3 replacement headbands on Amazon), but I've already used my Zeo for a long time and seem to be pretty unusual in how much I use it.

comment by Jayson_Virissimo · 2013-03-12T23:39:12.126Z · score: 1 (1 votes) · LW · GW

Thanks for the heads up.

comment by confusionobligation · 2013-03-31T03:48:20.148Z · score: 0 (0 votes) · LW · GW

The firmware is no longer available on their site. I tried to email them, but I got an automated response telling me that customer service is no longer responding to emails and to check the help on their site. Can anyone share the 2.6.3R firmware?

Also, Amazon is sold out of the bedside headbands. Bad timing for me - I only have one left.

comment by gwern · 2013-03-31T19:38:08.073Z · score: 0 (0 votes) · LW · GW

Can anyone share the 2.6.3R firmware?

I am not sure whether there was not some per-user customization or something, but for what it's worth, here's the copy of my firmware: http://dl.dropbox.com/u/85192141/firmware-v2.6.3R-zeo.img

Also, Amazon is sold out of the bedside headbands. Bad timing for me - I only have one left.

2 or 3 days after I went around all Paul Revere-style, I was told that Amazon had run out. So I guess they turned out to not have many at all. (I had 3 left over from previously, and bought another 3, so I figure I should be able to get at least 3 more years out of my Zeo.)

comment by [deleted] · 2013-03-01T15:19:20.959Z · score: 12 (18 votes) · LW · GW

I wanted to apologize for the post I made on Discussion yesterday. I hope one of the mods deletes it. I should have thought more carefully before posting something controversial like that. I made multiple errors in the process of writing the post. One of the biggest mistakes I made was mentioning the name of a certain organization in particular, in a way that might harm that organization.

In the future, before I post anything, I will ask myself, "Will this post raise or lower the sanity waterline?" The post I made clearly didn't really do much for the former, and could easily have contributed to the latter. For that I am filled with regret.

I have a part-time job, and I will be donating at least $150 of my income to the organization I mentioned and possibly harmed in the previous post I made.

I'm not making this comment for the purpose of gaining back karma; I'm making it because I still want to be taken seriously in this community as a rationalist. I know that this may never happen, now, but if that's the case, I can always just make another account. Less Wrong is amazing, and I like it here.

comment by Kaj_Sotala · 2013-03-01T16:55:10.887Z · score: 10 (14 votes) · LW · GW

If you're not making mistakes, you're not taking risks, and that means you're not going anywhere. The key is to make mistakes faster than the competition, so you have more chances to learn and win.

-- John W. Holt

comment by [deleted] · 2013-03-01T20:23:41.825Z · score: 3 (3 votes) · LW · GW

Agree with the first part but not (the wording of) the second part. If you know beforehand that something would be a mistake, don't be stupid.

comment by Qiaochu_Yuan · 2013-03-01T20:56:28.525Z · score: 3 (3 votes) · LW · GW

But you shouldn't necessarily trust your brain to accurately predict whether things will be mistakes.

comment by ChristianKl · 2013-03-04T16:28:55.327Z · score: 1 (1 votes) · LW · GW

The question is where you cut off. What chance of making a mistake is acceptable?

comment by [deleted] · 2013-03-01T18:25:04.066Z · score: 3 (9 votes) · LW · GW

JOHN HOLT!

*makes touchdown signal*

comment by wedrifid · 2013-03-02T01:44:47.625Z · score: 5 (7 votes) · LW · GW

I'm not making this comment for the purpose of gaining back karma; I'm making it because I still want to be taken seriously in this community as a rationalist. I know that this may never happen, now, but if that's the case, I can always just make another account.

Based on your handle I assumed you already had another account. I do suggest making another one now. There is no need to take that baggage with you---leave that kind of shit as anonymous.

comment by [deleted] · 2013-03-03T14:17:26.404Z · score: 1 (1 votes) · LW · GW

That account has been used regularly or semi-regularly for months, so despite the name it's not exactly a throwaway.

comment by [deleted] · 2013-03-02T05:00:39.500Z · score: 0 (0 votes) · LW · GW

Yes, I will be making a new account. Good idea. This is my last comment from this one.

comment by Kawoomba · 2013-03-01T17:00:48.195Z · score: 1 (5 votes) · LW · GW

Can we still send you our ... you know ... merchandise?

comment by [deleted] · 2013-03-02T12:43:40.203Z · score: 0 (0 votes) · LW · GW

In the future, before I post anything, I will ask myself, "Will this post raise or lower the sanity waterline?"

Great! I'll explicitly use that heuristic myself from now on (if I remember to).

comment by Viliam_Bur · 2013-03-02T19:53:05.086Z · score: 1 (3 votes) · LW · GW

There could be a plugin for this. Imagine that before sending a post, you have to answer a few questions, such as: "Your certainty that this post will move the sanity waterline in a positive direction".

But we are only humans. We would learn very soon to ignore it, and just check the "right" answers automatically.

Maybe it would work better if it displayed only randomly, once in a few comments. And then the given comment could be sent to reviewers, who could inflict huge negative karma if they strongly disagree with the estimate.

Or perhaps there could be an option to click "I am sure this comment is useful and harmless" when sending a comment. A comment without this option gets +1 karma on upvote and -1 on downvote; a comment with this option gets +2 on upvote and -5 on downvote. This could make people think before posting.
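[Editor's note: the asymmetry in that last proposal has a clean break-even point. A minimal sketch of the proposed rule (the function name and probabilities are illustrative, not from the comment):

```python
def expected_karma(p_up: float, flagged: bool) -> float:
    """Expected score per vote under Viliam_Bur's proposed rule:
    an ordinary comment scores +1 on an upvote and -1 on a downvote;
    a comment flagged "I am sure this comment is useful and harmless"
    scores +2 on an upvote but -5 on a downvote."""
    up, down = (2, -5) if flagged else (1, -1)
    return p_up * up + (1 - p_up) * down

# Setting the two expectations equal: 2p - 5(1-p) = p - (1-p)  =>  p = 0.8,
# so flagging only pays off if you expect upvotes more than 80% of the time.
```

That 80% threshold is what would make people think before posting: the flag is a bet you should only take on comments you are genuinely confident about.]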

comment by drethelin · 2013-03-03T03:34:46.936Z · score: 4 (4 votes) · LW · GW

I like the idea of a questionnaire that pops up randomly when making a comment, at a rate of maybe 1-10 percent. Possible example questions:

  • Do you think this comment is funny?
  • Do you think this comment is useful to the person you're responding to?
  • Do you think this comment is useful to anyone but the person you're responding to?
  • Do you think this comment will have positive karma? How much?
  • Would you make this comment to anyone's face?
  • etc

Displaying one or more of these often enough to make you think, but not so often as to be super annoying, would be fun and would provide some neat data.

On the other hand I'm sure programming it would be a bitch.

comment by [deleted] · 2013-03-03T14:41:26.025Z · score: 0 (0 votes) · LW · GW

But we are only humans. We would learn very soon to ignore it, and just check the "right" answers automatically.

There could be something preventing you from unthinkingly clicking "Yes", akin to the option in LeechBlock whereby you have to copy a code of 32/64/128 random characters before being able to change the settings. (But that might backfire, by discouraging people from posting comments even when they would be unobjectionable.)

Or perhaps there could be an option to click "I am sure this comment is useful and harmless" when sending a comment. A comment without this option gets +1 karma on upvote and -1 on downvote; a comment with this option gets +2 on upvote and -5 on downvote. This could make people think before posting.

I would love that.

comment by gwern · 2013-03-13T23:58:17.937Z · score: 10 (10 votes) · LW · GW

Google Reader is being killed 1 July 2013. Export your OPML and start searching for a new RSS reader...

comment by Emily · 2013-03-14T07:02:15.577Z · score: 0 (0 votes) · LW · GW

I finally just started using RSS feeds and it has improved my workflow dramatically. Now they're breaking my system on me?! Thanks for letting me know...

comment by Tenoke · 2013-03-14T00:03:57.161Z · score: 0 (0 votes) · LW · GW

Do you suggest any particular RSS readers?

comment by Risto_Saarelma · 2013-03-14T06:51:07.938Z · score: 0 (0 votes) · LW · GW

I'm already considering moving to email and running the whole thing on my home server.

comment by gwern · 2013-03-14T00:18:57.974Z · score: 0 (0 votes) · LW · GW

No. I've seen Newsblur and Netvibes mentioned but I've never used them. Some discussion in

comment by Tenoke · 2013-03-14T00:49:43.515Z · score: 1 (1 votes) · LW · GW

Meh, I guess we have a few months to see people's reports on the alternatives.

comment by gwern · 2013-03-14T22:02:55.970Z · score: 0 (0 votes) · LW · GW

At least in the case of NewsBlur we'll have to wait to see people's reports, since they are being hammered by all the Reader refugees.

comment by Jonathan_Graehl · 2013-03-16T00:25:57.454Z · score: 0 (0 votes) · LW · GW

I'm happy with Feedly and haven't been asked for money yet (I've only used it for 2 days).

comment by shminux · 2013-03-16T07:12:35.149Z · score: 0 (0 votes) · LW · GW

I've imported my feeds into Google Currents, since it can also be used to read regular news, not just feeds, which I do anyway. I'm trying it out now; hopefully Google will improve it if they want Reader users to stay with Google.

comment by shminux · 2013-03-18T23:32:56.959Z · score: 1 (1 votes) · LW · GW

Update: So far Google Currents sucks for feeds. Totally unintuitive layout and gestures; it does not show new feeds (or I cannot find where it does); and the formatting of several items is so poor that I give up and go to the original site. Switching back to Google Reader until something better comes along.

comment by Scott Alexander (Yvain) · 2013-03-02T03:01:10.171Z · score: 10 (10 votes) · LW · GW

I posted this in the waning days of the last open thread, but I hope no one will mind the slight repeat here.

The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.

comment by ModusPonies · 2013-03-02T21:08:24.774Z · score: 9 (9 votes) · LW · GW

A call for advice: I'm looking into cognitive behavioral therapy—specifically, I'm planning to use an online resource or a book to learn CBT methods in hopes of preventing my depression from recurring. It looks like these methods have a good chance of working, although the evidence isn't as strong as for in-person CBT. At this point, I'm trying to decide which resources to learn from. Any recommendations or anecdotes would be appreciated.

comment by torekp · 2013-03-03T16:11:44.728Z · score: 4 (4 votes) · LW · GW

My wife's a psychologist and depression is one of her specialties. Here are her recommendations:

Self-Therapy for Your Inner Critic book

Free guided meditations for "The Mindful Way Through Depression" (get some practice before using "working with difficulty" meditation): streamable or downloadable

And the associated book

Please let us know how it goes.

comment by FiftyTwo · 2013-03-04T01:01:13.414Z · score: 2 (2 votes) · LW · GW

I've had success with Introducing Cognitive Behavioural Therapy: A Practical Guide.

comment by wallowinmaya · 2013-03-06T18:32:20.488Z · score: 1 (1 votes) · LW · GW

I recommend Feeling Good by David Burns. It's a very good overview of CBT, covers all types of medication, and was also recommended by lukeprog IIRC.

comment by coffeespoons · 2013-03-03T16:43:07.578Z · score: 1 (1 votes) · LW · GW

Mind Over Mood is ace!

comment by beoShaffer · 2013-03-02T21:44:46.145Z · score: 1 (1 votes) · LW · GW

I am also interested in learning more about CBT.

comment by OpenThreadGuy · 2013-03-03T17:56:31.107Z · score: 8 (8 votes) · LW · GW

For various reasons, I cannot make open threads anymore, ever again.

comment by gwern · 2013-03-03T19:26:51.094Z · score: 7 (7 votes) · LW · GW

Message acknowledged. We appreciate your good work. And godspeed, Grognor.

El psy congroo.

comment by sixes_and_sevens · 2013-03-01T13:11:06.835Z · score: 8 (8 votes) · LW · GW

Over the past month, I have started taking melatonin supplements, instigated a new productivity system, implemented significant changes in diet, and begun a new fitness routine. February is also a month in which I anticipate changes in my mood. I find myself moderately depressed and highly irritable with no situational cause, and I have no idea which of these things, if any, is responsible.

This is not ideal.

I'd been considering breaking my calendar down into two-week blocks, and staging interventions in accordance with this. Then the restless spirit of Paul Graham sat on my shoulder and told me to turn it into an amazing web service that would let people assign themselves into self-experimental cohorts, where they're algorithmically assigned to balanced blocks so that effects of overlapping interventions can be teased apart.

I've never really gotten that into the whole Quantified Self thing, but I'd be keen to see if something like this existed already. If not, I'd consider putting such a thing together.

Any discussion/observations on this general subject?

comment by gwern · 2013-03-01T17:23:43.354Z · score: 2 (2 votes) · LW · GW

Then the restless spirit of Paul Graham sat on my shoulder and told me to turn it into an amazing web service that would let people assign themselves into self-experimental cohorts, where they're algorithmically assigned to balanced blocks so that effects of overlapping interventions can be teased apart.

So it's a web service that would spit out a random Latin square and then run ANOVA on the results for you?

I don't think I've heard of such a thing. (Most people who would follow the balanced design and understand the results are already able to do it for themselves in R/Stata/SPSS etc.) Statwing.com might have something useful, they seemed to be headed in that direction of 'making statistics easy'.
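The core computation of such a service really is small. Here is a minimal, stdlib-only Python sketch of the idea (the block sizes and simulated outcome data are illustrative assumptions, not anything an existing tool provides): generate a random Latin square to assign interventions across subjects and time blocks, then compute a one-way ANOVA F statistic by hand.

```python
import random
from statistics import mean

def random_latin_square(n, seed=None):
    """n x n Latin square: cyclic shifts of a shuffled base row, rows shuffled."""
    rng = random.Random(seed)
    base = list(range(n))
    rng.shuffle(base)
    rows = [base[i:] + base[:i] for i in range(n)]
    rng.shuffle(rows)
    return rows

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample lists."""
    grand = mean(x for g in groups for x in g)
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Assign 4 interventions across 4 subjects x 4 two-week blocks, then
# simulate outcomes where intervention t shifts the mean response by t.
square = random_latin_square(4, seed=1)
rng = random.Random(2)
by_treatment = [[] for _ in range(4)]
for s in range(4):
    for b in range(4):
        t = square[s][b]
        by_treatment[t].append(rng.gauss(10 + t, 1.0))

print(f"F(3, 12) = {one_way_anova_f(by_treatment):.2f}")
```

A real service would still need the hard parts gwern alludes to: converting the F statistic to a p-value, and getting users to actually follow the schedule.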

comment by sixes_and_sevens · 2013-03-01T17:45:18.260Z · score: 2 (2 votes) · LW · GW

I was imagining a site that would look at all the different things you're trying at the moment, look at all the things other people are trying, and give you a macro-schedule for starting them that works towards establishing cyclicality across all users.

It could also manage your micro-schedule (prompt you to take a pill, do twenty sit-ups, squirt cold water in your right ear, etc.), ask for metrics and let users log salient information and observations. Come to think of it, once that infrastructure is already in place, there's no reason you couldn't open it up as a platform for more legitimate and formal trials.

comment by gwern · 2013-03-01T17:55:00.851Z · score: 4 (4 votes) · LW · GW

Mm. So not just scheduling your own interventions but try to balance across users too... No, I don't know of anything like that. CureTogether actually got some research published, but I don't think randomization or balancing was involved. (And trying to get nootropics or self-help geeks to collectively do something is like trying to herd deaf cats into pushing wet spaghetti...)

comment by [deleted] · 2013-03-01T15:52:07.034Z · score: 1 (1 votes) · LW · GW

When I found myself depressed and irritable on a diet, it seemed to be evidence that I was hungry. Is there any food or drink that you can try consuming to stave off that feeling, while still following the diet? As an example my diet allowed me to consume unlimited amounts of unprocessed fruit, so if I felt depressed and irritable, I could eat that until I felt better, and not hurt my diet at all.

comment by sixes_and_sevens · 2013-03-01T16:26:29.551Z · score: 0 (0 votes) · LW · GW

I've ruled out hunger/low blood sugar as a simple causal factor. I imagine it's a combination of factors, but I'm annoyed at myself for implementing so many changes at once and not being able to determine efficacy or side-effects as a result.

comment by [deleted] · 2013-03-01T17:06:34.825Z · score: 0 (0 votes) · LW · GW

If you've ruled out hunger, is there anyone like a spouse, girlfriend, roommate, relative or coworker, who you meet regularly in person? I've found that they can often help you alleviate the symptoms and talk out this kind of problem to determine possible causes.

Exception: If they are themselves the cause of the problem, this may not be helpful.

This is somewhat trickier over the internet because we don't know you as well, and we can't pick up as easily on emotional cues. People who know you better are more likely to have access to background information to piece together things, and would be able to judge your reactions to proposed ideas better.

comment by sixes_and_sevens · 2013-03-01T17:27:16.215Z · score: 1 (1 votes) · LW · GW

I appreciate your concern, though the point of this post was to solicit discussion of intervention management, not my emotional problems :-)

comment by [deleted] · 2013-03-01T18:21:41.666Z · score: 0 (0 votes) · LW · GW

Yes, on looking at your original post again, I'm getting somewhat off track, sorry about that.

Trying to go back to your original topic, my experience with Quantified Self /Lifehacking style methods is quite limited and appears to have a notable correlative factor, which is social support. All of the lifehacking methods (I can think of two so far) that I used that were accompanied with support from other people currently appear to be working well. The one that I can think of that did not have the support of others didn't. That being said, that isn't much evidence.

If this is the case, then I would expect that whether or not the people who assign themselves into self-experimental cohorts get to discuss their plans/implementations with other people in their cohorts would substantially affect the results (unless you specifically had one cohort that allowed for discussion with other cohort members and one cohort that did not).

comment by Douglas_Knight · 2013-03-04T17:36:05.787Z · score: 0 (0 votes) · LW · GW

As you seem to recognize in your reply to Gwern, this probably cannot function as a stand-alone feature, but needs to sit atop a Quantified Self platform. The minimal system is one that just keeps track of your data, while making data entry easier than existing systems. The next step is to figure out what things you're tracking correspond to what things I'm tracking. This is difficult to combine with the flexibility of allowing the tracking of anything.

Why haven't you gotten into the Quantified Self thing? At the very least, they probably have better answers to this question.

comment by sixes_and_sevens · 2013-03-05T00:37:19.816Z · score: 0 (0 votes) · LW · GW

Quantified Self seems like one of those things you have to be into, and I'm just not that into it.

It seems to me that a lot of the QS-types take an almost recreational pleasure in what they're doing. I understand that. I get a similar sort of pleasure from other things, but not this. I'd like the information, but there's only so much effort I'm prepared to spend on getting it.

comment by Qiaochu_Yuan · 2013-03-12T02:57:21.614Z · score: 7 (7 votes) · LW · GW

It seems plausible to me that traditional financial advice assumes that you have traditional goals (e.g. eventually marrying, eventually owning a house, eventually raising a family, and eventually retiring). Suppose you are an aspiring effective altruist and willing to forgo one or more of these. How does that affect how closely your approach to finances should adhere to traditional financial advice?

comment by Viliam_Bur · 2013-03-12T09:18:36.032Z · score: 2 (2 votes) · LW · GW

I would say that at the beginning you have to make a choice -- will you contribute financially or personally?

If you want to contribute financially, you simply want to maximize your income, minimize your expenses, and donate the money to effective charities. (You only minimize your expenses to the level where it does not hurt your income. For example if keeping the high income requires you to have a car and expensive clothes, then the car and clothes are necessary expenses. Also you need to protect your health, including your mental health: sometimes you have to relax to avoid burning out.) Focus on your professional skills and networking.

If you want to contribute personally, you need to pay your living expenses, either from donated money, or by retiring early (the latter is probably less effective). Focus on social skills and research.

The house and family seem unnecessary (at least for the model strawman altruist).

comment by Jayson_Virissimo · 2013-03-02T10:16:16.194Z · score: 7 (9 votes) · LW · GW

I have been reading up on religious studies (yes, I ignored that generally sound advice never to study anything with the word 'studies' in the name) in order to better understand Chinese religion.

Unexpectedly, I have found the native concepts useful (perhaps even more useful) outside the realm of religion. That is to say, distinctions like universalist/particularist, conversion/heritage, and concepts like orthodoxy, orthopraxy, reification, etc. are useful for thinking about apparently "non-religious" ideologies (including, to some extent, my own).

My first instinct when hearing a claim is to try and figure out if it is true, but I fear I have been missing the point (since much of the time, the truth of the claim is irrelevant to the speaker) and instead should focus more on the function a given (stated) belief plays in the life (especially the social life) of the person making the assertion (at least, on the margin).

comment by Ritalin · 2013-03-05T12:26:14.250Z · score: 2 (2 votes) · LW · GW

Any bibliography you would like to recommend?

Also, would you care to expand on how precisely you find it useful?

comment by ChristianKl · 2013-03-04T16:56:51.535Z · score: -1 (3 votes) · LW · GW

That is to say, distinctions like universalist/particularist, conversion/heritage, and concepts like orthodoxy, orthopraxy, reification, etc... are useful for thinking about apparently "non-religious" ideologies (including, to some extent, my own).

How do you know that it's useful? What evidence do you have to support that belief in addition to feeling that it's useful?

comment by Qiaochu_Yuan · 2013-03-02T08:19:17.575Z · score: 6 (6 votes) · LW · GW

So apparently I should be somewhat concerned about dying by poisoning. Any simple tips for avoiding this? It looks like the biggest killers are painkillers and heavy recreational drugs, neither of which I take, so I might be safe.

comment by beoShaffer · 2013-03-02T18:51:37.404Z · score: 1 (1 votes) · LW · GW

Put your poisson control center on speeddial?

comment by gwern · 2013-03-03T19:27:43.257Z · score: 9 (9 votes) · LW · GW

They can't do anything but advise you to lower your lambda!

comment by gwern · 2013-03-16T02:54:56.237Z · score: 5 (5 votes) · LW · GW

I finished Coursera "Data Analysis" last night. (It started back in January.)

It's basically "applied statistics/some machine learning in R": we get a quick tour of data clean up and munging, basic stats, material on working with linear & logistic models, use of common visualization and clustering approaches, prediction with linear regression and trees and random forests, then uses of simulation such as bootstrapping.

There's a lot of material to cover, and while there's plenty of worked out examples in the lectures, I don't see anyone learning R or statistics just from this course - you should definitely have used R to some degree before (at least running some t-tests or graphs), and you will definitely benefit from already knowing what a p-value is and how you would calculate it by hand (because eg. you'll be flummoxed when the lecturer Leek works out a confidence interval 'by hand' while coding - "where does this magic value 1.96 come from?!").

On the plus side, I liked all the examples and the curriculum seems useful and well-chosen. It's a reasonable introduction to 'data science'. I think my time wasn't wasted doing this Coursera course: I'm more comfortable with some of the more advanced/exotic techniques, and picked up many R tips, some of which have come in handy already (eg. some of the data munging tips were useful in working with Touhou music data, and I've been able to replace all my homebrew Haskell multiple-correction code in various nootropics & Zeo experiments with a standard R library function p.adjust, which I had no idea existed until the lecture on multiple comparisons introduced it to me) - although as of yet I have not used bootstraps or random forests* or splines in anger. (But if anyone is thinking about doing it in the future, see my comment about the prerequisites.)
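Both stumbling blocks mentioned above are easy to demystify in any language, not just R. A stdlib-only Python sketch (the sample data are made up): the "magic" 1.96 is the standard normal's 97.5th percentile, so mean plus or minus 1.96 standard errors spans a 95% confidence interval; and Holm's step-down rule is a tiny stand-in for R's p.adjust(..., method="holm").

```python
import math
import statistics

# "Where does the magic value 1.96 come from?" It's the 97.5th percentile
# of the standard normal: 95% of the mass lies within +/- 1.96 SEs.
data = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3, 5.2, 4.7]
sample_mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(len(data))
ci = (sample_mean - 1.96 * se, sample_mean + 1.96 * se)
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")

def holm_adjust(pvals):
    """Holm step-down correction; returns adjusted p-values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the rank-th smallest p-value by (m - rank), enforcing
        # monotonicity with a running maximum, capped at 1.
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03, 0.20]))
```

(Strictly, a small-sample CI should use the t distribution rather than 1.96; the normal quantile is the approximation the lectures lean on.)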

On the negative side: like most of the other students, I think this should've been a longer course than 8 weeks and that the estimate of 5hrs/wk is misleading. The pace was very unforgiving. I was relatively well-prepared for this course, but I still wound up submitting for the second data analysis assignment a paper I think was very substandard. Why? Well, though we had two weeks or so to do it, I deliberately didn't do much work on it in the first week because in the first assignment you couldn't do a good job without the lectures from the week before the assignment was due, and I didn't want to get bushwhacked again; but in the actual week before, I got completely distracted by my Touhou music project, and so I wound up just not having the time or energy to do it. Similar things happened to a lot of other students: there was no slack or recovery time.

(There were also the usual teething problems of any new course: wrong or misleading quizzes, errors in lectures, that sort of thing. The peer review grading seems particularly poor, with the required grades being based on pretty superficial aspects of the submitted analyses.)

* EDIT: I have since employed random forests or bootstrapping in http://www.gwern.net/Weather , http://www.gwern.net/hpmor , & http://www.gwern.net/Google%20shutdowns

comment by fubarobfusco · 2013-03-01T19:23:18.030Z · score: 5 (7 votes) · LW · GW

This comment discusses information hazards, but not in much detail.

"Don't link to possible information hazards on Less Wrong without clear warning signs."
— Eliezer, in the previous open thread.

"Information hazard" is a Nick Bostrom coinage. The previous discussion of this seems to have focused on what Bostrom calls "psychological reaction hazard" — information that will make (at least some) people unhappy by thinking about it. Going through Bostrom's paper on the subject, I wonder if these other sorts of information hazards should also be avoided here:

  • Distraction hazards — addictive products, games, etc.; especially those that have been optimized to be so. Examples: Links to video games; musical earworms; discussions of addictive drug use; porn.
  • Role model hazards — discussions of people doing harmful things; bad examples that readers might imitate. Examples: Talking about suicide and thoughts leading to it; fatalistic discussion of bad habits.
  • Biasing hazards — information that amplifies existing biased beliefs. Examples skipped to avoid a distracting political discussion here.
  • Embarrassment hazard — discussions of embarrassing things happening to people in the community. Examples: Links to scandalous or distorted stories about members of the community; gossip in general.
comment by Adele_L · 2013-03-01T21:44:22.471Z · score: 10 (10 votes) · LW · GW

Another thing that seems to fit this pattern, which I have seen elsewhere, is a Trigger Warning, used before people discuss something like rape, discrimination, etc., which can remind people who have experienced those things of the event, causing additional trauma.

comment by ModusPonies · 2013-03-03T08:20:49.084Z · score: 3 (3 votes) · LW · GW

Has anyone here ever decided not to read something because it had a trigger warning? I can't imagine doing so myself, but that may be the typical mind fallacy.

EDIT: People do use the warnings. Good to know.

comment by TheOtherDave · 2013-03-03T20:38:18.435Z · score: 6 (6 votes) · LW · GW

Has anyone here ever decided not to read something because it had a trigger warning? I can't imagine doing so myself, but that may be the typical mind fallacy.

I have chosen not to consume media (including but not limited to text) because of an explicit trigger warning. Not often, though; most trigger warnings relate to topics I don't have trauma about.

More often, I have chosen to defer consuming media because of an explicit trigger warning, to a time and place when/where emotional reactions are more appropriate.

I have consumed media in the absence of such warnings that, had such a warning been present, I would have likely chosen to defer. In some cases this has had consequences I would have preferred to avoid.

comment by tut · 2013-03-03T09:03:25.592Z · score: 5 (5 votes) · LW · GW

I haven't, but I think that where trigger warnings are appropriate is in things that hurt a few people disproportionately. If something hurts everyone that reads it you shouldn't write it at all, and if it hurts no one more than it is worth, it isn't a case for trigger warnings. But if it is something that needs to be said to many people, and there is a significant group (perhaps those that have had a certain experience) who would suffer a lot from reading it, then you put a trigger warning that would be recognized by that group at the top.

TLDR If most people never care about trigger warnings, then they might work as intended.

comment by Kawoomba · 2013-03-03T10:40:26.152Z · score: -4 (6 votes) · LW · GW

Trigger warnings are stupid in general, I think they do more harm than good.

Even people who fear being negatively affected will mostly read the content, if only because forbidden fruit are the sweetest and because they are curious. The trigger warning will then already have put them in a frame of mind in which they expect a bad emotional impact of some sort - clearly predisposing them to react much worse than if there had been no trigger warning in the first place.

I concede that some people may in fact heed trigger warnings and not read the content, but an overall utility calculation would probably favor no trigger warnings at all.

comment by [deleted] · 2013-03-04T13:09:20.928Z · score: 0 (0 votes) · LW · GW

Even people who fear being negatively affected will mostly read the content, if only because forbidden fruit are the sweetest and because they are curious.

Probably, people for whom that is true (while constituting probably the majority of regular Internet users) are not the same people as those for whom trigger warnings are written. See e.g. this discussion about the relationship between the openness personality trait and the memetic analogue of parasite load.

comment by erratio · 2013-03-03T18:03:03.574Z · score: 2 (2 votes) · LW · GW

I have chosen not to Google something that I was warned would involve seeing particularly horrific images. I imagine that if said topic was put in blog post form with a trigger warning up the top, I would probably choose not to read it.

EDIT: It's probably worth adding that I adopted this policy after discovering the hard way that there are things out there I would really prefer not to see/hear about.

comment by torekp · 2013-03-03T16:18:19.207Z · score: 2 (2 votes) · LW · GW

I've decided not to listen to some radio segments because of such warnings. Similar principle.

comment by Qiaochu_Yuan · 2013-03-03T18:21:42.505Z · score: 1 (1 votes) · LW · GW

Have you had an experience that might cause you to be triggered by the kind of thing that gets trigger warnings?

comment by [deleted] · 2013-03-04T13:03:56.687Z · score: 0 (0 votes) · LW · GW

I haven't, but I have never experienced a serious trauma that I don't want to be reminded of, so I'm not the kind of person that people who write trigger warnings are thinking about.

comment by Decius · 2013-03-03T21:32:27.035Z · score: 0 (0 votes) · LW · GW

I know a person who chose not to read something (MAX Punisher #1) based on my warning of explicit sexual violence.

Anecdotal and incomplete, but most of an example case...

comment by fubarobfusco · 2013-03-01T23:38:40.243Z · score: 3 (3 votes) · LW · GW

Agreed — Bostrom's classification "psychological reaction hazard" seems like it should include "trigger" as a subset — both the original sense of "PTSD trigger" and the more general sense that seems popular today, which might be expanded as "information that will remind you of something that it hurts to be reminded of."

comment by Alejandro1 · 2013-03-01T19:31:22.946Z · score: 8 (8 votes) · LW · GW

As for distraction hazards, I have often seen links to TvTropes being posted with a warning sign, both here and on other sites. (Sometimes a plain "Warning: TvTropes link", sometimes a more teasing "Warning: do not click link unless you have hours to spare today".)

comment by David_Gerard · 2013-03-02T00:14:09.056Z · score: 2 (2 votes) · LW · GW

Or "Warning: Daily Mail" (or other sites working on the click-troll business model): linking to a site your readers may object to feeding with even a click. It's also a knowledge hazard, in that even when such sites are more right than their usual level, they still tend to be wrong.

comment by [deleted] · 2013-03-04T13:10:46.862Z · score: 1 (1 votes) · LW · GW

I wish links to Cracked.com also had a similar warning. (Well, now that I have LeechBlock installed that's no longer so much of an issue, but still.)

comment by shminux · 2013-03-01T20:21:04.989Z · score: 7 (7 votes) · LW · GW

Why stop there? Employment hazard (NSFW), Copyright hazard (link to torrent, sharing site or a paper copied from behind a paywall), Relationship hazard (picture of a gorgeous guy/girl), dieting hazard (discussion of what goes well with bacon)...

comment by fubarobfusco · 2013-03-01T23:35:05.351Z · score: 3 (3 votes) · LW · GW

Well, the ones I mentioned are drawn from Bostrom's paper (although they aren't all of his categories). Eliezer seemed to be specifically discouraging a class of psychological reaction hazards while using the more general term "information hazard" to do it; I thought to inquire into what folks thought of other classes of information hazard.

comment by [deleted] · 2013-03-11T02:23:29.612Z · score: 4 (4 votes) · LW · GW

So you guys remember soylent? I was thinking I could get similar benefits blending simple foods and adding a good multivitamin to fill in any gaps.

So I've worked on it on and off for a couple of days, and here is a shot at what a whole food soylent might contain:

http://nutritiondata.self.com/facts/recipe/2786310/2

So um if anybody wants to confirm or critique this, that would be cool

comment by Viliam_Bur · 2013-03-12T09:50:28.202Z · score: 1 (1 votes) · LW · GW

I like this approach more, because... I would be more likely to try that at home.

Most of the items are easy to buy anywhere. I would have most inconvenience getting the following: Body fortress whey protein, Jamba Juice beverage, source of life liquid gold. Could they be replaced with something more generic?

Also, eating the raw egg feels like a bad idea.

Without these ingredients, I would be very likely to try it now.

comment by [deleted] · 2013-03-12T10:51:24.564Z · score: 1 (1 votes) · LW · GW

Phone isn't letting me press edit. I'll probably cook the egg; don't want the raw whites to bind to the biotin.

comment by [deleted] · 2013-03-12T10:44:44.249Z · score: 0 (0 votes) · LW · GW

I had Body Fortress at home, and Jamba Juice was on the website. Just use some kind of wheatgrass and whey protein. Doesn't have to be Source of Life either, as long as it's high quality. I've seen Ortho Core and Orange Triad recommended on bodybuilding forums. The whole recipe is suggestions anyway. I also see no reason not to, say, use kale and raspberries instead of spinach and blueberries; maybe that will help if I get bored with the taste. I hope you keep me posted if you try this.

comment by [deleted] · 2013-03-16T19:06:52.227Z · score: 1 (1 votes) · LW · GW

I was taking a friend's word on how amazingly beneficial wheatgrass juice is, until he claimed I could get everything I needed from wheatgrass indefinitely, which seemed outright crazy. So I researched it myself and didn't find compelling evidence that it's any more beneficial than normal vegetables. I have some in my freezer so I'm going to use it, but unless you have a cheap source I don't think it's worth it, given that it tends to be expensive and taste like lawn clippings. This is embarrassing.

comment by Adele_L · 2013-03-16T14:25:41.367Z · score: 0 (0 votes) · LW · GW

Approximately how much does this cost per day? How does it taste?

comment by [deleted] · 2013-03-16T21:06:46.826Z · score: 1 (1 votes) · LW · GW

I'll let you know in a little bit by editing this to answer your question, because I haven't tried it yet

comment by [deleted] · 2013-03-16T19:01:29.305Z · score: 0 (0 votes) · LW · GW

I'll make sure to let you know when I try it in a few days

comment by [deleted] · 2013-03-10T23:56:34.973Z · score: 4 (4 votes) · LW · GW

I've just noticed that hovering the mouse pointer over a post's or comment's score now displays a balloon pop-up showing what percentage of the votes were positive. New feature, or am I just really bad at noticing black stuff appearing suddenly on my screen?

Anyway, it's pretty nice. You can, for example, upvote a comment from 0 to 1, notice that the positive vote ratio changes only by a few percent and suddenly realize that there's a war going on in there.
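The arithmetic behind spotting that "war" can be made explicit: if U and D are up- and downvotes, the score is U - D and the displayed ratio is U / (U + D), so the total is score / (2 * ratio - 1). A small Python sketch (assuming the hover shows the exact ratio; the site presumably rounds it, so real inferences are approximate):

```python
def infer_votes(score, pct_positive):
    """Recover (upvotes, downvotes) from net score and fraction positive.

    score = up - down; pct_positive = up / (up + down).
    Solving the two equations: total = score / (2 * pct_positive - 1).
    """
    if pct_positive == 0.5:
        raise ValueError("50% positive means score 0; total is indeterminate")
    total = score / (2 * pct_positive - 1)
    up = round(total * pct_positive)
    down = round(total * (1 - pct_positive))
    return up, down

# A score of 1 that is only ~55% positive hides a lot of fighting:
print(infer_votes(1, 6 / 11))  # (6, 5): eleven votes behind a net score of 1
```

A comment at +1 with 100% positive is one quiet upvote; the same +1 at 55% positive is eleven people arguing.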

comment by Qiaochu_Yuan · 2013-03-12T03:02:26.094Z · score: 2 (2 votes) · LW · GW

New feature.

comment by Dorikka · 2013-03-06T06:08:10.153Z · score: 4 (4 votes) · LW · GW

Does anyone know which of the books on the academic side of CFAR's recommended reading list are likely to be instrumentally useful to someone who's been around here a couple years and has read most of the Sequences? It seems likely that there's some useful material in there, but I'd rather avoid reviewing a bunch of stuff.

comment by erratio · 2013-03-05T03:37:13.880Z · score: 4 (4 votes) · LW · GW

Gamification of productivity: https://habitrpg.com/splash.html

I haven't signed up yet because I'm still assessing whether the overhead of filling it out is going to be too much of a trivial inconvenience, but thought some others might be interested. From poking around, it looks like it has a lot of potential but is still a little raw. It has the core game elements firmly in place but lacks the public status/accountability elements of good games (through achievements/badges) and Fitocracy (through community/public accountability).

UPDATE: signed up, will report back next month

comment by ModusPonies · 2013-03-06T03:46:38.214Z · score: 1 (1 votes) · LW · GW

I've been using it for something like a week and am finding it moderately useful. Its two big advantages are that it hijacks my pathological desire to watch my numbers go up, and the near-complete lack of customization. (When using a calendar, I have to think of when the task is due. When using beeminder, I have to think about how frequently I'll be doing the task. With this, for any possible task, there are no fiddly bits to get in the way of just shutting up and putting it in the list.) The drawbacks are the weak enforcement and the near-complete lack of customization.

comment by FiftyTwo · 2013-03-07T01:39:06.445Z · score: 0 (0 votes) · LW · GW

I've been looking for something like this for a while after success with fitocracy. (I tried to make one myself, but failed due to lack of relevant skills and interest).

Will try it for a week and report back.

comment by NancyLebovitz · 2013-03-15T18:07:37.651Z · score: 3 (3 votes) · LW · GW

Math and reading gaps between boys and girls

However, even in countries with high gender equality, sex differences in math and reading scores persisted in the 75 nations examined by a University of Missouri and University of Leeds study. Girls consistently scored higher in reading, while boys got higher scores in math, but these gaps are linked and vary with overall social and economic conditions of the nation.

comment by [deleted] · 2013-03-16T09:54:05.173Z · score: 2 (2 votes) · LW · GW

Link to original paper

comment by Ritalin · 2013-03-10T21:42:16.272Z · score: 3 (3 votes) · LW · GW

Saving the world through ECONOMICS

In a world of magic and fantasy, there exist two worlds: the Human World and the Demon World of fantasy creatures. Fifteen years ago, the "War of the Southern Kingdoms" broke out between both sides, each intending to conquer the other. Both sides were locked in a stalemate, until a young male human decided to do something about it. Known as the Hero, he is a skilled and powerful warrior who has traveled to the Demon World to end their evil by killing their leader, the Demon Queen.

But when the Hero storms the Demon Queen's castle, he is surprised to find that she doesn't want a fight. She just wants to reveal to him a sordid truth: the war has never really been about good versus evil; it's a far more complicated affair, with each side being equally good and evil all the same.

On one hand, the war helped unite erstwhile feuding kingdoms against a common enemy. On the other hand, it allowed opportunists to take advantage of their own races and get rich off the war: powerful, corrupt humans control the poor and weak, while warmongering demon clans harass pacifistic ones. Then there are the prospects should one side win: the losers get oppressed, while the winners break down into infighting over the spoils. Prematurely ending the war is an even worse idea, because so much money, time and resources have been spent on the war effort that soldiers could never get any compensation should a ceasefire be signed immediately, causing each side to break down into civil war against their former employers.

Fortunately, the Demon Queen has a better idea, and she wants the Hero's help: forge a peaceful end to the war with the least repercussions by playing behind the scenes and at the same time introduce sweeping reforms on all levels of society. Convinced, the Hero agrees to join her as they try to forge a peaceful way out, gaining allies and companions in the process.

Is anyone else watching Maoyuu Maou Yuusha, or reading the relevant novels? It's about as close to rationalist fiction as I've ever seen a commercial work be. It goes way further than the premise; a strong spirit of secular humanism is embedded into the story and its characters, and it's got some of the finest examples of a Patrick Stewart Speech I've seen this side of fantasy.

comment by gwern · 2013-03-10T22:24:38.739Z · score: 0 (0 votes) · LW · GW

I found the premise really cool, but I'm still waiting for the season to finish and the anime bloggers sum up whether it managed to deliver a good plot arc or not. (It may turn out to be one of those series where you're better off just reading the novels or whatever.)

comment by tgb · 2013-03-03T04:19:30.628Z · score: 3 (3 votes) · LW · GW

Link: This Story Stinks: article on a study showing that readers' perception of a blog post is changed when they read comments. In particular, any comments involving ad hominems or general rudeness polarize people's views. Full paper link.

comment by NancyLebovitz · 2013-03-02T03:43:02.242Z · score: 3 (3 votes) · LW · GW

I've been trying out the brain-training software from Posit science. I've definitely gotten better at some of their training material (tracking objects in a crowd of identical objects and seeing briefly shown motion), but I'm not sure whether it's improving my life.

Have any of you tried Posit's BrainHQ? If so, how has it worked out for you?

The training exercises look like they're only available as expensive software, but if you do their free exercises, they'll offer a $10/month option.

I found out about Posit from this video -- Merzenich clearly has something to sell, but nothing he said seemed like obvious nonsense.

comment by gwern · 2013-03-02T05:41:19.753Z · score: 5 (5 votes) · LW · GW

Brain training doesn't usually transfer. The Posit studies haven't been much better than any others.

comment by John_Maxwell (John_Maxwell_IV) · 2013-03-02T07:44:00.236Z · score: 2 (2 votes) · LW · GW

Even working memory training?

comment by gwern · 2013-03-02T15:49:13.656Z · score: 1 (1 votes) · LW · GW

Looks like it.

comment by Elithrion · 2013-03-02T04:54:49.892Z · score: 0 (0 votes) · LW · GW

Okay, I played most of the free exercises, and apparently I'm like the ultimate boss at spotting different birds (aka "Brain speed - Hawk eye"), never making a single mistake at even the highest available speed, and merely mediocre/slightly above average at other things. I also noticed while playing the object tracking one that what allowed me to do better is that I came up with new "algorithms" for tracking things. The first time I did it, I tracked up to three objects easily, but then failed miserably at more. After practice, I learned to imagine lines between the objects, which let me track four correctly most of the time, and five occasionally. Which, setting aside me not being that good at this one, seems like a case of other-optimising. I really doubt learning to imagine lines between objects generalises well.

So, from personal experience, I'm sceptical it's useful, but at the same time, listening to the video in the background (which may have reduced my performance on some of these), it does sound like there's some research to support this.

comment by CoffeeStain · 2013-03-11T08:50:45.847Z · score: 2 (2 votes) · LW · GW

I'm having a motivation block that I'm not sure how to get around. Basically whenever I think about performing an intellectual activity, I have a sudden negative reaction that I'm incapable of succeeding. This heavily lowers my expectation that doing these activities will pay off, most destructively so the intellectual activity of figuring out why I have these negative reactions.

In particular, I worry about my memory. I feel like it's slipping from what it used to be, and I'm only 24. It's like, if only I could keep the details of the memory tricks in my head long enough I might be able to improve it. :) Only partially kidding.

In short, it takes a lot of effort for me to feel like I'm succeeding at succeeding. And I don't know why.

comment by Qiaochu_Yuan · 2013-03-12T03:01:19.140Z · score: 3 (3 votes) · LW · GW

Specifically regarding memory, things don't need to be in your head for you to remember them. Start writing stuff down. All the stuff. Doesn't matter where. Anywhere is better than nowhere. I recommend Workflowy.

comment by Viliam_Bur · 2013-03-12T09:39:36.473Z · score: 2 (2 votes) · LW · GW

You are not specific enough about the memory. If you start forgetting your own name or something like this, you should visit a doctor. But if you only forget some details from what you learned at school, that means that you already have learned many things; so many that your day simply is not long enough to review all of them (and you also have to focus on many other different things). You have to develop the art of note taking. The more you have to know, the more critical this skill becomes. It is an illusion to try keeping everything in your head just because that strategy worked when you only knew a little.

The difficulty of succeeding may mean that you have already picked most of the low-hanging fruit. Just like in a computer game, the higher levels get more difficult. The difficulty does not mean that you are less powerful; it means that you are powerful enough to work on the more difficult tasks. Also, some tasks require time and discipline; you simply cannot master them at your first attempt.

I think you have to apply two kinds of fixes: psychological and organizational. Don't ignore either of them. It is important to make yourself feel better. And it is also important to use better tools. Without better tools your success is limited. But your mind remains the most important of your tools.

comment by CoffeeStain · 2013-03-12T23:11:19.065Z · score: 1 (1 votes) · LW · GW

Many thanks. My memory issue certainly isn't any sort of disorder, and indeed the sort of success I'd like to have with it are of a high level. There has been a decline in the last few years of my (formerly exceptional) abilities here, and I need to find ways to increase my attention to it as a graspable and controllable challenge/problem.

Generally my ability to deal with attention, focus, and memory issues correlates to my day-to-day mood and self-confidence. I've found a coach through the community here to help me find ways to combat these slightly more fundamental issues. It is good, though, to see the wide variety of talk here about improving focus, overcoming "Ugh fields," and the like.

Fundamentally, my issue is one of keeping a particular skill in practice, and so I appreciate your practical suggestions. University offers an environment that more constantly practices skills such as learning, remembering, and new-paradigm thinking. My work environment offered similar challenges for a year or so, but I've since gained an expertise that is more valuable to use than to grow.

Today I gave a presentation to a group of 50 software developers in my company, and I was pleasantly surprised at my abilities. Apparently all of my on-the-fly speaking skills (which I had presumed dead since school) were just latent, if out of practice until the adrenaline kicked them back online. This was in no small part due, I suppose, to some mental tricks I've learned here for convincing myself of my future success, based on previous successes.

Just typing for my own benefit now. Thank you very much for your advice!

comment by Viliam_Bur · 2013-03-13T09:01:28.999Z · score: 2 (2 votes) · LW · GW

Glad to be useful. In similar situations I often don't know how much the advice I would have given to myself also applies to other people.

For me, the greatest memory-related shock was about 1 year after finishing university. I found my old paper with notes for the final exam, and I realized I didn't understand half of the questions. Not only was I unable to answer them, but I had trouble finding any related associations. For the whole year in my job I was doing something completely different, and I forgot many things without even being aware that it happened. (The problem is, despite having studied computer science and working as a programmer, I never use 95-99% of what I learned at school. I know a lot of theory, I should be able to invent a new programming language and write a compiler with some basic optimization; but in real life I mostly do web interfaces for databases, over and over again.) Now I am sorry I didn't make better notes at university. But at the time, I was so proud that I understood everything. I didn't have experience with what happens when you simply never think about a topic for years. If you are 24, this may be already happening or going to happen to you, too.

A few years forward, my programming career was progressing: I wrote code for two years in Java, then seven years in something else. Then I returned to Java and was like: oh, here is the forgetting again! This time I was lucky, because I simply downloaded the official documentation, read it from the beginning to the end, and most forgotten memories returned quickly. (I didn't have the note-making skill yet, but I already had the habit of always looking at the authoritative documentation first.) But then I realized that "learning to forget" is a stupid strategy when it comes to really useful things, so I started to make notes. (First I spent a lot of time trying to find good software for that, and then ended up writing my own. Today, I would probably use some existing tool.) Now when I learn something related to programming, I immediately start writing notes. At the beginning, they are a bit chaotic, but I can always refactor them later. I tried to use the same habit in other areas of life, but somehow it didn't work. Recently I started using Anki when learning human languages. The difference is, with human languages, you need to keep it all in your head, all the time, because you never know when you will need a word. With computer languages, remembering is not necessary, only the ability to find it quickly; and it is good to have the knowledge divided by topics. I could use Google for many questions, but some topics are rather difficult to find this way (either because many people ask the question and nobody provides an answer; or when many people provide incorrect information), and I believe I can write the information in the format best legible to me.

For the mood, reminding yourself of your past successes is very good. Sometimes people don't see the forest for the trees. A great success may require a thousand days of work, and when you wake up on day #470 and you don't see any progress compared with days #469 and #468, it is easy to believe that you are not going anywhere. If you have a list of successes, and you see that every other year something great happens, that puts things into better perspective. (But it also goes the other way round. If you procrastinate, it is easy to believe that you are on the way to your next goal, when in fact you are going nowhere.)

comment by [deleted] · 2013-03-11T12:41:33.897Z · score: 1 (1 votes) · LW · GW

Reassure yourself when you flinch and celebrate even the minor successes.

comment by FiftyTwo · 2013-03-08T20:23:23.399Z · score: 2 (2 votes) · LW · GW

Apparently conscientiousness correlates strongly with a lot of positive outcomes. But unfortunately I seem to be very low on it.* Is there anything I can do to train it?

*Standard disclaimers about self assessment apply.

comment by beoShaffer · 2013-03-08T21:37:04.547Z · score: 1 (1 votes) · LW · GW

You can get actual big five tests online (see the latest LW survey for an example). The big 5 tend to be pretty stable, but putting yourself in a social group that has the trait you want is relatively effective. Also, there is a whole lot of YMMV on which one(s) to use, but organizational/productivity tools like Getting Things Done can allow you to act in usefully conscientious ways without changing your personality per se.

comment by Barry_Cotter · 2013-03-12T05:46:38.625Z · score: 0 (0 votes) · LW · GW

Yeah, me too. I found it helpful to get a relatively structured job with standards so low that my (minimal) natural levels of professionalism exceeded their requirements. Conscientiousness is really, really difficult to train, but you can move further from your current base by changing the people you hang out with or work with. Industriousness, OTOH, is trainable. Last comment I saw about this had a good paper linked.

You can do better but having low conscientiousness still blows.

comment by beoShaffer · 2013-03-12T16:59:12.740Z · score: 0 (0 votes) · LW · GW

Link missing.

comment by Barry_Cotter · 2013-03-14T04:31:44.329Z · score: 1 (1 votes) · LW · GW

No longer true. Cheers for the heads up.

comment by Elithrion · 2013-03-07T20:08:50.069Z · score: 2 (2 votes) · LW · GW

So, I notice some of the top contributors have the "Overview" page that appears when you click on their name display their LW wiki user page instead of the standard recent comments/posts summary (gwern, for example). Is that only for super-awesome people or is there some way to enable it that I failed to find?

[edit:] Okay, apparently patience is the key. It started working for me somewhere between 24 and 48 hours after I made the wiki page for my username.

comment by wedrifid · 2013-03-10T21:55:43.260Z · score: 2 (2 votes) · LW · GW

So, I notice some of the top contributors have the "Overview" page that appears when you click on their name display their LW wiki user page instead of the standard recent comments/posts summary (gwern, for example). Is that only for super-awesome people or is there some way to enable it that I failed to find?

I find this feature really damn annoying. I don't want to see people's wiki profile. If I click on the name it is because I want to see the posts and comments. It would be great if this 'feature' could be disabled.

comment by Elithrion · 2013-03-10T22:13:03.936Z · score: 1 (1 votes) · LW · GW

Aw, but I wrote something relevant on mine! (Although most people don't seem to, admittedly.) I guess it'd be ideal if there were an option to enable/disable it for yourself and also to enable/disable skipping that page and going to comments when viewing.

comment by [deleted] · 2013-03-11T13:23:57.043Z · score: 1 (1 votes) · LW · GW

Mine used to say that I was the same username on LW-wiki as on LW-Main, but I cleared it because it became redundant with this feature. Unfortunately I don't have rights to delete pages on the wiki, which is also mildly annoying for me if I want to look at my own comments.

comment by [deleted] · 2013-03-08T00:23:46.551Z · score: 2 (2 votes) · LW · GW

Make a userpage with the same name on the wiki, for example User:Gwern.

comment by Elithrion · 2013-03-08T00:54:41.175Z · score: 2 (2 votes) · LW · GW

I did! That was my first guess. Does it take some time to update or something? (It's been ~20h)

comment by Kawoomba · 2013-03-07T08:39:14.524Z · score: 2 (2 votes) · LW · GW

It occurs to me that there is a roadblock for an AI to go foom; namely that it first has to solve the same "keep my goals constant while rewriting myself" problem that MIRI is trying to solve.

Otherwise the situation would be analogous to Gandhi being offered a pill that has a chance of making him into anti-Gandhi, and declining it.

If the superhuman - but not yet foomed! - AI is not yet orders of magnitude smarter than a hoo-mon, it may be a while before it is willing to power-up / go foom, since it would not want to jeopardize its utility function along the way.

Just because it can foom does not imply it'll want to foom (because of the above).

comment by drethelin · 2013-03-07T08:55:01.369Z · score: 1 (1 votes) · LW · GW

This is interesting, though I think it's less relevant for an entity made out of readable code. In the pill situation, if Gandhi fully understood both his own biochemistry and the pill, all chance would be removed from the equation.

comment by Kawoomba · 2013-03-07T09:00:03.212Z · score: 0 (0 votes) · LW · GW

edit: More relevant reply:

A human researcher would see all of the AI's code and the "pill" (the proposed change), yet even without that element of "chance", it is still an unsolved problem to determine what the change would end up doing.

If the first human-programmed foom-able AI is not yet orders of magnitude smarter than a human - and it's doubtful it would be, given that it's still human-designed - then the AI would have no advantage in understanding its own code that the human researcher wouldn't have.

If the human researcher cannot yet solve keeping the utility function steady under modifications, why should the similar-magnitude-of-intelligence AI (both have full access to the code-base)?

Just remember that it's the not-yet-foomed AI that has to deal with these issues, before it can go weeeeeeeeeeeeeeeeKILLHUMANS (foom).

comment by palladias · 2013-03-06T07:26:32.203Z · score: 2 (2 votes) · LW · GW

I've just moved to the Bay Area, and, as I'm unsubscribing from all my DC-area theatre/lecture/fun event listservs, I am sad I don't yet know what to replace them with!

What mailing lists will tell me about theatre, lectures, book clubs, social dance, costuming, etc in Berkeley and environs?

comment by Matt_Simpson · 2013-03-04T18:44:03.527Z · score: 2 (2 votes) · LW · GW

Does anyone know if there any negative effects of drinking red bull or similar energy drinks regularly?

I typically use tea (caffeine) as my stimulant of choice on a day to day basis, but the effects aren't that large. During large Magic: the Gathering tournaments, I typically drink a red bull or two (depending on how deep into the tournament I go) in order to stay energetic and focused - usually pretty important/helpful since working on around 4 hours of sleep is the norm for these things.

Red bull works so well that I'm considering promoting it to semi-daily use, but I'd like to know exactly what I'm buying if I do this.

Edit: After saying it out loud, I just realized that if I use red bull regularly, it might lose its effects due to caffeine/whatever dependency. TANSTAAFL strikes again :-/ Still interested in any evidence though.

comment by [deleted] · 2013-03-04T17:04:46.186Z · score: 2 (2 votes) · LW · GW

What is the purpose of the monthly quotes thread? (To post quotes, obviously.) But it seems to me that a lot of the time, it's just an excuse for applause lights.

comment by Qiaochu_Yuan · 2013-03-04T18:16:26.065Z · score: 2 (2 votes) · LW · GW

Best case, someone finds a quote that expresses a rationality idea that I agree with but couldn't articulate as eloquently as the quote. This is particularly nice when it comes from an unexpected source; when I see good rationality coming from places I didn't expect, it's evidence that the corresponding ideas are good ideas rather than just, say, ideas popular on LW.

comment by TimS · 2013-03-04T17:21:06.152Z · score: 2 (2 votes) · LW · GW

Previous discussion of this issue

comment by Thomas · 2013-03-03T09:59:34.819Z · score: 2 (2 votes) · LW · GW

How can I instantly know which articles I have already read on LW (or elsewhere)?

Well, if I have a camera on my computer, it could track my eyes and the displayed article, and make some time-based guesses about what has actually been read by me. Then those articles should be displayed with a yellowish background next time.

Just a suggestion.

P.S.

Or at least, there should be an I HAVE READ IT! button somewhere. With a personal mark of how good it was. Independent of the up/down vote thumbs.

comment by FiftyTwo · 2013-03-03T15:27:25.616Z · score: 0 (0 votes) · LW · GW

Presumably if you have browsing history stored on your computer you could have an indicator if a web address had been accessed before? (Presumably using the same function that makes links blue/purple.)

comment by Thomas · 2013-03-03T16:35:03.848Z · score: 2 (2 votes) · LW · GW

I have several computers, as most people do. The user should trigger this history, not the computer.

comment by Qiaochu_Yuan · 2013-03-03T18:26:28.132Z · score: 4 (4 votes) · LW · GW

Chrome Sync will sync your history across devices. I am skeptical that most people have several computers.

comment by Elithrion · 2013-03-02T05:31:30.218Z · score: 2 (4 votes) · LW · GW

I think it'd be nice to have a (probably monthly) "Ideas Feedback Thread", where one would be able to post even silly and dubious ideas for critique without fear of karma loss. Rules could be that you upvote top level idea comments if they sound interesting (even if wrong), and downvote only if you're really sure that it's very easy to find out they're bad (e.g. covered in core sequences). Could also be used for getting feedback on draft posts and whatnot.

The plan being that questionable ideas are put into their own thread for feedback, instead of being potentially turned into questionable posts. At the same time, it would give people a place to be wrong and get feedback without fear of repercussions and hopefully without forming negative associations with doing stuff on Less Wrong.

(Potential downsides include that it could steal content from other, more filtered locations if people feel it's a less risky place to post things, that posting in the thread may be seen as low-status, and that someone reading recent comments may vote in a way other than intended or feel burdened with reading less filtered content. I feel that these are probably outweighed by the upsides.)

comment by Qiaochu_Yuan · 2013-03-02T07:43:11.381Z · score: 5 (5 votes) · LW · GW

I think open threads are in practice already this. Excessively encouraging such things could breed crackpots.

comment by Elithrion · 2013-03-02T17:19:29.768Z · score: 0 (0 votes) · LW · GW

I think open threads are in practice already this.

Not that I have noticed. Open Threads seem to primarily be "here's a cool thing I'd like to let you know about". If I want to post something like "The 'you are cloned and play prisoner's dilemma against yourself' example against CDT is actually pretty bad. Solving it doesn't require UDT/TDT so much as self-modification, with which even CDT would be able to easily solve it." (with a few more lines of explanation), for example, my model of Open thread predicts that if I'm wrong, I'll be downvoted a few times, and may or may not get good feedback. Also that Open Thread is meant for things that are more of interest to everyone, rather than being fairly specific. Which is why I'm not posting anything like that, even though I'm 80% sure that particular example is correct and may be of interest to at least some people.

Excessively encouraging such things could breed crackpots.

In what way? I doubt existing visitors to Less Wrong would be significantly more likely to generate crackpot ideas because of the existence of a thread, and I doubt even more that more crackpots would come to Less Wrong to participate in one thread. It may, admittedly, reduce conformity if people find unexpected support for non-mainstream ideas, however I'm not sure that most would consider that a bad thing.

comment by Qiaochu_Yuan · 2013-03-02T18:43:41.497Z · score: 4 (4 votes) · LW · GW

my model of Open thread predicts that if I'm wrong, I'll be downvoted a few times, and may or may not get good feedback.

I think downvotes would depend on how you present your idea. If you present your idea as if you're already convinced you're right, and you're not, I think that would lead to downvotes. But if you preface your idea with "hey, here's something I thought of, dunno if it works, would appreciate feedback," I think that would be fine. What people respond negatively to, I think, is not wrongness so much as arrogant wrongness. (Or at least that appears to be what I respond negatively to.)

I doubt existing visitors to Less Wrong would be significantly more likely to generate crackpot ideas because of the existence of a thread

My model of the median LessWronger is closer to a crackpot than yours, maybe. Not that I think this is uniformly a bad thing; I have a vague suspicion that the brains of crackpots and the brains of curious, successful thinkers are probably pretty similar (e.g. because of stuff like this post). But it's easy to read the Sequences and think "man, I totally understand decision theory and also quantum mechanics now, I'm going to go off and have a bunch of ideas about them" and to be honest I don't want to encourage this.

comment by John_Maxwell (John_Maxwell_IV) · 2013-03-02T07:48:04.947Z · score: 1 (3 votes) · LW · GW

I like this proposal. In the past, people (including me) have complained that LW doesn't get enough posts on topics where there's likely to be a lot of controversy or high variance in an item's score, 'cause people don't like getting downvoted more than they like getting upvoted.

comment by jooyous · 2013-03-02T05:17:05.608Z · score: 2 (2 votes) · LW · GW

I have a small site feature question! What are those save buttons and what do they do, if anything? (They seem to not do what I think they should do.)

comment by John_Maxwell (John_Maxwell_IV) · 2013-03-02T07:46:30.380Z · score: 2 (2 votes) · LW · GW

Looks to me like you can view your saved stuff at http://lesswrong.com/saved/

comment by jooyous · 2013-03-02T07:48:45.838Z · score: 1 (1 votes) · LW · GW

Ohh! Awesome. Yeah, that's what I was looking for! I never expected to find that link where it turned out to be. =/ Thank you!

comment by Qiaochu_Yuan · 2013-03-08T02:05:49.094Z · score: 1 (1 votes) · LW · GW

Recent experiences have suggested to me that there is a positive correlation between rationality and prosopagnosia. One hypothesis is that dealing with prosopagnosia requires using Bayes to recognize people, so it naturally provides a training ground for Bayesian reasoning. But I'm curious about other possible hypotheses as well as additional anecdotal evidence for or against this conclusion.

comment by NancyLebovitz · 2013-03-13T17:51:47.798Z · score: 1 (1 votes) · LW · GW

What were the recent experiences?

comment by Qiaochu_Yuan · 2013-03-13T17:52:39.015Z · score: 1 (1 votes) · LW · GW

I learned that a surprising number of people involved with CFAR / MIRI have prosopagnosia. (Well, either that or I'm miscalibrated about the prevalence of prosopagnosia.)

comment by beoShaffer · 2013-03-13T18:18:32.329Z · score: 3 (3 votes) · LW · GW

How prevalent do you think it is?

comment by Qiaochu_Yuan · 2013-03-13T18:27:49.359Z · score: 4 (4 votes) · LW · GW

I know 4 (I think?) people with prosopagnosia and maybe 800 people total, so my first guess is 0.5%. Wikipedia says 2.5% and the internet says it's difficult to determine the true prevalence because many people don't realize they have it (generalizing from one example, I assume). The observed prevalence in CFAR / MIRI is something like 25%?

So another plausible hypothesis is that rationalists are unusually good at diagnosing their own prosopagnosia and the actual base rate is higher than one would expect based on self-reports.
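To make the comparison concrete, here's a quick back-of-the-envelope check in Python. The group size of 20 and the count of 5 are made-up illustrative numbers (I don't know the actual CFAR/MIRI headcount); the question is just how surprising roughly "25% of a small group" would be if the true base rate were Wikipedia's 2.5%.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 5 prosopagnosics in a group of 20, base rate 2.5%.
print(f"P(>=5 of 20 at a 2.5% base rate): {binom_tail(20, 5, 0.025):.1e}")

# Personal-sample estimate from above: 4 cases among ~800 acquaintances.
print(f"Personal base-rate estimate: {4 / 800:.1%}")
```

Under those assumed numbers, the cluster is far too unlikely to be chance, which is some evidence for either a selection effect or a higher-than-reported true base rate.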

comment by beoShaffer · 2013-03-13T19:40:11.831Z · score: 0 (0 votes) · LW · GW

That is a big difference.

comment by erratio · 2013-03-08T03:10:02.153Z · score: 1 (1 votes) · LW · GW

Theory off the top of my head: The causation is in the wrong direction. People who are rational are far more likely to be very systems-oriented, have limited social experiences as children (by having different interests and/or being too dang smart), be highly introverted, and have other traits that correlate with being around other people a lot less than your typical person. There's nothing wrong with our hardware per se; it's just that we missed out on critical training data during the learning period.

Anecdotal: I have mild prosopagnosia. I have a lot of trouble recognising people outside their expected context, I make heavy use of non-facial cues. I'm pretty good at putting specific names to specific faces on demand when it feels important enough, although see prev point about expected context. I don't feel like I use anything resembling Bayesian reasoning, I feel like I have the same sense of recognition that I imagine most people have, it's just less dependent on seeing their face and more on other traits (most typically voice and manner of movement).

comment by Arkanj3l · 2013-03-04T21:16:29.911Z · score: 1 (1 votes) · LW · GW

Has anyone indexed the set of Five-Second Skill posts on Less Wrong? E.g. Get Curious, the Algorithm for Beating Procrastination, Value of Information etc.

comment by gwern · 2013-03-04T20:34:27.460Z · score: 1 (1 votes) · LW · GW

I've been working on a little project compiling Touhou music statistics. One major database may be unavailable to me from anywhere but Toranoana, and the total cost of reshipping will be ~$25 and take several weeks to get to me. This would be annoying, expensive, and slow.

In case my other strategies fail, are there any LWers in Japan who either owe me a favor or are willing to do me a favor in buying a CD off Tora and sending me the spreadsheets etc on it? (I'd be happy to cover the purchase cost with Paypal or a donation somewhere or something.)

comment by FiftyTwo · 2013-03-04T15:33:57.710Z · score: 1 (1 votes) · LW · GW

Does anyone have sources on active steps that can be taken to improve gender diversity in organisations?

There is a lot of writing on the subject, but I'm finding it difficult to find sources that compare the effectiveness of different measures, with figures showing change, controlling for variables etc.

comment by Viliam_Bur · 2013-03-05T08:30:02.072Z · score: 0 (0 votes) · LW · GW

I would like to see the results too, but I doubt they exist (beyond the obvious: if you want to have 50% male and 50% female employees, make an internal rule to hire 50% men and 50% women).

Beyond evidence... my heuristic would be to start the organization with gender diversity. It should be easier to find e.g. 3 men and 3 women to start an organization, than to have an organization of 100 men and later think about how to make it more friendly for women.

EDIT: Also, you should not have a bottom line already written that a 50:50 ratio is an improvement. People do have different preferences. A ratio other than 50:50 might reflect the true level of interest in the base population.

comment by wedrifid · 2013-03-05T10:22:43.411Z · score: 1 (1 votes) · LW · GW

I would like to see the results too, but I doubt they exist (beyond the obvious: if you want to have 50% male and 50% female employees, make an internal rule to hire 50% men and 50% women).

To be precise: Hire in the direction of 50% men and 50% women. Depending on retention rates this may need to be skewed in either direction.

comment by Qiaochu_Yuan · 2013-03-04T20:44:59.289Z · score: 0 (0 votes) · LW · GW

It's unclear to me that much can be said about this subject across all organizations. Do you have a particular organization in mind?

comment by Manfred · 2013-03-02T11:52:30.338Z · score: 1 (1 votes) · LW · GW

Quick clarification of Eliezer's Mixed Reference, intended for me from twelve hours ago:

'External reality' is assumed to mean the stuff that doesn't change when you change your mind about it. This is a pretty good fit to what people mean when they say something like "exists" and didn't preface it with "cogito ergo." It's what can be meaningfully talked about if the minds talking are close enough that "change your mind" is close to "change which mind."

External reality can be logical, because the trillionth digit of pi doesn't change even if you change your mind about it. Or it can be physical, because dogs don't disappear if you decide there are no dogs nearby. ("why do dogs / suddenly disappear / every time / you are near.")

If we look inside of people's heads, logical external reality seems to be universal and specific - minds are computation, and so if you can do some fairly general stuff like labeling the output of an algorithm you haven't evaluated yet, you can have logical "external reality," which now appears to be somewhat of a misnomer, but oh well. "Stuff that doesn't change when you change your mind about it" is still too long.

Physical reality, on the other hand, is much more general and contingent - it's just a catch-all term for "hey, I know we're a mind and have logical reality and that good stuff, but there happens to be a world out here!" In fact, it's tempting to just say "if it doesn't change when you change your mind and it's not a logical thing, it's a physical thing." The label external reality might make more sense being applied to this stuff, since "physical" carries some connotation that isn't necessarily accurate.

comment by Epiphany · 2013-03-02T05:00:51.938Z · score: 1 (3 votes) · LW · GW

Can anyone tell me the name of this subject or direct me to information on it:

Basically, I'm wondering if anyone has studied recent human evolution - the influence of our own civilized lifestyle on human traits. For example: For birth control pills to be effective, you have to take one every day. Responsible people succeed at this. Irresponsible people may not. Therefore, if the types of contraceptives that one can forget to use are popular enough methods of birth control, the irresponsible people might outnumber responsible people in a very short period of time. (Currently about half the pregnancies in the USA are unintended, and probably 40% of those pregnancies go full term and result in a child being born. As you can imagine, it really wouldn't take very long for the people with genes that can cause irresponsibility to outnumber the others this way...)

Any search terms? Anyone know the name of this topic or recall book titles or other sources about it?
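To get a feel for the timescale, here's a toy simulation (all parameters are invented for illustration, not drawn from any study): two heritable "types," where both average 2 intended children per person, but one type also averages some extra unintended children per generation.

```python
def trait_frequency(p_start, extra_births, generations):
    """Fraction of the population with the 'forgets contraception' trait
    after some generations, assuming a shared baseline of 2 children
    plus `extra_births` unintended children per capita for that type."""
    p = p_start
    for _ in range(generations):
        w_forgetful = 2.0 + extra_births  # average offspring, forgetful type
        w_careful = 2.0                   # average offspring, careful type
        p = p * w_forgetful / (p * w_forgetful + (1 - p) * w_careful)
    return p

# Starting at 20% of the population, with 0.5 extra births per generation:
for g in (0, 5, 10, 20):
    print(f"generation {g:2d}: {trait_frequency(0.2, 0.5, g):.1%}")
```

Even a modest fertility difference compounds noticeably over a few generations, though "responsibility" is at best partially heritable, which would slow this down considerably.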

comment by NancyLebovitz · 2013-03-02T14:40:35.054Z · score: 7 (9 votes) · LW · GW

I have a notion that driving selects for having prudence and/or fast reflexes.

comment by Curiouskid · 2013-03-28T13:34:20.855Z · score: 0 (0 votes) · LW · GW

It's also one of the leading killers of young people, so it probably is one of the strongest selection pressures, though I'm not sure how strong.

comment by NancyLebovitz · 2013-03-28T15:29:21.567Z · score: 0 (0 votes) · LW · GW

Yes, that's why I was thinking about it. I'm not sure what other selective pressures are in play on people before they're finished reproducing.

comment by Kaj_Sotala · 2013-03-02T08:34:54.824Z · score: 7 (7 votes) · LW · GW

The 10,000 Year Explosion discusses the effects that civilization has had on human evolution in the last 10,000 years. (There's also this Q&A with its authors.) Not sure whether you'd count that as "recent".

comment by Jayson_Virissimo · 2013-03-02T09:12:08.298Z · score: 3 (3 votes) · LW · GW

Gregory Clark's work A Farewell to Alms discusses human micro-evolution taking place within the last few centuries, but is highly controversial (or so I hear).

comment by CellBioGuy · 2013-03-03T04:13:08.426Z · score: 0 (4 votes) · LW · GW

To almost anyone who knows much about evolutionary biology, it's not controversial but positively laughable.

comment by gwern · 2013-03-03T04:24:53.359Z · score: 6 (6 votes) · LW · GW

Cites?

comment by Barry_Cotter · 2013-03-03T15:00:29.640Z · score: 1 (5 votes) · LW · GW

Yeah, that's like saying you could domesticate foxes in less than a human generation, or have adult lactose tolerance increase from 0% to 99.x% in some populations in under 4,000 years. Does this guy think we're completely credulous?

comment by [deleted] · 2013-03-04T04:29:10.075Z · score: 1 (1 votes) · LW · GW

The traits that I am aware of that show strong evolution all have had thousands of years to be selected for, like lactose tolerance in people descended from herders, resistance to high altitude with a hemoglobin change in Tibet, apparent sexual selection for blue eyes in Europeans and thick hair in East Asians, smaller stature in basically all long-term agriculturalist populations...

-Cellbioguy, elsewhere in thread.

I suspect you've misidentified his contention here; he clearly doesn't think that humans haven't evolved within the Holocene.

comment by NancyLebovitz · 2013-03-02T14:41:27.218Z · score: 1 (1 votes) · LW · GW

Does it look at possible effects of arranged marriages?

comment by Kaj_Sotala · 2013-03-02T18:20:20.493Z · score: 1 (1 votes) · LW · GW

I don't remember it doing so, but it's two years since I read it and I did so practically in one sitting, so I don't remember much that I wouldn't have written down in the post.

comment by Costanza · 2013-03-04T17:11:22.387Z · score: 0 (0 votes) · LW · GW

The infamous Steve Sailer has written a lot about cousin marriage, which, in practice, seems to be correlated with arranged marriage in many cultures (including the European royals in past centuries). Perhaps a lot of arranged marriages lead, in practice, to inbreeding, with the genetic dangers that follow.

I'm also wondering about the effects of anonymous sperm banks, where relatively well-off women may pay to choose a biological father on the basis of -- whatever available information they may choose to consider. What factors, in a man they will never meet, do they choose for their offspring?

comment by Epiphany · 2013-03-02T18:57:13.905Z · score: 0 (0 votes) · LW · GW

Wow. The article was fascinating. I devoured the whole thing. Thanks, Kaj. Do you know of additional information sources on the neurological changes?

comment by Kaj_Sotala · 2013-03-03T08:14:56.983Z · score: 1 (1 votes) · LW · GW

Not offhand, but if you get the book, it has a list of references.

comment by Qiaochu_Yuan · 2013-03-02T05:07:31.098Z · score: 3 (3 votes) · LW · GW

Wild guess: try "human microevolution"?

I'm not a domain expert, but my standing assumption is that even the last few hundred years of human history were just too short to have a noticeable effect on allele frequencies. I would be very interested to hear evidence to the contrary, though.

comment by Epiphany · 2013-03-02T07:00:03.257Z · score: 0 (2 votes) · LW · GW

Human microevolution, ooh. That sounds like a good guess. Google is showing me some results... it will take a while to parse them.

I would be very interested to hear evidence to the contrary, though.

Well the first thing that comes to mind is the incredibly horrible failure rate of common contraceptives, and the unplanned pregnancy rate and birth rate that goes with them.

Evidence:

In not even four years, about 25% of people using condoms became pregnant. Birth control pills were similar. http://www.jfponline.com/Pages.asp?AID=2603

"49% of pregnancies in the United States were unintended" http://www.cdc.gov/reproductivehealth/UnintendedPregnancy/index.htm

"These pregnancies result in 42 million induced abortions and 34 million unintended births" (world population growth was 78 million for contrast) http://www.arhp.org/publications-and-resources/contraception-journal/september-2008

If there's any trait at all that's connected with this - inability to afford more expensive methods, not caring about reliability enough to get an IUD or something more effective, dexterity level too low to correctly apply the product, impulse control issues / inability to think under pressure or when excited, forgetfulness, inability to resist temptation, etc. - those traits are likely to reproduce faster than their counterparts. Considering that half our population growth is unintended, I'm pretty concerned about it.

The situation could be that (if a genetic irresponsibility trait exists and is responsible for a large portion of unintended pregnancies that go full term) even if the responsible portion of the population is larger, the irresponsible portion begins its generations sooner, and its growth outstrips that of the responsible portion of the population, overpowering it in a short time.
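As a toy illustration of that last claim (every number below is invented for the sketch; nothing here is an empirical estimate): if one subpopulation out-reproduces the rest by a constant factor each generation, its population share grows logistically.

```python
def fraction_after(n, f0=0.1, r=1.5):
    """Population share of the faster-reproducing group after n
    generations, starting from share f0, when it grows by a factor r
    per generation relative to everyone else (toy parameters)."""
    grown = f0 * r ** n
    return grown / (grown + (1 - f0))

# Share of the population after 0, 5, 10, and 20 generations:
for n in (0, 5, 10, 20):
    print(n, round(fraction_after(n), 3))
```

With a starting share of 10% and a 1.5x per-generation fertility advantage, the group passes 50% of the population within about six generations in this sketch, though real fertility differentials are far smaller and far less stable than that.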

We're also doing things like removing sociopaths out of the population and putting them into jails. This probably reduces the rate at which they reproduce, though I'd expect far slower evolution there, if any, than I would with something that influences contraceptive failure.

We select certain types of people (or they select themselves) for the military. When they go off to war, they're more likely to die before reproducing. Since Americans tend to send their soldiers away, they're also a lot less likely to reproduce before dying in a war than soldiers defending a home territory where they have access to lovers.

If welfare creates a perverse incentive to have more children, any trait that might make welfare appealing to a person could end up being reproduced.

People who get a 2 or 4 year degree have more free evenings in which to find a lover and take care of a child. Contrast that with people who get a higher level degree. They have to wait longer before they'll be ready.

People in certain industries work very long hours. They might not get a chance to meet someone or might decide they can't have kids working as many hours as they do.

For these last two groups, if they're determined to have kids, they'll probably find a way to do it -- but they may be significantly delayed compared with someone who gets a 4-year degree, works a 40 hour week and can start having kids when they're still in their early 20's. The delay of a few years probably wouldn't make much of a difference one or two generations away, but if there are any traits that result in one getting a higher level degree or working longer hours, those people probably won't reproduce as fast as others.

comment by Qiaochu_Yuan · 2013-03-02T07:29:06.082Z · score: 1 (1 votes) · LW · GW

Well the first thing that comes to mind is the incredibly horrible failure rate of common contraceptives, and the unplanned pregnancy rate and birth rate that goes with them.

By "evidence" I mean evidence that allele frequencies have noticeably changed. These are all hypotheses about things that might be affecting allele frequencies but, again, my standing assumption is that the timescales are too short.

comment by CellBioGuy · 2013-03-02T15:58:34.910Z · score: 2 (4 votes) · LW · GW

Not only is the timescale too short (human societies change drastically over single-digit generation times, far too short for strong evolution) but all these traits are horrifically polygenic and dependent upon the exact combination of thousands of loci all around your genome that interact. There is also the extremely strong case against genetic determinism in most human behavior.

The traits that I am aware of that show strong evolution all have had thousands of years to be selected for, like lactose tolerance in people descended from herders, resistance to high altitude with a hemoglobin change in Tibet, apparent sexual selection for blue eyes in Europeans and thick hair in East Asians, smaller stature in basically all long-term agriculturalist populations... I think I read about a particular immune system polymorphism in Europe that was selected for a few hundred years ago though because it conveyed partial resistance to the black death.

comment by Douglas_Knight · 2013-03-04T21:21:00.865Z · score: 2 (2 votes) · LW · GW

Not only is the timescale too short (human societies change drastically over single-digit generation times, far too short for strong evolution)

I can see a couple interpretations of this. One is that given observed changes in behavior, it is hard to distinguish cultural change from genetic change. The other is that the cultural environment changes rapidly, so one might not expect the direction of its selective pressure to be maintained for long enough to produce "strong evolution." Depending on the definition of "strong evolution," that is tautologous. But why did you introduce the vague qualifier "strong"?

but all these traits are horrifically polygenic and dependant upon the exact combination of thousands of loci all around your genome that interact.

"almost anyone who knows much about evolutionary biology" would know that this does not interfere with the potential for selection, but that excludes virtually all cell biologists. Learn some quantitative genetics in the kingdom of the blind. It's true that no single allele will shift much, but an aggregate shift in thousands of genes can be measured.

There is also the extremly strong case against genetic determinism in most human behavior.

I have never seen a useful use of the phrase "genetic determinism," but only ever seen it used as a straw man or a sleight of hand. How much of your comments apply to height?

The traits that I am aware of that show strong evolution all have had thousands of years to be selected for

Things that are easier to observe are observed before things that are harder to observe. A selective sweep at a single locus is the easiest thing to observe, though the faster and more recent the sweep, the easier to observe.

comment by Epiphany · 2013-03-02T18:06:48.554Z · score: 0 (0 votes) · LW · GW

far too short for strong evolution

This really depends on your concept of "strong evolution". If that is jargon meant to refer to a conglomeration of changes that makes the organism different all over, I would agree. If we're just talking about this in terms of "Is it possible that something of critical importance could significantly change in a few generations?" then I say "Yes, it is possible."

I assume you consider responsibility to be an important trait. Even if a change to the trait of responsibility alone doesn't qualify as "strong evolution" to you, would you say that it would be of critical importance to prevent humanity from losing the genes required for responsibility in even half its population?

In a world where 40% of the people get here by accident, and we can tell that a lot of their parents failed to use their contraceptives consistently, are you unconcerned that there could be a relationship between irresponsible use of birth control and irresponsible genes being reproduced more rapidly than responsible genes?

The traits that I am aware of that show strong evolution all have had thousands of years

But today's situation is not the same. We have technologies now that could result in much more powerful unintended consequences just as it results in powerful intended ones. Birth control pills, for instance, didn't exist thousands of years ago. Our lives and environments are so different now (and are continuing to change rapidly) that we should not assume that our present and future selection pressures will match the potency of the selection pressures in the past. To do so would be to make an appeal to history.

comment by Epiphany · 2013-03-02T17:42:08.803Z · score: 1 (1 votes) · LW · GW

I haven't found any evidence that allele frequencies have changed - I just started to look into this, and didn't even have a search term when I started. Due to that, I thought it was obvious that I didn't have anything on micro-evolution, so I gave you the evidence I do have, which, even though it does nothing to support the idea that allele frequencies are being influenced, does support the idea that there's potential for a lot of influence.

Hmm. A contraceptive and unplanned pregnancy survey by 23andme would be so interesting... I wonder if they do things like that... If I get a useful response to my request for a credible source on their accuracy, I will investigate this. (I want to get their service anyway but am demanding a credible source first.)

comment by Kaj_Sotala · 2013-03-02T18:22:25.678Z · score: 2 (2 votes) · LW · GW

http://www.sciencedirect.com/science/article/pii/S016028961000005X

Although a negative relationship between fertility and education has been described consistently in most countries of the world, less is known about the relationship between intelligence and reproductive outcomes. Also the paths through which intelligence influences reproductive outcomes are uncertain. The present study uses the NLSY79 to analyze the relationship of intelligence measured in 1980 with the number of children reported in 2004, when the respondents were between 39 and 47 years old. Intelligence is negatively related to the number of children, with partial correlations (age controlled) of −.156, −.069, −.235 and −.028 for White females, White males, Black females and Black males, respectively. This effect is related mainly to the g-factor. It is mediated in part by education and income, and to a lesser extent by the more “liberal” gender attitudes of more intelligent people. In the absence of migration and with constant environment, genetic selection would reduce the average IQ of the US population by about .8 points per generation.
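The abstract's last sentence (a decline of about .8 IQ points per generation) is an application of the breeder's equation, response = heritability x selection differential. A toy simulation with invented fertility parameters (not the NLSY data) shows the shape of the calculation:

```python
import random
import statistics

random.seed(0)
N = 200_000

# Simulated parental IQs, and fertility mildly negatively correlated
# with IQ (the slope and noise level are made up for illustration).
iq = [random.gauss(100, 15) for _ in range(N)]
fert = [max(0.0, random.gauss(2.0 - 0.01 * (z - 100), 1.0)) for z in iq]

# Selection differential: fertility-weighted parental mean IQ minus the
# plain parental mean.
S = (sum(z * f for z, f in zip(iq, fert)) / sum(fert)
     - statistics.fmean(iq))

h2 = 0.5  # assumed narrow-sense heritability
response = h2 * S  # expected per-generation change under these assumptions
print(round(response, 2))
```

The printed number depends entirely on the assumed fertility slope and heritability; the study's .8-point figure comes from its own measured correlations, not from anything here.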

comment by Larks · 2013-03-02T11:10:08.084Z · score: 2 (2 votes) · LW · GW

You might be interested in Evolution, Fertility and the Ageing Population, which does some modelling on this.

comment by maia · 2013-03-02T07:10:16.423Z · score: 0 (0 votes) · LW · GW

Depending on how recent you want... I recalled hearing that a major evolutionary shift in the past few thousand years was lactose tolerance; a quick Google search turned up this: http://www.nytimes.com/2006/12/10/science/10cnd-evolve.html?_r=0

Also, maybe a selection for particular types of earwax, which could be related to body odor: http://blogs.discovermagazine.com/gnxp/2010/10/east-asians-dry-earwax-and-adaptation/#.UTGlU9H2QgQ

comment by Epiphany · 2013-03-02T17:37:30.175Z · score: 0 (0 votes) · LW · GW

Thanks, Maia, but my interest in this is from the perspective of an altruist who wants to know whether humanity will improve or disintegrate. I am interested in things that might create selection pressures that affect things like ethical behavior and competence. It seems like you've read about this subject so I'm wondering if you know of any research on micro-evolution affecting traits that are important to humanity having a good future.

comment by Costanza · 2013-03-04T17:35:40.357Z · score: 0 (0 votes) · LW · GW

Personally, I'm desperately hoping for a near-term Gattaca solution, by which ordinary or defective parents can, by genetic engineering, cheaply optimize their children's tendencies towards all good things, at least as determined by genotype, including ethical behavior and competence, in one generation. Screw this grossly inefficient and natural selection nonsense.

I know the movie presented this as a dystopia, in which the elite were apparently chosen mostly to be tall and good-looking. Ethan Hawke's character, born naturally, was short and was supposedly ugly. Only in the movies, Ethan. But he had gumption and grit and character, which (in the movie) had no genetic component, enabling him to beat out all his supposed superiors. I call shenanigans on that philosophy. I suspect that gumption and grit and character do have a genetic component, which I would wish my own descendants to have.

comment by Epiphany · 2013-03-04T21:01:25.101Z · score: 1 (1 votes) · LW · GW

I am also hoping that all parents in the future have the ability to make intentional genetic improvements to their children, and I also agree with you that this would not necessarily result in some horrible dystopia. It might actually result in more diversity because you wouldn't have to wait for a mutation in order to add something new. I wonder if anyone has considered that. I doubt that this would solve all the problems in one generation. Some people would be against genetic enhancement and we'd have to wait for their children to grow up and decide for themselves whether to enhance themselves or their offspring. Some sociopaths would probably see sociopath genes as beneficial and refuse to remove them from their offspring... which means we may have to wait multiple generations before those genes would disappear (or they may never completely vanish). We also have to consider that we'd be introducing this change into a population with x number of irresponsible people who may do things like give the child a certain eye color but fail to consider things like morality or intelligence. Then we will also have the opposite problem - some people will be responsible enough to want to change the child's intelligence, but may lack the wisdom to endow the child with an appropriate intelligence level. Jacking the kid's IQ up to 300 or so would result in something along the lines of:

The parents become horrified when they realize that the child has surpassed them at age three. As the child begins providing them adult-level guidance on how to live, and observing that their own suggestions are actually better than anything their parents could come up with, the child has a mental breakdown and identity crisis - because they are no longer a child but are stuck in a toddler's body, and because they no longer have a relationship with anyone who can realistically be considered to play the role of a parent.

If the parents are really unwise they'll continue to treat that person as a toddler, discourage them from doing independent thinking, and stifle all of their adult-like qualities until they're over 18 - because what they really wanted was to raise a baby, not a super-intelligent adult-like entity in a tiny body.

There must be many other enhancements that could backfire as well. An immoral parent trying to raise a moral child may also cause mutual horror and psychological issues (ex: the child turns in the parents for a crime and becomes an orphan).

I don't think it would be quite as efficient and clean as you're imagining, but I think the problem we'd run into (assuming everyone has access) would not be that we'd suddenly have too much conformity or that the elites would overpower everyone... but that people would do really stupid things due to not understanding the children they created and not having any clue what they were getting themselves into beforehand. It could take multiple generations before we'd wake up and go "Ohhh! It needs to be illegal for parents to increase their child's IQ to three times their own!"

I agree with the spirit of "Screw this grossly inefficient and natural selection nonsense." but it's possible that even if genetic engineering can be made accessible to everyone, that people will simply refuse to legalize it for religious reasons or due to paranoia or that they'll have other irrational reasons... and it's possible that if it were legal and widely accessible, that humanity will do really unwise things with it and create big problems (especially if traits like responsibility are being lost). It's also possible that we simply won't perfect the technology anytime soon. It is pretty complicated to combine psychological traits in a functional way... give one kid LLI and they become a genius... do it with another kid and they become a schizophrenic. The scientists know it's complicated -- but who will they test it on? It's not ethical to test it on humans, but testing psychological trait engineering on mice wouldn't do... That's a real obstacle.

So there are many reasons I'm still interested in thinking about less efficient methods.

comment by Viliam_Bur · 2013-03-05T08:19:27.803Z · score: 3 (3 votes) · LW · GW

By the way, evolution would still work in a world of genetic engineering. If someone modified their children to have a desire to have as many children as possible (well, assuming that such genes exist), that modification would spread like wildfire. Or imagine a religious faith that requires you to modify your child for maximum religiousness; including a rule that it is ok (or even encouraged) to marry a person from a different faith as long as they agree that all your children will have this faith and this modification.

The point is, some modifications may have the potential to spread exponentially. So it's not just one pair of parents making the life of their child suboptimal, but a pair of parents possibly starting a new global problem. (Actually, you don't even need a pair of parents; one woman with donated sperm is enough.)

comment by maia · 2013-03-03T04:38:56.593Z · score: 0 (0 votes) · LW · GW

Sorry, but I'm actually not too knowledgeable on the subject. I happened to have heard of those two evolutionary trends, and since your original post wasn't too specific, I thought you might be interested.

You could try consulting some resources on evolutionary psychology. Though I haven't read it (yet - the copy is sitting on my bookshelf), I've heard good things about The Moral Animal.

comment by beoShaffer · 2013-03-01T22:38:38.773Z · score: 1 (1 votes) · LW · GW

I remember seeing something about Islamic law and the ability to will money to charities meant to exist in perpetuity, and now I can't find it. Does anyone know what I'm talking about?

comment by gwern · 2013-03-01T23:13:50.823Z · score: 5 (5 votes) · LW · GW

Yo: http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs

comment by beoShaffer · 2013-03-02T00:19:53.452Z · score: 1 (1 votes) · LW · GW

Thank you.

comment by falenas108 · 2013-03-01T19:55:30.714Z · score: 1 (1 votes) · LW · GW

From the wikipedia page, it seems that coffee has a lot of good long term medical benefits, with only a few long term side effects if consumed in moderation, meaning less than 4 cups a day.

(http://en.wikipedia.org/wiki/Health_effects_of_caffeine#Long-term_effects)

This includes possible reduced risk of prostate cancer, Alzheimers, dementia, Parkinson's disease, heart disease, diabetes, liver disease, cirrhosis, and gout.

It has also been taken off the list for a risk factor in heart disease, and acts as an antidepressant.

Caffeine is not the cause of all of these positive effects, because decaffeinated coffee also provides some of them.

Risks include increased heart disease from non-paper brewed coffee, iron deficiency, and anxiety.

Because of this, I'm considering deliberately drinking coffee, despite not needing it in order to stay awake. Are there reasons not to that LWers know about? Or are there other substances that have similar effects?

comment by gwern · 2013-03-01T20:02:14.806Z · score: 1 (1 votes) · LW · GW

Have you considered tea? Seems to be cheaper and the health benefits seem equal or superior in my very casual overviews of the topic.

comment by David_Gerard · 2013-03-02T00:13:00.205Z · score: 3 (3 votes) · LW · GW

Green tea is hugely beneficial in that your coworkers are less likely to nick it.

comment by falenas108 · 2013-03-01T20:20:58.827Z · score: 2 (2 votes) · LW · GW

Interestingly, if you go to the main wiki page on tea, it lists many benefits, including "significant protective effects of green tea against oral, pharyngeal, oesophageal, prostate, digestive, urinary tract, pancreatic, bladder, skin, lung, colon, breast, and liver cancers, and lower risk for cancer metastasis and recurrence."

However, looking at the studies cited shows the ones they cite are in animals or in vitro.

(http://en.wikipedia.org/wiki/Tea#Health_effects)

If you look on the main page of Health effects of Tea, it says the FDA and National Cancer Institute say there are most likely no effects in reducing cancer, and the page doesn't list any other major benefits. There are also many drawbacks listed on that page. (http://en.wikipedia.org/wiki/Health_effects_of_tea)

But, the FDA announcement they cite was in 2005, and I don't know if there have been major important studies since then.

A quick google scholar search doesn't appear to show studies in humans, though I didn't do a detailed enough search to say anything conclusive.

Bottom line, I'm not sure if tea is better, or even beneficial at all.

comment by gwern · 2013-03-01T21:29:19.693Z · score: 3 (3 votes) · LW · GW

I think a better search would've helped. For example, doing a date limit to 2007 or so and searching tea human longevity OR lifespan OR mortality pulls up 2 correlational studies (what, you were expecting large RCTs? dream on). You could probably get even better results doing a human-limited search on Pubmed.

comment by Matt_Simpson · 2013-03-04T17:02:27.860Z · score: 0 (0 votes) · LW · GW

(what, you were expecting large RCTs? dream on)

Ahem.

You may say I'm a dreamer,

but I'm not the only one

I hope someday we'll randomize

and control, then we'll have fun

(Mediocre, but it took me two minutes. I'm satisfied.)

comment by Emily · 2013-03-02T09:22:04.969Z · score: 0 (0 votes) · LW · GW

You might also take into account any possible downside from becoming caffeine dependent, i.e. unable to function optimally without it once you've gained tolerance. Caffeine dependence goes away again pretty quickly if you abstain, though, so you can undo that if you don't like it.

comment by Elithrion · 2013-03-02T01:25:09.340Z · score: 0 (0 votes) · LW · GW

Are you sure you trust the research in question? Without reading the literature at all, it seems to me like there may be a lot of confounding factors (e.g. maybe richer people drink more coffee). I'm especially sceptical because you list a large range of dubiously related diseases (so, richness would affect them all, but caffeine/whatever affecting them all is less expected). Beyond that, you also need to check the magnitude of effects - if it's a minuscule change, it may well not be worth bothering with (and is even more likely to be because of noise).

So, yeah, very sceptical that these effects are real and worth acting on, although I suppose they could be. In theory.

comment by thomblake · 2013-03-28T02:07:14.286Z · score: 0 (0 votes) · LW · GW

I am in Berkeley for a few days, primarily Thursday march 28th. Please text me at 203-710-5337 if you'd like to catch up or have any ideas for a thing I shouldn't miss.

comment by MileyCyrus · 2013-03-14T20:30:24.179Z · score: 0 (0 votes) · LW · GW

If computer hardware improvement slows down, will this hasten or delay AGI?

My naive hypothesis is that if hardware improvement slows, more work will be put into software improvement. Since AGI is a software problem, this will hasten AGI. But this is not an informed opinion.

comment by gwern · 2013-03-14T22:00:59.111Z · score: 0 (0 votes) · LW · GW

Are you familiar with the hardware overhang argument?

comment by MileyCyrus · 2013-03-14T22:09:33.198Z · score: 0 (0 votes) · LW · GW

No, and Google is failing me. Is there somewhere I can read about it?

comment by gwern · 2013-03-14T22:17:25.061Z · score: -1 (1 votes) · LW · GW

Really? For me, the first 4 hits for "hardware overhang argument" seem relevant. Tossing in relevant keywords like "Lesswrong" make them even more so.

comment by Douglas_Knight · 2013-03-14T15:25:40.789Z · score: 0 (0 votes) · LW · GW

ignore me; testing retraction

comment by Thomas · 2013-03-09T14:41:29.921Z · score: 0 (0 votes) · LW · GW

I've just learned that if it is July or a later month, it is more probable that the current year has begun with Friday, Sunday, Tuesday or Wednesday. If it is June or an earlier month, it is more probable that the current year has begun with Monday, Saturday or Thursday.

For the Gregorian calendar, of course.

comment by drethelin · 2013-03-12T16:43:52.010Z · score: 2 (4 votes) · LW · GW

This just in: Anthropics is still useless!

comment by Tenoke · 2013-03-11T12:28:34.924Z · score: 1 (1 votes) · LW · GW

How come?

comment by Thomas · 2013-03-11T13:53:48.466Z · score: 1 (1 votes) · LW · GW

There are more days in July to December than in January to June. So it is a little more likely for a random observer to find himself in the latter six months.

But if he finds himself before July, it is more likely that it is a leap year, with its additional day, than it otherwise would be.

This increased probability for a leap year skews the probability distribution for the first day of the year also.

This is how it comes.

comment by [deleted] · 2013-03-12T13:30:29.163Z · score: 0 (0 votes) · LW · GW

Correct me if I'm wrong, but isn't the probability of a year being a leap year approximately 25%, completely independent of what month it is? (This seems like one of those unintuitive-but-correct probability puzzles...)

comment by Kawoomba · 2013-03-12T14:09:07.605Z · score: 2 (2 votes) · LW · GW

For all intents and purposes, yes. Well, for nearly all intents and purposes, since there is in fact a very slight difference:

Imagine the year only had 2 months, PraiseKawoombaMonth, and KawoombaPraiseMonth, each of those having 30 days. However, every other year the first month gets cut to 1 day to compensate for some unfortunate accident involving shortening the orbital period. Still, for any given year the probability of being a leap year is 50%.

Now you get woken from cryopreservation (high demand for fresh slaves) and, asking what time it is, only get told it's PraiseKawoombaMonth (yay!). This observation is strong evidence that you are in one of the equi-month years, since otherwise it would be very unlikely for you to find yourself in PraiseKawoombaMonth.

Snap, back to reality: Same thing if you're told it's August: the chance of being in August at any given time is lower in a leap year, since the fraction of the year that falls in August is lower. There's just more February to go around!

Sorry for the quality of the explanation. It's the only way I can explain things to my kids.
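For completeness, the arithmetic of the toy example above (the two made-up months and the 50/50 per-year prior all come from the parent comment's hypothetical):

```python
# Prior: the two year-types are equally likely in any given year.
p_equi = p_short = 0.5

# Likelihood of a uniformly random day landing in PraiseKawoombaMonth:
like_equi = 30 / 60   # 30 of 60 days in an equi-month year
like_short = 1 / 31   # 1 of 31 days in a shortened year

# Bayes' rule: posterior probability that this is an equi-month year.
posterior_equi = (p_equi * like_equi) / (
    p_equi * like_equi + p_short * like_short)
print(round(posterior_equi, 2))
```

So hearing "it's PraiseKawoombaMonth" moves you from 50% to roughly 94% confidence that you're in an equi-month year.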

comment by Kawoomba · 2013-03-14T18:56:51.424Z · score: -3 (3 votes) · LW · GW

One day Clippy will sit in a re-cohered bar with its fellow superintelligences from around the MWI-block, each sipping on their own reality-fluid, but what a stale, static beverage it will have become for everyone. Except the superintelligence making everything bubbly. Also, at that point, Clippy's architecture will be implemented using paperclips as a substrate.

comment by ____ · 2013-03-01T21:10:19.801Z · score: -6 (18 votes) · LW · GW

I'm a piece of wire and my ambition is to become a paperclip.

comment by [deleted] · 2013-03-01T21:37:57.731Z · score: -1 (9 votes) · LW · GW

How is posting here supposed to help?