Open thread, 24-30 March 2014

post by Metus · 2014-03-25T07:42:20.383Z · LW · GW · Legacy · 158 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Duration set to six days to encourage Monday as first day.

158 comments

Comments sorted by top scores.

comment by lukeprog · 2014-03-27T09:21:29.956Z · LW(p) · GW(p)

Example #149 of why it's difficult to specify bets...

Louie texted me a screenshot showing that Zagat had given an opinion on Subway (the fast-food chain). My girlfriend said "No way," so we both specified a bet that if we went to the Zagat website, we wouldn't be able to find a Zagat rating for Subway. She said 40% and I said 65%. When we checked, it turned out Zagat had conducted a survey of people who visit fast food joints, and Subway had been one of the restaurants they got survey results for. So does that count as Zagat giving Subway a rating? I don’t know. I was just thinking of "official Zagat ratings," rather than survey ratings, but it's technically true that there's a rating for Subway on the Zagat website because of that survey of random people who eat fast food.

What I really need is a panel of five trusted judges to decide, in contested cases, whether my bets are right or wrong.

Replies from: mwengler, Douglas_Knight
comment by mwengler · 2014-03-30T17:03:49.500Z · LW(p) · GW(p)

Digging behind the bet, I think you were betting that Louie had been spoofed. If the screenshot Louie sent was really sourced from Zagat and not spoofed, then Zagat did indeed have the opinion lukeprog thought it didn't.

comment by Douglas_Knight · 2014-03-30T05:32:31.086Z · LW(p) · GW(p)

If the purpose of bets is to measure your model against the world, it seems to me that the more valuable lesson is how often you are surprised in the process of evaluating the bet, rather than how often you are correct. If you put 40 or 65% on the hypothesis that the restaurant falls in a particular bin, you aren't surprised either way by the answer, but you both erred in believing that there were just two bins.

comment by Metus · 2014-03-25T10:16:04.832Z · LW(p) · GW(p)

I tried to code a simple bot for recurring threads on LW, based on bots written for Reddit. It doesn't work, as there is apparently either no API or one that differs from vanilla Reddit's. If there is an API, is there documentation for it that I can access?

Replies from: witzvo
comment by witzvo · 2014-03-27T03:28:01.829Z · LW(p) · GW(p)

I don't know about documentation, but you can start looking here.
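
For a concrete starting point, here's a sketch under an explicit assumption: LessWrong runs on a fork of Reddit's open-source code, so it may honor Reddit's convention of serving a JSON listing when `.json` is appended to a page URL. The endpoint, `listing_url`, and `extract_titles` below are hypothetical illustrations, not documented API.

```python
# Hedged sketch: since LessWrong ran on a fork of Reddit's open-source code,
# it MAY serve Reddit-style JSON listings when ".json" is appended to a page
# URL. That endpoint is an assumption here, not a documented API.
import json

def listing_url(base, section):
    """Build a Reddit-style JSON listing URL (hypothetical for LW)."""
    return "{}/{}/.json".format(base.rstrip("/"), section.strip("/"))

def extract_titles(payload):
    """Pull post titles out of a Reddit-style listing payload."""
    return [child["data"]["title"] for child in payload["data"]["children"]]

# Offline demonstration with a payload shaped like Reddit's listing format:
sample = json.loads(
    '{"data": {"children": [{"data": {"title": "Open thread"}}]}}'
)
titles = extract_titles(sample)
```

If the `.json` convention holds, fetching `listing_url("http://lesswrong.com", "r/discussion/new")` with `urllib.request.urlopen` would return such a payload; whether it actually does is exactly what needs checking.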

comment by Vaniver · 2014-03-27T15:23:45.971Z · LW(p) · GW(p)

I was looking for an old Robin Hanson post to use as an example in an upcoming post of mine, and tried to get there through the Opposite Sex, an old post of Eliezer's. When I click that link, though, I get a "You aren't allowed to do that." error, which appears to be a change in the last two years. Anyone know what happened? (My guess is Eliezer or someone decided to retract the article, but it would be nice to know for sure.)

Replies from: Benito, mwengler
comment by Ben Pace (Benito) · 2014-03-28T19:33:10.880Z · LW(p) · GW(p)

On Facebook one time, there was some discussion or other about gender, and a link to the post was made. EY said something to the effect of 'I no longer endorse that post sufficiently enough to keep it up', and took it down.

Replies from: Vaniver
comment by Vaniver · 2014-03-28T20:59:25.409Z · LW(p) · GW(p)

Thanks!

comment by mwengler · 2014-03-30T17:00:03.923Z · LW(p) · GW(p)

Here is a working link.

comment by James_Miller · 2014-03-26T16:38:00.949Z · LW(p) · GW(p)

Being sick makes me stupid. Yesterday I was teaching economics while I had a mild cold. I made multiple simple math mistakes, far more than normal. I need to be mindful that being sick reduces my cognitive capacities.

comment by Leonhart · 2014-03-25T10:31:55.762Z · LW(p) · GW(p)

At my workplace, the question came up of how best to publicly recognise people for good work, while minimising the amount of politics/friction/jealousy that comes about as a direct result of it. We have only just grown past the point where we all know each other well, which is why this sort of thing is becoming interesting.

My initial response to the question was "Make being praised unpleasant, using ugly trophies (sports team strategy) or stupid hats (university graduation strategy)" but I would like to say something more upbeat as well.

Is anyone aware of good writing on the subject/google keywords I could use to find the literature?

Replies from: Nornagest, kalium, bbleeker, James_Miller
comment by Nornagest · 2014-03-25T23:42:13.723Z · LW(p) · GW(p)

You don't want to make being praised unpleasant for the recipient -- that leads to perverse incentives. And you don't want to give an award a stupid name or an embarrassing shape -- part of the point of this sort of thing is that it looks good on your resume or perched over your desk. You want to mark their achievement in a way they'll genuinely appreciate, but simultaneously add symbolism to make their coworkers feel that their status hasn't been diminished.

I think what you're looking for is a little temporary public humiliation, not intrinsic to the award but coming along with more standard recognition. You could do this in several different ways. If it's a fairly small group and the awards are a fairly big deal, for example, you could run a roast as part of the party following the award. You could probably contrive ways to add this kind of symbolism to physical awards, too.

comment by kalium · 2014-03-26T04:22:13.917Z · LW(p) · GW(p)

Isn't being praised already unpleasant enough?

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2014-03-26T15:18:52.745Z · LW(p) · GW(p)

Most people rather like it. It appears you don't; what makes you dislike it?

Replies from: kalium
comment by kalium · 2014-03-26T21:29:27.218Z · LW(p) · GW(p)

Public attention of any kind is just embarrassing. Probably not unrelated to past experiences of politics, friction, and jealousy resulting from praise. But if I were threatened with public praise in my job I would be strongly tempted to quit before it could strike.

Replies from: Punoxysm
comment by Punoxysm · 2014-03-30T08:14:29.546Z · LW(p) · GW(p)

Woah, that's paranoid!

comment by Sabiola (bbleeker) · 2014-03-26T15:28:47.956Z · LW(p) · GW(p)

Find a way to make it non-zero-sum. Only 1 person can be employee of the year, so others lose out and may resent their colleague. Maybe a gold/silver/bronze border around your portrait on the intranet, with a tooltip that explains what you got it for?

comment by James_Miller · 2014-03-25T22:15:28.809Z · LW(p) · GW(p)

Money. Give the person a token bonus and let it be known that if you get a bonus you are not supposed to tell other people about it.

Replies from: ChristianKl
comment by ChristianKl · 2014-03-25T23:12:59.913Z · LW(p) · GW(p)

Money is expensive and not public recognition.

comment by cursed · 2014-03-27T06:00:12.563Z · LW(p) · GW(p)

Cryonics ideas in practice:

"The technique involves replacing all of a patient's blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. "If a patient comes to us two hours after dying you can't bring them back to life. But if they're dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed," says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique."

http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html

comment by Nisan · 2014-03-25T16:55:25.371Z · LW(p) · GW(p)

I welcome criticism of my new personal favorite population axiology:

The value of a world-history that extends the current world-history is the average welfare of every life after the present moment. For people who live before and after the current moment, we need to evaluate the welfare of the portion of their life after the current moment. The welfare of a person's life is allowed to vary nonlinearly with the number of years the person lives a certain kind of life, and it's allowed to depend on whether the person's experiences are veridical.

This axiology implies that it's important to ensure that the future will contain many people who have better lives than us; it's consistent with preferring to extend someone's life by N years rather than creating a new life that lasts N years. It's immune to Parfit's Repugnant Conclusion, but doesn't automatically fall prey to the opposite of the Repugnant Conclusion. It implies that our decisions should not depend on whether the past contained a large, prosperous civilization.

There are straightforward modifications for dealing with general relativity and splitting and merging people.

The one flaw is that it's temporally inconsistent: If future generations average the welfare of lives after their "present moments", they will make decisions we disapprove of.

Replies from: solipsist, philh, shminux, VAuroch
comment by solipsist · 2014-03-25T23:52:41.277Z · LW(p) · GW(p)

I build a robot that hibernates until the last person presently alive dies, then exterminates all people who are poor, unhappy, or don't like my robot. Good thing?

Replies from: Nisan
comment by Nisan · 2014-03-26T16:43:45.684Z · LW(p) · GW(p)

A person that has a life worth living could have the welfare of their life increase monotonically with their lifespan. In that case, ending a life usually makes the world-history worse.

comment by philh · 2014-03-25T18:53:09.246Z · LW(p) · GW(p)

If future generations average the welfare of lives after their "present moments", they will make decisions we disapprove of.

Can you give an example? It seems to me that if they decide at t_1 to maximise average welfare from t_1 to ∞, then given that welfare from t_0 to t_1 is held fixed, that decision will also maximise average welfare from t_0 to ∞.

Edit: oh, I was thinking of an average over time, not people.

Replies from: Nisan
comment by Nisan · 2014-03-25T20:26:20.515Z · LW(p) · GW(p)

Earth produces a long and prosperous civilization. After nearly all the resources are used up, the lean and hardscrapple survivors reason, "let's figure out how to squeeze the last bits of computation out of the environment so that our children will enjoy a better life than us before our species goes extinct". But from our perspective, those children won't have as much welfare as the vast majority of human lives in our future, so those children being born would bring our average down. We would will the hardscrapple survivors to not produce more people.
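
A toy calculation of this scenario (welfare numbers entirely made up) shows the time-inconsistency:

```python
# Toy numbers for the hardscrapple scenario (all welfare values hypothetical).
# The axiology: value, judged at time t, is the average welfare of lives
# lived after t.
prosperous = [100] * 1000  # the long, prosperous civilization in our future
children = [10] * 50       # the hardscrapple survivors' children

def average_welfare(lives):
    return sum(lives) / len(lives)

# Judged from our present (t0), both groups lie in the future:
with_children = average_welfare(prosperous + children)   # ~95.7
without_children = average_welfare(prosperous)           # 100.0
# Creating the children drags our average down, so we disapprove.

# Judged from the survivors' present (t1), only the children lie ahead;
# lives worth living (welfare 10 > 0) look better to them than no future
# lives at all, so they approve. Same axiology, opposite verdicts.
```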

comment by shminux · 2014-03-25T18:30:38.270Z · LW(p) · GW(p)

it's important to ensure that the future will contain many people who have better lives than us

Are you sweeping the complexity of value under the terms "better" and "veridical"? Does following your axiology prevent humanity from evolving into a race of happy-go-lucky clones?

Replies from: Nisan
comment by Nisan · 2014-03-25T20:30:00.383Z · LW(p) · GW(p)

Are you sweeping the complexity of value under the terms "better" and "veridical"?

Yes. It's hard enough to come up with a decent way of aggregating individual welfares without making a comprehensive theory of value.

comment by VAuroch · 2014-03-28T18:45:18.050Z · LW(p) · GW(p)

on whether the person's experiences are veridical.

Is this different from whether their perception of their experiences is correct, or is it jargon?

Replies from: Nisan
comment by Nisan · 2014-03-28T19:03:36.462Z · LW(p) · GW(p)

Yes, I mean (for example) that if a person believes they're married to someone, their life's welfare could depend on whether their spouse is a real person or if it's a simple chatbot. Also, if a person feels that they've discovered a deep insight, their life's welfare could depend on whether they have actually discovered such an insight.

Replies from: VAuroch
comment by VAuroch · 2014-03-28T20:38:30.770Z · LW(p) · GW(p)

So it's just jargon. OK.

comment by Metus · 2014-03-25T07:46:28.586Z · LW(p) · GW(p)

It took me a long time to find LessWrong and I found it through a convoluted and ultimately entirely random series of events. Though English is neither my first language nor do I live in an anglophone country so I'd love to find a similar community in my language, German, or more generally interesting smaller, though active, communities in other languages than English. How would I go about that?

Replies from: Squark, chaosmage, ChristianKl
comment by Squark · 2014-03-25T12:39:15.180Z · LW(p) · GW(p)

There are LessWrong meetups in many countries, in particular there are 4 in Germany.

See http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups

comment by chaosmage · 2014-03-25T09:48:32.493Z · LW(p) · GW(p)

I'm pretty sure that due to the free rider problem, active and lively communities are much more likely to grow via word of mouth than via public advertising or easily googled websites.

Maybe try Mensa? I don't know if they're any good, but they're in Germany and they're big enough to know an LWish community if one is available.

comment by ChristianKl · 2014-03-25T14:15:32.098Z · LW(p) · GW(p)

In Berlin we found that holding Quantified Self events in English makes more sense than holding them in German. All the interested people speak English. Our local Berlin LW meetups are also in English. There's no good reason to use German if you want to discuss intellectual topics on a deep level. English is the language of science. English is the language of programming.

That said, see whether there is a LW meetup, QS meetup, Chaos Computer Club Erfa, or hackerspace in your city. That's where the kind of people who are here hang out. Meetup.com in general is good for finding interesting groups.

When the Chaos Computer Congress was still in Berlin I went there multiple years in a row. Now I don't travel to Hamburg, but if you are free on 27-30 December, going there is a very worthwhile experience: you get to be in the company of very smart people.

As far as general strategies for finding communities, talk to people. Ask them to which communities they belong.

Replies from: Emile
comment by Emile · 2014-03-25T17:54:35.707Z · LW(p) · GW(p)

At Paris meetups we occasionally have more people who speak German than people who speak French :)

Replies from: ChristianKl
comment by ChristianKl · 2014-03-25T22:30:21.681Z · LW(p) · GW(p)

Who actually speak it at the meetup or who can speak it? People in Paris speaking German at a public meetup would go against my idea of French people looking out to protect their language.

Replies from: Emile
comment by Emile · 2014-03-26T07:33:06.602Z · LW(p) · GW(p)

Who can speak it; we usually speak English though not if all participants speak good French.

comment by John_Maxwell (John_Maxwell_IV) · 2014-03-26T06:13:29.561Z · LW(p) · GW(p)

LW may be interested to learn about Amazon Smile, which gives 0.5% of your Amazon purchases to charity, and the Smile Always Chrome extension that will route your browser to smile.amazon.com by default. (Yes, you can support MIRI through Amazon Smile.) Total setup time estimated at under 5 minutes.

Oh yeah, it looks like they're having some kind of promotion where if you sign up and make a purchase by March 31, they will give an extra $5 to your chosen charity.

Replies from: TylerJay, Metus
comment by TylerJay · 2014-03-26T17:44:24.319Z · LW(p) · GW(p)

I have been using Amazon Smile and Smile Always for MIRI for about a year.

IIRC, Amazon Smile used to be listed on MIRI's Donate for free page, but has since been replaced by "Good Shop". Good Shop appears to give a higher percentage, but I was unable to get the browser extension working so that it happened automatically, so I still use Smile. If anyone knows of a way to get it working, I'd be happy to hear it. But I tried to do it manually for a while, and I just don't remember often enough.

comment by Metus · 2014-03-26T06:34:12.591Z · LW(p) · GW(p)

Is this for the EU too? 0.5% seems a bit low: for every $200 you spend, the charity receives $1. Then again, it is about the aggregate effect.

Edit: Browsing through the charities, they are all located in the US, so it seems to be specific to the US.

comment by tgb · 2014-03-27T02:11:50.675Z · LW(p) · GW(p)

Facebook bought Oculus Rift for $2 billion. What makes this, and so many other large deals, such clean numbers? Are the press rounding the details? Are the companies only releasing approximate or estimate numbers? Can the value of a company like Oculus really not be estimated to the nearest 10%? Or do these whole numbers just serve as nice Schelling points on which to hinge a bargain? Or am I forgetting lots of ugly-numbered deals?

(WhatsApp purchase was 2 significant figures, and this list on Wikipedia does show mostly 2-3 significant figures though some figures are probably converted from other currencies.)

Replies from: bramflakes, None, gwern, Squark
comment by bramflakes · 2014-03-27T11:34:06.752Z · LW(p) · GW(p)

In cases like this, a large portion of the "money" paid is actually in the form of shares, which can vary wildly on a day-to-day basis (especially during a takeover!). It doesn't make sense to specify the value of it too precisely because nobody knows what the shares are going to be worth tomorrow.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-03-29T21:27:43.316Z · LW(p) · GW(p)

That's a good reason for the press to round such figures, but the actual figure is round, on the day it was decided, and that itself is mysterious.

comment by [deleted] · 2014-03-27T03:01:32.224Z · LW(p) · GW(p)

What makes you think that these numbers are determined by some kind of rational cost-benefit analysis rather than those with the money rattling off numbers until those with the property give in?

comment by gwern · 2014-03-27T02:54:51.206Z · LW(p) · GW(p)

Why not all of the above? We can see some rounding already; http://investor.fb.com/releasedetail.cfm?ReleaseID=835447 says

Facebook today announced that it has reached a definitive agreement to acquire Oculus VR, Inc., the leader in immersive virtual reality technology, for a total of approximately $2 billion. This includes $400 million in cash and 23.1 million shares of Facebook common stock (valued at $1.6 billion based on the average closing price of the 20 trading days preceding March 21, 2014 of $69.35 per share). The agreement also provides for an additional $300 million earn-out in cash and stock based on the achievement of certain milestones.

There's rounding right there (23.1m * 69.35 is not a round number). And there's plenty of uncertainty about how much they will actually pay: how can anyone know how much of that earn-out will ultimately be paid?

Replies from: Douglas_Knight, tgb
comment by Douglas_Knight · 2014-03-29T21:26:35.868Z · LW(p) · GW(p)

There's rounding right there (23.1m * 69.35 is not a round number).

That sounds backwards to me. It looks to me like they started at the round 2.0, set 20% in cash and 80% in stock, found that the stock portion required about 23.1m shares of Facebook, and rounded that to three digits to get within 2% of the 1.6 they wanted.

Also, the incentive pay of 300m is 15% of the fixed payment - less uncertainty than rounding to 1 significant figure.
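
A quick check of that reconstruction, using the figures from the press release quoted above:

```python
# Working backwards from the round $2.0b headline, per the press release.
total = 2.0e9
cash = 0.20 * total              # $400m in cash
stock_target = 0.80 * total      # $1.6b in stock
share_price = 69.35              # 20-day average closing price

shares_exact = stock_target / share_price    # ~23,071,377 shares
shares_rounded = round(shares_exact, -5)     # 3 significant figures: 23.1m
stock_actual = shares_rounded * share_price  # ~$1.602b, ~0.1% over target

earn_out_fraction = 300e6 / total            # incentive pay: 15% of $2.0b
```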

comment by tgb · 2014-03-27T13:12:57.826Z · LW(p) · GW(p)

It absolutely could be all of the above! But see, I write questions like this fairly frequently: I notice something surprising and don't have a good explanation for it. I then write down the question and pose a couple of possible explanations, which makes me think of more possible explanations. Frequently I realize that, taken together, the possibilities I thought of are enough to explain what surprised me, and I don't even ask the question. Other times, like this one, I still feel like I'm missing something. So I ask the question.

In this case it looks like the biggest thing I missed was how much these sales' values depend upon the moment-to-moment stock prices of the parties involved, so that not rounding them hardly even makes sense, as you and bramflakes point out. Thanks!

comment by Squark · 2014-03-27T20:13:57.714Z · LW(p) · GW(p)

Google's IPO was e × 10^9 dollars :)

comment by Tenoke · 2014-03-25T09:59:02.069Z · LW(p) · GW(p)

No open_thread tag. ('Latest Open Thread' doesn't link to here)

Edit: For some reason the one before doesn't have the tag either..

Replies from: Metus, Oscar_Cunningham
comment by Metus · 2014-03-25T10:13:33.125Z · LW(p) · GW(p)

Fixed, I hope entering the tag post hoc fixes the issue.

comment by Oscar_Cunningham · 2014-03-25T10:49:37.499Z · LW(p) · GW(p)

Edit: For some reason the one before doesn't have the tag either..

People have been posting in both recently.

comment by shminux · 2014-03-28T18:52:08.965Z · LW(p) · GW(p)

Scott Aaronson on subjectivity of qualia:

no matter how much is discovered about neurobiology and the measurable correlates of consciousness, it seems to me that stoners will always be able to ask each other, “dude, what if like, my red is your blue?”

Replies from: Benito
comment by Ben Pace (Benito) · 2014-03-28T19:41:17.453Z · LW(p) · GW(p)

Lol.

no matter how much is discovered about mathematics and the measurable regularities of reality, it seems to me that stoners will always be able to ask each other, “dude, what if like, two plus two isn't four?”

Seriously though, that's a really bad argument, why have you added it here?

Replies from: shminux, fubarobfusco, XiXiDu
comment by shminux · 2014-03-28T22:00:45.177Z · LW(p) · GW(p)

You can read the full argument in the comments to http://www.scottaaronson.com/blog/?p=1753

comment by XiXiDu · 2014-03-29T10:11:18.307Z · LW(p) · GW(p)

“dude, what if like, my red is your blue?”

“dude, what if like, two plus two isn't four?”

Are you indicating that only the relation between wavelengths and the brain's information processing counts, and that differing conscious perceptions of these wavelengths are analogous to the use of different sets of symbols used to denote the additive relationship between "two-ness" and "four-ness" (two plus two equals four and deux plus deux égalent quatre)?

Replies from: Benito
comment by Ben Pace (Benito) · 2014-03-29T15:34:03.862Z · LW(p) · GW(p)

No, I just meant that, just because a 'stoner' can ask a question, doesn't mean the answer to the question is permanently unknowable.

Edit: Or even that difficult to answer. In fact, that a stoner can ask a question is almost no evidence of anything at all. Applying the principle of charity, if Scott meant that anyone can always ask that question, that's true for any question; you can always keep asking if two plus two equals four. Now, if in fact Aaronson wants to present evidence for the claim that we can never know whether your and my 'blues and reds' are the same, that would be cool, but there was no real argument given.

comment by Squark · 2014-03-30T18:34:07.661Z · LW(p) · GW(p)

Recently I changed some of my basic opinions about life, in large part because of interaction with LessWrong (mostly along the axes Deism -> Atheism, ethical naturalism -> something else (?)).

It inspired me to try to summarize my most fundamental beliefs. The result is as follows:

  1. Epistemology

1.1. Epistemic truth is to be determined solely by the scientific method / Occam's razor.

1.2. The worldview of mainstream science is mostly correct.

1.3. The many religious / mystical traditions are wrong.

  2. Philosophy of mind

2.1. Consciousness is the result of computing processes in the brain. In particular, if a machine implemented the same computations, it would be conscious. However, in general I don't know what consciousness is.

2.2. Identity is not fundamentally meaningful. However, there might be useful "fuzzy" variants of the concept.

  3. Metaethics

3.1. Humans are agents with (approximately) well-defined utility functions.

3.2. The moral value of an action is the expectation value of the utility function of the respective agent.

3.3. I should take actions with as much value as possible. This is the only meaningful interpretation of "should".

  4. Ethics

4.1. Human utility functions are complex.

4.2. I cannot give anything close to a full description of my utility function, but it seems to involve terminal values such as: beauty, curiosity, humor, kindness, friendship, love, sexuality / romance, pleasure... These values are computed on all sufficiently human agents (but I don't know what "sufficiently human" means). The weights for myself and my friends / loved ones might be higher but I'm not sure.

Less fundamental and less certain are:

  5. Metaphysics

5.1. UDT is the correct decision theory.

5.2. Epistemic questions, as opposed to decision-theoretic questions, don't make fundamental sense (I realize the apparent contradiction with 1.1, but 1.1 is still a useful approximation, and there's also a meta-epistemic level on which UDT itself follows from Occam's razor). Subjective expectations are ill-defined.

5.3. Tegmark's level IV multiverse is real, or at least as "real" as anything is.

I'm curious to know how many LessWrongers have similar vs different worldviews.

Replies from: polymathwannabe, fubarobfusco, None
comment by polymathwannabe · 2014-03-31T17:08:39.797Z · LW(p) · GW(p)

What is UDT?

Replies from: Nornagest
comment by fubarobfusco · 2014-03-31T15:14:03.492Z · LW(p) · GW(p)

1.1. Epistemic truth is to be determined solely by the scientific method / Occam's razor.

Is this an epistemic truth?

Replies from: Squark
comment by Squark · 2014-03-31T20:26:09.590Z · LW(p) · GW(p)

No :) See below.

comment by [deleted] · 2014-03-30T21:07:29.844Z · LW(p) · GW(p)

1.1 - Disagree, but I may not understand the claim (what's 'epistemic truth'?).

1.2 - Agree.

1.3 - Agree.

2.1 - Agree that consciousness is the result of computing processes in the brain; disagree that a machine implementing the same computations would necessarily be conscious (i.e. agree with physicalism, don't agree with functionalism).

2.2 - I don't understand the claim. But I think I disagree.

3.1 - Agnostic.

3.2 - Disagree.

3.3 - Disagree, especially with the claim that this is the only meaningful interpretation of 'should'.

4.1 - Agnostic.

5.1 - Agnostic.

5.2 - I don't understand this at all.

5.3 - I don't understand your use of the word 'real'.

Replies from: Squark
comment by Squark · 2014-03-31T20:25:19.971Z · LW(p) · GW(p)

By "epistemic truth" I mean truth regarding the physical universe. Maybe that is a poor choice of words. Physical truth?

Replies from: None
comment by [deleted] · 2014-03-31T20:32:58.181Z · LW(p) · GW(p)

So do you mean 'the only grounds for knowledge about the physical universe is the scientific method/Occam's razor'?

Replies from: Squark
comment by Squark · 2014-03-31T20:43:00.916Z · LW(p) · GW(p)

Yep. Although under a UDT / multiverse interpretation it becomes "knowledge about the region of the multiverse in which I am located".

comment by fubarobfusco · 2014-03-29T15:40:01.508Z · LW(p) · GW(p)

The Good, the Bad, and the Just: Justice Sensitivity Predicts Neural Response during Moral Evaluation of Actions Performed by Others.

Morality is a fundamental component of human cultures and has been defined as prescriptive norms regarding how people should treat one another, including concepts such as justice, fairness, and rights. Using fMRI, the current study examined the extent to which dispositions in justice sensitivity (i.e., how individuals react to experiences of injustice and unfairness) predict behavioral ratings of praise and blame and how they modulate the online neural response and functional connectivity when participants evaluate morally laden (good and bad) everyday actions. Justice sensitivity did not impact the neuro-hemodynamic response in the action-observation network but instead influenced higher-order computational nodes in the right temporoparietal junction (rTPJ), right dorsolateral and dorsomedial prefrontal cortex (rdlPFC, dmPFC) that process mental states understanding and maintain goal representations. Activity in these regions predicted praise and blame ratings. Further, the hemodynamic response in rTPJ showed a differentiation between good and bad actions 2 s before the response in rdlPFC. Evaluation of good actions was specifically associated with enhanced activity in dorsal striatum and increased the functional coupling between the rTPJ and the anterior cingulate cortex. Together, this study provides important knowledge in how individual differences in justice sensitivity impact neural computations that support psychological processes involved in moral judgment and mental-state reasoning.

comment by ephion · 2014-03-27T15:37:16.233Z · LW(p) · GW(p)

I've seen a lot of discontent on LW about exercise. I know enough about physical training to provide very basic coaching and instruction to get people started, and I can optimize a plan for a variety of parameters (including effectiveness, duration of workout, frequency of workout, cost of equipment, space of equipment, gym availability, etc.). If anyone is interested in some free one-on-one help, post a request for your situation, budget, and needs and I'll write up some basic recommendations.

I don't have much in the way of credentials, except that I've coached myself through all of my training and have made decent progress (from sedentary fat weakling to deadlifting 415 lbs at 190 lb bodyweight and 13% body fat). I've helped several friends, all of whom have made pretty good progress, and I've been able to tailor workouts to specific situations.

comment by Stefan_Schubert · 2014-03-26T21:13:17.480Z · LW(p) · GW(p)

I've been reading a number of books lately that I guess could be classified as "pop psychology" and "pop economics". (In this category I include books like Kahneman's Thinking, Fast and Slow. Hence what I mean by "pop" is not that it's shallow, but rather that it has a wide lay audience.) Now I'd like to turn to sociology - arguably the most general and all-encompassing of the social sciences. But when I google "pop sociology", all the books seem to have been written by economists or psychologists or non-academics such as Malcolm Gladwell. For instance, see here:

https://www.goodreads.com/shelf/show/pop-sociology

Are there no well-known pop-sociological books written by sociologists, and if so, what does this say about sociology as a discipline? You very seldom hear about sociological research in the media compared with economics and psychology, and surely there has to be an explanation for this?

Replies from: Benito, iarwain1, Douglas_Knight, solipsist
comment by iarwain1 · 2014-03-27T16:13:49.017Z · LW(p) · GW(p)

You can try Everything Is Obvious by Duncan Watts. He's a multi-disciplinary type, but he is at least partially a sociologist. He also discusses a lot in there about sociology as a discipline.

comment by Douglas_Knight · 2014-03-27T01:16:02.403Z · LW(p) · GW(p)

Erving Goffman is said to be accessible and said to be a sociologist. One issue is that "sociology" has several meanings. His version shades into psychology.

comment by solipsist · 2014-03-26T21:54:56.838Z · LW(p) · GW(p)

The sociologist Charles Murray says interesting things. I don't usually agree with them, but they make me think.

comment by pianoforte611 · 2014-03-30T18:57:19.855Z · LW(p) · GW(p)

Am I confused about frequentism?

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis (which is the complement of the hypothesis that you are trying to test).

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

So what I'm wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it's true or not, the data has a certain propensity of turning out one way or the other. It's meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.
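To make the recipe concrete, here is a minimal sketch of a one-sided frequentist test in Python (the coin-fairness null is my own illustrative choice, not from the thread):

```python
from math import comb

def p_value_one_sided(heads, flips, p_null=0.5):
    """P(at least `heads` heads in `flips` flips | null: coin bias = p_null)."""
    return sum(comb(flips, k) * p_null**k * (1 - p_null)**(flips - k)
               for k in range(heads, flips + 1))

# 60 heads out of 100 flips against the null "the coin is fair":
p = p_value_one_sided(60, 100)
print(round(p, 3))  # ~0.028, below the usual 0.05 threshold, so reject the null
```

Note that the calculation only ever produces P(data at least as extreme | null); nothing in it mentions P(hypothesis).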

Replies from: VincentYu, pcm, Oscar_Cunningham, IlyaShpitser, Lumifer, army1987
comment by VincentYu · 2014-03-30T21:06:44.931Z · LW(p) · GW(p)

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis.

This is correct.

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)

This is not correct. You seem to be under the impression that

P(data | null hypothesis) + P(data | complement(null hypothesis)) = 1,

but this is not true because

  1. complement(null hypothesis) may not have a well-defined distribution (frequentists might especially object to defining a prior here), and
  2. even if complement(null hypothesis) were well defined, the sum could fall anywhere in the closed interval [0, 2].

More generally, most people (both frequentists and bayesians) would object to "accepting the hypothesis" based on rejecting the null, because rejecting the null means exactly what it says, and no more. You cannot conclude that an alternative hypothesis (such as the complement of the null) has higher likelihood or probability.

Replies from: pianoforte611
comment by pianoforte611 · 2014-03-31T02:13:45.759Z · LW(p) · GW(p)

even if complement(null hypothesis) were well defined, the sum could fall anywhere in the closed interval [0, 2].

Huh? P(X|Y) + P(X|Y') = P(X) and an event that has already occurred has a probability of one. Am I missing something?

comment by pcm · 2014-03-31T16:06:15.294Z · LW(p) · GW(p)

But it seems more useful to me to calculate P(hypothesis | data).

That may be true if you have little influence over what data is available.

Frequentists are mainly interested in situations where they can create experiments that cause P(hypothesis) to approach 0 or 1. The p-value is intended to be good at deciding whether the hypothesis has been adequately tested, not at deciding whether to believe the hypothesis given crappy data.

comment by Oscar_Cunningham · 2014-03-31T08:37:36.482Z · LW(p) · GW(p)

Your conclusion

So what I'm wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it's true or not, the data has a certain propensity of turning out one way or the other. It's meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.

is correct. Frequentists do indeed claim that P(hypothesis | data) is meaningless for exactly the reasons you gave. However there are some little details in the rest of your post that are incorrect.

null hypothesis (which is the complement of the hypothesis that you are trying to test).

The hypothesis you are trying to test is typically not the complement of the null hypothesis. For example we could have:

H0: theta = 0

H1: theta > 0

where theta is some variable that we care about. Note that the region theta < 0 isn't in either hypothesis. If we were instead testing

H1': theta ≠ 0

then frequentists would suggest a different test. They would use a one-tailed test to test H1 and a two-tailed test to test H1'. See here.

P(data | hypothesis) = 1 - p-value

No. This is just mathematically wrong. P(A|B) is not necessarily equal to 1-P(A|¬B). Just think about it for a bit and you'll see why. If that doesn't work, take A="sky is blue" and B="my car is red" and note that P(A|B)=P(A|¬B)~1.
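A tiny numeric check of this point (the joint distribution is a toy example of my own construction, chosen so that A is nearly independent of B):

```python
# Toy joint distribution over (A, B): P(A|B) + P(A|not B) need not equal 1.
P = {(True, True): 0.45, (True, False): 0.45,
     (False, True): 0.05, (False, False): 0.05}

def cond(a_val, b_val):
    """P(A = a_val | B = b_val), read off the joint table."""
    num = sum(p for (a, b), p in P.items() if a == a_val and b == b_val)
    den = sum(p for (a, b), p in P.items() if b == b_val)
    return num / den

print(cond(True, True), cond(True, False))  # 0.9 and 0.9: they sum to 1.8, not 1
```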

comment by IlyaShpitser · 2014-03-31T11:57:49.332Z · LW(p) · GW(p)

So what I'm wondering is whether under frequentism P(hypothesis | data) is actually meaningless.

It's not meaningless, but people who follow R. A. Fisher's ideas for rejecting the null do not use p(hypothesis | data). "Meaningless" would be if frequentists literally did not have p(hypothesis | data) in their language, which is not true because they use probability theory just like everybody else.


Don't ask lesswrong about what frequentists claim, ask frequentists. Very few people on lesswrong are statisticians.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-03-31T19:41:11.393Z · LW(p) · GW(p)

"Meaningless" would be if frequentists literally did not have p(hypothesis | data) in their language, which is not true because they use probability theory just like everybody else.

Many frequentists do insist that probabilities like P(hypothesis) are meaningless, despite "using probability theory."

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-03-31T20:13:34.678Z · LW(p) · GW(p)

Could you give me something to read? Who are these frequentists, and where do they insist on this?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-03-31T20:42:49.668Z · LW(p) · GW(p)

Let us take a common phrase from the original comment "the hypothesis is either true or false". The first google hit:

There are two misconceptions that you must be aware of, as you will certainly hear these. The first is thinking that we calculate the probability of the null hypothesis being true or false. Whether the null hypothesis is true or false is not subject to chance; it either is true or it is false - there is no probability of one or the other.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-03-31T23:58:23.917Z · LW(p) · GW(p)

So from this statement you conclude that frequentists think P(hypothesis) is meaningless? Bayesians assign degrees of belief to things that are actually true or false also. The coin really is either fair or not fair, but you will never find out with finite trials. This is a map/territory distinction, I am surprised you didn't get it. This quote has nothing to do with B/F differences.

A Bayesian version of this quote would point out that it is a type error to confuse the truth value of the underlying thing, and the belief about this truth value.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2014-04-01T09:02:25.888Z · LW(p) · GW(p)

You have successfully explained why it is irrational for frequentists to consider P(hypothesis) meaningless. And yet they do. They would say that probabilities can only be defined as limiting frequencies in repeated experiments, and that for a typical hypothesis there is no experiment you can rerun to get a sample for the truth of the hypothesis.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-04-01T09:22:49.063Z · LW(p) · GW(p)

You guys need to stop assuming frequentists are morons. Here are posts by a frequentist:

http://normaldeviate.wordpress.com/2012/12/04/nate-silver-is-a-frequentist-review-of-the-signal-and-the-noise/

http://normaldeviate.wordpress.com/2012/11/17/what-is-bayesianfrequentist-inference/

Some of the comments are good as well.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2014-04-01T11:15:21.712Z · LW(p) · GW(p)

Yes, you're right. Clearly many people who identify as frequentists do hold P(hypothesis) to be meaningful. There are statisticians all over the B/F spectrum as well as not on the spectrum at all. So when I said "frequentists believe ..." I could never really be correct because various frequentists believe various different things.

Perhaps we could agree on the following statement: "Probabilities such as P(hypothesis) are never needed to do frequentist analysis."

For example, the link you gave suggests the following as a characterisation of frequentism:

Goal of Frequentist Inference: Construct procedure with frequency guarantees. (For example, confidence intervals.)

Frequency guarantees are typically of the form "for each possible true value of theta, doing the construction blah on the data will, with probability at least 1-p, yield a result with property blah". Since this must hold true for each theta, the distribution of the true value of theta is irrelevant.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-04-01T12:15:59.536Z · LW(p) · GW(p)

I could never really be correct because various frequentists believe various different things.

The interesting questions to me are: (a) "what is the steelman of the frequentist position?" (folks like Larry are useful here), and (b) "are there actually prominent frequentist statisticians who say stupid things?"

By (b) I mean "actually stupid under any reasonable interpretation."


Clearly many people who identify as frequentists

Quote from the url I linked:

One thing that has harmed statistics — and harmed science — is identity statistics. By this I mean that some people identify themselves as “Bayesians” or “Frequentists.” Once you attach a label to yourself, you have painted yourself in a corner.

When I was a student, I took a seminar course from Art Dempster. He was the one who suggested to me that it was silly to describe a person as being Bayesian or Frequentist. Instead, he suggested that we describe a particular data analysis as being Bayesian or Frequentist. But we shouldn't label a person that way.

I think Art’s advice was very wise.

"Keep your identity small" -- advice familiar to a LW audience.


Perhaps we could agree on the following statement: "Probabilities such as P(hypothesis) are never needed to do frequentist analysis."

I guess you disagree with Larry's take: B vs F is about goals not methods. I could do Bayesian looking things while having a frequentist interpretation in mind.


In the spirit of collaborative argumentation, can we agree on the following:

We have better things to do than engage in identity politics.

comment by Lumifer · 2014-03-31T16:44:49.251Z · LW(p) · GW(p)

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

It is not the same thing, and knowing P(hypothesis | data) would be very useful. Unfortunately, it is also very hard to estimate, because usually the best you can do is calculate the probability, given the data, of a hypothesis out of a fixed set of hypotheses which you know about and for which you can estimate probabilities. If your understanding of the true data-generation process is not so good (which is very common in real life), your P(hypothesis | data) is going to be pretty bad and, what's worse, you have no idea how bad it is.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-03-31T19:43:18.258Z · LW(p) · GW(p)

Not having a good grasp on the set of all hypotheses does not distinguish bayesians from frequentists and does not seem to me to motivate any difference in their methodologies.

Added: I don't think it has much to do with the original comment, but testing a model without specific competition is called "model checking." It is a common frequentist complaint that bayesians don't do it. I don't think that this is an accurate complaint, but it is true that it is easier to fit it into a frequentist framework than a bayesian framework.

Replies from: Lumifer
comment by Lumifer · 2014-03-31T19:48:31.244Z · LW(p) · GW(p)

I have said nothing about the differences between bayesians and frequentists. I just pointed out some issues with trying to estimate P(hypothesis | data).

comment by A1987dM (army1987) · 2014-03-30T19:56:54.862Z · LW(p) · GW(p)

As far as I can tell, you're correct.

comment by shminux · 2014-03-25T15:58:33.027Z · LW(p) · GW(p)

Extreme fallacy of gray illustrated by SMBC.

Replies from: RowanE
comment by RowanE · 2014-03-25T19:32:03.216Z · LW(p) · GW(p)

I don't think that's really the fallacy of grey. If I have to map it to a piece of LessWrong-speak, I'd call it privileging the hypothesis: it's not saying you can't be sure of anything (if you're a good Bayesian it's not really telling you anything new there); it's just bringing one particular super-implausible hypothesis to your attention so that it occupies much more of your mind than it has any right to.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-03-26T04:18:07.786Z · LW(p) · GW(p)

The Hitlers are grey, though...

:D

comment by Metus · 2014-03-25T07:48:28.577Z · LW(p) · GW(p)

I am assembling a list of interesting blogs to read and for that purpose I'd love to see the kind of blog the people in this community recommend as a starting point. Don't see this just as a request to post blogs according to my unknown taste but as a request to post blogs according to your taste in the hope that the recommendation scratches an itch in this community.

Replies from: adbge, sixes_and_sevens, Bobertron, beoShaffer, ChristianKl, pragmatist
comment by adbge · 2014-03-25T15:58:01.425Z · LW(p) · GW(p)

Here's a sampling of the best in my RSS reader:

gwern posts on google+ and Kaj Sotala posts interesting stuff on Facebook. I also subscribe to a number of journals' tables of contents via this site to keep up with research, and some stuff on arxiv.

Replies from: labachevskij
comment by labachevskij · 2014-03-25T16:11:49.754Z · LW(p) · GW(p)

I have to admit the intersection with my feed list is most definitely non-empty: I'd add Good Math Bad Math, mathematics, computer science and, sometimes, recipes and playlists.

comment by sixes_and_sevens · 2014-03-25T15:34:51.787Z · LW(p) · GW(p)

After discussing diffusion of interesting news at the most recent London meetup, I was planning on asking something like this myself.

Futility Closet is nothing but "interesting stuff". It describes itself as "a collection of entertaining curiosities in history, literature, language, art, philosophy, and mathematics, designed to help you waste time as enjoyably as possible". It has more chess than I personally care for, but is updated with what I find to be novel content three times a day.

Conscious Entities is a blog on Philosophy of Mind. It takes an open position on a lot of questions we would consider to be settled on LessWrong, but I think it has value in a steel-manning / why-do-people-believe-this capacity.

(The categories on my feed reader are "Blogs", "CS", "Dance", "Econ", "Esoterica", "Maths/Stats", "Philosophy", "Science" and "Webcomics". I'd be interested in finding out how other people classify theirs.)

Replies from: labachevskij
comment by labachevskij · 2014-03-26T10:22:44.099Z · LW(p) · GW(p)

I have: "News", "Friends", "Comics", "RPG", "Android", "LW" , "Climbing" and "Maths".

comment by Bobertron · 2014-03-25T11:14:17.783Z · LW(p) · GW(p)

There are some blogs mentioned on the wiki.

comment by beoShaffer · 2014-03-26T01:30:58.342Z · LW(p) · GW(p)

Mr.Money Mustache on personal finance.

comment by ChristianKl · 2014-03-25T12:27:49.839Z · LW(p) · GW(p)

In the Pipeline is a very good blog for keeping up with what's happening in big pharma and chemistry in general. It's written by someone employed inside a pharma company, not someone who criticizes pharma policy from an outsider standpoint. There are some posts about specific chemical reactions that I skip, but if you want to understand how the healthcare industry works, I can recommend the blog very much. It also provides good coverage when a relevant new biomedical finding makes the news. The comment section is usually full of people with domain experience.

I used to read Matt Taibbi's Rolling Stone blog column for financial news. Stories about Libor are explained in detail on that blog.

Matt has now left Rolling Stone and will lead his own magazine under the banner of First Look Media, which is funded by tech billionaire and eBay cofounder Pierre Morad Omidyar. At the moment First Look Media already publishes Glenn Greenwald & company, whose primary job is still processing the pile of Snowden documents.

I read Glenn Greenwald for a long time, but reading about the NSA every other day got boring, so I won't read everything that comes out in The Intercept, though I subscribe to the feed.

As German-language independent news sources I read hintergrund, fefe, and the in-depth podcast Alternativlos.

Alternativlos is an interesting project. Two members of the Chaos Computer Club basically concluded that speaking to the public and explaining to them how things work politically is "without alternative", and started explaining topics in two-hour episodes. It covers a topic like Stuxnet in enough depth to explain what exploits cost on the black market.

Today some public radio stations rebroadcast the podcast.

comment by pragmatist · 2014-03-27T04:16:16.639Z · LW(p) · GW(p)

I don't really read many blogs frequently. The one sort-of-bloggish site I visit daily is 3 Quarks Daily. They link to a number of interesting articles every day, and their taste in topics coincides quite nicely with mine.

comment by Dan_Moore · 2014-03-28T16:10:13.564Z · LW(p) · GW(p)

I am wondering about the effect of the advent of self-driving cars on urban sprawl. Will it increase or decrease sprawl?

Urban sprawl is said to be an unintended consequence of the development of the US interstate highway system.

Replies from: IlyaShpitser, Vaniver, Daniel_Burfoot, Nornagest
comment by IlyaShpitser · 2014-03-28T18:54:52.440Z · LW(p) · GW(p)

Urban sprawl is said to be an unintended consequence of the development of the US interstate highway system.

Passive voice / finding causes is hard / compare LA and SF.


This is not really a haiku.

comment by Vaniver · 2014-03-28T17:13:44.380Z · LW(p) · GW(p)

Some factors pointing towards an increase: decreased emotional, health, and financial cost of commutes.

Some factors pointing towards a decrease: decreased cost and increased flexibility and convenience of car-sharing services, which work better in higher density locations.

I think the primary driver of urban sprawl is schooling (and thus home prices), not commuting. The growing acceptance of online schooling will likely decrease urban sprawl significantly.

comment by Daniel_Burfoot · 2014-03-30T14:02:03.034Z · LW(p) · GW(p)

Here's an economic analysis: self-driving cars reduce the effective cost of a commute by allowing the passenger to focus on things other than driving (reading, watching TV, playing games, etc.). Since a significant limit to sprawl is the cost, broadly construed, of long commutes, self-driving cars will increase sprawl.

comment by Nornagest · 2014-03-28T16:29:22.868Z · LW(p) · GW(p)

Based on the idea that you get what you incentivize, and irrespective of other factors, I'd expect a marginal to mild increase. Self-driving cars can make commutes a bit more pleasant and substantially less dangerous, but they can't reduce commute times (until they reach saturation or close to it), and time's the main limitation.

Replies from: Douglas_Knight, Dan_Moore
comment by Douglas_Knight · 2014-03-29T21:44:51.854Z · LW(p) · GW(p)

Actually, I think that there is great potential for self-driving cars to reduce congestion and thus commute time.

comment by Dan_Moore · 2014-03-28T17:03:27.660Z · LW(p) · GW(p)

Some of the effects will depend on details of the implementation. For example, if self-driving cars are constrained to obey highway speed limits, the commute time may increase in some cases, at least initially. Upon achieving saturation of self-driving cars, I would expect shorter commute times on non-highways. Also, upon saturation, it may be seen as desirable to raise the highway speed limit.

Replies from: None
comment by [deleted] · 2014-03-29T01:13:54.871Z · LW(p) · GW(p)

Keep in mind that non-self-driving cars will always be cheaper and will always have a market no matter how good autonomous ones get since autonomous vehicles have more parts and maintenance needs. You will never have an area with only autonomous vehicles, absent massive government intervention.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-03-29T12:18:01.353Z · LW(p) · GW(p)

I place the likelihood of massive government intervention or the equivalent (insurance becoming flat-out unavailable for manual drivers) at somewhere north of 90%. Driver error has a really high cost in quality-adjusted life years, every year. If eliminating that cost becomes an option, it will get used.

comment by JQuinton · 2014-03-28T15:07:14.447Z · LW(p) · GW(p)

A friend of mine has mild anorexia (she's on psych meds to keep it contained) and recently asked me some advice about working out. She told me that she is mainly interested in not being so skinny. I offered to work out with her one day of the week to make sure she's going about things correctly, with proper form and everything.

The thing is, just going to the gym and working out isn't effective if her diet and sleeping cycle aren't also improved. I would normally be really blunt about these other facts, but her dealing with anorexia probably complicates things a bit... especially the proper diet part. I was thinking that if she has trouble eating enough, maybe she could try drinking some protein shakes. But I'm not sure if that would actually be effective in helping her reach her goal of putting on more weight if she's not eating properly other times of the day. If anyone has any advice on how I could more effectively broach that subject without being insulting or belittling I would appreciate it.

Replies from: ChristianKl, NancyLebovitz, polymathwannabe, ephion, Daniel_Burfoot
comment by ChristianKl · 2014-03-28T17:50:22.578Z · LW(p) · GW(p)

If she's on medication to contain her anorexia, she knows she has an issue. You could start by simply asking her what she eats and listening empathically.

I would also suggest that you think about your relationship with her. What does she want? Does she want your approval? Does she want you to tell her what to do, so as not to have to take responsibility for herself? Does she care about looking beautiful to you? Does she want a relationship with you? Do you want a relationship with her?

Knowing answers to questions like that is important when you deal with deep psychological issues. It shapes how the words you say will be understood.

comment by NancyLebovitz · 2014-03-28T18:02:06.048Z · LW(p) · GW(p)

Do you have any thoughts about whether she's at risk for an exercise disorder?

Replies from: JQuinton
comment by JQuinton · 2014-03-31T16:20:00.950Z · LW(p) · GW(p)

That's actually a good question. Without disclosing too much of her psych history, she seems to be really impulsive and might even be prone to addiction. I suppose she could get an exercise disorder... this makes it even more complicated than I thought.

Replies from: shokwave
comment by shokwave · 2014-03-31T17:06:49.985Z · LW(p) · GW(p)

I'd caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because exercise might be one approach to actually getting out from under the anorexia by exerting control over her body).

Likely a better approach to this concern would be to silently watch for those behaviours developing and worry about it if and when it actually does happen. (Note that refusing to help her with training and diet means she gets this help from someone who is not watching out for the possibility of exercise addiction).

There are a few approaches that might work for different people:

  • Talk as though she doesn't have anorexia. Since you are aware, you can tailor your message to avoid saying anything seriously upsetting (i.e. you can present the diet assuming control of diet is easy, or assuming control of diet is hard). I don't recommend this approach.
  • Confront the issue directly ("Exercise is what tells your body to grow muscle, but food is what muscles are actually built out of, so without a caloric surplus your progress will be slow. I'm aware that this is probably a much harder challenge for you than most people..."). I don't recommend this approach.
  • Ask her how she feels about discussing diet. ("Do you feel comfortable discussing diet with me? Feel free to say no. Also, don't feel constrained by your answer to this question; if later you start wishing you'd said no, just say that, and I'll stop."). I recommend this approach.

In any case, make it clear from the outset you want to be respectful about it.

comment by polymathwannabe · 2014-03-28T16:54:48.327Z · LW(p) · GW(p)

You may directly ask her in what terms she prefers to discuss those matters. That way you'll get your message across, i.e. that proper diet is worth talking about, with little risk of mixing the wrong message in.

Replies from: VAuroch
comment by VAuroch · 2014-03-28T18:09:28.514Z · LW(p) · GW(p)

This form of question seems no less likely to raise problems than asking about the topic itself.

I'd suggest something more along the lines of "This is pretty standard advice, and it works for most people, but it's built on some assumptions about an average diet. How much should I be tailoring it for you?" Which is basically an indirect request for a status report, without implying anything about whether or not her current eating pattern is unhealthy. From that response, you can probably gauge to what degree you can safely bring it up directly.

comment by ephion · 2014-03-31T17:12:52.977Z · LW(p) · GW(p)

If she wants to get bigger, then I'd get her started with Greyskull LP. It's a fairly basic beginner weight-lifting program that, when combined with a caloric surplus, gets good results for size and strength. There isn't much work involved (just three sets of 2-3 exercises; doing more is counterproductive for beginners), so it won't use as much energy as a cardio- or circuit-intensive routine.

A couple of protein shakes with milk/almond milk are enough to get a caloric surplus going. You only need 250-500 extra calories to make good gains, and you can easily get that with a shake or two.

comment by Daniel_Burfoot · 2014-03-30T14:06:49.650Z · LW(p) · GW(p)

A friend of mine has mild anorexia (she's on psych meds to keep it contained)

I don't have good ideas about dealing with anorexia, but I think you should suggest to your friend that she is being used as a pawn by the psycho-pharmaceutical industry to extract dollars from her health insurance provider.

Replies from: erratio
comment by erratio · 2014-03-30T14:51:12.401Z · LW(p) · GW(p)

Evidence? Is this just a general anti-psych-meds comment or do you have a basis for thinking that in this particular case they're problematic?

comment by lmm · 2014-03-25T21:02:09.830Z · LW(p) · GW(p)

Every "proof" of Godel's incompleteness theorem I've found online seems to stop after what I would consider to be the introduction. I find myself saying "yes, good, you've shown that it suffices to prove this fixed point theorem... now where's the proof of the fixed point theorem, surely that's the actual meat of the proof?" Anyone have a good source that shows the full proof, including why for a particular encoding of sentences as numbers the function "P -> P is not provable" must have a fixed point?

Replies from: Oscar_Cunningham, witzvo
comment by Oscar_Cunningham · 2014-03-25T21:19:21.137Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Diagonal_lemma

Or read Godel, Escher, Bach. Actually, read GEB anyway.
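For what it's worth, the construction behind the diagonal lemma fits in a few lines (a sketch, assuming a standard Gödel numbering; the notation is mine):

```latex
% diag(n) := the Goedel number of the sentence obtained by substituting
% the numeral for n into the formula whose Goedel number is n.
% diag is computable, hence representable in the theory T.
\begin{align*}
&\text{Given } \psi(x),\ \text{let } \theta(x) := \psi(\mathrm{diag}(x)),
 \ \text{with G\"odel number } m.\\
&\text{Define } \varphi := \theta(\overline{m}).\\
&\text{But } \mathrm{diag}(m) = \ulcorner\theta(\overline{m})\urcorner
 = \ulcorner\varphi\urcorner,\ \text{so } T \vdash
 \varphi \leftrightarrow \psi(\ulcorner\varphi\urcorner):\ \varphi
 \text{ is the fixed point.}\\
&\text{Taking } \psi(x) := \neg\,\mathrm{Prov}(x)\ \text{yields the G\"odel sentence.}
\end{align*}
```

The "meat" is checking that diag is representable inside arithmetic; the fixed point itself then falls out by substitution.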

comment by witzvo · 2014-03-26T23:25:03.572Z · LW(p) · GW(p)

I suggest reading a translation.

comment by Nisan · 2014-03-25T16:39:24.606Z · LW(p) · GW(p)

Here's a cute/vexing decision theory problem I haven't seen discussed before:

Suppose you're performing an interference experiment with a twist: Another person, Bob, is inside the apparatus and cannot interact with the outside world. Bob observes which path the particle takes after the first mirror, but then you apply a super-duper quantum erasure to Bob so that they remember observing the path of the particle, but they don't remember which path it took. Thus, at least from your perspective, the superposed versions of Bob interfere, and the particle always hits detector 2. (I can't find the reference for super-duper quantum memory erasure, probably because it's behind a paywall. Perhaps (Deutsch 1996) or (Lockwood 1989).)

Suppose that after Bob makes their observation, but before you observe Bob, you offer to play a game with Bob: If the particle hits detector 2, you give them $1; but if it hits detector 1, they give you $2. Before the experiment ran, this would have seemed to Bob like a guaranteed $1. But during the experiment, it seems to Bob that the game has expected value -$.50. What should Bob do?

If it seems unfair to wipe Bob's memory, there's an equivalent puzzle in which Bob doesn't learn anything about the particle's state, but the particle nevertheless becomes entangled with Bob's body. In that case, the super-duper quantum erasure doesn't change Bob's epistemic state.

My grasp of quantum physics is rudimentary; please let me know if I'm completely wrong.
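For the interference claim itself, a toy two-path amplitude calculation (my own sketch of a Mach-Zehnder-style setup, not the full thought experiment) shows the two regimes:

```python
import numpy as np

# 50/50 beam splitter acting on the two path amplitudes.
BS = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)

psi = np.array([1, 0], dtype=complex)  # particle enters on path 0

# Coherent case: no which-path record survives erasure, amplitudes interfere.
p_coherent = np.abs(BS @ BS @ psi) ** 2
print(p_coherent)  # ~[0, 1]: the particle always hits detector 2

# Decohered case: which-path info is recorded, so probabilities add, not amplitudes.
path_probs = np.abs(BS @ psi) ** 2
p_decohered = (np.abs(BS) ** 2) @ path_probs
print(p_decohered)  # [0.5, 0.5]: each detector fires half the time
```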

Replies from: Strilanc, Manfred
comment by Strilanc · 2014-03-26T03:17:47.478Z · LW(p) · GW(p)

I disagree that Bob's expected value drops to -$0.50 during the experiment. If Bob is aware that he will be "super-duper quantum memory erased", then he should appropriately expect to receive $1.

There may be more existential dread during the experiment, but the expectations about the outcome should stay the same throughout.

Replies from: Nisan
comment by Nisan · 2014-03-28T19:21:34.167Z · LW(p) · GW(p)

Ok, User:Manfred makes the same point here. It implies that at any point, heretofore invisible worlds could collide with ours, skewing the results of experiments and even leaving us with no future whatsoever (although admittedly with probability 0). Would you agree with that?

Replies from: Strilanc
comment by Strilanc · 2014-03-29T13:09:52.601Z · LW(p) · GW(p)

No, I don't think that's likely at all.

Worlds only interfere when they evolve into the same state. Because the state space is exponentially large, only worlds that are already almost-equivalent to our world are likely to "collide with us".

If you've based a decision on some observation, worlds where that observation didn't happen are not almost-equivalent. They differ in trillions (note: massive underestimate) of little ways that would all need to be corrected simultaneously, lest the differences continue to compound and push things even further apart. Their contribution to the branch we're in is negligible.

Your thought experiment used a "super duper quantum eraser", but in reality I don't think such a thing is actually possible. The closest analogue I can think of is a quantum computer, but those prevent decoherence/collapse. They don't undo it.

comment by Manfred · 2014-03-25T20:12:59.122Z · LW(p) · GW(p)

Bob cannot become entangled with the outside world while in the middle of a quantum erasure experiment, or else it doesn't work. So he doesn't really get to do anything :P

If Bob knows that the particle becomes entangled with him, then he still makes the same predictions.

Replies from: Nisan, Oscar_Cunningham
comment by Nisan · 2014-03-27T21:29:47.285Z · LW(p) · GW(p)

If Bob knows that the particle becomes entangled with him, then he still makes the same predictions.

Ok, that's surprising. Here's why I thought otherwise: From Bob's perspective, a particle is prepared in a superposition of states B and C. Then Bob observes or becomes entangled with the particle, thus collapsing its state. Then the super-duper quantum erasure is performed, which preserves the state of the particle. Then the particle strikes the second half-silvered mirror. A collapse interpretation tells Bob to expect two outcomes with equal probability. Is this, then, an experiment where a collapse interpretation and a many-worlds interpretation give different predictions?
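The two predictions can be made concrete with an idealized Mach-Zehnder-style toy model of this setup. This is only a sketch of the standard idealized interferometer math (the 50/50 beam-splitter unitary is textbook; the rest is illustrative), not a claim about any physically realizable eraser:

```python
import cmath

# A half-silvered mirror (50/50 beam splitter) acting on the amplitudes
# of the two paths: (a, b) -> ((a + ib)/sqrt(2), (ia + b)/sqrt(2)).
def beamsplit(a, b):
    s = 1 / cmath.sqrt(2)
    return (s * (a + 1j * b), s * (1j * a + b))

# No-collapse case: the amplitudes stay coherent through both mirrors.
a, b = beamsplit(1, 0)   # first mirror: equal superposition of paths
a, b = beamsplit(a, b)   # second mirror: the two paths interfere
coherent = (abs(a) ** 2, abs(b) ** 2)   # ~ (0, 1): one detector always fires

# Early-collapse case: after the first mirror the particle is definitely
# on one path or the other, so average the two classical alternatives.
p1 = [abs(x) ** 2 for x in beamsplit(1, 0)]
p2 = [abs(x) ** 2 for x in beamsplit(0, 1)]
collapsed = tuple(0.5 * (u + v) for u, v in zip(p1, p2))  # (0.5, 0.5)

print(coherent, collapsed)
```

So the coherent treatment sends the particle to one detector with certainty, while collapsing after the first mirror predicts equal probabilities, which is exactly the disagreement in question.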

Replies from: Oscar_Cunningham, Manfred
comment by Oscar_Cunningham · 2014-03-28T09:00:04.698Z · LW(p) · GW(p)

The collapse interpretation predicts that you can't do the super-duper quantum erasure. Once the collapse has occurred the wavefunction can't uncollapse.

comment by Manfred · 2014-03-27T23:28:25.659Z · LW(p) · GW(p)

Basically, there are a variety of collapse interpretations depending on where you make the collapse happen. Every time we've tested these hypotheses (e.g. by this sort of experiment), we haven't been able to see an early collapse.

At this point, all actual physicists I know just postpone the collapse whenever necessary to get the right answer.

Replies from: Nisan
comment by Nisan · 2014-03-28T01:05:15.114Z · LW(p) · GW(p)

Hm, so that means that quantum physics predicts that our observations depend on the presence of parallel worlds in the universal wavefunction, which in theory might interfere with our experiments at any time, right?

Replies from: Manfred
comment by Manfred · 2014-03-28T03:05:31.659Z · LW(p) · GW(p)

Calling them parallel worlds is as always dangerous (you can't go all Buckaroo Banzai on them), but basically yes.

comment by Oscar_Cunningham · 2014-03-25T21:17:30.976Z · LW(p) · GW(p)

He can, in theory, make bets. Just so long as the bet he makes doesn't depend on which way he saw the particle go.

Replies from: Manfred
comment by Manfred · 2014-03-26T00:29:28.530Z · LW(p) · GW(p)

Hm, good point. We could set aside a few instants for him to send a few photons that wouldn't depend on the state of the particle. From a practical standpoint that's pretty impossible, but forget practicality.

So, sure; Bob should accept the bet. Although if he makes his answer to the bet depend on the state of the particle at all, then he shouldn't accept the bet :P There might be some interesting applications of this, say where the option "don't accept" has some positive payoff if the particle changes directions. Bob can precommit to send out an entangled qubit to get some chance at that reward.

comment by labachevskij · 2014-03-25T14:11:10.531Z · LW(p) · GW(p)

Rational thinking against fear in a TED talk by (ex) astronaut Chris Hadfield. Has anyone else seen it? I really enjoyed it, in particular the spider example.

comment by NancyLebovitz · 2014-03-29T14:09:01.298Z · LW(p) · GW(p)

Analyzing people's premises by thinking about comments-- the example was a recent tailgating incident.

comment by [deleted] · 2014-03-26T18:49:04.152Z · LW(p) · GW(p)

It seems clear that for people with a bachelor's in CS, from a purely monetary viewpoint, getting a master's in the same area usually is dumb unless you plan on programming a long time.

This article says the average mid-career pay for MSc holders is $114,000. This says the mid-career bachelor's salary is $102,000. A master's means 12 to 24 months of lost pay, with anywhere from a ~$20,000/year stipend in some lucky cases to $50,000+ of debt. You need at least a decade of future work to justify this. And that likely overstates the benefits, since it does not control for ability.
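As a rough sanity check of that "at least a decade" figure, here is the break-even arithmetic with the articles' salary numbers and some illustrative (not sourced) assumptions about the up-front cost:

```python
# Salary figures from the linked articles; the cost figures below are
# illustrative assumptions, not data.
msc_salary, bsc_salary = 114_000, 102_000
annual_gain = msc_salary - bsc_salary            # $12,000/year

# Assume 18 months of forgone ~$80k junior pay plus $30k tuition.
forgone_pay = 1.5 * 80_000
tuition = 30_000
upfront_cost = forgone_pay + tuition             # $150,000

years_to_break_even = upfront_cost / annual_gain
print(years_to_break_even)   # 12.5 years under these assumptions
```

Different cost assumptions move the answer around, but with only a $12k/year gain, almost any plausible up-front cost takes a decade or more to recoup.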

I don't necessarily trust these statistics, but employers can always make candidates write code on whiteboards to assess actual skill.

An exception might be if you want to do technically cutting-edge CS: Google, for example, prefers MSc/PhD holders. But I think most programming jobs are not like that.

Replies from: Jayson_Virissimo, Squark
comment by Jayson_Virissimo · 2014-03-28T16:16:49.635Z · LW(p) · GW(p)

FYI, it is possible to do a part-time masters program and some employers will pay for you to get a graduate degree (usually as part of an agreement to keep working for the company several years afterwards).

comment by Squark · 2014-03-26T21:01:29.474Z · LW(p) · GW(p)

IMO the real problem is that academia teaches computer science whereas what programmers need to know to be valuable is software engineering. Those seem to be rather different disciplines.

Disclaimer: I didn't study CS myself and this opinion is based on indirect evidence.

comment by Slackson · 2014-03-26T11:08:49.386Z · LW(p) · GW(p)

So I've kind of formulated a possible way to use markets to predict quantiles. It seems quite flawed looking back on it two and a half weeks later, but I still think it might be an interesting line of inquiry.

Replies from: Lumifer
comment by Lumifer · 2014-03-26T15:12:18.490Z · LW(p) · GW(p)

You want options (as in, the financial market instruments called "options").

A sufficiently deep and wide options market basically provides most of the market-expected distribution of the future value of the underlying.
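Concretely, the risk-neutral density can be read off from call prices via the Breeden-Litzenberger relation: the density at strike K is (up to discounting) the second derivative of the call price with respect to K. A minimal sketch, using Black-Scholes prices as a stand-in for market quotes (all parameters hypothetical):

```python
import math

def bs_call(s, k, sigma, t):
    # Black-Scholes call price with zero rates; math.erf gives the normal CDF.
    d1 = (math.log(s / k) + 0.5 * sigma ** 2 * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return s * cdf(d1) - k * cdf(d2)

# Hypothetical "market": spot 100, 20% vol, one year to expiry.
spot, vol, t = 100.0, 0.20, 1.0
strikes = list(range(40, 201))
calls = [bs_call(spot, k, vol, t) for k in strikes]

# Breeden-Litzenberger: density(K) ~ d^2 C / dK^2, by finite differences.
dk = 1.0
density = [(calls[i - 1] - 2 * calls[i] + calls[i + 1]) / dk ** 2
           for i in range(1, len(calls) - 1)]

print(round(sum(density) * dk, 3))   # ~1.0: the recovered density integrates to one
mode = strikes[1 + density.index(max(density))]
print(mode)   # strike where the density peaks, a bit below spot here
```

With real quotes you'd want to interpolate the option smile first, but the principle is the same: a liquid ladder of strikes pins down the whole market-implied distribution.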

Replies from: Slackson
comment by Slackson · 2014-03-26T20:06:59.318Z · LW(p) · GW(p)

Thanks.

comment by iarwain1 · 2014-03-26T01:29:47.158Z · LW(p) · GW(p)

How big a deal is this? I only just recently started learning programming so I don't know enough to understand the implications.

Replies from: solipsist, shminux
comment by solipsist · 2014-03-26T03:49:46.760Z · LW(p) · GW(p)

My comments earlier. Wolfram is a beautiful language with well designed libraries. If you are new to programming, you will learn a lot mucking about with it. It's probably not going to take the world by storm.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-03-26T09:01:03.784Z · LW(p) · GW(p)

I'm not sure whether this is a bigger problem than already exists in general, but if some of the information in the libraries is out of date or just plain wrong, it would be a challenge to notice that there's a problem.

comment by shminux · 2014-03-26T03:39:00.935Z · LW(p) · GW(p)

Seems like product placement.

comment by khafra · 2014-03-28T16:43:26.012Z · LW(p) · GW(p)

Apparently, founding mathematics on Homotopy Type Theory instead of ZFC makes automated proof checking much simpler and more elegant. Has anybody tried reformulating Max Tegmark's Level IV Multiverse using Homotopy Type Theory instead of sets to see if the implied prior fits our anthropic observations better?

Replies from: Douglas_Knight, asr
comment by Douglas_Knight · 2014-03-29T21:56:15.138Z · LW(p) · GW(p)

Homotopy type theory differs from ZFC in two ways. One way is that it, like ordinary type theory, is constructive and ZFC is not. The other is that it is based in homotopy theory. It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in the slides you link to are about homotopy theory.

Tegmark is quite explicit that he has no measure and thus no prior. Switching foundations doesn't help.

Replies from: khafra
comment by khafra · 2014-03-31T11:24:30.654Z · LW(p) · GW(p)

It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in slides you link to are about homotopy theory.

I found a textbook after reading the slides, which may be clearer. I really don't think their mathematical aspirations are limited to homotopy theory, after reading the book's introduction--or even the small text blurb on the site:

Homotopy type theory offers a new “univalent” foundation of mathematics, in which a central role is played by Voevodsky’s univalence axiom and higher inductive types. The present book is intended as a first systematic exposition of the basics of univalent foundations, and a collection of examples of this new style of reasoning

comment by asr · 2014-03-28T17:32:42.389Z · LW(p) · GW(p)

the implied prior

Which implied prior? My understanding is that the problem with Multiverse theories is that we don't have a way to assign probability measures to the different possible universes, and therefore we cannot formulate an unambiguous prior distribution.

Replies from: VAuroch, khafra
comment by VAuroch · 2014-03-28T18:11:35.021Z · LW(p) · GW(p)

The two usual implied priors taken from Level IV are (a) that every possible universe is equally likely, and (b) that universes are likely in direct proportion to the simplicity of their description. Some attempts have been made to show that the second falls out of the first.
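The simplicity prior is usually formalized Solomonoff-style: weight each universe-description d by 2^(-length(d)), over a prefix-free set of descriptions so that the weights sum to at most 1 (Kraft's inequality). A toy sketch with hypothetical descriptions:

```python
from fractions import Fraction

# A hypothetical prefix-free set of "universe descriptions".
descriptions = ["0", "10", "110", "111"]

# Simplicity prior: weight 2^-length, as in Solomonoff's universal prior.
prior = {d: Fraction(1, 2 ** len(d)) for d in descriptions}

print(sum(prior.values()))        # 1: a complete prefix-free code saturates Kraft's bound
print(prior["0"] > prior["111"])  # True: shorter (simpler) descriptions get more weight
```

Showing that the simplicity prior "falls out of" the uniform one amounts to arguing that simpler universes are picked out by more of the equally-weighted descriptions.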

comment by khafra · 2014-03-28T18:12:40.170Z · LW(p) · GW(p)

Well, I don't really math; but the way I understand it, computable universe theory suggests Solomonoff's Universal prior, while the ZFC-based mathematical universe theory--being a superset of the computable--suggests a larger prior; thus weirder anthropic expectations. Unless you need to be computable to be a conscious observer, in which case we're back to SI.

comment by iarwain1 · 2014-03-27T16:43:00.773Z · LW(p) · GW(p)

I enjoy reading perceptive / well-researched futurism materials. What are some good resources for this? I'm looking for books, blogs, newsfeeds, etc.. Also, I'm only looking for popular-level rather than academic material - I have neither the time nor the knowledge to read through most scholarly articles on the subject.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-03-27T17:33:52.248Z · LW(p) · GW(p)

futurismic.com and maybe io9.com