Open thread, Jan. 19 - Jan. 25, 2015
post by Gondolinian · 2015-01-19T00:04:25.527Z · LW · GW · Legacy · 303 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
303 comments
Comments sorted by top scores.
comment by Alex_Miller · 2015-01-19T01:05:16.284Z · LW(p) · GW(p)
In my small fourth grade class of 20 students, we are learning how to write essays, and get to pick our own thesis statements. One kid, who had a younger sibling, picked the thesis statement: "Being an older sibling is hard." Another kid did "Being the youngest child is hard." Yet another did "Being the middle child is hard", and someone else did "Being an only child is hard." I find this a rather humorous example of how people often make it look like they're being oppressed.
Does anyone know why people do this?
Replies from: Punoxysm, Gondolinian, JoshuaZ, gjm, cameroncowan, BenLowell, jsu, DanielLC, ZankerH, Good_Burning_Plastic
↑ comment by Punoxysm · 2015-01-19T04:35:10.081Z · LW(p) · GW(p)
Be charitable; don't assume they're trying to present themselves as martyrs. Instead they could be outlining the peculiar challenges and difficulties of their particular positions.
Life is hard for everyone at times.
Replies from: mwengler, dxu
↑ comment by mwengler · 2015-01-20T13:08:50.286Z · LW(p) · GW(p)
Anybody should be able to write an essay "why my life is hard." They should also be able to write an essay "why my life is easy." It might be a great exercise to have every student write a second essay on a thesis which is essentially the opposite of the thesis of their first essay.
↑ comment by dxu · 2015-01-21T04:40:42.180Z · LW(p) · GW(p)
I wouldn't ascribe conscious intent to their actions, but it may be that making your own life seem harder is an evolved social behavior. Remember, humans are adaptation-executors, not fitness-maximizers, so it's entirely possible that the students thought they were being honest, when in fact they may have been subconsciously exaggerating the difficulties they were facing in day-to-day life.
Related: Why Does Power Corrupt?
↑ comment by Gondolinian · 2015-01-19T01:24:17.575Z · LW(p) · GW(p)
One kid, who had a younger sibling, picked the thesis statement: "Being an older sibling is hard." Another kid did "Being the youngest child is hard." Yet another did "Being the middle child is hard", and someone else did "Being an only child is hard." I find this a rather humorous example of how people often make it look like they're being oppressed.
Taken at face value, the four statements aren't incompatible. Saying that being X is hard in an absolute sense isn't the same as saying that being X is harder than being Y in a relative sense, or that X people are being oppressed.
Replies from: B_For_Bandana
↑ comment by B_For_Bandana · 2015-01-20T18:16:58.331Z · LW(p) · GW(p)
Sure, but the point is that the same argument applies to the flipside: everyone could've written essays like "X is fun" or "Y is fun" without contradiction. But they chose "hard" instead. Why?
Replies from: Gondolinian
↑ comment by Gondolinian · 2015-01-20T18:53:27.874Z · LW(p) · GW(p)
Sure, but the point is that the same argument applies to the flipside: everyone could've written essays like "X is fun" or "Y is fun" [...] But they chose "hard" instead. Why?
There were sixteen other students in the class. For all we know, theses about fun things could have been in the majority.
without contradiction.
If you accept what I wrote in the GP, where do you see a contradiction in the four statements? And if you don't, could you try to articulate why?
Replies from: B_For_Bandana
↑ comment by B_For_Bandana · 2015-01-20T21:36:09.521Z · LW(p) · GW(p)
There were sixteen other students in the class. For all we know, theses about fun things could have been in the majority.
Yeah, maybe.
If you accept what I wrote in the GP, where do you see a contradiction in the four statements? And if you don't, could you try to articulate why?
No, no I don't think you had a contradiction either. I was just saying that you could do the same thing with "fun." And maybe other kids did, as you say.
↑ comment by JoshuaZ · 2015-01-19T02:39:09.527Z · LW(p) · GW(p)
It is much easier to notice the things in your own situation that don't go well than to notice all the things that go wrong in someone else's situation.
I'm curious; have you pointed this out to the students? If so, how did they react?
Replies from: James_Miller
↑ comment by James_Miller · 2015-01-19T03:42:11.874Z · LW(p) · GW(p)
Alex Miller, my son, is one of the students.
Replies from: JoshuaZ, None
↑ comment by JoshuaZ · 2015-01-19T04:44:31.025Z · LW(p) · GW(p)
Ah, that clarifies that. I think I read "we are learning" as the teacher saying that since I've seen teachers use that language (e.g. "next week we'll learn about derivatives").
Replies from: James_Miller
↑ comment by James_Miller · 2015-01-19T05:46:49.299Z · LW(p) · GW(p)
Alex greatly enjoyed being mistaken for his teacher.
↑ comment by [deleted] · 2015-01-19T10:59:45.379Z · LW(p) · GW(p)
So nice that you two are able to enjoy LessWrong together. Given that this is an open thread, is there anything you (or Alex) would like to share about raising rationalists? My daughters are 3yo and 1yo, so I'm only beginning to think about this...
EDIT: I made a top-level post here.
Replies from: James_Miller, Calien
↑ comment by James_Miller · 2015-01-20T00:44:51.319Z · LW(p) · GW(p)
Alex loves using rationality to beat me in arguments, and part of why he is interested in learning about cognitive biases is to use them to explain why I'm wrong about something. I have warned him against doing this with anyone but me for now. I recommend the game Meta-Forms for your kids when they get to be 4-6. When he was much younger I would say something silly and insist I was right to provoke him into arguing against me.
↑ comment by cameroncowan · 2015-01-20T05:13:34.934Z · LW(p) · GW(p)
Each experience has its own difficulties that are unknown unless you've lived it.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2015-01-20T14:01:58.652Z · LW(p) · GW(p)
Each experience has its own difficulties that are unknown unless you've lived it.
Corollary: one's own difficulties always seem bigger than everyone else's.
↑ comment by BenLowell · 2015-01-19T18:53:17.482Z · LW(p) · GW(p)
A lot of the time, the different ways people act are ways of getting emotional needs met, even if that isn't a conscious choice. In this case it is likely that they want recognition and sympathy for the different pains they have. Or, more likely, the different hurts they have (being lonely, being picked on, getting hand-me-downs, whatever) are easily brought to mind. And when the person tells someone else about the things in their life that bother them, it's possible that someone could say "hey, it sounds like you are really lonely being an only child" and they would feel better.
Some example needs are things like attention, control, acceptance, trust, play, and meaning. There is a psychological model of how humans work that treats emotional needs like physical needs such as hunger. So people have some need for attention, and will do different things to get attention. They also have a need for emotional safety, just like physical safety. Just as someone sitting on an uncomfortable chair will move and complain about how uncomfortable their chair is, someone will do a similar thing if their big brother is picking on them.
Another reason people often make it look like they are being oppressed is that they feel oppressed. I don't know if you are mostly talking about people your age, or everyone, but it is not a surprise to me that lots of kids feel oppressed, since school and their parents prevent them from doing what they want. Plenty of adults express similar feelings too, I just expect not as many.
↑ comment by jsu · 2015-01-19T11:29:10.010Z · LW(p) · GW(p)
Maybe they are friends and discussed their thesis topics with each other. I find it unlikely that 4 out of 20 students would come up with sibling related topics independently.
Replies from: gjm
↑ comment by gjm · 2015-01-19T12:24:48.905Z · LW(p) · GW(p)
Or maybe they picked them out loud in class, and some of those were deliberate responses to others.
So what happens is: Albert is an oldest child whose younger sister is loud and annoying and gets all the attention. He says "I'm going to write about how being an older sibling is hard". Beth is a youngest child whose older brothers get all the new clothes and toys and things; she gets their hand-me-downs. She thinks Albert's got it all wrong and, determined to set the record straight, says "I'm going to write about how being the youngest child is hard." Charles realises that as a middle child he has all the same problems Albert and Beth do, and misses out on some of their advantages, and says he's going to write about that. Diana hears all these and thinks, "Well, at least they have siblings to play with and relate to", and announces her intention to explain how things are bad for only children.
Notice that all these children may be absolutely right in thinking that they have difficulties caused by their sibling situation. They may also all be right in thinking that they would be better off with a different sibling situation. (Perhaps there's another youngest child in the class who loves it -- but you didn't hear from him.)
Replies from: jsu
↑ comment by ZankerH · 2015-01-19T11:22:57.023Z · LW(p) · GW(p)
Because running in the oppression olympics is the easiest way to gain status in most western societies. Looks like even children are starting to realise that, or maybe they're being indoctrinated to do so in other classes or at home.
Replies from: None, None
↑ comment by [deleted] · 2015-01-22T03:43:19.840Z · LW(p) · GW(p)
I would like to point out that this is the only comment in the thread that doesn't assume that this behavior is culturally invariant, and suggest that the rest of LW think about that for a while.
Replies from: emr
↑ comment by emr · 2015-01-22T08:22:50.667Z · LW(p) · GW(p)
I think the term "oppression olympics" is needlessly charged.
But it is a good question: Under what conditions will someone voice a complaint, and about what?
We learn early on that voicing certain complaints results in social punishment, even when those complaints are "valid" according to the stated moral aspirations of the community. If memory serves, the process of learning which complaints can be voiced is painful.
But at the same time, not all superficially negative self-disclosures are a true social loss: signaling affliction seems to have been a subcultural strategy for quite a while, nowadays among teenagers, but we also have famous references to the over-the-top displays of grief and penitence in ancient (Judeo-Christian) cultures. And of course, complaints can also result in support, or can play a role in political games.
So there's a cost-benefit happening somewhere in the system, which we might hope to be reasonably specific about.
To touch on some controversies: There's a big push to reduce the dissonance between what we publicly accept as grounds for complaint and what we actually punish people for complaining about. Accepting for the moment that our stated principles are okay (which is where I expect you might disagree), this can still go wrong several ways:
1. People may mistake the aspiration for reality, e.g. we tell kids they should complain about bullying and feel like we're making progress, but then we allow the system to punish kids just as harshly as ever after their disclosure, because we can't or won't change it.
2. We feel that offering non-complaint-based advice is perpetuating or accepting a discrepancy between "valid complaints" and "effective complaints", e.g. the outcry when someone suggests a concrete way to avoid being sexually assaulted, or voices a concern about "victim mentality" (the mistake of thinking that complaining is more effective than it really is, often because everyone is only pretending that we are going to take complaints more seriously now).
3. The project is eaten by political concerns, e.g. we find ourselves debating exactly which groups get to participate in the new glasnost of complaining about complaint-hypocrisy.
4. A group becomes unable to exclude bad actors who cloak themselves in the new language of moral progress. Social justice groups, who are very concerned with unfair exclusion, have this problem to a non-trivial degree.
The "Oppression olympics" is mostly point 3, with a bit of point 4. I'm actually far more concerned with points 1 and 2.
Replies from: seer, None
↑ comment by seer · 2015-03-27T07:39:22.097Z · LW(p) · GW(p)
Accepting for the moment that our stated principles are okay (which is where I expect you might disagree)
This is not a good thing to accept, since the stated principles are themselves subject to change. Hence:
5. Once society starts taking complaint X seriously enough to punish the perpetrator, people start making (weaker) complaint X'. Once society takes that complaint seriously people start making complaint X'', etc.
I would argue that in the long term, 5 is actually the biggest problem.
↑ comment by [deleted] · 2015-03-26T10:35:55.378Z · LW(p) · GW(p)
I think we need to separate complaints of the form "what you did was not against the rules, but it still hurt me" from complaints of the form "you violated the rules, and hurt me through that".
The second complaint is very powerful. The first one requires high amounts of compassion in the other person to work.
I mean, extrinsic motivation replaces intrinsic motivation. This means that while with a complete lack of rules people may - may - be compassionate, if Behavior No. 11 is forbidden under threat of punishment because it hurts others, then people will care more about the fact that it is forbidden and punishable than about the hurt it causes to others. For example, the fact that rape carries heavy prison sentences reduces compassion for rape victims: see victim-blaming and related behaviors. It simply turns the discussion away from "Does Jill feel hurt by what John did?" towards "Is John really evil enough for five years in prison?", and if not, then it is easy to write off Jill's hurt.
But the catch is that if Behavior No. 11b is sufficiently similar but not expressly forbidden, the rule and punishment for Behavior No. 11 may still prevent compassion towards 11b's victims, even in people who would have compassion towards the victims of behaviors that are entirely unregulated.
And that is why it requires extraordinary compassion to give a damn about "what you did was not against the rules but it still hurt me". Modern societies are so strongly regulated by both law and social pressure that almost any kind of hurt will at least resemble a different hurt that is forbidden, and hence the intrinsic compassionate motivation is lost.
And that is why people who are not extremely compassionate don't give a damn about e.g. accusations of misgendering. It sounds roughly like the rules of politeness learned in childhood, i.e. you will address the neighbor with "good morning, Mr. Smith", not "hi, old fart", or get punished. Since it sounds similar, but there is no such actual rule being enforced, people who are not extremely compassionate do not care much.
Replies from: seer
↑ comment by seer · 2015-03-27T07:31:08.054Z · LW(p) · GW(p)
It simply turns the discussion away from "Does Jill feel hurt by what John did?"
How about the question "Is it reasonable for Jill to feel hurt by what John did?", otherwise you're motivating Jill to self-modify into a negative utility monster.
Replies from: None, Lumifer
↑ comment by [deleted] · 2015-03-27T08:25:21.784Z · LW(p) · GW(p)
This sounds simple enough, but I think it is actually a huge box of as-yet-unresolved complexities.
A few generations ago, when formal politeness and etiquette were more socially mandatory, the idea was that the rules go both ways: they forbid ways of speaking many people would feel offended by; on the other hand, if people still feel offended by approved forms of speaking, it is basically their problem. So people were expected to work on both what they give and what they receive (i.e. toughen up so as to be able to deal with socially approved forms of offense). This is very similar to how programmers define interface / data exchange standards like TCP/IP. Programmers have a rule of "be conservative in what you send and be liberal in what you accept / receive" (i.e. 2015-03-27 is the accepted XML date format and you should always send that, but if your customers are mainly Americans, better accept 03-27-2015 too, just in case), and this too is how formal etiquette worked.
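A minimal sketch of that rule in Python (my own illustration; the function and the format list are hypothetical examples, not part of any real standard):
from datetime import datetime

def parse_date(text):
    # Be liberal in what you accept: try the ISO format first, then the US one.
    for fmt in ("%Y-%m-%d", "%m-%d-%Y"):  # e.g. 2015-03-27, then 03-27-2015
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized date: %r" % text)

# Be conservative in what you send: always emit the ISO form.
print(parse_date("03-27-2015").strftime("%Y-%m-%d"))  # prints 2015-03-27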
As you can sense, I highly approve of formal etiquette, although I don't actually use it on forums like this, as it would make me look like a grandpa.
I think a formal, rules-based, etiquette-oriented world was far more autism-spectrum friendly than today's unspoken-rules world. I also think today's "creep epidemic" (i.e. lots of women complaining about creeps) is due to the lack of formal courting rules making men on the spectrum awkward. Back when womanizing was all about dancing waltzes at balls, things were so much easier for autism-spectrum men, who want formal rules and algorithms to follow.
I think I could, and perhaps should, spin it as "the lack of formal etiquette, especially in courting, is ableist and neurotypicalist".
Of course, formal etiquette also means sometimes dealing with things that feel hurtful but are approved, and the need to toughen up for cases like this.
Here I see a strange thing. Remember when, in the 1960s, the progressive people of that era, i.e. the hippies, were highly interested in stuff like Zen? I approve of that. I think it was a far better world when left-wingers listened to Alan Watts. What disciplines like that teach is precisely that you don't need to cover the whole world with leather in order to protect your feet: you can just put on shoes. Of course it requires some personal responsibility, self-reflection and self-criticism, an outside view, etc. Low ego, basically.
And somehow it disappeared. Much of the social-justice stuff today is perfect anti-Zen: no putting on mental shoes whatsoever, just complaining about the assholes who leave pebbles on walkways.
This is frankly one of the most alarming developments I see in the Western world. Without some Zen-like mental shoes, without the idea of deciding to deal with some kinds of felt hurts, there cannot be progress at the social level, just squabbling groupuscules.
But I am being offtopic here. No rape victim should be required to wear mental shoes; that kind of crime is simply too evil to put any onus of dealing with it on the victim.
However, some amount of "creepy" behavior or hands-off sexual harassment may fall into this category.
Replies from: seer
↑ comment by seer · 2015-03-28T02:31:37.585Z · LW(p) · GW(p)
No rape victim should be required to wear mental shoes; that kind of crime is simply too evil to put any onus of dealing with it on the victim.
Depends on what one means by "rape". If you are using the standard definition from ~20 years ago (and for all I know still the standard definition in your country), I agree. However, recently American feminists have been trying to get away with calling all kinds of things "rape".
↑ comment by Lumifer · 2015-03-27T15:09:02.636Z · LW(p) · GW(p)
otherwise you're motivating Jill to self-modify into a negative utility monster.
I actually know a woman who was a nice and reasonable human being, and then had a very nasty break-up with her boyfriend. Part of that nasty break-up was her accusations of physical abuse (I have no idea to which degree they were true). This experience, unfortunately, made her fully accept the victim identity and become completely focused on her victim status. The transformation was pretty sad to watch and wasn't good for her (or anyone) at all.
↑ comment by [deleted] · 2015-03-26T10:19:55.162Z · LW(p) · GW(p)
Because running in the oppression olympics is the easiest way to gain status in most western societies.
I would argue that the sentimental compassion it exploits is a very specifically American feature, and that it is less effective elsewhere. If I had to guess, American culture has uniquely selfish subsets (such as the Ayn Rand fans), and as a reaction, the opposite shine-with-goodness attitude evolved, which then gets exploited. What you see is probably the middle ground missing.
A good example is middle-class people seeing the welfare state either sentimentally, with hearts going out to the poor, or judgmentally, as supporting a "bunch of lazy leeches"; both views are moralistic. The missing middle ground is the simple "customer" attitude to the welfare state: "well, I might need it any time, better make sure it works right, potentially for ME", which is the most common European attitude. This middle ground is missing because there is a tribe that derives identity from shining-with-goodness, and another tribe that derives it from selfishness, usually interpreting selfishness as toughness.
Both can be exploited. Oppression olympics exploits the shine-with-goodness tribe, and shit like not even a year of paid maternity leave exploits the my-selfishness-is-toughness tribe.
But I think that in Western societies that go for the middle in things like this, oppression olympics, e.g. complaining about misgendering, generally gets answers roughly like "But I am just doing what the rules and social customs permit / prescribe," with the connotation "Why exactly would I care about your personal feelings?"
↑ comment by Good_Burning_Plastic · 2015-03-28T15:07:28.556Z · LW(p) · GW(p)
Related post: http://lesswrong.com/lw/9b/help_help_im_being_oppressed/
comment by jaime2000 · 2015-01-19T02:07:55.095Z · LW(p) · GW(p)
Since Eliezer has forsaken us in favor of posting on Facebook, can somebody with an account please link to his posts? His page cannot be read by someone who is not logged in, but individual posts can be read if the url is provided. As someone who abandoned his Facebook account years ago, I find this frustrating.
Replies from: None, mwengler
↑ comment by [deleted] · 2015-01-19T11:49:43.939Z · LW(p) · GW(p)
Here's a month's worth:
https://www.facebook.com/yudkowsky/posts/10153041257924228
https://www.facebook.com/yudkowsky/posts/10153033570824228
https://www.facebook.com/yudkowsky/posts/10153030238814228
https://www.facebook.com/yudkowsky/posts/10153021749629228
https://www.facebook.com/yudkowsky/posts/10152977126839228
https://www.facebook.com/yudkowsky/posts/10152972605814228
https://www.facebook.com/yudkowsky/posts/10152972301299228
https://www.facebook.com/yudkowsky/posts/10152964087234228
https://www.facebook.com/yudkowsky/posts/10152957903859228
https://www.facebook.com/yudkowsky/posts/10152947952344228
https://www.facebook.com/yudkowsky/posts/10152946520029228
https://www.facebook.com/yudkowsky/posts/10152945423789228
https://www.facebook.com/yudkowsky/posts/10152941108249228
https://www.facebook.com/yudkowsky/posts/10152940624254228
https://www.facebook.com/yudkowsky/posts/10152938634304228
https://www.facebook.com/yudkowsky/posts/10152937953959228
https://www.facebook.com/yudkowsky/posts/10152933586294228
https://www.facebook.com/yudkowsky/posts/10152929868929228
https://www.facebook.com/yudkowsky/posts/10152919146569228
https://www.facebook.com/yudkowsky/posts/10152918491764228
https://www.facebook.com/yudkowsky/posts/10152915799124228
https://www.facebook.com/yudkowsky/posts/10152912313154228
https://www.facebook.com/yudkowsky/posts/10152908949454228
https://www.facebook.com/yudkowsky/posts/10152904788444228
https://www.facebook.com/yudkowsky/posts/10152902713609228
https://www.facebook.com/yudkowsky/posts/10152900703339228
Replies from: jaime2000
↑ comment by mwengler · 2015-01-20T13:04:19.088Z · LW(p) · GW(p)
Why would you not create a sockpuppet facebook account for the purposes of reading posts you want to read?
Replies from: Leonhart
↑ comment by Leonhart · 2015-01-21T08:54:48.823Z · LW(p) · GW(p)
Not speaking for above poster: because that's not actually trivial - you need a real fake phone number to receive validation on, etc. Also, putting fake data into a computer system feels disvirtuous enough to put me off doing it further.
Replies from: Elo, Lumifer
↑ comment by Lumifer · 2015-01-21T16:04:42.376Z · LW(p) · GW(p)
putting fake data into a computer system feels disvirtuous enough to put me off doing it further.
Interesting. I consider poisoning big surveillance/marketing databases to be virtuous X-D
Replies from: Leonhart
↑ comment by Leonhart · 2015-01-21T23:08:14.907Z · LW(p) · GW(p)
I don't like to frustrate the poor databases' telos, it is not at fault for the use humans put its data to.
(Yes, I realise this is silly. It's still an actual weight in the mess I call a morality; just a small one.)
Replies from: Lumifer
comment by [deleted] · 2015-01-19T11:02:47.970Z · LW(p) · GW(p)
There seem to be some parents (and their children) here. I myself am the father of 3yo and 1yo daughters. Are there any suggestions you have for raising young rationalists, and getting them to enjoy critical, skeptical thinking without it backfiring from being forced on them?
Replies from: Gram_Stone, Illano, Evan_Gaensbauer, Gunnar_Zarncke, JoshuaZ, advancedatheist
↑ comment by Gram_Stone · 2015-01-19T17:17:19.000Z · LW(p) · GW(p)
Julia Galef, President and Co-founder of the Center for Applied Rationality, has video blogged on this twice. The first was How to Raise a Rationalist Kid, and the second is Wisdom from Our Mother, which might be a bit more relevant to you because, in that video, her brother Jesse specifically discusses what his mother did in situations where he wasn't enthusiastic about learning something. I should say that it has more to do with when your kids think that they're bad at things than with when they reject something out of hand. To that I would say, and I think many others would say: Kids are smart and curious, rationalism makes sense, and if they don't reject everything else kids have learned throughout history out of hand, then they probably won't reject rationalism out of hand.
↑ comment by Illano · 2015-01-20T16:18:40.256Z · LW(p) · GW(p)
I also am the father of 3yo and 1yo daughters. One of the things I try to do is let their critical thinking or rationality actually have a payoff in the real world. I think a lot of times critical thinking skills can be squashed by overly strict authority figures who do not take the child's reasoning into account when they make decisions. I try to give my daughters a chance to reason with me when we disagree on something, and will change my mind if they make a good point.
Another thing I try to do is intentionally inject errors into what I say sometimes, to make sure they are listening and paying attention (e.g., "This apple is purple, right?"). I think this helps them avoid just automatically agreeing with parents/teachers, and gets them thinking through on their own what makes sense. Now my oldest is quick to call me out on any errors I make when reading her stories, or talking in general, even when I didn't intentionally inject them.
Lastly, to help them learn in general, make their learning applicable to the real world. As an example, both of my daughters, when learning to count, got stuck at around 4. To help get them over that hurdle, I started asking them questions like, "How many fruit snacks do you want?" and then giving them that number. That quickly inspired them to learn bigger numbers.
Replies from: passive_fist
↑ comment by passive_fist · 2015-01-20T20:55:18.589Z · LW(p) · GW(p)
This sounds like solid parenting; my only concern is that you might not be taking the psychology of children into account. Children sometimes really do need an authority figure to tell them what's true and what isn't; the reason for truth is far less important at that stage (and can be given later, maybe even years later).
One issue that could arise is that if you don't show authority then your child may instead gravitate to other authority figures and believe them instead. A child may paradoxically put more faith in the opinions of someone who insists on them irrationally than someone who is willing to change their beliefs according to reason or evidence (actually, this applies to many adults too). It's possible that "demeanor and tone of voice" trumps "this person was wrong in the past."
The point is that children's reasoning is far far less developed than adults and you have to take their irrationalities into account when teaching them.
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2015-01-20T21:14:11.897Z · LW(p) · GW(p)
The best thing about my Catholic high school was that it was run by the Salesian Order, which prefers a preventive method based on always giving good reasons for the rules.
↑ comment by Evan_Gaensbauer · 2015-01-20T05:45:13.880Z · LW(p) · GW(p)
[This isn't a direct response to Mark, but a reply to encourage more responses]
To add another helpful framing: if you don't have children, but think that part of your attraction to LessWrong as an adult was based on how your parents raised you with an appreciation of rationality, how did that go? Obvious caveats apply about how memories of childhood are unreliable and fuzzy, and personal perspectives on how your parents raised you will be biased.
I was raised by secular parents, who didn't put any special emphasis on rationality when raising me, compared to other parents. However, Julia and Jesse Galef, for example, have written on their blog about how their father raised them with rationality in mind.
Replies from: None, ilzolende, wadavis
↑ comment by ilzolende · 2015-01-21T02:37:06.142Z · LW(p) · GW(p)
They left Scientific American lying around a lot. The column that had the fewest prerequisites was Michael Shermer's skepticism column. Also, people around me kept trying to fix my brain, and when I ran into cognitive bias and other rationality topics, they were about fixing your own brain, so then I assumed that I needed to fix it.
In terms of religion stuff: My parents raised me with something between Conservative and Reform Judaism, but they talked about other religions in a way that implied Judaism was not particularly special, and mentioned internal religious differences, and I got just bored enough in religious services to read other parts of the book, which had some of the less appealing if more interesting content. (It wasn't the greatest comparative religious education: I thought that the way Islam worked was that they had the Torah, the New Testament, and the Qur'an as a third book, sort of the way the Christians had our religious text as well as the New Testament as a second book.)
↑ comment by wadavis · 2015-01-21T22:52:04.114Z · LW(p) · GW(p)
Thanks for putting up this branch, Evan. I don't have children. I think my upbringing helped my rationality, but the lens of time is known to distort, so take this with a grain of salt.
Most of my rationality influence came from leading by example. Accountability and agency were encouraged too; they may have made fertile soil for rational thought.
Ethics conversations were had and taken seriously (paraphrase: 'Why does everyone like you?' 'Cause I always cooperate' 'Don't people defect against you?' 'Yes, but defectors are rare and I more than cover my losses when dealing with other cooperators').
Thinking outside the box was encouraged (paraphrase: 'Interfering with the receiver is a 10-yard penalty, I can't do that.' 'What's worse, 10 yards or a touchdown?' 'But it is against the rules.' 'Why do you think the penalty is only 10 yards, and not being kicked from the game? Do you think the rule, and the penalty, are part of the game mechanics?').
Goal based action was encouraged, acting on impulse was treated as being stupid (paraphrase: 'Why did you get in a fight' 'I was being bullied' 'Did fighting stop the bullying?' 'No' 'Ok, what are you going to try next?').
↑ comment by Gunnar_Zarncke · 2015-01-20T21:33:01.088Z · LW(p) · GW(p)
I am also a father, of four boys now 3, 6, 8 and 11. You can find some parenting resources linked on my user page.
↑ comment by JoshuaZ · 2015-01-19T22:26:32.446Z · LW(p) · GW(p)
I know of families who have used the "tooth fairy" as an opportunity to do critical thinking. I think it has gotten mentioned here before. Apparently sometimes children do this on their own. This post is relevant.
↑ comment by advancedatheist · 2015-01-19T19:41:26.985Z · LW(p) · GW(p)
Apparently you don't want grandchildren, in other words. Religiosity in women correlates strongly with fecundity.
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2015-01-19T20:36:53.744Z · LW(p) · GW(p)
What part of his statement makes you reach such a conclusion about his intentions?
comment by [deleted] · 2015-01-19T19:04:31.355Z · LW(p) · GW(p)
Something I frequently see from people defending free speech is some variant of the idea "in the marketplace of ideas, the good ones will win out". Is anyone familiar with any deeper examination of this idea? For instance, whether an idea market actually exists, how much it resembles a marketplace for goods, how it might reliably go wrong, etc.
Replies from: Vaniver, g_pepper, Slider, None
↑ comment by Vaniver · 2015-01-19T22:18:30.734Z · LW(p) · GW(p)
I think you're better off looking into theories of memetics; that is, a marketplace doesn't seem to be as good an analogy as an ecology. That makes the somewhat less cheery argument that 'good' doesn't mean 'true' so much as 'effective at spreading,' and in particular memes can win by poisoning their competitors through allelopathy, just like an oak tree.
↑ comment by g_pepper · 2015-01-19T23:40:10.896Z · LW(p) · GW(p)
This video is somewhat on topic: The New (and Old) Attacks on Free Thought: Jonathan Rauch on Kindly Inquisitors
Jonathan Rauch discusses the new edition of his book, Kindly Inquisitors, and presents a thoughtful and rational defense of free speech. I believe he makes some comparisons between the marketplace of ideas and economic markets and he certainly makes an argument similar to the one that you mention. It is an excellent video, IMO, and well worth watching.
↑ comment by Slider · 2015-01-25T02:23:50.445Z · LW(p) · GW(p)
There is a method of devaluing the weight of one's words by pointing out that saying them doesn't have any actual implications for action. In a free speech environment, people can become decoupled from the implications of their ideas. Epistemic authors are usually reliable because they have passed a filter for errors; if there is no filter on error, there is no measure of quality. This can easily turn into no public shared filter being wanted at all, with everybody supposed to use their own filter. One failure mode of that is everybody being entirely on their own when it comes to interpreting information, i.e. education is not only not provided, it would be wrong to provide it.
One also has to realise that in a marketplace of ideas, bad ideas lose out by going bankrupt. The United States is kind of the home of capitalism, but it pansies out when the laws of the market would require its big banks to fail: instead of natural death, artificial economic support is provided. In the same way, you would need to watch coolly while stupid people are being stupid and hurting themselves. That is, an idea either blooms or goes bust, and if "goes bust" means injury to your health or sanity, you are just supposed to live with it. Rather than giving (or requiring) each person a basic but universally provided methodology for learning about the world, you rely on multiple biases canceling each other out, or on clustering the world into different audiences.
A marketplace framing might also be about information control. Fox News has a bad name for coloring the news and in general being stupid and having an agenda. However, it might be better for Americans to hear about the outside world with spin than to avoid the spin and not hear about it at all. The idea of a filter bubble is also relevant.
There could also be stresses on meaning and communication. If no consistency of concepts is kept up, the end result might be Tower-of-Babel islands of non-communicating schools of thought. This can already be seen in how the terms of politics differ between America and Europe significantly enough that misunderstandings happen, and one can genuinely ask whether "talking politics" is the same activity in both locales. There is a stress in highly specialized scientists using special lingo that is impenetrable to a layman. You could, for example, think about how "global warming" means somewhat different things to a scientist and to a politician. It would be good to be aware of how technical results might impact economies and policy, and you would want decision-makers to be informed about what they are deciding on.
If you leave a scientist and a politician to work things out without guidelines on how to deal with the other's stances, some pretty destructive cooperation modes are possible, even if each individual's beliefs fit their individual life well. And the danger is that their lives remain individual: if they can't reach a common decision, they basically split into two different nations. If they are free to speak different languages, it will be very hard to reach a common decision even if desired. Note that this could be solved if each were required to learn the other's language, but then they lose the freedom of being lazy and not putting in the time. And if all of their time is spent learning the other's language, they will not have enough time for the actual decision-making. In deciding which parts of the other's language they learn, they implicitly decide what things they can reach common ground on. If you predefine that certain areas are just off limits, it frees up energy for a better-quality shared solution to the "on limits" part.
A world of private property needs police to enforce against theft; making "ownership" mandatory frees you from guarding your yard with a rifle, in favour of organising the ownership relations. With a forum where moderation is forbidden, you are guaranteed access to data, but no promises about signal-to-noise ratio are made, and it is up to the listener to extract any (useful) information (this includes things like defence against trolls and viruses).
↑ comment by [deleted] · 2015-01-20T00:34:34.823Z · LW(p) · GW(p)
Here's Scott Alexander discussing this concept in the context of lifehacks: http://slatestarcodex.com/2014/03/03/do-life-hacks-ever-reach-fixation/
comment by Plasmon · 2015-01-19T07:26:34.096Z · LW(p) · GW(p)
Recently, there has been talk of outlawing or greatly limiting encryption in Britain. Many people hypothesize that this is a deliberate attempt at shifting the Overton window, in order to get a more reasonable-sounding but still quite extreme law passed.
For anyone who would want to shift the Overton window in the other direction, is there a position that is more extreme than "we should encrypt everything all the time"?
Replies from: ilzolende, emr, fubarobfusco, James_Miller
↑ comment by ilzolende · 2015-01-19T08:04:32.266Z · LW(p) · GW(p)
Assuming you just want people throwing ideas at you:
Make it illegal to communicate in cleartext? Add mandatory cryptography classes to schools? Requiring everyone to register a public key and having a government key server? Not compensating identity theft victims and the like if they didn't use good security?
Replies from: VincentYu
↑ comment by VincentYu · 2015-01-19T12:11:24.510Z · LW(p) · GW(p)
Requiring everyone to register a public key
This is already the case in Estonia, where every citizen over the age of 14 has a government-issued ID card containing two X.509 RSA key pairs. TLS client authentication is widely deployed for Estonian web services such as internet banking.
(Due to ideological differences regarding the centralization of trust, I think it's unlikely that governments will adopt OpenPGP over X.509.)
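To give a sense of the mechanics, here is a minimal sketch of TLS client authentication from the client's side in Python (the URL and PEM file names are made-up placeholders, not a real Estonian service):
import requests

# TLS client authentication: the client presents its own certificate/key pair
# for the server to verify, rather than relying only on a password.
# The URL and file paths below are hypothetical placeholders.
response = requests.get(
    "https://bank.example.ee/account",
    cert=("client-cert.pem", "client-key.pem"),  # client certificate + private key
)
print(response.status_code)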
Replies from: None
↑ comment by [deleted] · 2015-01-19T12:29:04.097Z · LW(p) · GW(p)
Giving people an official RSA keypair in their smartcard government IDs is fine. That solves all sorts of problems, and enables a bunch of really cool tech.
Requiring that every public key used in any context be registered with the government, or worse, some sort of key escrow, is a totally different matter.
Replies from: ilzolende, Alsadius
↑ comment by ilzolende · 2015-01-20T00:25:25.887Z · LW(p) · GW(p)
I was thinking less "everyone must register all their public keys, and you can't have a second identity with its own key" and more "everyone has to have at least 1 public key officially associated with them so that they can sign things and be sent stuff securely." And that Estonian system sounds pretty cool.
↑ comment by Alsadius · 2015-01-20T04:23:22.393Z · LW(p) · GW(p)
What would you estimate the probability of ever having the former without the latter? Of having that happy state last for more than a few years?
Replies from: None, Lumifer
↑ comment by [deleted] · 2015-01-20T14:46:15.191Z · LW(p) · GW(p)
Well the former pretty much describes the current state of affairs. Anyone with a government ID card or national healthcare ID probably has a chip embedded with an escrowed signing key. There's really nothing unique about Estonia here -- they're using the same system everyone else is using. Even if your country, like the USA, doesn't have a national ID of some kind or doesn't have a chip embedded, your passport does. The international standard governing "smart passports" being issued by just about every country in existence for the past 5-10 years includes embedded digital signature capability.
Now I don't really know how to estimate the probability of sliding into the latter case. I don't see them as intrinsically connected however.
↑ comment by emr · 2015-01-19T18:06:41.116Z · LW(p) · GW(p)
Frame attempts to limit the use of encryption as unilateral disarmament, and name specific threats.
As in, if the government "has your password", how sure are you that your password isn't eventually going to be stolen by Chinese government hackers? Putin? Estonian scammers? Terrorists? Your ex-partner? And you know that your allies over in (Germany, United States, Israel, France) are going to get their hands on it too, right? And have you thought about when (hated political party) gets voted into power 5 years from now?
A second good framing is used by the ACLU representative in the Guardian article: You won't be able to use technologies X Y and Z, and you'll fall behind other countries technologically and economically.
↑ comment by fubarobfusco · 2015-01-19T08:46:12.332Z · LW(p) · GW(p)
To be a bit more specific than "we should encrypt everything all the time":
Mandatory full-disk encryption on all computer systems sold, by analogy to mandatory seat belts in cars — it used to be an optional extra, but in the modern world it's unsafe to operate without it.
↑ comment by James_Miller · 2015-01-20T03:16:28.741Z · LW(p) · GW(p)
The criminalization of all encryption in the U.S. is just one big terrorist attack away.
Replies from: Alsadius
↑ comment by Alsadius · 2015-01-20T04:24:35.242Z · LW(p) · GW(p)
Doubtful. Too much of the economy takes place online today - you can't have e-banking without strong crypto.
Replies from: roystgnr, James_Miller
↑ comment by roystgnr · 2015-01-20T15:24:02.282Z · LW(p) · GW(p)
You can have e-banking and e-commerce with "key escrow", though. That didn't fly in the 90s, and it's always been an inane idea, but I could definitely imagine "you should hide from hackers, but not from the police" PR spin ramping up again.
Replies from: Lumifer, Alsadius
↑ comment by James_Miller · 2015-01-20T04:51:22.692Z · LW(p) · GW(p)
Good point. I revise my prediction to "after the next big terrorist attack the U.S. will heavily regulate encryption."
comment by Gram_Stone · 2015-01-23T15:55:08.326Z · LW(p) · GW(p)
Just thought of something. If you want to talk about variation and selection but you can't say 'evolution' without someone flipping a table, then talk about animal husbandry instead.
EDIT: Heh, turns out Darwin actually did this.
comment by Dorikka · 2015-01-19T02:18:36.876Z · LW(p) · GW(p)
At one point there was a significant amount of discussion regarding Modafinil - this seems to have died down in the past year or so. I'm curious whether any significant updating has occurred since then (based either on research or experiences.)
(This is a repost from last week's open thread due to many upvotes and few replies. However, see here for Gwern's response.)
Replies from: btrettel, sediment, Dr_Manhattan, None
↑ comment by btrettel · 2015-01-19T18:18:24.014Z · LW(p) · GW(p)
I meant to post something about my experience with armodafinil about a year ago, but I never got around to it. My overall experience was strongly negative. Looks like I did write a long post in a text file a day or so after taking armodafinil, so here's what I had to say back then:
Some background:
I'm a white male in my mid-20s. I have excessive daytime sleepiness, and I believe this is because I'm a long sleeper who has difficulty getting an adequate duration of sleep. There are several long sleepers in my family. My mother and I tend to not like how stimulants make us feel, e.g., pseudoephedrine makes us fairly nervous, though it will help our nasal congestion from allergies and help wake us up. I was interested in trying modafinil because I hear it has proportionally less of the negative effects compared against its wake-promoting effects.
My neurologist gave me a few samples of armodafinil, which is basically a variant of modafinil. I was busy in the month after I met my neurologist last and didn't think about taking it at all, but come mid-February I remembered to try it.
Saturday, Feb. 15, 2014:
I woke up at 8:30 am, as I usually did, and started eating a chocolate chip muffin for breakfast. During the breakfast I took 4000 IU of vitamin D and 150 mg of armodafinil. I took these at 8:37 am.
I started organizing files on my computer. I still felt fairly tired, and considered going back to sleep, but I did not because I try to keep a very regular sleep schedule. I will take naps in the afternoon (before 8 pm, or so, to avoid delaying my bedtime) if necessary, but I try to wait until then. Until around 10:30 am, I thought armodafinil was doing absolutely nothing. I know armodafinil takes some time to kick in, but I didn't expect it to take that long. Maybe I'm one of the people for whom modafinil doesn't work?
At around 11 am I realized that I felt weird. It was obvious that the armodafinil had kicked in fierce at that point. I checked my heart rate: 75 bpm, which is higher than normal, though not as high as other stimulants take me. I wouldn't quite describe how I felt as more awake, though I don't think I could involuntarily fall asleep now. It felt as if I could fall asleep if I wanted to, but I didn't want to. I felt a bit more nervous, perhaps, but that might just be the placebo effect. It certainly was not as strong as what 60 mg of pseudoephedrine does to me. I got a phone call from my apartment manager saying that they'll be showing my apartment today, so I (slowly) started sweeping and vacuuming to make my apartment a bit more presentable. I was pacing around like crazy while doing this.
At about 11:30 am I took a shower. I started realizing that I have no impulse control. Instead of washing myself, I'd start, get distracted by some thought, think about that for a while, realize I'm in the shower, forget where I was in my shower routine, etc. I started thinking that armodafinil might have given me ADHD, which is odd given that I've read it might be useful for the treatment of ADHD.
After the shower I consulted the note packet that came with the armodafinil. Given what these notes said, I think I was experiencing a side effect. The notes said to discontinue use of armodafinil if you experience these symptoms. "Okay, can do." is what I thought.
I went to the LessWrong meetup and told Vaniver that I think armodafinil is not doing nice things for me. Another LWer suggested that perhaps these effects go away with repeated use; I said that I didn't know, but I don't intend to find out. During the entire meetup I had a lot of difficulty sitting still. I got up a few times to get water, or a napkin, or a bag of chips, but I don't think I actually wanted any of those things; I guess I just didn't want to stay still.
The early afternoon is the hardest time for me to stay awake, and this meetup spanned that time entirely. I yawned a few times during the meetup, but I didn't become so drowsy that I had to take a nap, as I often do. I take this as evidence that armodafinil helps my EDS, though it's not that strong because I never really felt "awake" during this entire process. I felt really weird in a way that I can't quite describe.
After the meetup (about 4 pm), I rode my bike to the downtown library to return a book. Purely subjectively, I'd say armodafinil increased my endurance. I'm in reasonable shape now, but I felt that I could maintain 20+ mph easier today than a few days ago. Objectively, though, it doesn't seem that my average speed increased much if at all; it was about 13 mph on Saturday and 12 to 13 on most days.
When I got back to my apartment, I felt a little better. Still fidgety and easily distracted, but slightly better. Perhaps the exercise helped, or the armodafinil was wearing off? I usually go running around that time anyway, so I hoped this would help more. I went on a run, but it didn't have quite the effect the bike ride did. I then started making dinner, but I was continually distracted by my computer throughout that.
I noticed that my tinnitus was much worse today. Not sure if this was due to the armodafinil, but it sounded at least 10 dB louder than usual. Ambient noises could not mask it.
Around 10 pm, I started feeling more tired, so I figured the armodafinil must be wearing off. I still felt odd and easily distracted, though. I read on my couch for a while until I felt as if I could fall asleep quickly, and I slept briefly on my couch. I woke up and moved to my bed, where it took me a while to fall asleep again, but I did. I woke up several times during the night and felt I had to try quite a few positions before I found something comfortable. This wasn't particularly restful. Otherwise, I don't think armodafinil did much to my nighttime sleep. I think if it hadn't caused some manic symptoms, I probably wouldn't have had any issues sleeping.
Sunday, Feb. 16, 2014:
I still felt a little odd when I woke up, but it was very obvious now that these effects were wearing off. I had read that armodafinil has a half-life of about 12 to 15 hours, so using a simple exponential decay with a conservative half-life, I saw that I still had the equivalent of about 45 mg of armodafinil in my system. Tomorrow morning that will decrease to about 15 mg; after the third day it's down to 5 mg. I can't wait for this to be out of my system.
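For concreteness, here is that decay arithmetic as a small Python sketch (assuming the 150 mg dose from Saturday and a conservative 15-hour half-life; the printed values are rounded):
dose_mg = 150.0       # armodafinil dose taken Saturday morning
half_life_h = 15.0    # conservative half-life estimate

for hours in (24, 48, 72):  # roughly Sunday, Monday, and Tuesday mornings
    remaining = dose_mg * 0.5 ** (hours / half_life_h)
    print("%d h: ~%.0f mg remaining" % (hours, remaining))
# prints ~49 mg, ~16 mg, and ~5 mg, matching the rough 45/15/5 mg estimates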
Overall, I'd say taking armodafinil was worthwhile as I learned something about myself, which is that I probably should avoid stimulants as much as possible.
(Not from my original intended post: I want to note that I'm doing much better now, after getting more sleep. No stimulants necessary. I haven't seen a neurologist since I wrote the post above and probably won't again.)
Replies from: hg00
↑ comment by hg00 · 2015-01-21T06:07:20.888Z · LW(p) · GW(p)
Your main complaints about your drug experience seem to be (a) feeling unusual, (b) having some difficulty managing your attention, (c) feeling excessively fidgety, (d) louder tinnitus, and (e) sleep difficulty. As someone who has experimented with psychoactive drugs a fair amount, including modafinil, my impression is that (a) and (b) are pretty common with psychoactive drugs and are almost always transient and harmless (unless you're driving a car, biking, operating heavy machinery, etc.). ((c) is less common but definitely present with some, e.g. coffee. (d) and (e) are probably good reasons to stop using a particular drug.) In fact, I've gotten to the point where I consider feeling unusual and having my attention work differently to be fun, interesting experiences to observe and learn from.
So my thought is that before trying modafinil, maybe people should experiment with small doses of strongly psychoactive drugs that don't have a 12-hour half life, perhaps in a safe & supervised environment, to learn that altered mental states aren't scary and can be pretty useful for certain tasks--they're like distinct mental gears you can enter using cheap, reliable external aids.
(For example, drink half a cup of coffee, then a full cup of coffee, then two cups of coffee on separate days to know what it's like to be highly stimulated, and a cup of beer, two cups of beer, and four cups of beer on separate days to know what it's like to be highly disinhibited. Kratom is another highly useful but little known legal psychoactive; for example, this successful blogger primarily credits kratom with his success at building his online empire, and I'm not surprised at all given my kratom experiences... any resistance I have to doing tasks seems to just melt away on kratom.)
(Disclaimer: I'm a foolish young person and maybe you should ignore everything I'm saying. Also if you really did experience stimulant induced mania you should probably follow the instructions on the label.)
Replies from: btrettel
↑ comment by btrettel · 2015-01-28T23:05:02.004Z · LW(p) · GW(p)
Appreciate your response and perspective, hg00.
I think smaller doses are prudent for people experimenting with these things. If I were to try armodafinil again, I would cut the pill in half or even into quarters. (I had no real choice in the pill dosage, as I only received a sample.) Though, in retrospect, I think avoiding (ar)modafinil altogether would be smart, because the half-life is way too long.
I'm basically straight-edge, though I'm open minded and willing to try some drugs if I think they might have a positive effect on me. I've only tried nootropics, and so far I have not been impressed. Either they do nothing or make me feel really strange. Others' experiences may vary. There doesn't seem to be anything here for me. At this point I have no intention of ever trying a drug for non-medical reasons.
What I experienced isn't exactly clear, but I didn't like it. In fact, it took several weeks for me to fully recover from taking armodafinil. After a few weeks or so I felt mostly normal, and a bit later the tinnitus finally died down. The latter isn't that unusual for my tinnitus, actually: after exposure to a loud noise I might have louder tinnitus for several weeks. (Not that mine is ever quiet. It doesn't bother me, but I imagine my normal would drive most people nuts. It never goes away and probably will only ever get worse, and I accept that.)
Replies from: hg00
↑ comment by hg00 · 2015-01-29T21:56:44.989Z · LW(p) · GW(p)
Understood. I don't doubt your self-assessments, just wanted to provide a contrasting perspective. For tinnitus, you might want to try googling "tinnitus replacement therapy" or experimenting with ear/jaw/neck massage; both of these seem to have been helpful for me.
Replies from: btrettel
↑ comment by btrettel · 2015-01-30T01:25:11.664Z · LW(p) · GW(p)
I've looked into tinnitus retraining therapy (I think this is what you meant) but decided I'm not bothered enough by my tinnitus to go that route. I'll keep it in mind if this changes. I have not heard about massage helping tinnitus. I'll have to give that a shot as I'm sure it would be enjoyable even without tinnitus relief.
Otherwise, I've found noise machines to be helpful. Sometimes I also listen to a brown noise mp3 when working and I don't want to listen to music. I find that this totally masks my tinnitus, masks most ambient noises, and is rather pleasant (it sounds like a waterfall). (I want to note that my brother finds artificial noise to be worse than tinnitus, so your mileage may vary.)
If you use Linux and have the right software (sox and lame) installed, you can run the following commands to generate a brown noise mp3:
# synthesize 30 minutes of stereo brown noise, slightly attenuated, with a 3-second fade
sox -c 2 --null out.wav synth 30:00 brownnoise vol -0.4dB fade t 3 30:00
# encode the WAV as an MP3 using lame's highest-quality preset
lame --preset insane out.wav out.mp3
Replies from: hg00
↑ comment by hg00 · 2015-01-30T06:04:24.720Z · LW(p) · GW(p)
The core idea behind tinnitus retraining therapy is to listen to noise that doesn't totally mask the tinnitus but is more salient than it; the principle is that this helps you think of your tinnitus as background noise. Seems to work for me.
↑ comment by sediment · 2015-01-21T19:16:40.707Z · LW(p) · GW(p)
A month or two ago I started taking Modafinil occasionally; I've probably taken it fewer than a dozen times overall.
I think I'd expected it to give a kind of Ritalin-like focus and concentration, but that isn't really how it affected me. I'd describe the effects less in terms of "focus" and more in terms of a variable I term "wherewithal". I've recently started using this term in my internal monologue to describe my levels of "ability to undertake tasks". E.g., "I'm hungry, but I definitely don't have the wherewithal to cook anything complicated tonight; better just get a pizza." Or, on waking up: "Hey, my wherewithal levels are unusually high today. Better not fritter that away." (Semantically, it's a bit like the SJ-originating concept of "spoons" but without that term's baggage.) It's this quantity which I think Modafinil targets, for me: it's a sort of "wherewithal boost". I don't know how well this accords with other people's experience. I do think I've heard some people describe it as a focus/concentration booster. (Perhaps I should try another nootropic to get that effect, or perhaps my brain is just beyond help on that front.)
I did, however, start to feel it suppressed my appetite to unhealthily, even dangerously, low levels. (After taking it for two days in a row, I felt dizzy after coming down a flight of stairs.) I realize that it's possible to compensate for this by making oneself eat when one doesn't feel hungry, but somehow this doesn't seem that pleasant. For this reason, I've been taking it less recently.
I'd be curious to know whether others experience the appetite suppression to the same extent; it's not something that I hear people talk about very much. Perhaps others are just better at dealing with it than I am or don't care.
It's also hard to say how much of its positive effects were placebo, given that I took it on days when I'd already determined I wanted to "get a lot of shit done".
I might still try armodafinil at some point.
Replies from: None, NancyLebovitz↑ comment by NancyLebovitz · 2015-01-23T21:40:03.188Z · LW(p) · GW(p)
I wonder if activation energy is a good way of describing difficulties with getting started.
Discussion of different kinds of wherewithal
Replies from: sediment↑ comment by Dr_Manhattan · 2015-01-21T01:52:37.121Z · LW(p) · GW(p)
Mixed feelings. If you need wakefulness, it's available on tap, but with a side of anxiety and trouble going to sleep later if your dosage is not perfectly calibrated.
↑ comment by [deleted] · 2015-01-22T03:38:46.536Z · LW(p) · GW(p)
I took modafinil twice. I'd been having problems staying awake during the day -- it's hard for me to sleep before 2am -- and those completely disappeared. I had more energy then than I've had in a while. No negatives. The only reason I haven't gotten more is that I don't have a mailing address.
(Disclaimer: I drink a lot of coffee and tea, use a lot of snus, and drink like a relevant ethnic stereotype on weekends.)
comment by Adam Zerner (adamzerner) · 2015-01-23T03:42:54.954Z · LW(p) · GW(p)
I just started using the Less Wrong Study Hall. It's been great! I find myself to be more productive, and there's something fun about being in the company of other friendly people.
I don't have anything insightful to say. I'd just like to reiterate that:
1) It exists and you should consider using it (it seems that not too many people know about it).
2) I (and others) think that there should be a link to it in the sidebar.
comment by sixes_and_sevens · 2015-01-19T10:50:57.221Z · LW(p) · GW(p)
Tell us about your feed reader of choice.
I've been using Feedly since Google Reader went away, and it has enough faults (buggy interface, terrible bookmarking, awkward phone app that needs to be online all the time) to motivate me to look for a new one. Any recommendations?
Replies from: philh, ZankerH, polymathwannabe, gjm, harshhpareek, roystgnr, twanvl, Richard_Kennaway, Pfft, Username, cameroncowan, Alsadius, beoShaffer, emr↑ comment by polymathwannabe · 2015-01-19T20:39:10.246Z · LW(p) · GW(p)
I've found Feedly on a browser is much more manageable than the Android app.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2015-01-20T02:59:52.312Z · LW(p) · GW(p)
Feedly's default settings on the app are intolerable. It can be mostly fixed with settings changes, though. I actually prefer the app to the desktop now because I use it to fill dead time with reading my RSS feed instead of productive time.
↑ comment by gjm · 2015-01-19T12:04:23.718Z · LW(p) · GW(p)
I use rawdog. It runs on my computer and generates a single HTML file, which contains a nice unified list of articles (rather than the common alternative, a list of feeds which I then have to drill down into). It doesn't rely on any external services other than the feeds themselves. By diddling with the template it uses to generate the HTML, I have given it a little interactivity (e.g., I can tell it to "collapse" some feeds so that they show only article titles rather than content; I can then un-collapse individual articles).
Last I checked, it didn't work on Windows but could be coerced into doing so by fiddling with the source code (it's in Python).
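rawdog's core approach is simple enough to sketch in Python. Here's a rough sketch only, assuming the third-party feedparser package and placeholder feed URLs; rawdog itself adds caching, templating, and the collapsing tricks described above:
import feedparser  # third-party: pip install feedparser
from html import escape

FEEDS = ["https://example.com/feed.xml", "https://example.org/rss"]  # placeholders

entries = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    feed_title = parsed.feed.get("title", url)
    for e in parsed.entries:
        # published_parsed is a time.struct_time; fall back to zeros if the feed omits dates
        when = tuple(e.get("published_parsed") or (0,) * 9)
        entries.append((when, feed_title, e))

entries.sort(key=lambda t: t[0], reverse=True)  # one unified list, newest articles first

with open("news.html", "w") as f:
    f.write("<html><body>\n")
    for _, feed_title, e in entries:
        link = escape(e.get("link", "#"))
        title = escape(e.get("title", "(untitled)"))
        f.write('<p>%s: <a href="%s">%s</a></p>\n' % (escape(feed_title), link, title))
    f.write("</body></html>\n")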
There is a thing called Tiny Tiny RSS that, from what others have said, I suspect may offer kinda-similar functionality but better (with perhaps a bit more effort to get it set up initially). I keep meaning to check it out but failing to do so.
Replies from: None↑ comment by harshhpareek · 2015-01-20T23:10:06.392Z · LW(p) · GW(p)
I tried using RSS readers, but I tended to forget to check their websites or apps. I could have trained myself to check them more often but I ended up using https://blogtrottr.com/ instead. It sends RSS feeds to your email inbox, so I can check blogs along with my email in the morning.
I haven't had any issues so far. They send you ads along with the feed to generate revenue. Having a revenue model is a solid plus in my book.
What I don't like about it: they don't have accounts so managing subscriptions is a little hard.
↑ comment by roystgnr · 2015-01-20T15:17:37.302Z · LW(p) · GW(p)
I've tried TheOldReader, which worked well, even when they had to handle the sudden influx of Google Reader refugees. I'm currently using InoReader, which works very well, and Bloglines, which seems to be broken (for nearly a week now IIRC, and not for the first time in the last year).
Replies from: gwern↑ comment by gwern · 2015-01-20T17:16:36.788Z · LW(p) · GW(p)
Do you pay for The Old Reader?
Replies from: roystgnr↑ comment by roystgnr · 2015-01-23T02:43:15.432Z · LW(p) · GW(p)
IIRC I used it in the brief interlude between "we're hobbyists providing a little free service for people who aren't very happy with Google's latest changes" and "holy hell, they're shutting down Google Reader and our userbase just went up by an order of magnitude; we can't keep this site public anymore". IIRC I would have been willing to shell out $3/month for the service, but by the time that option opened up I'd discovered InoReader.
↑ comment by twanvl · 2015-01-19T17:46:47.173Z · LW(p) · GW(p)
I switched to The Old Reader, which, as the name suggests, is pretty close to Google Reader in functionality.
↑ comment by Richard_Kennaway · 2015-01-19T14:13:25.616Z · LW(p) · GW(p)
I used Safari until Apple removed the RSS functionality, then switched to Vienna. OSX only.
↑ comment by Pfft · 2015-01-20T04:22:59.329Z · LW(p) · GW(p)
I use Digg Reader. It does not have any social networking features, but otherwise it basically works like Google Reader did.
For a while I was also using The Old Reader, but I switched away when it briefly looked like they were going to shut down. Digg Reader and The Old Reader seem very similar.
↑ comment by Username · 2015-01-20T21:17:32.328Z · LW(p) · GW(p)
I simply use the wordpress.com reader (I have a blog that I update through there, so it consolidates the tools I use). I notice it tends to have a bit of a delay in getting new posts, but I don't mind not being absolutely to-the-minute up to date.
↑ comment by cameroncowan · 2015-01-20T05:21:31.222Z · LW(p) · GW(p)
Digg is good for me.
↑ comment by Alsadius · 2015-01-20T04:25:57.358Z · LW(p) · GW(p)
I use RSS Feed Reader (Chrome plugin). It's been fairly good to me, though I have noticed a couple of my feeds disappearing over time. Unsure if this is due to abandonment by the feed admins or due to software issues. I'd still recommend it as a decent option, but I'd believe that better ones exist elsewhere.
↑ comment by beoShaffer · 2015-01-20T00:23:03.150Z · LW(p) · GW(p)
I use Vienna.
comment by SarahSrinivasan (GuySrinivasan) · 2015-01-19T01:52:37.523Z · LW(p) · GW(p)
We're looking for beta testers for the 16th "annual" Microsoft puzzle hunt. Interested folks should PM me, especially if you're in the Seattle area.
comment by [deleted] · 2015-01-21T15:16:20.280Z · LW(p) · GW(p)
Uhm, this is a rather weird way to describe how I think, but I feel like I've come full-circle. I'm automatically thinking of ways to optimize, automatically trying to better understand the world around me. I'm reading LW articles and I sometimes think "yeah, I know about this". I no longer feel the "Aha! How did I not realize this seemingly obvious thing I should have thought of already that hurts my nerd always-be-right ego!" but rather, I read mid-post and just feel like I know this stuff already.
Naturally, I'm still not 100% perfect, but I think I'm on the right path. I've been mostly a lurker and registered not long ago. Has anyone else gotten the same feeling? This feeling isn't really backed up by anything other than an "I know this already" thought.
Replies from: Vaniver, Viliam_Bur, Evan_Gaensbauer↑ comment by Vaniver · 2015-01-23T16:29:35.591Z · LW(p) · GW(p)
Has anyone else gotten the same feel?
Yes. Oftentimes people who played lots of games will describe the feeling as "leveling up," and it's a normal and desirable part of growth. This quote is relevant: it's important to not say "well, I've leveled up, no more growth necessary!", but instead always be on the lookout for the way to get to the next level. But the path that got you from level n-1 to n and the path that gets you from level n to level n+1 may be very different, and the restlessness that comes with feeling like you know this stuff is useful for getting you to look elsewhere.
(I'm not saying that you're "done with LW," but I do think you're "done with lurking" and I think that you've done the right thing by registering; it makes for different kinds of interaction, which leads to different kinds of learning.)
↑ comment by Viliam_Bur · 2015-01-22T15:41:36.646Z · LW(p) · GW(p)
I don't have a link, but something like this was already mentioned on LW... when you have already mastered some kind of thinking, it seems "obvious", even if it seemed original and awesome when you were reading it for the first time.
Although, this only proves that you have become more familiar with LW style of thinking. It does not automatically follow that "LW style of thinking" is "rationality". (Although I personally believe it is related.)
Replies from: None↑ comment by [deleted] · 2015-01-22T16:02:58.944Z · LW(p) · GW(p)
Well, that's a nice thing to point out. Was there any research into how many lives were effectively changed by LW?
Also, has anyone else gotten the feeling that there's some sort of innate rationality? It's the same thing as the awesome flare you feel when seemingly obvious things are pointed out. I probably wouldn't be thinking like this if it weren't for anything LW-esque. (Maybe LW has something unique going for it?) Maybe it's something unique to me - but sometimes I feel certain things inside me were either locked or repressed, or in the case of actions, misguided.
Replies from: MathiasZaman↑ comment by MathiasZaman · 2015-01-23T11:39:02.609Z · LW(p) · GW(p)
Was there any research to how many lives were effectively changed by LW?
No, only anecdotal evidence.
↑ comment by Evan_Gaensbauer · 2015-01-23T14:30:57.056Z · LW(p) · GW(p)
I haven't "come full-circle", but I've had a similar experience. I haven't read all of LessWrong Sequences, maybe not even half. Some old friends of mine got me into the meetup at a time when I was studying microeconomics, and started majoring in cognitive science. So, I was enthralled by discussion, and went around the Internet and life learning about related topics. Occasionally, I read Sequences essays I haven't read before, and I realize I get the gist halfway through reading it.
That's my "yeah, I know about this...". It works for me epistemically. It might have helped that I tried to rationalize the existence of the Christian God as a child, up to the point of deism not specific to any religion, and finally to virtual atheism. I found by the time I encountered arguments for or against the existence of God in theology or philosophy in university, I wasn't phased by any of them because I'd generated all of them on my own before. That's another "yeah, I know about this" set of experiences, rather than a series of "Aha!'s" I expected. These mental exercises may have prepared me for future thinking on LessWrong.
Sometimes I'm not as curious as I used to be, and I don't often automatically think of ways to optimize. Instrumentally, I don't believe I'm "on the right path" for fulfilling my own goals. However, that is confounded by other factors of my own life I'm not willing to discuss publicly. So, I'm unsure how instrumentally rational I may or may not be.
comment by Username · 2015-01-25T18:04:00.229Z · LW(p) · GW(p)
I have (what I presume to be) massive social anxiety. I live near lots of communities of interest that probably contain lots of people I would like to meet and spend time with, but the psychological "activation energy" required to go to social events and not leave halfway though is huge, and so I usually end up just staying at home. I would prefer to be out meeting people and doing things, but when I actually try to do this, I get overcome by anxiety (or something resembling it), and I need to leave. Has anyone else had this problem, and if so, what techniques helped you overcome it? "Just practice" doesn't seem to be working--when I am able to muster up the willpower to go to social events (even very structured ones, which are much easier to deal with), it takes more and more willpower to stay there as the event goes on, and this doesn't seem to be changing.
Replies from: fubarobfusco, ChristianKl, MrMind, VincentYu↑ comment by fubarobfusco · 2015-01-25T18:58:51.928Z · LW(p) · GW(p)
In my personal experience, what I thought was anxiety largely went away when I was treated for depression.
So I'm just gonna recommend what Scott has to say on that matter:
http://slatestarcodex.com/2014/06/16/things-that-sometimes-help-if-youre-depressed/
Replies from: Username↑ comment by Username · 2015-01-25T23:05:49.496Z · LW(p) · GW(p)
Thank you!
Based on the test Scott linked and my own subjective experience, it seems very unlikely that I am depressed. Which aspects of your treatment helped with what you thought was anxiety?
Replies from: fubarobfusco↑ comment by fubarobfusco · 2015-01-26T08:26:07.980Z · LW(p) · GW(p)
Well, I suspect the drugs (SSRIs) helped.
So did being reminded that I actually had a lot more control over my situation than I alieved I did, and doing something about it (namely, changing jobs).
Thing is, the problem I went in with was "I can't sleep, I'm nervous too damn much, and I'm doing terribly at work." Not "I can't get out of bed, nothing is fun, I'm thinking of killing myself, and heroin sounds like a smashingly great idea" — the sorts of things I associated with the label "depression".
And I certainly didn't go in with "Doctor, I need to be more comfortable in social situations from parties to random crowds than I ever have before in my life."
But that ended up happening anyway, which is pretty interesting.
↑ comment by ChristianKl · 2015-02-19T12:23:51.146Z · LW(p) · GW(p)
Do you do any sports? Martial arts classes, for example, give you an environment where you face your anxiety head-on.
↑ comment by MrMind · 2015-01-26T10:49:34.468Z · LW(p) · GW(p)
I can offer at least two points of view.
The first is that what I thought was massive social anxiety was actually just social inexperience; that is, a large part of my anxiety derived from not knowing the accepted social protocol in a given situation. Usually sitting quietly and observing what others did helped.
The second is that you need to subdivide social interactions and identify which steps you are able to do and which you aren't. For example, instead of just throwing yourself into a social gathering, you can get ready and leave your house, but not go up to the venue. Or you can get to the front of the venue but not enter. Or you can enter but feel a sense of urgency that prompts you to leave immediately after, etc. Instead of "just practicing" whole interactions, identify the smallest next step that you can practice, and if you can't practice that step, subdivide into even smaller units (e.g. literally just doing the next step).
↑ comment by VincentYu · 2015-01-26T01:22:36.068Z · LW(p) · GW(p)
I recommend reading section 19 (on the management of social anxiety disorder) in the recent treatment guidelines from the British Association for Psychopharmacology (pp. 17–19). A sample:
19.1. Recognition and diagnosis
Social anxiety disorder is often not recognised in primary medical care (Weiller et al., 1996) but detection can be enhanced through the use of screening questionnaires in psychologically distressed primary care patients (Donker et al., 2010; Terluin et al., 2009). Social anxiety disorder is often misconstrued as mere ‘shyness’ but can be distinguished from shyness by the higher levels of personal distress, more severe symptoms and greater impairment (Burstein et al., 2011; Heiser et al., 2009). The generalised sub-type (where anxiety is associated with many situations) is associated with greater disability and higher comorbidity, but patients with the non-generalised subtype (where anxiety is focused on a limited number of situations) can be substantially impaired (Aderka et al., 2012; Wong et al., 2012). Social anxiety disorder is hard to distinguish from avoidant personality disorder, which may represent a more severe form of the same condition (Reich, 2009). Patients with social anxiety disorder often present with symptoms arising from comorbid conditions (especially depression), rather than with anxiety symptoms and avoidance of social and performance situations (Stein et al., 1999). There are strong, and possibly two-way, associations between social anxiety disorder and dependence on alcohol and cannabis (Buckner et al., 2008; Robinson et al., 2011).
19.2. Acute treatment
The findings of meta-analyses and randomised placebo-controlled treatment studies indicate that a range of approaches are efficacious in acute treatment (Blanco et al., 2013). CBT [cognitive behavioral therapy] is efficacious in adults (Hofmann and Smits, 2008) and children (James et al., 2005): cognitive therapy appears superior to exposure therapy (Ougrin, 2011), but the evidence for the efficacy of social skills training is less strong (Ponniah and Hollon, 2008). Antidepressant drugs with proven efficacy include most SSRIs (escitalopram, fluoxetine, fluvoxamine, paroxetine, sertraline), the SNRI venlafaxine, the MAOI phenelzine, and the RIMA moclobemide.
[...]
19.4. Comparative efficacy of pharmacological, psychological and combination treatments
Pharmacological and psychological treatments, when delivered singly, have broadly similar efficacy in acute treatment (Canton et al., 2012). However, acute treatment with cognitive therapy (group or individual) is associated with a reduced risk of symptomatic relapse at follow-up (Canton et al., 2012). It is unlikely that the combination of pharmacological with psychological treatments is associated with greater overall efficacy than with either treatment, when given alone, as only one in four studies of the relative efficacy of combination treatment found evidence for superior efficacy (Blanco et al., 2010). The findings of small randomised placebo-controlled studies suggest that the efficacy of psychological treatment may be enhanced through prior administration of d-cycloserine (Guastella et al., 2008; Hofmann et al., 2006) or cannabidiol (Bergamaschi et al., 2011).
From a patient perspective, the guidelines suggest that each of the following four approaches should be similarly effective for the treatment of social anxiety as long as the care provider is adequately trained and up-to-date with current best practice:
- Pharmacotherapy
  - given by a psychiatrist.
  - given by a primary care physician.
- Psychotherapy
  - with a therapist.
  - in a group setting.
comment by philosophytorres · 2015-01-24T00:40:40.181Z · LW(p) · GW(p)
Hello! I'm working on a couple of papers that may be published soon. Before this happens, I'd be extremely curious to know what people think about them -- in particular, what people think about my critique of Bostrom's definition of "existential risks." A very short write-up of the ideas can be found at the link below. (If posting links is in any way discouraged here, I'll take it down right away. Still trying to figure out what the norms of conversation are in this forum!)
A few key ideas are: Bostrom's definition is problematic for two reasons. First, its account of who an existential risk affects is too promiscuous; it opens the door to counterexamples in which humanity is violently destroyed yet no existential risk occurs. And second, Bostrom's typology is incoherent; it fails to recognize that a consequence's scope has both spatial and temporal components, where different degrees of each can be combined with the other in different ways. At the end of the paper, I propose my own definition - one that attempts to solve both of these problems. Figure C may be particularly helpful.
Thoughts? I am more than open to feedback!
http://philosophytorres.org/XRiskologytheConceptofanExistentialRisk.pdf
Replies from: Manfred↑ comment by Manfred · 2015-01-26T05:41:10.745Z · LW(p) · GW(p)
This is a nice paper, and is probably the sort of thing philosophers can really sink their teeth into. One thing I really wanted was some discussion of the basic "something that would cause much of what we value about the universe to be lost" definition of 'catastrophic', which you could probably even find Bostrom endorsing somewhere.
comment by solipsist · 2015-01-22T03:47:35.934Z · LW(p) · GW(p)
Who chooses the Featured Articles of the week?
Replies from: Douglas_Knight, Evan_Gaensbauer↑ comment by Douglas_Knight · 2015-01-28T22:59:30.159Z · LW(p) · GW(p)
The homepage is controlled from the wiki here; it includes the template Lesswrong:FeaturedArticles, which Google tells me is here. From the history, the editor of three years' tenure has wiki username Costanza and is probably the same person as the LW user of the same name.
↑ comment by Evan_Gaensbauer · 2015-01-23T14:20:19.536Z · LW(p) · GW(p)
This is just a guesstimate, not an informed answer. The Featured Articles of the week seem topical to what's happening that week, such as recent events, a notable date, or new developments in some organization. I'm guessing it's an administrator who pays close attention to such things, so maybe lukeprog. That's just the availability heuristic at work, though. It could be an administrator who doesn't post very often, but still follows events closely.
comment by RowanE · 2015-01-19T13:27:06.073Z · LW(p) · GW(p)
How long do the effects of caffeine tolerance (where, when you're not on caffeine, you're below baseline and caffeine just brings you back to normal) last? If I took tolerance breaks in between stretches of caffeine use, could I be better off on average than if I simply avoided caffeine entirely?
Replies from: BrassLion, VincentYu↑ comment by BrassLion · 2015-01-19T16:21:16.214Z · LW(p) · GW(p)
I think you are thinking about this the wrong way. People become caffeine tolerant quickly, but tolerance goes away pretty quickly too. You would get more benefit out of the opposite approach: spending most of your time without caffeine, but drinking a cup of coffee rarely, when you really need it. You would effectively be caffeine-naive most of the time, with brief breaks for caffeine use, and thus never develop much of a tolerance. If it's been so long since your first cup of coffee that you don't remember it, trust me, the effects of caffeine on a caffeine-naive brain are incredible.
I know I once read a study that says you can get back to caffeine-naive in two weeks if you go cold turkey, but I can't find anything on it again for the life of me. I do remember distinctly that going cold turkey is a bad plan, as the withdrawal effects are pretty unpleasant - slowly lowering your dose is better.
On a more practical level, it is certainly possible to consume little enough caffeine that you aren't noticeably impaired on zero caffeine. The average coffee drinker is far beyond this point. I would try to lower your daily dose over the course of a month or so until you are consuming less than a cup of coffee a day - ideally a lot less, like no coffee at all. Try substituting tea (herbal or otherwise) if you need something hot to drink to help kill the craving - herbal tea has no caffeine, black tea has about 1/4 of the caffeine per cup, and if you add cream and sugar the taste will be familiar.
EDIT: VincentYu's comment above is interesting in light of this. I am not going to perform my own meta-analysis on this, but there are a great many studies finding that caffeine tolerance and caffeine withdrawal are real things - a quick Google Scholar search for "caffeine tolerance" will find them.
I am now very interested in a large study on this without the possible conflict of interest. Also, I find it odd that they chose not to include studies before 1992.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-01-28T22:03:10.724Z · LW(p) · GW(p)
If it's been a long time since that first cup of coffee that you don't remember it, trust me, the effects of caffeine on a caffeine-naive brain are incredible.
Yes, a cup of coffee is too much.
↑ comment by VincentYu · 2015-01-20T03:58:38.282Z · LW(p) · GW(p)
where when you're not on caffeine you're below baseline and caffeine just brings you back to normal
This is a hypothesized explanation for the acute performance-enhancing effects of caffeine that fits well with the Algernon argument, but it is not a conclusive result of the literature. For instance, the following recent review disputes that.
Einöther SJL, Giesbrecht T (2013). Caffeine as an attention enhancer: reviewing existing assumptions. Psychopharmacology, 225:251–74.
Abstract (emphasis mine):
Rationale: Despite the large number of studies on the behavioural effects of caffeine, an unequivocal conclusion had not been reached. In this review, we seek to disentangle a number of questions.
Objective: Whereas there is a general consensus that caffeine can improve performance on simple tasks, it is not clear whether complex tasks are also affected, or if caffeine affects performance of the three attention networks (alerting, orienting and executive control). Other questions being raised in this review are whether effects are more pronounced for higher levels of caffeine, are influenced by habitual caffeine use and whether there [sic] effects are due to withdrawal reversal.
Method: Literature review of double-blind placebo controlled studies that assessed acute effects of caffeine on attention tasks in healthy adult volunteers.
Results: Caffeine improves performance on simple and complex attention tasks, and affects the alerting, and executive control networks. Furthermore, there is inconclusive evidence on dose-related performance effects of caffeine, or the influence of habitual caffeine consumption on the performance effects of caffeine. Finally, caffeine’s effects cannot be attributed to withdrawal reversal.
Conclusions: Evidence shows that caffeine has clear beneficial effects on attention, and that the effects are even more widespread than previously assumed.
The authors' conclusions:
- Caffeine improves performance on both simple and complex attention tasks.
- Caffeine improves alerting, executive control and potentially also orienting.
- There is inconclusive evidence on dose-related performance effects of caffeine.
- There is inconclusive evidence on the influence of habitual caffeine consumption on the performance effects of caffeine.
- Caffeine’s effects cannot be attributed to withdrawal reversal.
Note the following conflict of interest:
The authors are employees of Unilever, which markets tea and tea-based beverages.
comment by JoshuaZ · 2015-01-20T16:34:35.344Z · LW(p) · GW(p)
Precommitting to a secret prediction which I'll reveal on April 15. MD5 hash for the prediction is 38bd807a6872f6a5622aa2b011fd8f03.
Replies from: gjm, Vaniver, JoshuaZ↑ comment by gjm · 2015-01-20T17:45:19.753Z · LW(p) · GW(p)
This is advance notice that unless your prediction is a short bit of plaintext that obviously doesn't have more than a few bits' worth of scope for massaging, your use of MD5 is likely to be taken as showing that you cheated.
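If you want a commitment that won't invite that suspicion, something like this is easy enough. A minimal sketch in Python; the prediction text is a placeholder:
import hashlib, secrets

prediction = b"My secret prediction goes here."  # placeholder
salt = secrets.token_bytes(16)  # random salt defeats brute-forcing short predictions
print(hashlib.sha256(salt + prediction).hexdigest())  # publish this digest now
Publish the digest today; on April 15, reveal both the salt and the prediction so anyone can recompute the hash.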
Replies from: JoshuaZ
comment by NancyLebovitz · 2015-01-24T11:58:14.577Z · LW(p) · GW(p)
What makes teams more effective
It isn't the total IQ of the team, and whether they're working face to face doesn't matter.
The factors discovered were that members make fairly equal contributions to discussions, the level of emotional perceptiveness, and the number of women, though the effect of the number of women is partially explained by women tending to be more emotionally perceptive.
On the one hand, I've learned to be skeptical of social science research-- and I add some extra skepticism for experiments that are simulations of the real world. In this case, the teams were working on toy problems.
On the other hand, this study appeals strongly to my prejudice in favor of niceness. I found the presence of women to be a surprising factor, since I haven't noticed women as being easier to work with.
A notion: the fairly equal contribution part may be, not exactly that everyone contributes more, but that if the conversation is dominated by a few voices, those voices tend to repeat themselves a lot, and therefore contribute little compared to the time they take up.
Replies from: Gram_Stone, Viliam_Bur↑ comment by Gram_Stone · 2015-01-24T20:54:57.272Z · LW(p) · GW(p)
Here are the papers:
I wonder how all-female groups compare to groups with just one male, and how all-male groups compare to groups with just one female. It seems to me like it's harder for any one person to dominate whenever people feel the need to signal egalitarian values like a preference for gender or racial equality. I don't know anything about statistics yet, so maybe this is implausible, but I think part of the reason that diversity was an insignificant predictor is that poor theory of mind caused by (?) ingroup favoritism dominates as diversity increases and drowns out the effect of the need to signal egalitarian values. So I think it would be cool to see how collective intelligence changes when you go from 'completely' homogeneous to 'almost' homogeneous in experimental groups composed of subjects from cultures that value egalitarianism highly. I would like to see this replicated with subjects from less egalitarian cultures as well, but that's hard sometimes.
↑ comment by Viliam_Bur · 2015-01-26T09:03:52.578Z · LW(p) · GW(p)
My guess: People in the team need to communicate. This can essentially be achieved in two ways:
1) All team members voice their opinions openly.
2) Some team members don't voice their opinions, but other members are good at reading emotions, so the latter recognize when the former believe they know something relevant.
If this model is true, we would see that equal contribution (no one is silent) or emotional perceptiveness (other people recognize when the silent person wants to say something) increases the team output.
comment by somnicule · 2015-01-20T08:58:41.931Z · LW(p) · GW(p)
Didn't get a response in the last thread, so I'm asking again, a bit more generally.
I've recently been diagnosed with ADHD-PI. I'm wondering how to best use that information to my advantage, and am looking for resources that might help manage this. Does anyone have anything to recommend?
In the short-term I'm trying to lower barriers for things like actually eating by preparing snacks in snaplock bags, printing out and laminating checklists to remind me of basic tasks, and finding more ways to get instant feedback on progress in as many areas as I can (for coding, this means test-driven development).
Replies from: atorm, someonewrongonthenet↑ comment by atorm · 2015-01-20T12:42:44.402Z · LW(p) · GW(p)
My experience of ADHD includes a tendency to become distracted by thought while moving between tasks or places. I have found that headphones with an audiobook help lock my attention down to two tracks instead of half a dozen: I'm either thinking about my task, or the words in my ear. Obviously your mileage may vary, but ADHD people develop all sorts of coping methods, so my broad advice is "experiment with lots of things to help get things done, even if other people are skeptical of their effectiveness."
Replies from: somnicule↑ comment by someonewrongonthenet · 2015-01-21T04:12:01.108Z · LW(p) · GW(p)
use that information to my advantage
You can get accommodations for many academic activities if you are still a student.
comment by Gram_Stone · 2015-01-19T12:26:18.746Z · LW(p) · GW(p)
I've never studied any branch of ethics, maybe stumbling across something on Wikipedia now and then. Would I be out of my depth reading a metaethics textbook without having read books about the other branches of ethics? It also looks like logic must play a significant role in metaethics given its purpose, so in that regard I should say that I'm going through Lepore's Meaning and Argument right now.
Replies from: TheAncientGeek, gjm, Gram_Stone, is4junk, None, cameroncowan↑ comment by TheAncientGeek · 2015-01-19T18:44:38.264Z · LW(p) · GW(p)
You could dip a toe into the Stanford Encyclopedia of Philosophy.
↑ comment by gjm · 2015-01-19T13:50:55.598Z · LW(p) · GW(p)
The best way to tell is to read the metaethics textbook and see what happens. If it turns out you need a crash course on (say) utilitarian thinking, you can always do that and then return to metaethics.
What is your reason for wanting to read a metaethics textbook? I ask because the most obvious reason (I think) is "because I want to live a good life, so I want to figure out what constitutes living a good life, and for that I need a coherent system of ethics" but I'd have thought that most people thinking in those terms and inclined to read philosophy textbooks would already have looked into (at least) whatever variety of ethics they find most congenial.
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-01-19T14:47:43.055Z · LW(p) · GW(p)
Good point. I ordered it yesterday, and it's supposed to be an easy introduction, so we'll see what happens.
Well, it seems to me that there are so many different schools of normative ethics that unless we're all normative moral relativists (I don't think we are), most people must be wrong about normative ethics. I've seen claims here that mainstream metaethics has it all wrong, I just found out that lukeprog's got his own metaethics sequence, and some of the things that he claims to resolve seem like they would have profound implications for normative ethics. I guess I feel like I'm saving myself time not reading about a million different theories of normative ethics (kind of like I think I'm saving myself time not reading about a million different types of psychotherapy, unless it's for some sort of test) and just learning about where the mainstream field of metaethics is, and then seeing where Eliezer and Luke differ from it, and whether I agree.
Is it crazy to want to have some idea of what ethical statements mean before I use them as a justification for my behavior? That you say "whatever variety of ethics they find most congenial," makes me think that you might not think it is that crazy. And I mean, I'm at least not murdering anyone right now; I have time for this. And if I don't ever take the time, then I could end up becoming the dreaded worse-than-useless.
I'm also curious about FAI so I'm generally schooling myself in LW-related stuff, hence the books on logic and AI and ethics. I'm working towards others as well.
↑ comment by Gram_Stone · 2015-01-20T02:43:55.720Z · LW(p) · GW(p)
I found my own answer in the comments of the course recommendations for friendliness thread. Luke says:
It's really hard to find good writing on metaethics. My recommendation would be to read the chapter on ethical reductionism from Miller's [Contemporary Metaethics: An Introduction], my own unfinished sequence on metaethics, and Eliezer's new sequence (most of it's not metaethics, but it's required reading for understanding the explanation of his 2nd attempt to explain metaethics, which is more precise than his first attempt in the earlier Sequences).
On normative ethics, Luke says elsewhere:
I don't read much on normative ethics, but Smart & Williams' Utilitarianism: For and Against has some good back-and-forth on the major issues, at least up to 1973. The other advantage of this book is that it's really short.
But there are probably better books on the subject I'm just not aware of.
From what I see, he seems to attribute a similarly low significance to most of contemporary normative ethics.
Also, the Stanford Encyclopedia of Philosophy has been suggested twice, in case I do need to know anything in particular about normative ethics. I'll keep that in mind.
For posterity, as far as I can tell, the most popular undergraduate text on normative ethics is Rachels' The Elements of Moral Philosophy. The 7th edition has good reviews on Amazon. Apparently the 8th edition is too new to have reviews.
Replies from: Furcas↑ comment by Furcas · 2015-01-20T05:12:23.072Z · LW(p) · GW(p)
and Eliezer's new sequence (most of it's not metaethics, but it's required reading for understanding the explanation of his 2nd attempt to explain metaethics, which is more precise than his first attempt in the earlier Sequences).
Where is this 2nd attempt to explain metaethics by Eliezer?
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-01-20T14:02:44.164Z · LW(p) · GW(p)
I'm pretty new, I couldn't tell you for sure. I'm pretty sure it's two posts in that second sequence: Mixed Reference: The Great Reductionist Project and By Which It May Be Judged. I'm pretty sure the rest of the sequence at least is necessary to understand those.
↑ comment by cameroncowan · 2015-01-20T05:22:52.673Z · LW(p) · GW(p)
Oxford's Rhetoric could be helpful in this area.
comment by SanguineEmpiricist · 2015-01-19T00:45:50.007Z · LW(p) · GW(p)
http://www.fooledbyrandomness.com/genealogy.jpg
Genealogy of the ideas contained in Taleb's work. Pretty useful. I had it embedded but it took up the entire page for me.
comment by Lumifer · 2015-01-22T17:23:32.652Z · LW(p) · GW(p)
People are perennially interested in the reliability of hard drives. Here is useful hard data. Summary:
At Backblaze, as of December 31, 2014, we had 41,213 disk drives spinning in our data center, storing all of the data for our unlimited backup service. That is up from 27,134 at the end of 2013. ... The table below shows the annual failure rate through the year 2014.
tl;dr Avoid 3TB Seagate Barracuda drives.
Replies from: zedzed, Nornagest↑ comment by zedzed · 2015-01-25T15:23:25.728Z · LW(p) · GW(p)
I spend time in hardware enthusiast communities and am not so impressed with Backblaze. Even here, the Seagate failure rates seem suspiciously anomalous.
Also, consider SSDs, which are probably a better match for most people here (my rig has run a 256 GB SSD for the past 2.5 years and I've yet to want for more storage). Especially for laptops; they use less power (= your battery lasts longer) and can stand up to shock (so your laptop doesn't break if you drop it).
Replies from: Lumifer↑ comment by Lumifer · 2015-01-26T17:09:14.474Z · LW(p) · GW(p)
I did not mean to endorse any particular service or give recommendations as to which storage devices should people buy. I found hard data which is rare to come by, I shared it. If you think the data is wrong or misleading, do tell.
Replies from: zedzed↑ comment by zedzed · 2015-01-26T19:28:32.679Z · LW(p) · GW(p)
Consensus is that modern HDD's from reputable manufacturers have approximately equal low failure rates, especially after the first year. You should still back up important data (low != 0), but the differences failure rates in consumer space is small enough to not really sway purchasing decisions.
Their methodology probably doesn't extrapolate well because they're testing the drives in what amounts to a NAS, and the WD Reds (which did well) are NAS drives, designed to operate 24/7 with vibration and mediocre cooling, whereas the Seagate Barracudas are just absolutely not NAS drives (unlike, say, the Seagate NAS drives). So it's not really surprising they had a much higher failure rate, but it'd also be incorrect to conclude that you should avoid them. If I'm building a rig for work, internet use, or gaming {1}, then my HDD's going to be in a well-cooled, non-vibrating environment, and not in use 24/7, so I'm essentially throwing away a 15% price premium for the WD Reds (or 60% for the HGST Deskstars). OTOH, if you're backing up your data locally on a NAS, pay the gorram premium.
{1} Again, though, SSDs are increasingly likely the way to go. You can get a sufficiently good 256 GB SSD for about the price of a 3 TB HDD, and if you're never going to use more than 250 GB (which, I'm guessing, covers at least 80% of people reading this who don't already know whether an SSD or HDD better meets their needs), you're essentially getting substantially better performance (up to an order of magnitude), more reliability, and less noise for free. I harp on this because SSDs come in a 2.5-inch form factor, and the more the standard storage option is SSD, the fewer cases will have a whole bunch of room taken up by 3.5-inch bays I don't use. More importantly, there'll finally be budget laptops that I don't have to immediately take apart, clone the OS onto an SSD, reassemble, and figure out what to do with the HDD it came with just to get a decent experience. Gah! SSDs are the right choice for most people, and there are externalities when they get HDDs instead because "more gigabytes".
Replies from: Lumifer↑ comment by Lumifer · 2015-01-26T19:44:36.069Z · LW(p) · GW(p)
Consensus is that modern HDD's from reputable manufacturers have approximately equal low failure rates, especially after the first year.
I am sorry, the link shows hard data which disproves that statement and not in a gentle way, either.
So, it's not really surprising they had a much higher failure rate
Didn't your first sentence state that all failure rates are "approximately equal"? Make up your mind.
my HDD's going to be in a well-cooled, non-vibrating environment
Assumption not in evidence. I've seen a LOT of computers totally taken over by dust bunnies :-) The reason you go look at that grey disk where the fan vent used to be is that your bios starts screaming at you that the machine is overheating :-D
SSD's are the right choice for most people
Yes, but that's irrelevant to the original post which looks at reliability of rotating-platter hard drives. If you think you don't care about the issue, well, what are you doing in this subthread?
Replies from: zedzed↑ comment by zedzed · 2015-01-26T21:01:28.201Z · LW(p) · GW(p)
My above comment was poorly written. Sorry. Hem.
Consumer-grade HDDs, used properly, all have about the same low failure rate. If you treat your desktop like a NAS or server, they will drop like flies (as evidenced). If you treat your desktop like a desktop, then a lot of the price-raising enterprise-grade features (vibration resistance, 24/7 operation) count for zilch. They're still higher-end drives, and will last longer, but assuming you give your desktop a fraction of the maintenance you give your car (like taking 5 minutes to blow it out every other year), not a lot longer.
Assumption not in evidence.
Mea culpa. I'll give you heat, but vibration tolerance and 24/7 operation are enterprise-grade features with minimal relevance to desktop hard drives. Evidence. Evidence. Why I'm inclined to distrust anything Backblaze publishes + evidence.
tl;dr Looking at this data and concluding "avoid Seagate Barracuda drives" is a bit like noticing that bikers survive accidents more often when they're wearing helmets and then issuing a blanket recommendation to a population primarily of car drivers to wear bike helmets. Sure, it'll reduce your expected mortality when you go out for a drive, but not nearly as much as you'd expect from the biking numbers.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-27T01:29:29.659Z · LW(p) · GW(p)
Consumer-grade HDD's, used properly, all have about same, low failure rate. If you treat your desktop like a NAS or server, they will drop like flies (as evidenced).
Sigh. No. Really, go look at the data. I am not going to take the "consensus" of the AnandTech crowd over it.
Hitachi Deskstar 7K2000 is a consumer-grade non-enterprise hard drive. In the sample of ~4,600 drives it has 1.1% annual failure rate in the NAS environment.
Seagate Barracuda 7200.14 is a consumer-grade non-enterprise hard drive. In the sample of ~1,200 drives it has 43.1% annual failure rate in the NAS environment.
Those are VERY VERY DIFFERENT failure rates.
I, for example, have a five-drive ZFS array at home which is on 24/7. I am very much interested in which kinds of drives will give me a 1% failure rate and which will give me a 43% failure rate. I am not average, but I hardly think I'm unique in that respect among the LW crowd.
Replies from: zedzed↑ comment by zedzed · 2015-01-28T01:00:31.235Z · LW(p) · GW(p)
Do we actually disagree about anything?
We certainly agree that the Barracudas are crap in NASes. I believe that the WD Reds are a major improvement and the Hitachi Deskstars a further improvement, which is just reading the Backblaze data (which is eminently applicable to NAS environments), so we're in complete agreement that, for NASes, Barracuda << Red < 7K2000.
However, I also contend that, in a desktop PC, a lot of what makes the Reds and 7K2000 more reliable (e.g. superior vibration resistance) will count for very little, so they'll still fail less often, just not 1/40th as much. Even if they're four times as reliable, moving from, say, a 4% annual failure rate to a 1% annual failure rate may not be worth the price premium (using Newegg pricing, the Hitachi drive costs 72.5% more, but on Amazon the Hitachi drive is cheaper. Yay Hitachi?), especially since RAID 1 is a thing (which would give us a 0.16% annual failure rate at a 100% price premium). Obviously, if you can find higher-quality drives for less than lower-quality drives, use those. But in what we'd naively expect to be the normal case, if you're paying for features that drastically reduce failure rates in NAS environments, but using your drives in a desktop environment where these features are doing little to extend your drive life, then you're probably better off using RAID 1.
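Where that 0.16% comes from, assuming the two mirrored drives fail independently at the 4% rate above:
p_single = 0.04           # annual failure rate of one drive
p_mirror = p_single ** 2  # a RAID 1 mirror loses data only if both drives fail
print(f"{p_mirror:.2%}")  # 0.16%
(A simplification: it ignores that you'd replace the first failed drive, which makes the real figure even lower.)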
(Why do I use low single-digit annual failure rates? Because I remember Linus of Linus Tech Tips, who worked as a product manager at NCIX and therefore is privy to RMA and warranty rates, implied that's about right. He produces a metric shit-ton of content, though, so there's no way I'm going to dig it up.)
I'm also interested in why you're dismissive of AnandTech. I currently believe they're the gold standard of tech reviews, but if they're not as reputable as I believe they are, I would very much like to stop believing they are.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-28T16:42:56.240Z · LW(p) · GW(p)
Do we actually disagree about anything?
Yes. You keep saying that there are no significant differences in reliability between hard drives of similar class (consumer or enterprise, basically) in similar conditions. I keep saying there are.
I'm also interested why you're dismissive of AnandTech. I currently believe they're gold standard of tech reviews, but if they're not as reputable as I believe they are, I would very much like to stop believing they are.
I don't follow the hardware scene much nowadays, but I don't think AnandTech was ever considered the "gold standard" except maybe by AnandTech itself. It's a commercial website, not horrible, but not outstanding either. Garden-variety hardware reviews, more or less. In any case, I trust discussion on the forums much more than I trust official reviews (recall Sturgeon's Law).
↑ comment by Nornagest · 2015-01-22T21:36:32.960Z · LW(p) · GW(p)
I've found that modern hard drives tend to be quite reliable for consumer purposes; we've come a long way since the bad old days of the Click of Doom.
Their enclosures, not so much. I've had three backplanes for external hard drives, from three different manufacturers, fail in as many years. And one cable. But that table won't give you any information on how common this sort of thing is or how to mitigate your risk.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-23T00:21:15.205Z · LW(p) · GW(p)
modern hard drives tend to be quite reliable for consumer purposes
Heh. I'd say the reverse: modern hard drive are not reliable enough for consumer purposes since consumers typically don't make backups and a failed hard drive is a disaster. They are sufficiently reliable for professional purposes where when a drive fails you just swap in another one and continue as before.
Their enclosures, not so much
Yeah, these are usually cheaply made. But then if an enclosure fails you just get another one and no data is lost or needs to be recovered from backups.
Replies from: Nornagest↑ comment by Nornagest · 2015-01-23T00:53:07.002Z · LW(p) · GW(p)
But then if an enclosure fails you just get another one and no data is lost or needs to be recovered from backups.
Unless the manufacturer in their infinite wisdom has enabled hardware encryption with the keys stored in the backplane.
Replies from: Lumifer
comment by Alsadius · 2015-01-20T16:01:32.487Z · LW(p) · GW(p)
I'm looking at setting up my own website, both for the experience and to allow hosting of some files for a game I'm making. What I'd like is to register a domain, probably (myrealname).com and/or .ca, both of which are available, set up a wiki on it, and host a few (reasonably large) files. Thing is, I have a computer that stays on 24/7, and I'm generally competent with computers, so I suspect I can probably get by without paying for hosting, which appeals to me.
Can anyone link me to guides on how to do this? My Googling is turning up shockingly little, just "Pay someone for hosting!". I've registered domains before, but never done any hosting.
Replies from: Lumifer, ZankerH, btrettel, Douglas_Knight, mkf↑ comment by Lumifer · 2015-01-20T18:16:20.689Z · LW(p) · GW(p)
The two relevant questions here are:
What's your ISP's upload speed and stated policy towards home servers? A lot of ISPs prohibit servers for residential customers, though actual enforcement is rare.
Are you sure you're up to the task of handling security for your home server that will be exposed to the 'net?
↑ comment by Alsadius · 2015-01-21T03:56:27.791Z · LW(p) · GW(p)
What's your ISP's upload speed and stated policy towards home servers? A lot of ISPs prohibit servers for residential customers, though actual enforcement is rare.
You're right, it's prohibited. That doesn't concern me too much.
Are you sure you're up to the task of handling security for your home server that will be exposed to the 'net?
Frankly, no, I'm not sure at all. Good point :/
Follow-up question: What sort of domain/hosting sites can give me, say, a gig of storage and a few gigs a month of bandwidth for a low price?
Replies from: philh↑ comment by philh · 2015-01-21T10:20:07.132Z · LW(p) · GW(p)
You can run a small server on EC2 for free for a year. After that there will be cheaper options, but not necessarily cheaper enough for you to care. http://aws.amazon.com/ec2/pricing/
↑ comment by ZankerH · 2015-01-20T17:17:14.299Z · LW(p) · GW(p)
You'll need to configure and run a web server on your computer. The most commonly used stack - publicly documented, free, and accessible to people just trying stuff out - is LAMP. You'll then need to point your domain at the IP address of your server.
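(If all you want is to serve a few static files rather than run a full LAMP stack, even the Python standard library will do. A minimal sketch, with a hypothetical port:)
# Serves the current directory over HTTP on port 8080 (Python 3 stdlib only).
# You'd still point a DNS A record at your public IP and forward the port on your router.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("", 8080), SimpleHTTPRequestHandler).serve_forever()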
Thing is, I have a computer that stays on 24/7
What kind of hardware are we talking about? How much traffic are you looking at supporting? What kind of internet connection do you have at home? Are you familiar with the concept of mathematical multiplication?
Replies from: Alsadius↑ comment by Alsadius · 2015-01-21T04:00:46.376Z · LW(p) · GW(p)
Regular home PC, fairly dated at this point. Not much traffic is intended, though - it'll have a fairly quiet home page for my job (I'm not allowed to have more, for tedious reasons of legal compliance in advertising), and a hidden wiki that'll be seen by maybe a dozen friends. It's a toy site, not anything serious.
Re mathematical multiplication, I assume you don't mean 3x4=12. Is this some sort of traffic collision issue?
Replies from: ZankerH↑ comment by ZankerH · 2015-01-21T08:25:02.109Z · LW(p) · GW(p)
Re mathematical multiplication, I assume you don't mean 3x4=12.
As it happens, I do. Depending on what you're planning on hosting, even trying to serve "a few reasonably large files" may be unreasonably slow on a home internet connection. Divide your upload speed by the number of concurrent users you expect - that's the theoretical maximum download speed they can expect from your site.
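For illustration, with hypothetical numbers:
upload_mbps = 10  # nominal home upload speed, in megabits per second
users = 4         # concurrent downloaders
file_mb = 300     # file size in megabytes

per_user = upload_mbps / users         # 2.5 Mbit/s each
minutes = file_mb * 8 / per_user / 60  # megabytes -> megabits, then seconds -> minutes
print(f"{per_user} Mbit/s per user, ~{minutes:.0f} min per file")  # ~16 min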
Replies from: Alsadius↑ comment by Alsadius · 2015-01-21T15:11:37.410Z · LW(p) · GW(p)
Ah, fair. I have 10 Mbps nominal upload, and the files in question are a few hundred megs (so too big to pass around by things like email, but not large by the standards of the modern world). I'm not terribly worried about upload speed if it takes five minutes.
↑ comment by btrettel · 2015-01-20T17:16:41.445Z · LW(p) · GW(p)
Acquiring hosting is straightforward. Pick a company with a good reputation, a reasonable price, and all the features you need, sign up, and pay. (I can't be of much help here, as I've used the same hosting company since 2004 or so, and I'm not sure if I could get a better deal elsewhere.)
The remainder is more specific, and that might be why you are having trouble finding tutorials. E.g., uploading and setting up a wiki could mean you read tutorials on SSH or FTP, tutorials on file permissions, and/or tutorials on the wiki-specific details of setup. All of this depends on your experience level. When I started out, I knew none of this, and I basically figured it out as I went along.
↑ comment by Douglas_Knight · 2015-01-28T22:33:39.319Z · LW(p) · GW(p)
Start by paying someone for hosting. That's enough to learn about. Maybe start by paying Amazon nothing for a year of EC2 hosting. Once you understand how to host a website, you can migrate it to your home computer, where you will run into additional difficulties, like installing a base webserver and automatically updating your DNS. But probably you should stick with paid hosting. For static files, Amazon S3 is extremely cheap. For a full-fledged webserver to install your wiki, Nearly Free Speech will do, and is probably cheaper than Amazon, especially at your usage level.
comment by CronoDAS · 2015-01-25T10:54:08.112Z · LW(p) · GW(p)
I've got a problem. My sleep schedule is FUCKED UP.
Yesterday, I went to bed at around 8:00 AM and got up at 10:00 PM. I don't normally sleep 14 hours, but I've somehow become nocturnal; sleeping from 7 AM until 5 PM isn't particularly unusual for me. I'm not actually sleep-deprived, but always sleeping through "normal business hours" tends to cause me problems - I can't get to the bank even when it's important - and it isn't very convenient for my girlfriend either. My father jokes that I must be turning into a vampire because I'm never awake when the sun is up. Now, I don't actually have a job or go to school, and my only fixed-time obligation is to help my wheelchair-bound mother get into bed, which tends to start at around 1 AM and finish between 3 AM and 4 AM. (There's nobody else to do it at that hour, and getting her to go to bed at a different time, or to get ready faster, is practically impossible and not worth the screaming.)
Any advice?
Replies from: Viliam_Bur, Manfred, gjm, tut↑ comment by Viliam_Bur · 2015-01-26T09:07:53.334Z · LW(p) · GW(p)
Some kind of polyphasic sleep? E.g. from 9 PM to 1 AM (4 hours) and then from 4 AM to 8 AM (4 hours).
↑ comment by gjm · 2015-01-25T14:32:04.224Z · LW(p) · GW(p)
It's hard to see what scope there is for the problem to get all that much better if you are required to be awake from 1am to 3am (or later) every day. It seems like the best you can do is to try to establish a routine of always going straight to bed (and not reading, browsing the internet, etc., once there) after dealing with your mother, which might get you a ~4am-12pm sleeping time on typical days.
Replies from: CronoDAS
comment by buybuydandavis · 2015-01-19T00:11:57.153Z · LW(p) · GW(p)
Anyone have a source for a summary of full life extension testing/supplementation regime?
Thiel? Kurzweil?
I've let things slide for a while, and want to get back on track with a full regime, including hormones and pharmaceuticals. I'm thinking cardiovascular, blood sugar, hormone, and neuroprotection.
Replies from: None, Punoxysm, James_Miller, moridinamael↑ comment by [deleted] · 2015-01-19T03:03:24.465Z · LW(p) · GW(p)
I would not recommend hormones. Beware of Algernon's law - if a simple biochemical tweak were always helpful, it'd probably already be that way. In particular, a lot of things that try to work against 'aging' as opposed to specific dysfunctions will probably cause cancer. Thiel is a particular offender there; he recently started taking HGH with the justification "we'll probably have cancer licked in a decade or two". I read that statement to some people in my lab, where it provoked universal laughter.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2015-01-19T03:51:33.413Z · LW(p) · GW(p)
if a simple biochemical tweak were always helpful, it'd probably already be that way.
The key question: helpful, for what?
There's no reason to think Evolution has optimized my machinery for longevity.
As for giggles, I'll bet on Kurzweil's predictions over the people in your lab.
Replies from: RowanE↑ comment by RowanE · 2015-01-19T12:52:34.946Z · LW(p) · GW(p)
The longer you live, and especially the longer you remain healthy, the more evolutionarily fit you are. At least insofar as it doesn't funge against other traits evolution cares even more about, there's every reason to think evolution has optimized your machinery for longevity. We might find the non-helpful side of a tradeoff to actually be beneficial to modern human eyes even if it's horrible evolutionarily (an ideal example would be "doubles your lifespan, makes you infertile"), but there won't be anything evolution would see as a free lunch or a good tradeoff.
CellBioGuy attributed the "cancer licked in a decade or two" prediction to Thiel, not Kurzweil, do you actually have a source for it from Kurzweil? And does he have any particular reasons for stating it? Because even as someone on board with the singularity thing, that sounds like an insane pipe dream.
Replies from: gjm↑ comment by gjm · 2015-01-19T13:47:49.628Z · LW(p) · GW(p)
A couple of remarks to expand on RowanE's points for anyone who may be skeptical that evolution cares at all about longevity past (something like) typical childrearing age:
- Men (but not women) can continue to father children pretty much as long as they live.
- Children may receive some support from extended families; the longer (e.g.) grandparents remain alive and healthy, the better for them (hence for their genes, which overlap a lot with the grandparents').
- I bet most things that make you more likely still to be alive at 80 also make you likely to be healthier (hence more useful to your children) at 30.
↑ comment by [deleted] · 2015-01-19T15:07:54.249Z · LW(p) · GW(p)
This is all true. However, buybuydandavis has a point. Evolution optimizes for offspring, and longevity is only selected for as a means to that end. When you selectively breed and mutate fruit flies and nematodes for lifespan over hundreds of generations you can double or triple it, universally at the expense of total offspring. Granted, mammals are much more K-selected (putting lots of effort into a few offspring) than r-selected species that throw hundreds or even thousands of eggs to the wind per generation, so lifespan does matter at least some to us, and we probably already lie somewhere along that evolutionary axis away from the flies. But you can still see how there might be some tension between the two optimizations, and we're certainly not perfectly optimized for longevity.
That doesn't change my assessment that, within any given existing evolved, tuned organism, a lot of the evidence I've seen suggests that mucking with hormone levels exogenously (as opposed to endogenously, through general health, activity, diet, etc.) to try to keep energy or cell division or whatever up, in the absence of an existing pathology of that hormone system, will probably increase cancer rates.
There's actually a promising line of research on a substance being developed by one of the scientific granddaddies of my current metabolism research that appears to be broadly neuroprotective via messing with regulation of aerobic respiration, something that also goes weird in muscles with age. I greatly look forward to seeing if it increases tumor rates too [there are biochemical mechanistic reasons I think it might] or if that particular dysregulation is something you can attack without nasty side effects (though I gotta say I would take a raised cancer risk to hold off Alzheimer's or parkinsonism or traumatic brain injury any day).
↑ comment by James_Miller · 2015-01-19T00:37:22.191Z · LW(p) · GW(p)
Start with medical tests for cholesterol, blood pressure, vitamin D, magnesium, diabetes and anything else your doctor recommends based on your age, family and disease history.
↑ comment by moridinamael · 2015-01-19T04:10:27.064Z · LW(p) · GW(p)
I don't know if this is what you meant by "summary", but Kurzweil's book Transcend (Amazon), co-written with an actual medical doctor, is his most up-to-date effort. I've mostly read it, and it seems well researched and also explains the science behind its recommendations.
I also thought I'd mention that there are now certain compounds (Wikipedia) which show some evidence of initiating telomerase production in adult humans. If these compounds work as well in humans as they have been shown to work in mice, they should significantly increase your healthspan.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2015-01-19T07:25:58.177Z · LW(p) · GW(p)
an actual medical doctor,
Well...
Dr. Grossman is licensed as an M.D., and an M.D.(H), a homeopathic medical doctor.
Replies from: moridinamael
↑ comment by moridinamael · 2015-01-19T21:27:01.347Z · LW(p) · GW(p)
Oh.
comment by JoshuaZ · 2015-01-25T01:04:27.284Z · LW(p) · GW(p)
(Warning: politics)
Posting a few links to relevant followups to the "Comment 171" situation and the related sexual harassment scandal and MIT's reaction which prompted that discussion. I'm posting these because the issue has come up in the last few weeks of open threads.
This piece seems like an excellent example of reading others as charitably as possible and essentially steelmanning every argument involved. It also gives a pretty good summary of the entire situation with relevant links.
Also, one of the women involved in the original sexual harassment situation has come forward to provide some details of what was actually going on: here. Here is the Slashdot thread on the article.
comment by Richard Korzekwa (Grothor) · 2015-01-21T22:05:24.201Z · LW(p) · GW(p)
I am taking a graduate course called "Vision Systems". This course "presents an introduction to the physiology, psychophysics, and computational aspects of vision". The professor teaching the course recommended that those of us who have not taken at least an undergraduate course in perception get an introductory book on the subject. The one he recommends, which is also the one he uses for his undergraduate course, is this: http://www.amazon.com/Sensation-Perception-Looseleaf-Third-Edition/dp/0878938761 Unfortunately, this book goes for $60-75 used loose-leaf, all the way up to $105 for a new hardcover. I'd rather not pay that unless I can get an independent recommendation for it, or for some other book on the subject.
Does anybody here have a recommendation? Are there good course notes available on the web somewhere?
Replies from: None
comment by Evan_Gaensbauer · 2015-01-20T05:38:55.497Z · LW(p) · GW(p)
Scott Alexander, alias Yvain, conducted a companion survey for the readership of his blog, Slate Star Codex, to parallel and contrast with the survey of the LessWrong community. The issue I ponder below will likely come to light when the results from that survey are published. However, I'm too curious to wait, even if present speculation is later rendered moot.
Slate Star Codex is among my favorite websites, let alone blogs. I spend more time reading it than I do on LessWrong, and it may be second only to Wikipedia or Facebook among the websites I spend the most time on. Anyway, like almost everyone else reading this, I migrated to Slate Star Codex from LessWrong. So it seems alien to me that Slate Star Codex would have a readership that doesn't have virtually complete overlap with the LessWrong readership.
I imagine readers of Slate Star Codex not familiar with LessWrong include:
- medical professionals within a couple of degrees, socially, of Scott's professional circles
- some neoreactionaries, and social justice activists, from across the blogosphere
Does anyone else have an impression of who might read Slate Star Codex who doesn't read LessWrong? Alternatively, if you don't like Slate Star Codex, or are turned off by it, I'm curious as to why. I've encountered virtually unanimous appreciation of Slate Star Codex from among my friends who read LessWrong, so I'm fascinated by the possibility of outlying opinions.
Replies from: ahbwramc, knb↑ comment by ahbwramc · 2015-01-20T16:43:14.984Z · LW(p) · GW(p)
A number of SSC posts have gone viral on Reddit or elsewhere. I'm sure he's picked up a fair number of readers from the greater internet. Also, for what it's worth, I've turned two of my friends on to SSC who were never much interested in LW.
But I'll second it being among my favourite websites.
Replies from: beoShaffer↑ comment by beoShaffer · 2015-01-22T05:20:06.088Z · LW(p) · GW(p)
Similarly, I've had several non-LW friends who have started reading SSC after semi-frequently being linked there by my FB.
comment by Evan_Gaensbauer · 2015-01-20T05:31:04.495Z · LW(p) · GW(p)
More on Slate Star Codex than on LessWrong, there is discussion of memes as a useful concept for explaining or thinking about cultural evolution. The term 'memetics' is thrown around to denote the theory of memes as a field of inquiry. I want to know more about memetics before deciding whether it's worth my time to think about it more deeply; if not outright a pseudoscience, it frequently skirts that border. I expect the discourse on memes might be at least a bit less speculative if we amateur memeticists here knew more about it. Thus, I've put together a post covering memetics. Some of it is notes on the history of memetics as a field, and the rest is other material I found interesting. I don't go in-depth in explaining any idea, but sources are provided so readers can pursue individual, uh, memes...from within memeplexes themselves:
That's a link to the note as published by me on Facebook, as I don't have my own blog. It should be accessible publicly. If you can't access it, logged into Facebook or not, let me know, and I'll see if I can solve that problem.
Replies from: somnicule↑ comment by somnicule · 2015-01-20T09:20:48.959Z · LW(p) · GW(p)
You could post this as a top level discussion post here, if you want to make it more available and reduce trivial inconveniences to those without access to facebook.
comment by passive_fist · 2015-01-19T03:15:45.206Z · LW(p) · GW(p)
I've been thinking about (and writing out my thoughts on) the real meaning of entropy in physics and how it relates to physical models. It should be obvious that entropy(physical system) isn't well-defined; only entropy(physical model, physical system) is defined. Here, 'physical model' might refer to something like the kinetic theory of gases, and 'physical system' would refer to, say, some volume of gas or a cup of tea. It's interesting to think about entropy from this perspective because it becomes related to the subjectivist interpretation of probability. I want to know if anyone knows of any links to similar ideas and thoughts.
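A toy illustration of the entropy(physical model, physical system) point, sketched in Python (the two-observer setup is my own example, not from any particular source): the physical system is identical in both cases; only the distribution the describer assigns over its microstates differs, and with it the entropy.

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), in bits; p = 0 or 1 terms contribute 0."""
    return -sum(p * math.log2(p) for p in probs if 0 < p < 1)

# Toy 'gas': 4 particles, each in the left or right half of a box,
# giving 2**4 = 16 possible microstates.

# Observer A only knows each particle is somewhere in the box,
# so treats all 16 microstates as equiprobable.
print(shannon_entropy([1 / 16] * 16))  # 4.0 bits

# Observer B has measured every particle's position exactly:
# a single microstate with probability 1.
print(shannon_entropy([1.0]))          # 0 bits
```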
Replies from: mwengler, spxtr, buybuydandavis, shminux, Pfft↑ comment by mwengler · 2015-01-20T12:59:25.920Z · LW(p) · GW(p)
There are approximations in figuring entropy and thermal statistics that may be wrong in very nearly immeasurable ways. The one that used to stick in my head was the calculation of the probability of all the gas in a volume showing up briefly in one-half the volume. Without doing math I figured it is actually much less than the classic calculated result, because the classic result assumes zero correlation between where any two molecules are, and once any kind of significant density difference exists between the two sides of the volume this will break.
But entropy is still real in the sense that it is "out there." An entire civilization is powered (and cooled) by thermodynamic engines, engines which quite predictably provide useful functionalities in ways predictable in detail from calculations of entropy.
A glass of hot water burns your skin even if you know the water's and the skin's precise characterization in parameter space before they come in contact. Fast-moving (relative to the skin) molecules of water break the bonds of some bits of skin they come in contact with. On the micro scale it may look like a scene from The Matrix, with a lot of slow-moving machine-gun bullets. The details of the destruction may be quite beautiful and "feel" cold, but essentially, thanks to the central limit theorem, a whole lot of what happens will be predictable in a quite useful, and quite unavoidable, way without having to appeal to the detail.
I think that in the only sense in which you can extract energy from water with a specially built machine custom-designed for the water's current parameter space, it is the machine which is at zero, or at least low, temperature. And so the fact that useful energy can be extracted from the interaction of finite-temperature water and a cold machine is totally consistent with entropy being real: thermal differences can power machines. And they do - witness the cars, trucks, airplanes and electric grid that are essential for our economy. The good news is you can get all the energy you need without knowing the detailed parameter space of the hot water, which is helpful because you then don't have to redesign your cold machine every few microseconds as you bring in new hot water from which to extract the next bit of energy.
Entropy is as real as energy whether it feels that way or not, and that is why machines work even when left unattended by consciousnesses to perceive their entropy and its flows.
Replies from: passive_fist, Viliam_Bur↑ comment by passive_fist · 2015-01-20T20:04:57.401Z · LW(p) · GW(p)
I think you're getting several things wrong here.
because the classic result assumes zero correlation between where any two molecules are, and once any kind of significant density difference exists between the two sides of the volume this will break.
The assumption of zero correlation is valid for ideal gases. It will not break if there is a density difference. We're talking about statistical correlation here.
Entropy is as real as energy whether it feels that way or not, and that is why machines work even when left unattended by consciousnesses to perceive their entropy and its flows.
"Entropy is in the mind" doesn't mean that you need consciousness for entropy to exist. All you need is a model of the world. Part of Jaynes' argument is that even though probabilities are subjective, entropy emerges as an objective value for a system (provided the model is given), since any rational Bayesian intelligence will arrive at the same value, given the same physical model and same information about the system.
Replies from: mwengler↑ comment by mwengler · 2015-01-20T23:26:31.320Z · LW(p) · GW(p)
because the classic result assumes zero correlation between where any two molecules are, and once any kind of significant density difference exists between the two sides of the volume this will break.
The assumption of zero correlation is valid for ideal gases. It will not break if there is a density difference. We're talking about statistical correlation here.
Statistical independence means the chance that a molecule is at a particular spot depends not at all on where the other molecules are. Certainly if the molecules never hit each other, and only bounce off the walls of the volume, then this would be true: the molecules don't interact with each other, so their probability of being one place or another is not changed by putting the other molecules anywhere, as long as they don't interact.
But molecules in a gas do interact: they bounce off each other. Even an ideal gas. There is an average distance they travel before bouncing off another molecule, called the mean free path. A situation where the mean free path is << the size of the volume is typical at STP.
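For reference, the standard kinetic-theory expression for the mean free path (a textbook result, stated here without derivation), with n the number density, d the molecular diameter, and p the pressure:

```latex
\lambda = \frac{1}{\sqrt{2}\,\pi d^{2} n} = \frac{k_B T}{\sqrt{2}\,\pi d^{2} p}
```

For air at STP this comes out to roughly 70 nm, vastly smaller than any lab-scale container, which is the << condition above.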
Does this interaction break non-correlation? My intuition is that it does. But the thing I know for sure is that the only derivation I have ever seen for calculating the probability that all the gas is in 1/2 the volume was done with the assumptions of zero correlations, which we only know is the case for zero interaction, which is NOT an assumption required in the ideal gas models. And it is certainly not true of any real gases.
"Entropy is in the mind" doesn't mean that you need consciousness for entropy to exist. All you need is a model of the world.
This is as true for Entropy as it is for Energy. By this standard, Entropy and Energy are both in the mind, neither one is "realer" than the other.
Replies from: spxtr↑ comment by spxtr · 2015-01-21T04:53:49.507Z · LW(p) · GW(p)
Entropy is in the mind in exactly the same sense that probability is in the mind. See the relevant Sequence post if you don't know what that means.
The usual ideal gas model is that collisions are perfectly elastic, so even if you do factor in collisions they don't actually change anything. Interactions such as van der Waals have been factored in. The ideal gas approximation should be quite close to the actual value for gases like Helium.
Replies from: mwengler, mwengler↑ comment by mwengler · 2015-01-21T06:26:46.753Z · LW(p) · GW(p)
See the relevant Sequence post if you don't know what that means.
Without a link! So I went to the sequences page in the wiki and the word entropy doesn't even appear on the page! Good job referring me there without a link.
Entropy is in the mind in exactly the same sense that probability is in the mind.
Okay... Is that the same sense in which Energy is in the mind? Considering that this seems to be my claim that you are responding to, AND there is no reasonable way to get to a sequence page that corresponds to your not-quite-on-topic-but-not-quite-orthogonal response, that would be awfully nice to know.
Are you agreeing with me and amplifying, or disagreeing with me and explaining?
Replies from: spxtr↑ comment by spxtr · 2015-01-21T06:32:11.886Z · LW(p) · GW(p)
Replies from: mwengler↑ comment by mwengler · 2015-01-21T06:49:36.030Z · LW(p) · GW(p)
Thank you.
The thing that leaps out at me is that the rhetorical equation in that article between the sexiness of a woman being in the mind and the probability of two male children being in the mind is bogus.
I look at a woman and think she is sexy. If I assume the sexiness is in the woman, and that an alien creature would think she is sexy, or my wife would think she is sexy, because they would see the sexiness in her, then the article claims I have been guilty of the mind projection fallacy because the woman's sexiness is in my mind, not in the woman.
The article then proceeds to enumerate a few situations in which I am given incomplete information about reality and each different scenario corresponds to a different estimate that a person has two boy children.
BUT... it seems to me, and I would love to know if Eliezer himself would agree, that even an alien given the same partial information would, if it were rational and intelligent, reach the same conclusions about the probabilities involved! So... probability, even Bayesian probability based on uncertainty, is no more or less in my head than is 1+1=2. 1+1=2 whether I am an Alien mind or a Human mind, unlike "that woman is sexy", which may only be true in heterosexual male, homosexual female, and bisexual human minds, but not Alien minds.
But be that as it may, your comment still ignores the entire discussion, which is: is Entropy more or less "real" than Energy? The fact is that Aliens who had steam engines, internal combustion engines, gas turbines, and air conditioners would almost certainly have thermodynamics, and understand entropy, and agree with Humans on the laws of thermodynamics and the trajectories of entropy in the various machines.
If Bayesian probability is in the mind, and Entropy is in the mind, then they are like 1+1=2 being in the mind, things which would be in the mind of anything which we considered rational or intelligent. They would NOT be like "sexiness."
Replies from: gjm↑ comment by gjm · 2015-01-21T15:09:00.783Z · LW(p) · GW(p)
Probability depends on state of knowledge, which is a fact about your mind. Another agent with the same state of knowledge will assign the same probabilities. Another agent fully aware of your state of knowledge will be able to say what probabilities you should be assigning.
Sexiness depends on sexual preferences, which are a fact about your mind. Another agent with the same sexual preferences will assess sexiness the same way. Another agent fully aware of your sexual preferences will be able to say how sexy you will find someone.
I don't see that there's a big difference here. Except maybe for the fact that "states of knowledge", unlike "sexual preferences", can (in principle) be ranked: it's just plain better for your state of knowledge to be more accurate.
Replies from: mwengler↑ comment by mwengler · 2015-01-21T21:53:42.160Z · LW(p) · GW(p)
Well yes. Of course everything you can say about probability and sexiness you can say about Energy, Entropy, and Apple. That is, the estimate of the energy or entropy relationships in a particular machine or experimental scenario depends on the equations for energy and entropy and on the measurements you make on the system to find the values of the elements that go into those equations. Any mind with the same information will reach the same conclusions about the Energy and Entropy that you would, assuming you are all doing it "right." Any intelligence desiring to transform heat-producing processes into mechanical or electrical energy will even discover the same relationships to calculate energy and entropy as any other intelligence, and will build similar machines, machines that would not be too hard for technologists from the other civilization to understand.
Even determining if something is an apple. Any set of intelligences that know the definitions of apples common among humans on earth will be able to look at various earth objects and determine which of them are apples, which are not, and which are borderline. (I'm imagining there must be some "crabapples" that are marginally edible that people would argue over whether to call apples or not, as well as a hybrid between an apple and a pear that some would call an apple and some wouldn't).
So "Apple" "Sexy" "Entropy" "Energy" and "Probability" are all EQUALLY in the mind of the intelligence dealing with them.
If you check, you will see this discussion started by suggesting that Energy was "realer" than Entropy. That Entropy was more like Probability and Sexiness, and thus, not as real, while Energy was somehow actually "out there" and therefore realer.
My contention is that all these terms are equally as much in the mind as in reality, that as you say any intelligence who knows the definitions will come up with the same conclusions about any given real situation, and that there is no distinction in "realness" between Energy and Entropy, no distinction between these and Apple, and indeed no distinction between any of these and "Bayesian Probability." That pointing out that features of the map are not features of the territory does NOT allow you to privilege some descriptive terms as being "really" part of the territory after all, even though they are words that can and should obviously be written down on the map.
If you are going to explicate further, please state whether you agree or disagree that some of these terms are realer than others, as this is how the thread started and open-ended explication is ambiguous.
Replies from: gjm↑ comment by gjm · 2015-01-21T22:43:05.934Z · LW(p) · GW(p)
So "Apple" "Sexy" "Entropy" "Energy" and "Probability" are all EQUALLY in the mind of the intelligence dealing with them.
Anything at all is "in the mind" in the sense that different people might for whatever reason choose to define the words differently. Because this applies to everything, it's not terribly interesting and usually we don't bother to state it. "Apple" and "energy" are "in the mind" in this sense.
But (in principle) someone could give you a definition of "energy" that makes no reference to your opinions or feelings or health or anything else about you, and be confident that you or anyone else could use that definition to evaluate the "energy" of a wide variety of systems and all converge on the same answer as your knowledge and skill grows.
"Entropy" (in the "log of number of possibilities" sense) and "probability" are "in the mind" in another, stronger sense. A good, universally applicable definition of "probability" needs to take into account what the person whose probability it is already knows. Of course one can define "probability, given everything there is to know about mwengler's background information on such-and-such an occasion" and everyone will (in principle) agree about that, but it's an interesting figure primarily for mwengler on that occasion and not really for anyone else. (Unlike the situation for "energy".) And presumably it's true that for all (reasonable) agents, as their knowledge and skill grow, they will converge on the same probability-relative-to-that-knowledge for any given proposition -- but frequently that won't in any useful sense be "the probability that it's true", it'll be either 0 or 1 depending on whether the proposition turns out to be true or false. For propositions about the future (assuming that we fix when the probability is evaluated) is might end up being something neither 0 nor 1 for quantum-mechanical reasons, but that's a special case.
Similarly, entropy in the "log of number of possibilities" sense is meaningful only for an agent with given knowledge. (There is probably a reasonably respectable way of saying "relative to what one could find out by macroscopic observation, not examining the system too closely", and I think that's often what "entropy" is taken to mean, and that's fine. But that isn't quite the meaning that's being advocated for in this post.)
Sexiness is "in the mind" in an even stronger sense, I suppose. But I think it's reasonable to say that on the scale from "energy" to "sexiness", probability is a fair fraction of the way towards "sexiness".
Replies from: mwengler↑ comment by mwengler · 2015-01-21T23:26:02.737Z · LW(p) · GW(p)
"Entropy" (in the "log of number of possibilities" sense) and "probability" are "in the mind" in another, stronger sense.
Aha! So it would seem the original claim, that "Energy" is "realer" (more like Apple) than Entropy, arises because Entropy is associated with Probability, and Bayesian Probability, the local favorite, is more in the mind than other things because its accurate estimation requires information about the state of knowledge of the person estimating it.
So it is proposed there is a spectrum "in the mind" (or dependent on other things in the mind as well as things in the real world) to "real" (or in the mind only to the extent that it depends on definitions all minds would tend to share).
We have Sexiness is in the mind, and thinking it is in reality is a projection fallacy. At the other end of the spectrum, we have things like Energy and Apple which are barely in the mind, which depend in straightforward ways on straightforward observations of reality, and would be agreed upon by all minds that agreed on the definitions.
And then we have probability. Frequentist definitions of probability are intended to be like Energy and Apple, relatively straightforward to calculate from easy to define observations.
But then we have Bayesian probability, which is a statement which links our current knowledge of various details with our estimate of probability. So considering that different minds can have different bits of other knowledge in them than other minds, different minds can "correctly" estimate different probabilities for the same occurrences, just as different minds can estimate different amounts of sexiness for the same creatures, depending on the species and genders of the different minds.
And then we have Entropy. And somebody defines Entropy as the "log of number of possibilities" and possibilities are like probabilities, and we prefer Bayesian "in the mind" probability to Frequentist "in reality" definitions of probability. And so some people think Entropy might be in the mind like Bayesian probability and sexiness, rather than in reality like Energy and Apple.
Good summary? I know! It is!
So here is the thing. Entropy in physics is defined as dS = dQ_rev / T. That is, the entropy is very deterministically added to a system by heating the system with an unambiguously determined amount of energy dQ_rev, and dividing that amount of energy by an unambiguously determined temperature of the system. That sure doesn't look like it has any probabilities in it. So THIS definition of Entropy is as real as Energy and Apple. And this is where I have been coming from. You, me, and an alien from Alpha Centauri can all learn the thermodynamics required to build steam engines, internal combustion engines, and refrigerators, and we will all find the same definitions for Energy and Entropy (however we might name them), and we will all determine the same trajectories in time and space for Energies and Entropies for any given thermodynamic system we analyze. Entropy defined this way is as real as Energy and Apples.
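For readers keeping score, the three standard textbook forms of entropy being contrasted in this thread (the Clausius, Boltzmann, and Gibbs/Shannon definitions, respectively) are:

```latex
dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad
S = k_B \ln W, \qquad
S = -k_B \sum_i p_i \ln p_i
```

Here W is the number of accessible microstates and the p_i are microstate probabilities; the disagreement in this thread is over whether those probabilities are frequentist or Bayesian.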
But what about that "log of number of possibilities" thing? Well, a more pedantic answer would be that the number of possibilities has nothing to do with probabilities. I have a multiparticle state with known physics of interactions. Its state when first specified, the possibility it initially occupies, has a certain amount of energy associated with it. The energy (we consider only closed systems for now) will stay constant, and EVERY possible point in parameter space which has the SAME energy as our initial state shows up on our list of possibilities for the system, and every point in parameter space with a DIFFERENT energy than our initial state is NOT a possible state of this system.
So counting the possibilities does NOT seem to involve any Bayesian probabilities at all. You, me, and an alien from Alpha Centauri who all look at the same system all come up with the same Entropy curves, just as we all come up with the same energy curves.
But perhaps I can do better than this. Tie this in to the intuition that entropy has something to do with probabilities. And I can.
The probabilities that entropy has to do with are FREQUENTIST probabilities. Enumerations of the physically possible states of the system. We could estimate them mathematically by hypothesizing a map of the system called parameter space, or we could take 10^30 snapshots of the physical system spread out over many millennia and just observe all the states the system gets into. Of course this second is impractical, but when has impractical ever stopped a lesswrong discussion?
So the real reason Entropy, Energy and Apple are "real" even though Bayesian Probability, like Sexiness, is "in the mind" is because Entropy is unambiguously defined for physical systems in terms of other unambiguous physical quantities, "Energy" and "Temperature." (BTW, Temperature is the average kinetic energy of the particles, not some ooky "in the mind" thing. Or for simplicity, define temperature as what the thermometer tells you.)
And to the extent you love Bayesian probability so much that you want to interpret a list of states in parameter space that all have the same energy as somehow "in the mind," you just need to realize that a frequentist interpretation of probability is more appropriate for any discussion of entropy than is a bayesian one: we use entropy to calculate what systems we know "enough" about will do, not to estimate how different people in different states of ignorance will bet on what they will do. If we enumerate the states wrong we get the wrong entropy and our engine doesn't work the way we said it would; we don't get to be right in the subjective sense that our estimate was as good as it could be given what we knew.
I hope this is clear enough to be meaningful to anybody following this topic. It sure explains to me what has been going on.
Replies from: gjm↑ comment by gjm · 2015-01-22T01:47:40.635Z · LW(p) · GW(p)
So here's the thing. Entropy in physics is defined as [...]
That is one definition. It is not the only viable way to define entropy. (As you clearly know.) The recent LW post on entropy that (unless I'm confused) gives the background for this discussion defines it differently, and gives the author's reasons for preferring that definition.
(I am, I take it like you, not convinced that the author's reasons are cogent enough to justify the claim that the probabilistic definition of entropy is the only right one and that the thermodynamic definition is wrong. If I have given a different impression, then I have screwed up and I'm sorry.)
"Log of #possibilities" doesn't have any probabilities in it, but only because it's a deliberate simplification, targetting the case where all the probabilities are roughly equal (which turns out not to be a bad approximation because there are theorems that say most states have roughly equal probability and you don't go far wrong by pretending those are the only ones and they're all equiprobable). The actual definition, of course, is the "- sum of p log p" one, which does have probabilities in it.
So, the central question at issue -- I think -- is whether it is an error to apply the "- sum of p log p" definition of entropy when the probabilities you're working with are of the Bayesian rather than the frequentist sort; that is, when rather than naively counting states and treating them all as equiprobable you adjust according to whatever knowledge you have about the system. Well, of course you can always (in principle) do the calculation; the questions are (1) is the quantity you compute in this way of any physical relevance? and (2) is it appropriate to call it "entropy"?
Now, for sure your state of knowledge of a system doesn't affect the behaviour of a heat engine constructed without the benefit of that knowledge. If you want to predict its behaviour, then (this is a handwavy way of speaking, but I like it) the background knowledge you need to apply when computing probabilities is what's "known" by the engine. And of course you end up with ordinary thermodynamic entropy. (I am fairly sure no one who has been talking about entropy on LW recently would disagree.)
But suppose you know enough about the details of a system that the entropy calculated on the basis of your knowledge is appreciably different from the thermodynamic entropy; that is, you have extra information about which of its many similar-looking equal-energy states it's more likely to be in. Then (in principle, as always) you can construct an engine that extracts more energy from the system than you would expect from the usual thermodynamic calculations.
Does this make this "Bayesian entropy" an interesting quantity and justify calling it entropy? I think so, even though in almost all real situations it's indistinguishable from the thermodynamic entropy. If you start out with only macroscopic information, then barring miracles you're not going to improve that situation. But it seems to me that this notion of entropy may make for a simpler treatment of some non-equilibrium situations. Say you have a box with a partition in it, gas on one side and vacuum on the other. Now you remove the partition. You briefly have extra information about the state of what's in the box beyond what knowing the temperature, volume and pressure gives you, and indeed you can exploit that to extract energy even if once the gas settles down its temperature is the same as that of its environment. I confess I haven't actually done the calculations to verify that the "Bayesian" approach actually leads to the right answers; if (as I expect) it does, or can be adjusted in a principled way so that it does, then this seems like a nice way of unifying the equilibrium case (where you talk about temperature and entropy) and the non-equilibrium case (where you have to do something more resembling mechanics to figure out what energy you can extract and how). And -- though here I may just be displaying my ignorance -- I don't see how you answer questions like "10ms after the partition is removed, once the gas has started flowing into the previously empty space, but isn't uniformly spread out yet, what's the entropy of the system?" without something resembling the Bayesian approach, at least to the extent of not assuming all microstates are equally probable.
[EDITED to add: I see you've already commented on the "extracting energy from a thermodynamically hot thing whose microstate is known" thing, your answer being that the machine you do it with needs to be very cold and that explains how you get energy out. But I haven't understood why the machine has to be very cold. Isn't it, in fact, likely to have lots of bits moving very fast to match up somehow with the molecules it's exploiting? That would make it hot according to the thermodynamic definition of temperature. I suppose you might argue that it's really cold because its state is tightly controlled -- but that would be the exact same argument that you reject when it's applied to the hot thing the machine is exploiting its knowledge of.]
Replies from: mwengler↑ comment by mwengler · 2015-01-22T16:12:27.914Z · LW(p) · GW(p)
OK this is in fact interesting. In an important sense you have already won, or I have learned something, whichever description you find less objectionable.
I still think that the real definition of entropy is as you originally said, the log of the number of allowable states, where allowable means "at the same total energy as the starting state has." To the extent entropy is then used to calculate the dynamics of a system, this unambiguous definition will apply when the system moves smoothly and slowly from one thermal equilibrium to another, as some macroscopic component of the system changes "slowly," slowly enough that all intermediate steps look like thermal equilibria, also known in the trade as "reversibly."
But your "10 ms after the partition removed" statement highlights that the kinds of dynamics you are thinking of are not reversible, not the dynamics of systems in thermal equilibrium. Soon after the partition is removed, you have a region that used to be vacuum that has only fast moving molecules in it, the slow moving ones from the distribution haven't had time to get there yet! Soon after that when the fast molecules are first reaching the far wall, you have some interesting mixing going on involving fast molecules bouncing off the wall and hitting slower molecules still heading towards the wall. And in a frame by frame sense, and so on and so on.
Eventually (seconds? Less?) zillions (that's a technical term) of collisions have occurred and the distributions of molecular speeds in any small region of the large volume is a thermal distribution, at a lower temperature than the original distribution before the partition was removed (gasses cool on expansion). But the details of how the system got to this new equilibrium are lost. The system has thermalized, come to a new thermal equilibrium.
I would still maintain that formally, the log of the number of states is a fine definition, that the entropy thus defined is as unambiguous as "Energy," and that it is as useful as energy.
If you start modifying the "entropy," if you start counting some states more than others, there are two reasons that might make sense. 1) you are interested in non-thermal-equilibrium dynamics, and given a particular starting state for the system, you want to count only the parameter states the system could reach in some particular short time frame, or 2) you are equivocating, pretending that your more complete knowledge of the starting point of the system than someone else had gives your entropy calculation an "in your mind" component when all it does is mean at least one of the minds making the calculation was making it for a different system than the one in front of them.
In the case of 1) non-equilibrium dynamics is certainly a reasonable thing to be interested in. However, the utility of the previously and unambiguously defined entropy in calculating the dynamics of systems which reach thermal equilibrium is so high that it really is up to those who would modify it, to modify the name describing it as well. So the entropy-like calculation that counts only states reachable after 10 ms might be called the "prompt entropy" or the "evolving entropy." It really isn't reasonable to just call it entropy and then claim an "in your mind" component to the property of entropy, because in your mind you are actually doing something different from what everybody else is doing.
In the case of 2), where you look in detail at the system and see a different set of states the system can get to than someone else who looked at the system saw, then it is not a matter of entropy being in your mind that distinguishes, it is a situation of one of you being wrong about what the entropy is. And my calling an orange "an Apple" no more makes Apple ambiguous than my saying 2+2=5 calls into question the objective truth of addition.
As to the machine that subtracts extra energy... Consider an air jet blowing a stream of high pressure air into a chamber with a piston in it. The piston can move and you can extract energy. Someone using thermo to build an engine based on this might just calculate the rise in pressure in the volume as the air jet blows into it, and put the piston in a place where the air jet is not blowing directly onto it, and might then find their machine performs in a way you would expect from a thermo calculation. I.e. they might build their machine so the energy from the air jet is "thermalized" with the rest of the air in the volume before it pushes on the piston. Somebody else might look at this and think "I'm putting the piston lined up with the air jet so the air jet blows right onto the piston." They might well extract MORE energy from the motion of the piston than the person who did the thermo calculation and placed their piston out of the direct air flow. I think in every sense, the person exploiting the direct air flow from the air jet is building his super-thermo machine exploiting his detailed knowledge of the state of the air in the chamber. I believe THIS is the picture you should have in your mind as you read all this stuff about Bayesian probability and entropy in the mind. And my comment on it is this: there are plenty of machines that are non-thermo. Thermo applies to steam engines and internal combustion engines when the working fluids thermalize faster than the mechanical components move. But a bicycle, being pumped by your legs, is not a thermo machine. There is some quite non-thermalized chemistry going on in your muscles that causes motions of the pedals and gears that are MUCH faster than any local temperatures would predict, and which do interesting things on a MUCH faster time scale than the energy involved can leak out and thermalize the rest of the system.
There is no special "in the mind" component of this non-thermo-equilibrium air jet machine. Anybody who sees the machine I have built where the air jet blows directly on the piston, who analyzes the machine, will calculate the same performance of the machine if they have the same gas-dynamics simulation code that I have. They will recognize that this machine is not using a thermalized volume of gas to press the piston, that it is using a very not-in-equilibrium stream of fast gas to push the piston harder.
In conclusion: the kinds of special knowledge invoked to make Entropy an "in your mind" quantity are really going beyond the traditional objective definition of Entropy and just failing to give this new, different quantity a new, different name. This represents an equivocation, not a subjective component to entropy, just as someone changing the definition of apple to include oranges is not proving the subjectivity of the concept of Apple; they are simply using words differently than the people they are talking to and forgetting to mention that.
Further, the particular "special knowledge of details" physics discussed is not anything new. It is mechanics. Thermodynamics is a subclass of mechanics useful for analyzing system dynamics where fluids interact internally much faster than they act on the pieces of the machine they are pushing. In those cases thermodynamic calculations apply. But when the details of the system are that it is NOT in thermodynamic equilibrium as it interacts with the moving parts of a machine, this does not make entropy subjective; it makes entropy a more difficult tool to use in the analysis, just as an apple peeler is not so useful to a guy who thinks oranges are a kind of apple.
Finally, there is an intermediate realm of mechanics where fluids are used and they are partially thermalized, but not completely, because the dynamics of the rest of the machine are comparable to thermalization times. There might be interesting extensions from the concepts of entropy that could be useful in calculating the dynamics of these systems. But the fact that only one of two minds in a room is thinking these thoughts at a given moment does not make either the original entropy concept or these new extensions any more "in the mind" than is Energy. It just means the two minds need to each understand this new physics for this intermediate case, but when they do they will be using unambiguous definitions for "prompt entropy" or whatever they call it.
↑ comment by mwengler · 2015-01-21T06:33:32.245Z · LW(p) · GW(p)
The usual ideal gas model is that collisions are perfectly elastic, so even if you do factor in collisions they don't actually change anything.
They don't change ANYTHING? Suppose I start with a gas of molecules all moving at the same speed but in different directions, and they have elastic collisions off the walls of the volume. If they do not collide with each other, they never "thermalize"; their speeds stay the same forever as they bounce off the walls but not off each other. But if they do bounce off each other, the velocity distribution does become thermalized by their collisions, even when these collisions are elastic. So collisions don't change ANYTHING? They change the distribution of velocities to a thermal one, which seems to me to be something.
The ideal gas approximation should be quite close to the actual value for gases like Helium.
So even if an ideal gas with collisions maintained perfect decorrelation between molecule positions, which I do not think you can demonstrate (and appealing to an unlinked sequence does not count as a demonstration), you would still have to face the fact that an actual gas like Helium would be "quite close" to uncorrelated, which is another way of saying... correlated.
↑ comment by Viliam_Bur · 2015-01-20T14:44:37.738Z · LW(p) · GW(p)
Both the "entropy is in the mind" and "entropy is real" explanations seem plausible to me (well, I am not a physicist, so anything may seem plausible), so now that I think about it... maybe the problem is that even if we would be able to know a lot of stuff, we might still be limited in ways we can use this knowledge. And the knowledge you can't realistically use, it's as if you wouldn't even have it.
So, in theory, there could be a microscopic demon able to travel between molecules of boiling water without hitting any of them -- so from the demon's point of view, there is nothing hot about that water -- the problem is that we cannot do this with real stuff; not even with nanomachines, probably. Calculating the path for the nanomachine would be computationally too expensive, and it is probably too big to fit between the molecules. So the fact is that a few molecules are going to hit that nanomachine, or any larger object, anyway.
Or perhaps we could avoid the whole paradox by saying: "Actually no, you cannot have the knowledge about all molecules of the boiling water. How specifically would you get it, and how specifically would you keep it up to date?"
Replies from: passive_fist↑ comment by passive_fist · 2015-01-20T20:07:21.857Z · LW(p) · GW(p)
This is pretty much it, and it's a really subtle detail that causes a lot of confusion. This is why the real problem with Maxwell's demon isn't how you obtain the information, it's how you store the information, as Landauer showed. To extract useful work you have to erase bits ('forget' knowledge) at some point. And this raises the entropy.
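For reference, Landauer's bound (a standard result, not derived in this thread): erasing one bit of stored information at temperature T dissipates at least

```latex
E_{\min} = k_B T \ln 2
```

or equivalently increases the entropy of the environment by at least k_B ln 2 per bit erased, which is exactly what rescues the second law from the demon.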
↑ comment by spxtr · 2015-01-19T06:52:16.718Z · LW(p) · GW(p)
I made a post about this a month or so ago. Yay!
Replies from: passive_fist↑ comment by passive_fist · 2015-01-19T07:18:54.933Z · LW(p) · GW(p)
That's pretty much exactly what I had in mind. Thanks.
↑ comment by buybuydandavis · 2015-01-19T04:04:10.580Z · LW(p) · GW(p)
It's interesting to think about entropy from this perspective because it becomes related to the subjectivist interpretation of probability.
If you haven't already read Jaynes' derivation of maxent, and the further derivation of much of statistical mechanics from those principles, that would be a good place to start.
↑ comment by Shmi (shminux) · 2015-01-19T04:34:52.356Z · LW(p) · GW(p)
In this way entropy is not much different from energy. The latter also depends on the model as much as on the physical system itself.
Replies from: passive_fist↑ comment by passive_fist · 2015-01-19T05:19:50.519Z · LW(p) · GW(p)
I'm going to disagree with you here. Not that energy doesn't depend on our models. It just depends on them in a very different way. The entropy of a physical system is the Shannon entropy of its distribution of 'microstates'. But there is no distribution of microstates 'out there'. It's a construction that purely exists in our models. Whereas energy does exist 'out there'. It's true that no absolute value can be given for energy and that it's relative, but in a way energy is far more 'real' than entropy.
Replies from: DanielLC, shminux↑ comment by Shmi (shminux) · 2015-01-19T07:05:56.676Z · LW(p) · GW(p)
Whereas energy does exist 'out there'
"Out there" are fields, particles, interacting, moving, bumping into each other, turning into each other. Energy is a convenient description of some part of this process in many models. Just like with Jaynes' entropy, knowing more about the system changes its energy. For example, just like knowing about isotopes affects the calculated entropy of a mixed system, knowing about nuclear forces changes the calculated potential energy of the system.
Replies from: spxtr, passive_fist↑ comment by spxtr · 2015-01-19T21:19:25.924Z · LW(p) · GW(p)
I agree with passive_fist, and my argument hasn't changed since last time.
If we learn that energy changes in some process, then we are wrong about the laws that the system is obeying. If we learn that entropy goes down, then we can still be right about the physical laws, as Jaynes shows.
Another way: if we know the laws, then energy is a function of the individual microstate and nothing else, while entropy is a function of our probability distribution over the microstates and nothing else.
Replies from: shminux↑ comment by Shmi (shminux) · 2015-01-19T23:10:58.680Z · LW(p) · GW(p)
I agree that it feels different. It certainly does to me. Energy feels real, while entropy feels like an abstraction. A rock falling on one's head is a clear manifestation of its potential (turned kinetic) energy, while getting burned by a hot beverage does not feel like a manifestation of the entropy increase; it feels like the beverage's temperature is to blame. On the other hand, if we knew precisely the state of every water molecule in the cup, would we still get burned? The answer is not at all obvious to me. Passive_fist claims that the cup would appear to be at absolute zero then:
In the limit of perfect microstate knowledge, the system has zero entropy and is at absolute zero.
I do not know enough stat mech to assess this claim, but it seems wrong to me, unless the claim is that we cannot know the state of the system unless it's already at absolute zero to begin with. I suppose a toy model with only a few particles present might shed some light on the issue. Or a link to where the issue is discussed.
Replies from: spxtr, passive_fist, spxtr↑ comment by spxtr · 2015-01-21T05:01:33.030Z · LW(p) · GW(p)
An easy toy system is a collection of perfect billiard balls on a perfect pool table, that is, one without rolling friction and where all collisions conserve energy. For a few billiard balls it would be quite easy to extract all of their energy as work if you know their initial positions and velocities. There are plenty of ways to do it, and it's fun to think of them. This means they are at zero temperature.
If you don't know the microstate, but you do know the sum of the square of their velocities, which is a constant in all collisions, you can still tell some things about the process. For instance, you can predict the average number of collisions with one wall and the corresponding energy, related to the pressure. If you stick your hand on the table for five seconds, what is the chance you get hit by a ball moving faster than some value that will cause pain? All these things are probabilistic.
In the limit of tiny billiard balls compared to pool table size, this is the ideal gas.
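The "related to the pressure" step can be made explicit with the standard kinetic-theory result (a textbook formula, not specific to this toy model): for N tiny balls of mass m in a volume V,

```latex
P = \frac{N m \langle v^{2} \rangle}{3V}
```

(the factor of 3 is for three dimensions; the 2D billiards-on-a-table version has a 2 and an area instead of a volume). It depends only on the sum of squared speeds mentioned above, not on the detailed microstate.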
↑ comment by passive_fist · 2015-01-19T23:25:10.020Z · LW(p) · GW(p)
If you know precisely the state of every water molecule in the system, there's no need for your finger to get burned. Just touch your finger to the cup whenever a slow-moving molecule is approaching, and remove it whenever a fast-moving molecule is approaching (Maxwell's demon).
Replies from: shminux↑ comment by Shmi (shminux) · 2015-01-19T23:38:25.242Z · LW(p) · GW(p)
Right, supposing you can have a macroscopic Maxwell's demon. So the claim is not that it is necessarily at absolute zero, but that it does not have a well-defined temperature, because you can choose it to behave (with respect to your finger) as if it were at any temperature you like. Is this what you are saying?
Replies from: passive_fist↑ comment by passive_fist · 2015-01-20T19:54:43.995Z · LW(p) · GW(p)
Well, no.
Temperature is the thermodynamic quantity that is shared by systems in equilibrium. "Cup of tea + information about all the molecules in the cup of tea" is in thermodynamic equilibrium with "Ice cube + kinetic energy (e.g. electricity)", in that you can arrange a system where the two are in contact but do not exchange any net energy.
Note that it is NOT in thermodynamic equilibrium with anything hotter than an ice cube, as Eliezer described in spxtr's linked article: http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/
Basically, if you, say, try to use the information about the water and a Demon to put the system in thermal equilibrium with some warm water and electricity, you'll either be prevented by conservation of energy or you'll wind up not using all the information at your disposal. And if you don't use the information it's as if you didn't have it.
The salient point is that the system is not in thermal equilibrium with anything 'warmer' than "Ice cube + free energy."
If you know everything about the cup of tea, it really is at absolute zero, in the realest sense you could imagine.
Replies from: shminux↑ comment by Shmi (shminux) · 2015-01-20T22:01:54.435Z · LW(p) · GW(p)
Hm. I have to think more about this.
↑ comment by spxtr · 2015-01-21T20:41:37.342Z · LW(p) · GW(p)
Expanding on the billiard ball example: let's say one part of the wall of the pool table adds some noise to the trajectory of the balls that bounce off of that spot, but doesn't sap energy from them on average. After a while we won't know the exact positions of the balls at an arbitrary time given only their initial positions and momenta. That is, entropy has entered our system through that part of the wall. I know this language makes it sound like entropy is in the system, flowing about, but if we knew the exact shape of the wall at that spot then it wouldn't happen.
Even with this entropy entering our system, the energy remains constant. This is why total energy is a wonderful macrovariable for this system. Systems where this works are usually easily solved as a microcanonical ensemble. If, instead, that wall spot were held at a fixed temperature, we would use the canonical ensemble.
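For reference, a compact statement of the two ensembles mentioned here (these are the standard textbook definitions, nothing specific to the pool table):

```latex
% Microcanonical: fixed total energy E, count microstates.
% Canonical: fixed temperature T, weight microstates by Boltzmann factors.
\[
S_{\text{micro}}(E) = k_B \ln \Omega(E),
\qquad
Z(T) = \sum_{\text{microstates } i} e^{-E_i / k_B T},
\qquad
F = -k_B T \ln Z.
\]
```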
↑ comment by passive_fist · 2015-01-19T21:04:58.523Z · LW(p) · GW(p)
Again, this is very different from the situation with entropy. I think you're confusing two meanings of the word 'model'. It's one thing to have an incomplete description of the physics of the system (for instance, lacking nuclear forces, as you describe). It's another to lack knowledge about the internal microstates of the system, even when all the relevant physics is known. (In the statistics view, these two meanings are analogous to the 'model' and the 'parameters', respectively.) Entropy measures the uncertainty in the distribution of the parameters; it measures something about our information about the system. The most vivid demonstration of this is that entropy changes the more you know about the parameters (microstates) of the system. In the limit of perfect microstate knowledge, the system has zero entropy and is at absolute zero. But energy (relative to the ground state) doesn't change no matter how much information you gain about a system's internal microstates.
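A toy illustration of that last point (my example, not part of the original comment): the Gibbs/Shannon entropy of a distribution over eight microstates shrinks as we gain knowledge, hitting zero at perfect knowledge, while nothing about the system itself changes:

```python
# Entropy is a property of our distribution over microstates, so it
# shrinks as we learn more, reaching zero at perfect knowledge.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))

uniform = [1/8] * 8                  # no microstate knowledge
half    = [1/4] * 4 + [0] * 4        # we've learned which half it's in
exact   = [1] + [0] * 7              # perfect microstate knowledge

print(entropy(uniform), entropy(half), entropy(exact))  # ln 8, ln 4, 0.0
```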
Replies from: shminux↑ comment by Shmi (shminux) · 2015-01-19T22:58:31.534Z · LW(p) · GW(p)
I understand what you are saying, but I am not convinced that there is a big difference.
Entropy measures the uncertainty in the distribution of the parameters. It measures something about our information about the system.
How would you change this uncertainty without disturbing the system?
But energy (relative to ground state) doesn't change no matter how much information you gain about a system's internal microstates.
How would you gain this information without disturbing the system (and hence changing its energy)?
EDIT: see also my reply to spxtr.
Replies from: passive_fist↑ comment by passive_fist · 2015-01-19T23:21:16.725Z · LW(p) · GW(p)
How would you change this uncertainty without disturbing the system?
You have to define what 'disturbing the system' means. This is just the classical Maxwell's demon question, and you can most definitely change this uncertainty without changing the thermodynamics of the system. Look at http://en.wikipedia.org/wiki/Maxwell%27s_demon#Criticism_and_development
In particular, the paragraph about Landauer's work is relevant (and the cited Scientific American article is also interesting).
↑ comment by Pfft · 2015-02-02T14:57:53.754Z · LW(p) · GW(p)
Isn't all this just punning on definitions? If the particle velocities in a gas are Maxwell-Boltzmann distributed for some parameter T, we can say that the gas has "Maxwell-Boltzmann temperature T". Then there is a separate Jaynes-style definition of "temperature" in terms of the knowledge someone has about the gas. If all you know is that the velocities follow a certain distribution, then the two definitions coincide. But if you happen to know more about it, it is still the case that almost all interesting properties follow from the coarse-grained velocity distribution (the gas will still melt ice cubes and so on), so rather than saying that it has zero temperature, should we not just note that the information-based definition no longer captures the ordinary notion of temperature?
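To illustrate the first definition (a sketch under the usual ideal-gas assumptions; the gas species and temperature are arbitrary choices of mine), the "Maxwell-Boltzmann temperature" can be read off the empirical velocity distribution alone, whatever extra microstate knowledge anyone holds:

```python
# Infer T from the empirical velocity distribution of an ideal gas,
# using equipartition: <m v^2 / 2> = (3/2) k_B T.
import numpy as np

k_B, m, T_true = 1.380649e-23, 6.6e-27, 300.0   # J/K, kg (helium-ish), K
rng = np.random.default_rng(1)
# Each velocity component is Gaussian with variance k_B * T / m
v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(100_000, 3))

T_est = m * np.mean(np.sum(v**2, axis=1)) / (3 * k_B)
print(T_est)  # ~300 K, regardless of what else we know about the molecules
```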
comment by DataPacRat · 2015-01-19T00:31:24.436Z · LW(p) · GW(p)
Not Quite the Prisoner's Dilemma
Evolving strategies through the Noisy Iterated Prisoner's Dilemma has revealed all sorts of valuable insights into game theory and decision theory. Does anyone know of any similar tournaments where the payouts weren't constant, so that any particular round might or might not qualify as a classic Prisoner's Dilemma?
Replies from: Gondolinian↑ comment by Gondolinian · 2015-01-19T01:03:25.165Z · LW(p) · GW(p)
Evolving strategies through the Noisy Iterated Prisoner's Dilemma has revealed all sorts of valuable insights into game theory and decision theory. Does anyone know of any similar tournaments where the payouts weren't constant, so that any particular round might or might not qualify as a classic Prisoner's Dilemma?
Do you have a link for the original tournament?
Replies from: DataPacRat↑ comment by DataPacRat · 2015-01-19T01:23:58.611Z · LW(p) · GW(p)
There have been many Iterated Prisoner's Dilemma tournaments; at least a couple were done here on Less Wrong. Most such tourneys haven't included noise; to find out about the ones that did, try googling for some combination of the phrases "contrite tit for tat", "generous tit for tat", "tit for two tats", "pavlov", and "grim".
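For anyone who wants to experiment, here is a rough sketch of a noisy IPD round-robin (my own minimal version; the payoff matrix and noise level are conventional choices, not taken from any particular tournament):

```python
# Noisy iterated Prisoner's Dilemma: each intended move is flipped with
# probability `noise`, which is what makes forgiving strategies shine.
import random

C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else C

def generous_tft(my_hist, their_hist, forgive=0.1):
    # Like TFT, but forgives a defection with small probability
    if their_hist and their_hist[-1] == D and random.random() > forgive:
        return D
    return C

def play(s1, s2, rounds=200, noise=0.05):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        # noise: an intended move is executed wrongly with small probability
        if random.random() < noise: m1 = D if m1 == C else C
        if random.random() < noise: m2 = D if m2 == C else C
        h1.append(m1); h2.append(m2)
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # noise locks pure TFT into echoes
print(play(generous_tft, generous_tft))  # forgiveness recovers cooperation
```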
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-01-20T21:16:51.295Z · LW(p) · GW(p)
Has there been research on Prisoner's Dilemma where the players have limited amounts of memory for keeping track of previous interactions?
Replies from: polymathwannabe, satt↑ comment by polymathwannabe · 2015-01-20T21:30:53.050Z · LW(p) · GW(p)
Google gives these:
http://www.pnas.org/content/95/23/13755.full.pdf
http://www.icmp.lviv.ua/journal/zbirnyk.79/33001/art33001.pdf
http://www.complex-systems.com/pdf/19-4-4.pdf
https://editorialexpress.com/cgi-bin/conference/download.cgi?db_name=ASSET2007&paper_id=287
http://ms.mcmaster.ca/~rogern4/pdf/publications_2009/annie_ltm.pdf
↑ comment by satt · 2015-01-22T02:53:37.664Z · LW(p) · GW(p)
That question's potentially ambiguous: does "previous interactions" mean previous moves within a single game, or previous games played? If the former, quite a bit of research on the PD played by finite state machines would fit. If the latter, Toby Ord's work on the "societal iterated prisoner's dilemma" would fit.
comment by [deleted] · 2015-01-22T16:18:20.018Z · LW(p) · GW(p)
Are there any worthwhile posts that aren't found in the Sequences? (http://wiki.lesswrong.com/wiki/Sequences)
I recommend this one: http://lesswrong.com/lw/iri/how_to_become_a_1000_year_old_vampire/ although I read it a long time ago, so I may have a different opinion on it now. Re-reading it currently.
Replies from: wobster109↑ comment by wobster109 · 2015-01-23T04:15:50.168Z · LW(p) · GW(p)
Can it be non-LW material? I found this to be an excellent no-background-needed introduction to AI. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
comment by Furcas · 2015-01-20T05:16:54.391Z · LW(p) · GW(p)
Is there an eReader version of the Highly Advanced Epistemology 101 for Beginners sequence anywhere?
comment by is4junk · 2015-01-20T02:03:28.033Z · LW(p) · GW(p)
Public voting and public scoring
I am sure this has been debated here before but I keep dreaming of it anyway. Let's say everyone's upvotes and downvotes were public and you could independently score posts using this data with your own algorithm. If the algorithms to score posts were also public then you could use another users scoring algorithm instead of writing your own (think lesswrong power-user).
As a simple example, let's say my algorithm is to average the scores produced by user_Rational's and user_Insightful's algorithms, while user_Rational's algorithm is just the regular LessWrong score minus User_troll's votes.
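A hypothetical sketch of how that might look (the user names are just the examples above, and the vote data is made up):

```python
# Public votes plus pluggable scoring algorithms, per the example above.
votes = {
    # votes[post][voter] = +1 (upvote) or -1 (downvote)
    "post_1": {"user_Rational": 1, "user_Insightful": 1, "User_troll": -1},
    "post_2": {"user_Rational": -1, "User_troll": 1},
}

def regular_score(post):
    """The ordinary LessWrong score: sum of all votes."""
    return sum(votes[post].values())

def user_rational_score(post):
    """user_Rational's algorithm: regular score minus User_troll's votes."""
    return sum(v for voter, v in votes[post].items() if voter != "User_troll")

def user_insightful_score(post):
    """Assume user_Insightful just publishes the default algorithm."""
    return regular_score(post)

def my_score(post):
    """My algorithm: average the two published algorithms above."""
    return (user_rational_score(post) + user_insightful_score(post)) / 2

print(my_score("post_1"))  # 1.5: the troll's downvote only half-counts for me
```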
The benefits would be a better curated garden, more users, and more discussion.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-20T02:56:55.319Z · LW(p) · GW(p)
Currently, the backlog of changes to the codebase here is so big, and so little work is going into it, that even if there were a consensus for this change it would be unlikely to happen.
More specific to this proposal, there are at least two problems with the idea. First, it could easily lead to further groupthink: suppose a bunch of Greens zero out all votes by people who have identified as Blues, and a bunch of Blues do the same. Then each group will see a false consensus for its view based on the votes. Second, making votes public by default could easily influence how people vote: they might be intimidated by the repercussions of downvoting high-status users or popular arguments, or might simply decline to downvote because it could make enemies.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-01-20T08:00:06.998Z · LW(p) · GW(p)
Yeah, I suspect this would just move the game one step more meta. Instead of attacking enemies by mass downvoting, people would now attack their enemies with public campaigns based on alleged patterns in the targets' votes. Then we could argue endlessly about which patterns are okay and which are not.
Replies from: is4junk↑ comment by is4junk · 2015-01-20T16:11:35.589Z · LW(p) · GW(p)
I agree there would still be very easy ways to punish enemies, or, even more commonly, 'friends' who don't toe the line.
I do think it would identify some interesting cliques or color teams. The way I envision using it would be more topic-category based: for instance, on topic X I would average one group of people's opinions, but a different group's on topic Y.
On the positive side, if you have a minority position on some topic that now would be downvoted heavily you could still get good feedback from your own minority clique.
comment by iarwain1 · 2015-01-19T21:34:52.965Z · LW(p) · GW(p)
General question: I've read somewhere that there's a Bayesian approach to at least partially justifying simplicity arguments / Occam's Razor. Where can I find a good accessible explanation of this?
Specifically: Say you're presented with a body of evidence and you come up with two sets of explanations for that evidence. Explanation Set A consists of one or two elegant principles that explain the entire body of evidence nicely. Explanation Set B consists of hundreds of separate explanations, each of which only explains a small part of the evidence. Assuming your priors for each individual explanation are about equal, is there a Bayesian explanation for our intuition that we should bet on Explanation Set A?
What about if your prior for each individual explanation in Set B is higher than the priors for the explanations in Set A?
Example:
Say you're discussing Bible Criticism with a religious friend who believes in the traditional notion of complete Mosaic authorship but who is at least somewhat open to alternatives. To your friend, the priors for Mosaic authorship are much higher than the priors for a documentary or fragmentary hypothesis. (If you want numbers, say that your friend's priors are .95 in favor of Mosaic authorship.)
Now you present the arguments, many of which (if I understand them correctly) boil down to simplicity arguments:
- Mosaic authorship requires either a huge number of tortured explanations for individual verses, or it requires saying "we don't know" or "God kept it secret for some reason". Documentary-type hypotheses, on the other hand, postulate a few basic principles and use them to explain virtually everything.
- Several different lines of local internal evidence often point to exactly the same conclusions. For example, an analysis of the repetitions within a story might lead us to divide up the verses between authors in a certain way, while at the same time an independent stylistic analysis leads us to virtually the same thing. So we again have a single explanation set that resolves multiple sets of difficulties, which again is simpler / more elegant than the alternative of proposing numerous individual explanations to resolve each difficulty, or just throwing up our hands and saying God keeps lots of secrets.
The question is, is your friend justified in rejecting your simplicity-based arguments based on his high priors? What about if his priors were lower, say .6 in favor of Mosaic authorship? What about if he held 50-50 priors?
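One way to make the priors question concrete (the 100:1 likelihood ratio below is purely illustrative, not a claim about the actual strength of the evidence): posterior odds are prior odds times the likelihood ratio, so

```latex
\[
\underbrace{\frac{0.95}{0.05}}_{\text{prior odds} \,=\, 19}
\times
\underbrace{\frac{P(E \mid \text{Mosaic})}{P(E \mid \text{Documentary})}}_{=\, 1/100 \text{ (illustrative)}}
= 0.19
\quad\Longrightarrow\quad
P(\text{Mosaic} \mid E) = \frac{0.19}{1.19} \approx 0.16.
\]
```

On a 0.6 prior the same likelihood ratio gives posterior odds of 1.5/100, i.e. P(Mosaic | E) of about 0.015, and at 50-50 it gives about 0.01. So high priors delay but do not block the update, provided the friend agrees on the direction and rough size of the likelihood ratio.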
Replies from: IlyaShpitser, DanielLC, Vaniver, shminux↑ comment by IlyaShpitser · 2015-01-19T23:43:08.050Z · LW(p) · GW(p)
The B approach to Occam's razor is just a way to think carefully about your possible preference for simplicity. If you prefer simpler explanations, you can bias your prior appropriately, and then the B machinery will handle how you should change your mind with more evidence (which might possibly favor more complex explanations, since Nature isn't obligated to follow your preferences).
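A toy version of what this looks like in practice (my numbers, purely illustrative):

```python
# Encode a simplicity preference in the prior, then let the evidence
# move the posterior, possibly toward the more complex hypothesis.
prior = {"H_simple": 2/3, "H_complex": 1/3}        # biased toward simplicity
likelihood = {"H_simple": 0.1, "H_complex": 0.4}   # data fit H_complex better

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # {'H_simple': ~0.33, 'H_complex': ~0.67}: nature wins out
```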
I don't think it's a good idea to use B in settings other than statistical inference, or probability puzzles. Arguing with people is an exercise in xenoanthropology, not an exercise in B.
Replies from: shminux, iarwain1↑ comment by Shmi (shminux) · 2015-01-20T00:30:29.454Z · LW(p) · GW(p)
Upvoted for
Arguing with people is an exercise in xenoanthropology
↑ comment by iarwain1 · 2015-01-20T00:39:11.461Z · LW(p) · GW(p)
I don't think it's a good idea to use B in settings other than statistical inference, or probability puzzles.
I'm not sure exactly what you mean by this. Do you mean that Bayesianism is inappropriate for situations where the data points are arguments and explanations rather than quantifiable measurements or the like? Do you mean that it shouldn't be used to prefer one person's argument over another's?
In any case, could you elaborate on this point? I haven't read through much of the Sequences yet (I'm waiting for the book version to come out), but my impression was that using Bayesian-type approaches outside of purely statistical situations is a large part of what they are about.
Arguing with people is an exercise in xenoanthropology, not an exercise in B.
Not sure I understand this. Assuming you're both trying to approach the truth, arguing with others is a chance to get additional evidence you might not have noticed before. That's both xenoanthropology and Bayesianism.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-01-20T01:19:47.296Z · LW(p) · GW(p)
my impression was that using Bayesian-type approaches outside of purely statistical situations is a large part of what they are about.
Yes. I disagree.
Do you mean that it shouldn't be used to prefer one person's argument over another's?
Look at our good friend Scott Alexander dissecting arguments. How much actual B does he use? Usually just pointing out basic innumeracy is enough: "oh, you are off by a few orders of magnitude" (but that's not B, that's just being numerate, e.g. being able to add numbers, etc.).
Assuming you're both trying to approach the truth...
I think the kind of stuff folks in this community use to argue/update internally is all fine, but I don't think it's a formal B setup usually, just some hacks along the lines of "X has shown herself to be thoughtful and sensible in the past, and disagrees w/ me about Y, I should adjust my own beliefs."
This will not work with outsiders, since they generally play a different game than you. I think the dominating term in arguments is understanding social context in which the other side is operating, and learning how they use words. If B comes up at all, it's just easy bookkeeping on top of that hard stuff.
I don't understand what people here mean by "B." For example, using Bayes theorem isn't "B" because everyone who believes the chain rule of probabilities uses Bayes theorem (so hopefully everyone).
Replies from: iarwain1↑ comment by iarwain1 · 2015-01-20T01:34:38.040Z · LW(p) · GW(p)
I don't understand what people here mean by "B."
Seems they're referring to Bayesian Epistemology / Bayesian Confirmation Theory, along with informal variants thereof. Bayesian Epistemology is a very well respected and popular movement in philosophy, although it is by no means universally accepted. In any case, the use of the term "Bayesian" in this sense is certainly not limited to LessWrong.
↑ comment by DanielLC · 2015-01-26T02:06:37.571Z · LW(p) · GW(p)
Assuming your priors for each individual explanation is about equal, is there a Bayesian explanation for our intuition that we should bet on Explanation Set A?
Do you mean your prior for A is about equal to your prior for B, or that your priors for each element are about the same?
If you mean the first, then there is no reason to favor one over the other. Occam's razor just says the more complex explanation has a lower prior.
If you mean the second, then there is a very good reason to favor A. If A has n explanations and B has m, and all explanations are independent and of probability p, then P(A) = p^n and P(B) = p^m, so A is exponentially more likely than B. In real life, assuming independence tends to be a bad idea, so it won't be quite so extreme, but the simpler explanation is still favored.
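Plugging in illustrative numbers (my choices, not part of the original comment): with p = 0.9 per sub-explanation, n = 2, and m = 100,

```latex
\[
P(A) = p^{n} = 0.9^{2} = 0.81,
\qquad
P(B) = p^{m} = 0.9^{100} \approx 2.7 \times 10^{-5},
\]
\[
\frac{P(A)}{P(B)} = p^{\,n-m} = 0.9^{-98} \approx 3 \times 10^{4},
\]
```

so the two-principle set is favored by roughly four orders of magnitude, purely from the cost of conjoining a hundred independent parts.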
↑ comment by Vaniver · 2015-01-19T22:13:45.377Z · LW(p) · GW(p)
I think you'll get somewhere by searching for the phrase "complexity penalty." The idea is that we have a prior probability for any explanation that depends on how many terms / free parameters are in the explanation. For your particular example, I think you need to argue that their prior probability should be different than it is.
I think it's easier to give a 'frequentist' explanation of why this makes sense, though, by looking at overfitting. If you look at the uncertainty in the parameter estimates, they roughly depend on the number of sample points per parameter. Thus the fewer parameters in a model, the more we think each of those parameters will generalize. One way to think about this is the more free parameters you have in a model, the more explanatory power you get "for free," and so we need to penalize the model to account for that. Consider the Akaike information criterion and Bayesian information criterion.
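A quick sketch of the overfitting point (illustrative synthetic data; the AIC formula uses the standard Gaussian maximum-likelihood plug-in):

```python
# Fit polynomials of increasing degree and compare AIC: extra "free"
# explanatory power from more parameters is penalized by the 2k term.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(0, 0.1, size=x.size)   # the truth is a simple line

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)
    k = degree + 1                              # number of fitted parameters
    log_lik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
    aic = 2 * k - 2 * log_lik                   # Akaike information criterion
    print(degree, round(aic, 1))
# Higher-degree fits shrink the residuals a little, but AIC (lower is
# better) typically still prefers the simple model here.
```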
↑ comment by Shmi (shminux) · 2015-01-20T00:46:43.323Z · LW(p) · GW(p)
General question: I've read somewhere that there's a Bayesian approach to at least partially justifying simplicity arguments / Occam's Razor. Where can I find a good accessible explanation of this?
This is a good question, but not when applied to the origin-of-the-Torah example. There, a more appropriate discussion would be of the motivated cognition of the original Talmudic authors, who would have happily attributed 100% of the Torah to the same source were it not for the 8 verses that do not fit. For a Christian, these authors are already suspect because they denied the first coming of the Messiah, so one's priors for their trustworthiness should be low to begin with.
comment by Darklight · 2015-01-19T16:10:06.918Z · LW(p) · GW(p)
I have a slate of questions that I often ask people to try and better understand them. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality. Though, I remain uncertain about this. The question is actually quite simple and so I offer it to the Less Wrong community to see what kind of answers people can come up with, as well as what the majority of Less Wrongers think. If you'd rather you can private message me your answer.
The question is:
Truth or Happiness? If you had to choose between one or the other, which would you pick?
Replies from: philh, adamzerner, DanielLC, Aleksander↑ comment by philh · 2015-01-19T17:07:06.791Z · LW(p) · GW(p)
I don't think this question is sufficiently well-defined to have a true answer. What does it mean to have/lack truth, what does it mean to have/lack happiness, and what are the extremes of both of these?
If I have all the happiness and none of the truth, do I get run over by a car that I didn't believe in?
If I have all the truth but no happiness, do I just wish I would get run over? Is there anything to stop me from using the truth to make myself happy again? Failing that is there anything that could motivate me to sit down for an hour with Eliezer and teach him the secrets of FAI before I kill myself? This option at least seems like it has more loopholes.
Replies from: Darklight↑ comment by Darklight · 2015-01-19T18:13:06.930Z · LW(p) · GW(p)
I admit this version of the question leaves substantial ambiguity that makes it harder to calculate an exact answer. I could have constructed a more well-defined version, but this is the version that I have been asking people already, and I'm curious how Less Wrongers would handle the ambiguity as well.
In the context of the question, it can perhaps be better defined as:
If you were in a situation where you had to choose between Truth (guaranteed additional information), or Happiness (guaranteed increased utility), and all that you know about this choice is the evidence that the two are somehow mutually exclusive, which option would you take?
It's interesting that you interpreted the question to mean all or none of the Truth/Happiness, rather than, as I assumed most people would, a situation where you are given additional Truth/Happiness. The extremes are actually an interesting thought experiment in and of themselves. All the Truth would imply perfect information, while all the Happiness would imply maximum utility. It may not be possible for these two things to be completely mutually exclusive, so this form of the question may well be illogical.
Replies from: Jiro↑ comment by Jiro · 2015-01-19T18:27:42.472Z · LW(p) · GW(p)
Defining happiness as "guaranteed increased utility" is questionable. It doesn't account for situations of blissful ignorance:
- We can't seem to agree whether being blissfully ignorant about something one does not want is a loss of utility at all
- If that does count as a loss of utility, utility would not equate to happiness because you can't be happy or sad about something you don't know about.
↑ comment by Darklight · 2015-01-19T22:12:13.844Z · LW(p) · GW(p)
For simplicity's sake, we could assume a hedonistic view that blissful ignorance about something one does not want is not a loss of utility, defining utility as positive conscious experiences minus negative conscious experiences. But I admit that not everyone will agree with this view of utility.
Also, Aristotle would probably argue that you can have Eudaimonic happiness or sadness about something you don't know about, but Eudaimonia is a bit of a strange concept.
Regardless, given that there is uncertainty about the claims made by the questioner, how would you answer?
Consider this rephrasing of the question:
If you were in a situation where someone (possibly Omega... okay let's assume Omega) claimed that you could choose between two options: Truth or Happiness, which option would you choose?
Note that there is significant uncertainty involved in this question, and that this is a feature, rather than a bug of the question. Given that you aren't sure what "Truth" or "Happiness" means in this situation, you may have to elaborate and consider all the possibilities for what Omega could be meaning (perhaps even assigning them probabilities...). Given this quandary, is it still possible to come up with a "correct" rational answer?
If it's not, what additional information from Omega would be required to make the question sufficiently well-defined to answer?
↑ comment by Adam Zerner (adamzerner) · 2015-01-20T06:30:36.369Z · LW(p) · GW(p)
Great question! I'm glad you brought it up!
Personally, it's a bit of an ugh field for me. And is something I'm confused about, and really wish I had a good answer to.
To me, this gets at a more general question: "what should your terminal values be?". It is my understanding that rationality can help you to achieve terminal values, but not to select them. I've thought about it a lot and have tried to think of a reason why one terminal value is "better" or "more rational" than another... but I've pretty much failed. I keep arriving at the conclusion that "what should your terminal values be?" is a Wrong Question, which becomes pretty obvious once it's dissolved.
But at the same time... it's such an important question that the slightest bit of uncertainty really bothers me. Think of it in terms of expected value - a huge magnitude multiplied by a small probability can still be huge. If I misunderstood something and I'm pursuing the wrong terminal goal(s)... well that'd be bad (how bad depends on how different my current goals are from "the real goals").
I'd love to hear others' takes on this. It appears that people live their lives as if things other than their own happiness matter, like Altruism and Truth. I.e., people pursue terminal values other than their own happiness. Is this true? I'd really be interested in seeing a LW survey on terminal goals.
↑ comment by Aleksander · 2015-01-22T05:13:16.011Z · LW(p) · GW(p)
Hey it's a good question. I'd pick Happiness.
When I was much younger I might have said Truth. I was a student of physics once and loved to repeat the quote that the end of man is knowledge. But since then I have been happy, and I have been unhappy, and the difference between the two is just too large.
comment by Scott Garrabrant · 2015-01-19T01:53:38.328Z · LW(p) · GW(p)
What app does Less Wrong recommend for to-do lists? I just started using Workflowy (recommended by a LW friend), but was wondering if anyone has strong opinions in favor of something else.
P.S. If you sign up for Workflowy here, you get double the space.
EDIT: The above link is my personal invite link, and I get told when someone signs up using it, and I get to see their email address. I am not going to do anything with them, but I feel obligated to give this disclaimer anyway.
Replies from: harshhpareek, Risto_Saarelma, None, beoShaffer, None, somnicule↑ comment by harshhpareek · 2015-01-20T23:37:45.098Z · LW(p) · GW(p)
It depends on why I'm making the list.
If I'm making a todo list for a project I'm working on, Workflowy is good because it's simple and supports hierarchical lists.
For longer-lived stuff where I add and delete items, like grocery/shopping lists or books to read, I use Wunderlist, because it has an Android app and a standalone Windows app, and it looks pretty. Browser-based apps annoy me, so I like the Windows app, and the Android app is nice to have when I'm actually in the grocery store.
When I'm making a list because I need to be productive, and not as a way to plan, I use a paper todo list: http://www.amazon.com/gp/product/B0006HWLW2/ref=oh_aui_detailpage_o08_s00?ie=UTF8&psc=1. Checking things off on paper does wonders for productivity, and having the printed thing helps set the mood.
↑ comment by Risto_Saarelma · 2015-01-20T01:56:55.941Z · LW(p) · GW(p)
I use a paper notebook, inspired by bullet journal and autofocus for daily/weekly goals when the list stays under 20 or so items. Recently a project started ballooning into more items than this system could handle, so I picked up todo.txt a month ago. I've been very happy with it so far. The system works with just a regular text editor and keeping all the lines in the file lexically sorted, but it's also a markup format that can be used with specific tools. I keep the project-specific list synced with a symbolic directory link from the project directory tree to Dropbox, and currently use the Simpletask app to update the list on my phone. Seems to work well for everything I need.
↑ comment by beoShaffer · 2015-01-23T00:57:44.990Z · LW(p) · GW(p)
I like Complice for having a daily to-do that allows you to track how much time you've spent on each of your items (if you're using its pomodoro timer), and to see which goals you did (and didn't) meet on past days. However, I know the founder through CfAR so I may be biased.
↑ comment by somnicule · 2015-01-19T05:36:17.994Z · LW(p) · GW(p)
I'm using workflowy as well, and it's the only to-do list software I've ever actually used for more than a few days.
One feature that I've wanted for a while is dependencies. Let's say you need to print out a form, but you need to purchase printer ink first. Being able to hide "print out form for xyz" until "buy printer ink" is completed would be great.
comment by [deleted] · 2015-01-25T04:07:49.945Z · LW(p) · GW(p)
Could use an editor or feedback of some kind for a planned series of articles on scarcity, optimization, and economics. Have first article written and know what the last article is supposed to say, and will be filling in the gaps for a while. Would like to start posting said articles when there is enough to keep up a steady schedule.
No knowledge of economics required, but would be helpful if you were pretty experienced with how the community likes information to be presented. Reply to this comment or send me a message, and let me know how I can send you the text of the article (only one at present).
comment by polymathwannabe · 2015-01-22T15:21:57.631Z · LW(p) · GW(p)
On one hand, gorillas are crucially important for the seed dispersion that maintains forests, so we need to save them from Ebola, even if only for the human benefit that can be gained from those forests. On the other hand, Ebola is killing humans, too. There's disagreement on how to allocate research funding.
Replies from: someonewrongonthenet↑ comment by someonewrongonthenet · 2015-01-23T20:54:47.135Z · LW(p) · GW(p)
My feeling is that gorillas are pretty important just because they are apes (for practical research purposes, although I think they have a fair degree of intrinsic value too). Seed dispersion seems the least of these benefits. (On the other hand, I suppose the existence of other apes poses a disease threat to humans).
We should really demand more funding for research, in general. Under-funding research may be the single most irrational thing we do as a society, considering the return on investment.
comment by advancedatheist · 2015-01-19T00:21:43.943Z · LW(p) · GW(p)
Well, someone had to say it:
http://edge.org/response-detail/26073
Dylan Evans Founder and CEO of Projection Point; author, Risk Intelligence
The Great AI Swindle
Smart people often manage to avoid the cognitive errors that bedevil less well-endowed minds. But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool Aid.
This is not to say that superintelligent machines pose no danger to humanity. It is simply that there are many other more pressing and more probable risks facing us this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is very low, it is surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.
Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents. It involves a fallacy that has been termed "Pascal’s mugging," by analogy with Pascal’s famous wager. A mugger approaches Pascal and proposes a deal: in exchange for the philosopher’s wallet, the mugger will give him back double the amount of money the following day. Pascal demurs. The mugger then offers progressively greater rewards, pointing out that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and a rational person must surely admit there is at least some small chance that such a deal is possible. Finally convinced, Pascal gives the mugger his wallet.
This thought experiment exposes a weakness in classical decision theory. If we simply calculate utilities in the classical manner, it seems there is no way round the problem; a rational Pascal must hand over his wallet. By analogy, even if there is only a small chance of unfriendly AI, or a small chance of preventing it, it can be rational to invest at least some resources in tackling this threat.
It is easy to make the sums come out right, especially if you invent billions of imaginary future people (perhaps existing only in software—a minor detail) who live for billions of years, and are capable of far greater levels of happiness than the pathetic flesh and blood humans alive today. When such vast amounts of utility are at stake, who could begrudge spending a few million dollars to safeguard it, even when the chances of success are tiny?
Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that Give Well—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.
But whenever an argument becomes fashionable, it is always worth asking the vital question—Cui bono? Who benefits, materially speaking, from the growing credence in this line of thinking? One need not be particularly skeptical to discern the economic interests at stake. In other words, beware not so much of machines that think, but of their self-appointed masters.
Replies from: James_Miller, gjm, jkaufman, gjm, RowanE, NancyLebovitz, JoshuaZ, blogospheroid, None↑ comment by James_Miller · 2015-01-19T00:40:50.548Z · LW(p) · GW(p)
it provides some of those who advance it with a lucrative income stream.
Not me! As I fully expected, I've earned less than the minimum wage for my book on the singularity. And I get the impression that most people involved in the singularity movement are earning far less than they could given their skill set.
↑ comment by gjm · 2015-01-19T00:50:27.796Z · LW(p) · GW(p)
someone had to say it
You say that as if the point of view expressed by Dylan Evans here is one that hasn't been expressed before. It seems to me more like what until recently was the default reaction to any concerns about unfriendly AI.
Replies from: emr↑ comment by emr · 2015-01-19T04:37:12.174Z · LW(p) · GW(p)
I've noticed a pattern: Someone implies that some (critical or controversial) position X isn't represented here, even though X is obviously represented, often by prominent posters in highly up-voted comments.
I think what happens is that some advocates of X literally cannot recognize their own position when it's presented in a non-tribal manner.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2015-01-19T04:56:35.469Z · LW(p) · GW(p)
Alternately, claiming novelty is something akin to a bravery debate.
↑ comment by jefftk (jkaufman) · 2015-01-20T15:41:09.625Z · LW(p) · GW(p)
It is worth noting, for example, that Give Well—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.
GiveWell recommends extremely few charities. Unless you similarly write off the Red Cross, United Way, the Salvation Army, and everyone else GiveWell doesn't recommend, this looks like motivated skepticism.
↑ comment by gjm · 2015-01-19T00:47:18.863Z · LW(p) · GW(p)
It seems to me that there are two key points in Evans's argument where he makes a controversial claim and needs to justify it, and that at both he kinda cheats.
The first is where he goes from a description of the "Pascal's Mugging" scenario to saying that that's a good way to describe concerns over unfriendly AI. (Rather than, e.g., seeing them as analogous to insurance, where one pays a modest but annoying sum for alleged protection against various unlikely but potentially devastating events.) He doesn't make any attempt at all to justify this; I think he just hopes that the reader won't notice.
The second is where he suggests that "some of those who advance [UFAI arguments]" are getting a lucrative income stream from doing so. It seems to me that actually awfully few are, and most of those could have got richer faster and more reliably by other more normal means. So if he's saying about their motives what he seems to be, then again he really owes the reader some justification. Which, again, is not there.
(Maybe there's a third. I think his last paragraph is just repeating the one that precedes it. But maybe he's suggesting some other, more powerful "economic interests" at work; if so, it's not at all clear to me who he has in mind.)
↑ comment by RowanE · 2015-01-19T13:08:21.871Z · LW(p) · GW(p)
I think the entire core of his argument is a sleight-of-hand between "improbable" and "the kind of absurd improbability involved in Pascal's wager", without even (as others have pointed out) giving any arguments for why it's improbable in the first place.
↑ comment by NancyLebovitz · 2015-01-20T21:15:31.956Z · LW(p) · GW(p)
Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding.
I think this is a bad line of thought even before we get to the hypothesis that people are pushing UFAI risks for the money.
For one thing, people just get things wrong a lot-- it doesn't take bad motivations.
For another, it's very easy to jump to the conclusion that what seems to be correct to you is so obviously correct that other people must be getting it wrong on purpose.
For a third, even if you're right that other people are engaged in motivated thinking, you might be wrong about the motivation. For example, concern about UFAI might be driven by anxiety, or by "ooh, shiny! cool idea!" more than by narcissism or money.
advancedatheist, how sure are you of your motivations?
↑ comment by JoshuaZ · 2015-01-19T01:35:36.911Z · LW(p) · GW(p)
The idea that AI is a low probability risk is one that has some merit, but one doesn't need a Pascal's Mugging sort of scenario to consider it to be a problem. If it is only 5 or 10 percent of existential risk in the next century then it is already a serious problem. In general, all existential risks are underfunded by a lot. The only difference with AI is that for a long time it has been even more underfunded than other sources of existential risk.
↑ comment by blogospheroid · 2015-01-20T11:50:57.031Z · LW(p) · GW(p)
A booster for getting AI values right is the two-sidedness of the process: existential risk and existential benefit.
To illustrate: you solve poverty, you still have to face climate change; you solve climate change, you still have to face biopathogens; you solve biopathogens, you still have to face nanotech; you solve nanotech, you still have to face SI. But solve SI correctly, and the rest are all done. For people who use the cui bono argument, I think this answer is usually the best one to give.
Replies from: JoshuaZ