Open thread, Nov. 3 - Nov. 9, 2014
post by MrMind · 2014-11-03T09:55:07.288Z · LW · GW · Legacy · 315 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
315 comments
Comments sorted by top scores.
comment by jaime2000 · 2014-11-03T15:31:11.018Z · LW(p) · GW(p)
Eliezer Yudkowsky is writing a new sequence called "Yudkowsky’s Abridged Guide to Intelligent Characters." On the one hand, it's great; quite interesting to read and very useful to rational fiction writers. On the other hand, I'm kinda saddened that Eliezer appears to have given up on LessWrong; the sequence is posted entirely on his Tumblr, and uses his Facebook as a discussion forum.
Replies from: passive_fist, Vaniver, None, Evan_Gaensbauer, army1987, gwillen
↑ comment by passive_fist · 2014-11-03T21:03:05.042Z · LW(p) · GW(p)
I'm curious as to why he's given up on lesswrong. My knee-jerk reaction would be "because the attitude on lesswrong is now incredibly bad and fully counter to the goals he had for the place", but I'd like to know in his own words why he's less active in the LW community.
Replies from: jaime2000, None
↑ comment by jaime2000 · 2014-11-03T21:16:54.756Z · LW(p) · GW(p)
I can't find the link now, but I seem to recall him saying that Facebook was more hedonic than LessWrong because he could simply delete and block people who lowered the discussion quality without technical obstacles or social controversy.
Replies from: Viliam_Bur, passive_fist
↑ comment by Viliam_Bur · 2014-11-04T09:54:25.143Z · LW(p) · GW(p)
Well, if posting on LW is no longer fun, shouldn't we try to go more meta and fix the problem?
Of course, this shouldn't be Eliezer's top priority. And generally, it shouldn't be left to Eliezer to fix every single detail.
I think it would be good to have some kind of psychological task force on LessWrong. By which I mean people who actually study and apply the stuff, in the same way we have math experts here.
The next step in the Art could be to make rationality fun. And I don't mean "do funny things that signal your membership in the LW community" but rather to invent systematic ways to make instrumentally rational things feel better, so that you alieve they are good.
More generally, to overcome the disconnect between what we believe and how we feel. I think many people are practicing reversed stupidity here. We have learned that letting our emotions drive our thoughts is wrong, so the solution was to disconnect emotions from thoughts. That is a partial solution which works, but it has a costly impact on motivation. Eliezer wrote that it is okay to accept some emotions if they are compatible with rational thoughts. But the full solution would be to let our thoughts drive our emotions: not merely to accept the rational feeling if it happens to exist, but to engineer it, by changing our internal and external environments. (On the other hand, this is just another way for insufficiently rational people to hurt themselves.)
Replies from: John_Maxwell_IV, passive_fist
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-11-04T10:09:29.213Z · LW(p) · GW(p)
I linked to this a few days ago. I've been experimenting with the technique described there over the past few days, and it seems to work pretty well. For example, trying to spend all of my mental bandwidth noticing good things (re-noticing good things I had already noticed was allowed) seemed to get me out of a depressive funk in an hour or two. The technique also has some other interesting benefits. Some of the positive things I notice are good things that I did, which has the effect of reinforcing those behaviors. By noticing the good things that are going on in social interactions, I enjoy myself more and become more relaxed and fun to be around (in theory at least--only limited experience with this one thus far). And sometimes I get valuable ideas through realizing that something that initially seemed bad actually has a hidden upside (which reminds me of research I've read about lucky people).
At this point, I'm left wondering why humans evolved to be so gosh-darn negative all the time. It feels like there must be some hidden upside to being negative that just hasn't occurred to me.
Replies from: Viliam_Bur, None, Viliam_Bur
↑ comment by Viliam_Bur · 2014-11-04T11:06:03.193Z · LW(p) · GW(p)
I like that link!
At this point, I'm left wondering why humans evolved to be so gosh-darn negative all the time. It feels like there must be some hidden upside to being negative that just hasn't occurred to me.
Some guesses:
Compared with the rest of nature, and even with large parts of humankind, we live incredibly lucky lives. Our monkey brains were not designed for this; they are probably designed to maintain a certain level of unhappiness, so they invent some if they don't get enough from outside, similarly to how our immune systems develop allergies in the absence of parasites. Our mechanisms for fighting problems do not have an off switch, because in nature there was no reason to evolve one.
There is probably also a status aspect to this. If you are low status, you had better not express too much happiness in front of higher-status monkeys, because they will punish you just to teach you your place. That's probably because low status itself makes people unhappy, so if you are not unhappy enough, it seems like you are claiming higher status.
I would expect many people to provide a rationalization: "But if I am happy, that will make me less logical! And I will not be motivated to improve things." (But I think that is nonsense, because unhappiness is also an emotion, and it also interferes with logic. And unhappy people probably have less "willpower" to improve things.)
Replies from: John_Maxwell_IV, John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-11-04T23:32:45.233Z · LW(p) · GW(p)
I'll use the term "threat" for a problem where avoidance and/or submission is a good way of dealing with it.
If a tiger is known to live in a particular part of a forest, that is a threat: Avoiding that part of the forest is a good way of dealing with the problem. If I take part in a hunting expedition and I don't do my part because I'm too much of a coward, that is also a threat: If I act as if nothing happened and eat as much food as I want, etc. then my fellow tribespeople will think I'm an obnoxious jerk and I'll be liable to get kicked out. So submission is a good way of dealing with this problem.
If I'm hungry or sleepy or I have homework to do or I need to get a job, those are not threats, even though they have potentially dire consequences: ignoring these problems is not going to make them go away.
Hypothesis: the EEA was full of threats according to my definition; the modern world has fewer such threats. However, we're wired to assume our environment is full of threats. We're also wired to believe that if a problem is a serious one, it's likely a threat. So we're more likely to exhibit the avoidance behavior for serious problems like finding a job than for trivial ones like solving a puzzle.
(I like the idea of co-opting the word "threat" because then you can repeat phrases like "this is not a threat" in your internal monologue to reassure yourself, if you've checked to see if something is a threat and it doesn't seem to be.)
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-11-05T09:50:58.620Z · LW(p) · GW(p)
This seems correct. In a jungle, the cost of failure is frequently death. In our society, when you live an ordinary life (so this does not apply to things like organized crime or playing with explosives), the costs are much smaller, and there is much fun to be gained. But our brains are biased to believe they are in the jungle; they incorrectly perceive many things as tiger equivalents.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-11-05T16:51:52.992Z · LW(p) · GW(p)
This is kind of nitpicky, but "the cost of failure is frequently death" is not the same as "avoidance and/or submission is a good way of dealing with the problem". It's not enough to show that in the EEA things could kill you... you have to show that they could kill you, and that trying hard not to think about them was the best way to avoid having them kill you.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-11-08T06:20:02.278Z · LW(p) · GW(p)
I found some interesting thoughts in the book Learned Optimism about the evolutionary usefulness of pessimism:
The benefits of pessimism may have arisen during our recent evolutionary history. We are animals of the Pleistocene, the epoch of the ice ages. Our emotional makeup has most recently been shaped by one hundred thousand years of climatic catastrophe: waves of cold and heat; drought and flood; plenty and sudden famine. Those of our ancestors who survived the Pleistocene may have done so because they had the capacity to worry incessantly about the future, to see sunny days as mere prelude to a harsh winter, to brood. We have inherited these ancestors' brains and therefore their capacity to see the cloud rather than the silver lining.
...
Pessimism produces inertia rather than activity in the face of setbacks.
If the weather is very cold and your brain's probability estimate of finding any game in the frost is low, maybe inactivity really is the best approach. But if I, as a modern human, am not calorie-constrained, then inactivity seems less wise.
↑ comment by [deleted] · 2014-11-09T12:32:45.059Z · LW(p) · GW(p)
At this point, I'm left wondering why humans evolved to be so gosh-darn negative all the time. It feels like there must be some hidden upside to being negative that just hasn't occurred to me.
It's not so much that there's an upside to negativity as that continued positivity is evolutionarily useless. Evolution wants you to "chase the dragon" of steep, exciting highs rather than maintain a reasonably happy steady-state or, worse yet from Its perspective, "go full transhuman" and rewrite your own mind-design to bring Being Happy and Doing the Right Things into perfect alignment (which we can't do yet, but probably will be able to someday).
↑ comment by Viliam_Bur · 2014-11-04T11:58:02.967Z · LW(p) · GW(p)
For a community-scale solution, this article seems correct.
I expect spats, arguments, occasional insults, and even inevitable grudges. We've all done that. But in the end, I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what games some of us like or dislike.
One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities.
Hate is easy to recognize. Cruelty is easy to recognize. You do not tolerate these in your community, full stop. But what about behavior that isn't so obviously corrosive? What about behavior patterns that seem sort of vaguely negative, but … nobody can show you exactly how this behavior is directly hurting anyone?
Disagreement is fine, even expected, provided people can disagree in an agreeable way. But when someone joins your community for the sole purpose of disagreeing, that's Endless Contrarianism. If all a community member can seem to contribute is endlessly pointing out how wrong everyone else is, and how everything about this community is headed in the wrong direction – that's not building constructive discussion – or the community.
Axe-Grinding is when a user keeps constantly gravitating back to the same pet issue or theme for weeks or months on end. This rapidly becomes tiresome to other participants who have probably heard everything this person has to say on that topic multiple times already.
Griefing is when someone goes out of their way to bait a particular person for weeks or months on end. By that I mean they pointedly follow them around, choosing to engage on whatever topic that person appears in, and needle the other person in any way they can, but always strictly by the book and not in violation of any rules… technically.
In any discussion, there is a general expectation that everyone there is participating in good faith – that they have an open mind, no particular agenda, and no bias against the participants or the topic. While short term disagreement is fine, it's important that the people in your community have the ability to reset and approach each new topic with a clean(ish) slate. When you don't do that, when people carry ill will from previous discussions toward the participants or topic into new discussions, that's a grudge. Grudges can easily lead to every other dark community pattern on this list. I cannot emphasize enough how important it is to recognize grudges when they emerge so the community can intervene and point out what's happening, and all the negative consequences of a grudge.
↑ comment by passive_fist · 2014-11-04T21:28:22.790Z · LW(p) · GW(p)
Perhaps it would be best to learn from psychology. Psychology has shown that there's very little you can do to make yourself 'more rational.' Knowing about biases does little to prevent them from happening, and you can't force yourself to enjoy something you don't enjoy. Further, it takes a lot of conscious, slow effort to be rational. In the face of real-life problems, true rationality is often pretty much impossible as it would take more computing power than available in the universe. It's pretty clear that our irrationality is a mechanism to cope with the information overload of the real world by making approximate guesses.
It's because of things like this that I think maybe LW has gone severely overboard with the instrumental rationality thing. Note that knowing about biases is a noble goal that we should strive towards, but trying to fix them often backfires. The best we can usually hope for is to try to identify biases in our thinking and other people's.
But anyway, a lot of the issues of this site could simply be a matter of technical fixes. It was never really a good idea to base a rationality forum on a reddit template. Instead of the 'everyone gets to vote' system, I prefer the system where there are a handful of moderators. Moderators could be selected by the community and they would not be allowed to moderate discussions they themselves are participating in. This is the system that slashdot follows and I think it seems to work extremely well.
Replies from: shminux, Viliam_Bur, Lumifer, Brillyant
↑ comment by Shmi (shminux) · 2014-11-04T23:12:18.271Z · LW(p) · GW(p)
you can't force yourself to enjoy something you don't enjoy
This particular point is demonstrably false, at least as a general one: people acquire taste for foods and activities they previously disliked all the time.
Knowing about biases does little to prevent them from happening
There are plenty of (anecdotal) examples to the contrary. I find myself thinking something like "am I being biased in assuming..." all the time, now that I have been on this forum for years. I heard similar sentiments from others, as well.
it takes a lot of conscious, slow effort to be rational
That's true enough. But it is also true in general for almost every System 2-type activity (like learning to drive), until it gets internalized in System 1.
In the face of real-life problems, true rationality is often pretty much impossible as it would take more computing power than available in the universe.
Indeed it is impossible to get a perfectly optimal solution, and one of the biases is the proverbial "analysis paralysis", where an excuse for doing nothing is that anything you do is suboptimal. However, an essential part of being instrumentally rational is figuring out the right amount of computing power to dedicate to a particular problem before acting.
a lot of the issues of this site could simply be a matter of technical fixes
Indeed a different template could have worked better. Who knows. However, a decision had to be made within the time and budget constraints, and, while suboptimal, it was good enough to let the site thrive. See above about bounded rationality.
This is the system that slashdot follows and I think it seems to work extremely well.
Except Reddit is clearly winning, in the "rationalists must win" sense, and Slashdot has all but disappeared, or at least has been severely marginalized compared to its late-90s heyday.
Replies from: passive_fist
↑ comment by passive_fist · 2014-11-05T04:35:04.384Z · LW(p) · GW(p)
This particular point is demonstrably false, at least as a general one: people acquire taste for foods and activities they previously disliked all the time.
I've done this a lot. Each time I did, it wasn't because I forced myself; it was because I saw some new attractive thing in those foods or activities that I didn't see before. Perception and enjoyment aren't constant. People are more likely to try new activities when they are in a good mood (for instance). Mood alters perception. In that sense I actually agree with Viliam_Bur. You can get more people to become 'rationalists' through engaging and fun activities. But you have to ask yourself what the ultimate goal is and whether this can succeed at making people more rational.
However, an essential part of being instrumentally rational is figuring out the right amount of computing power to dedicate to a particular problem before acting.
The most powerful 'subsystem' in the brain is the subconscious system 1 part. This is the part that can bring the most computational power to bear on a problem. Making an effort to focus your system 2 cognition on solving a problem (rather than simply doing what comes instinctively) can backfire. But it gets worse. There's no 'system monitor' for the brain. And even if there was, if you go even more meta, optimizing resource allocation for solving problem X may itself be a much harder problem than solving X using the first method that comes to mind.
Except Reddit is clearly winning, in the "rationalists must win" sense, and Slashdot has all but disappeared, or at least has been severely marginalized compared to its late 90s heydays .
I know it's an extremely subjective opinion, but it seems to me that the slashdot system reduces spread of misinformation and reduces downvote fights (and overall flamewars). As for why slashdot has shrunk as a community, I suppose it's partly because reddit has grown, and reddit seems to have grown because of the 'digg exodus' (largely self-inflicted by digg) and the subreddit idea. Remember that there used to be many news aggregators (like digg) that have all but disappeared.
The idea here shouldn't be "let's adopt the most popular forum system", it should be "let's adopt the forum system that is most conducive to the goals of the community." And we have at least one important data point (Eliezer) indicating the contrary.
Replies from: bogus
↑ comment by bogus · 2014-11-07T00:51:18.940Z · LW(p) · GW(p)
The idea here shouldn't be "let's adopt the most popular forum system", it should be "let's adopt the forum system that is most conducive to the goals of the community."
Disregarding your use of the word "community" for what's best described as an online social club, who's to say that we're not doing this already? The "forum system that is most conducive" to our goals might well be a combination of one very open central site (LessWrong itself) supplemented by a variety of more private sites that discuss rationality in different ways, catering to a variety of niches. Not just Eliezer's Facebook page, but including things like MoreRight, Yvain's blog, Overcoming Bias, Give Well etc.
Replies from: Vulture, Vulture
↑ comment by Vulture · 2014-11-08T18:33:11.250Z · LW(p) · GW(p)
The "forum system that is most conducive" to our goals might well be a combination of one very open central site (LessWrong itself) supplemented by a variety of more private sites that discuss rationality in different ways, catering to a variety of niches. Not just Eliezer's Facebook page, but including things like MoreRight, Yvain's blog, Overcoming Bias, Give Well etc.
This makes me a little suspicious as a solution, only because there doesn't seem to be anything particularly special about it besides being precisely the system that is already in place.
↑ comment by Vulture · 2014-11-08T18:31:56.151Z · LW(p) · GW(p)
What do you see as being the distinction between a "community" and a mere "online social club"? Genuinely confused.
Replies from: bogus
↑ comment by bogus · 2014-11-08T23:35:08.841Z · LW(p) · GW(p)
Because, y'know, communities actually exist, like, in the real world. More relevantly, they have a fairly important goal in protecting real, actual people from bodily harm and providing a nurturing environment for them to thrive in. Since this does not apply to virtual, Internet sites, calling them "communities" is quite misleading and can have bad side-effects if the metaphor is taken seriously, either by accident or through sneaking connotations. So I think it's better if folks are sometimes encouraged to taboo this particular term.
↑ comment by Viliam_Bur · 2014-11-04T21:56:12.635Z · LW(p) · GW(p)
you can't force yourself to enjoy something you don't enjoy
Perhaps "force" isn't the right approach (and the whole "willpower" is just a red herring). But don't we have many examples where people changed their emotions because of an external influence? Charismatic people can motivate others. People sometimes like something because their friends like it. Conditioning.
I believe with a strategic approach people can make themselves enjoy something more. It may not be fast or 100% reliable or sufficiently cheap, but there is a way. A rational person should try finding the best way to enjoy something, if enjoying that thing is desirable. (For example, people from Vienna meetup are going to gym together after the next meetup, so they can convert enjoying a rationalist community into enjoying exercise.)
Replies from: passive_fist
↑ comment by passive_fist · 2014-11-04T23:19:55.409Z · LW(p) · GW(p)
Charismatic people can motivate others. People sometimes like something because their friends like it. Conditioning.
Now that's slightly better, and I agree. But again, you have to ask yourself what the ultimate purpose is and if it's going to backfire or not.
For example, people from Vienna meetup are going to gym together after the next meetup, so they can convert enjoying a rationalist community into enjoying exercise.
That sounds like an interesting idea, if perhaps slightly naive. I get what the goal is: Channel the enjoyment of a rationality meeting to start exercising, then hope that after a while the enjoyment of exercise will itself act as a positive feedback loop. But then you have to ask the question: Why weren't they already exercising in the first place? And if they hope to achieve something positive by exercising, wasn't that enough to get them start exercising? It's possible that after the initial good feelings wear off ("Yay, the rationality community is exercising together!") the root causes of exercise avoidance will kick in again and dissolve the entire idea. Or worse: get them to do extremely unenjoyable exercises just for the sake of the community, which will ultimately get them to resent exercise even more than before.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-11-05T09:32:33.172Z · LW(p) · GW(p)
Why weren't they already exercising in the first place? And if they hope to achieve something positive by exercising, wasn't that enough to get them start exercising?
I think that humans usually are not strategic goal seekers. That's how an ideal rational being should be, but ordinary humans are not like that. We do have goals, and sometimes even strategies, but most things are decided emotionally or by habit.
So the answer to "why weren't they already exercising" could well be: a) Because they didn't have a habit of exercising. When you are doing something for the first time, there is a lot of logistical overhead; you must decide when and where to exercise, which specific exercises you are going to do, et cetera; whereas the next time you can simply decide to do the same thing you did yesterday. b) Because they didn't have positive memories connected with exercising in the past, so while their heads are thinking that it would be good to exercise and become more fit and healthy, their hearts try to avoid the whole thing.
If this model is correct (well, that's questionable, but I suppose it is), then the next time there is the advantage that you can follow the strategy of doing the same thing as last time, and you already have some positive memories. That could be enough for some people to change the balance, and may not be enough for others. In this specific case, we will later have experimental data.
Speaking for myself, many people I know who exercise or do sports regularly do it with their friends. If those were my friends, I would also be tempted to join. But I am rather picky about choosing my friends, and the people who pass my filter are usually just as lazy as I am, or too individualistic to agree on doing something together. The few times I went to the gym, it was incredibly boring. (I imagine having someone there to talk with would change that. Or if I just remembered to always bring a music player, perhaps with an audiobook.) I do some small exercises at home. I imagine that if I had an exercise machine at home, I would use it, because the largest inconvenience for me is having to go somewhere outside.
get them to do extremely unenjoyable exercises just for the sake of the community, which will ultimately get them to resent exercise even more than before
That would be obviously wrong, I agree. I just don't expect this to happen. But it is better to mention it explicitly.
↑ comment by Lumifer · 2014-11-04T21:35:44.765Z · LW(p) · GW(p)
Psychology has shown that there's very little you can do to make yourself 'more rational.'
Citation needed.
Not to mention that what an average person can or can not do isn't particularly illuminating for non-representative subsets like LW.
maybe LW has gone severely overboard with the instrumental rationality thing
I am not sure that is possible. Instrumental rationality is just making sure that what you are doing is useful in getting to wherever you want to go. What does "severely overboard" mean in this context?
Replies from: passive_fist
↑ comment by passive_fist · 2014-11-04T23:07:17.496Z · LW(p) · GW(p)
Citation needed.
Read Dan Kahneman's work. He's spent his entire lifetime studying this and won a Nobel Prize for it too. A good summary is given in http://www.newyorker.com/tech/frontal-cortex/why-smart-people-are-stupid Here's an excerpt:
'as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes. '
Not to mention that what an average person can or can not do isn't particularly illuminating for non-representative subsets like LW.
In fact it is; there is no substantial difference between highly educated and uneducated people when it comes to trying to control biases.
I am not sure that is possible. Instrumental rationality is just making sure that what you are doing is useful in getting to wherever you want to go. What does "severely overboard" mean in this context?
There is nothing wrong with 'making sure that what you are doing is useful in getting to wherever you want to go'. The problem is the idea of trying to 'fix' your behavior through self-imposed procedures, trial & error, and self-reporting. Experience shows that this often backfires, as I said. It's pretty amazing that "I tried method X, and it seemed to work well, I suggest you try it!" (look at JohnMaxwellIV's comment below for just one example) is taken as constructive information on a site dedicated to rationality.
Replies from: Lumifer, gwern
↑ comment by Lumifer · 2014-11-05T01:08:15.864Z · LW(p) · GW(p)
First, rationality is considerably more than just adjusting for biases.
Second, in your quote Kahneman says (emphasis mine): "My intuitive thinking is just as prone...". The point isn't that your System 1 changes much, the point is that your System 2 knows what to look for and compensates as best as it can.
In fact it is; there is no substantial difference when it comes to trying to control biases between highly educated and non-educated people.
Sigh. Citation needed.
The problem is the idea of trying to 'fix' your behavior through self-imposed procedures, trial & error, and self-reporting.
And what is the problem, exactly? I am also not sure what the alternative is. Do you want to just assume your own behaviour is immutable? Magically determined without you being able to do anything about it? Do you think you need someone else to change your behaviour for you? What?
↑ comment by gwern · 2014-11-04T23:15:59.129Z · LW(p) · GW(p)
A good summary is given in http://www.newyorker.com/tech/frontal-cortex/why-smart-people-are-stupid Here's an excerpt:
'as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes. '
Disagree. See comments in http://lesswrong.com/lw/d1u/the_new_yorker_article_on_cognitive_biases/
Replies from: passive_fist
↑ comment by passive_fist · 2014-11-04T23:26:59.760Z · LW(p) · GW(p)
I'm not talking about the bias blind spot. I agree that more educated people are better able to discern biases in their own thoughts and others. In fact that's exactly what I said, not once but two times.
I'm talking about the ability to control one's own biases.
Replies from: Lumifer, gwern
↑ comment by gwern · 2014-11-04T23:53:51.606Z · LW(p) · GW(p)
I agree that more educated people are better able to discern biases in their own thoughts and others...I'm talking about the ability to control one's own biases.
Huh? So what are more intelligent - and more educated - people doing, exactly, if not controlling their biases?
↑ comment by passive_fist · 2014-11-04T04:00:47.477Z · LW(p) · GW(p)
Thanks. That sounds a bit more like trying to get "out of the spotlight", if you get what I mean. Which I can understand. But it's possible that he's just being polite.
Replies from: Vulture
↑ comment by Vulture · 2014-11-08T18:35:43.235Z · LW(p) · GW(p)
Good point. Given that Eliezer seems somewhat prone to saying moderately embarrassing things, it makes sense that he would want to hold discussions somewhere that's not directly associated with PR-delicate organizations and communities, and where he might be better able to control how public things are.
Replies from: Lumifer
↑ comment by Lumifer · 2014-11-09T03:15:51.363Z · LW(p) · GW(p)
I hope you're not calling LW "PR-delicate"...
Replies from: pragmatist
↑ comment by pragmatist · 2014-11-09T05:49:17.848Z · LW(p) · GW(p)
I think Vulture's saying LW is "directly associated" with PR-delicate organizations. MIRI and CFAR, maybe?
Replies from: Lumifer, Vulture
↑ comment by Vaniver · 2014-11-03T21:17:54.894Z · LW(p) · GW(p)
the sequence is posted entirely on his Tumblr
It's also posted in a weird way? Despite following him when he first advertised his Tumblr, I don't see it on my Tumblr dash (but obviously following the links works).
Replies from: MathiasZaman
↑ comment by MathiasZaman · 2014-11-04T18:30:48.414Z · LW(p) · GW(p)
You can make posts on Tumblr that don't show up on the wall of your followers. Most people don't use that functionality, but I believe Yudkowsky uses it so these posts won't be reblogged. It helps center the discussion in one place (Facebook).
↑ comment by Evan_Gaensbauer · 2014-11-07T11:13:02.076Z · LW(p) · GW(p)
On the other hand, I'm kinda saddened that Eliezer appears to have given up on LessWrong
Less Wrong is not as shiny a beacon as it once was. It started fading by the time I got here. However, I'm not lamenting, because for me it remains a beacon nonetheless. For example, So8res, a.k.a. Nate Soares, still posts here regularly enough. He's a MIRI researcher. What's great about that is the flow of causality: like Luke Muehlhauser, he didn't end up posting on Less Wrong because he works for MIRI; he's working for MIRI because of what he's posted on Less Wrong. That's all inside of the last year alone. He doesn't post here as frequently, but his posts are still why I come here weekly.
Now, I don't mean to use MIRI as an applause light. Honestly, I don't know too much about their work outside of what I've gleaned from off-hand discussions here on Less Wrong. However, the folks running it sure seem interesting, and the fact that one user was able to transform his own life with Less Wrong, became the best new contributor to the site in 2014 by recording that transformation, and then went to work for MIRI is an example of what the rest of us can achieve as aspiring rationalists. Maybe Eliezer is lamenting that the rest of us aren't producing content as awesome as his, and are instead waiting for him to come back. I figure he'll come back if we double down on realizing he started Less Wrong so we could learn to be awesome, not tend to his every word, and follow through on 'going forth and creating the art'.
↑ comment by A1987dM (army1987) · 2014-11-08T20:16:56.915Z · LW(p) · GW(p)
I wouldn't be terribly weirded out if he had been writing it on say yudkowsky.net, but Tumblr of all places? What the what?
↑ comment by gwillen · 2014-11-03T21:30:13.916Z · LW(p) · GW(p)
I'm honestly not too sad about that. After The Redacted Incident, I think the community lost a lot of trust in Eliezer. (I know I did.) If he wants a space where he can moderate according to his whims, LessWrong need not be that space. Frankly I trust the mods here more than I trust him to make good mod decisions.
Replies from: jaime2000, the-citizen, solipsist, polymathwannabe
↑ comment by jaime2000 · 2014-11-04T03:30:33.317Z · LW(p) · GW(p)
Oh, be serious. I wasn't crazy about Eliezer's handling of the basilisk, either, but ubermenschen do not grow on trees. Who do we have around who is willing and able to become LessWrong's great leader now that he has left? All of the potentially strong leaders I can think of are busy running their own websites, projects, and/or communities.
Replies from: Lumifer
↑ comment by Lumifer · 2014-11-04T03:38:49.789Z · LW(p) · GW(p)
Any particular reason you feel the need for a Great Leader?
Replies from: MathiasZaman, None
↑ comment by MathiasZaman · 2014-11-04T18:38:42.378Z · LW(p) · GW(p)
I wouldn't say Less Wrong needs a single leader, but in general good communities tend to have figures that can serve as "pillars of the community." They tend to help group cohesion and provide good content. They can also serve the role of tutor for new people or by mapping out the direction a community can/should go in.
Replies from: gwillen
↑ comment by gwillen · 2014-11-06T00:15:31.309Z · LW(p) · GW(p)
I think we have some excellent pillars. For example, I see Yvain as a pillar, more than Eliezer.
Replies from: Vulture, Brillyant, MathiasZaman
↑ comment by Brillyant · 2014-11-06T00:32:06.999Z · LW(p) · GW(p)
Yeah but he's got his own blog.
Replies from: gwillen
↑ comment by gwillen · 2014-11-06T00:51:39.508Z · LW(p) · GW(p)
Yeah, and I'm much sadder that Yvain doesn't post to LW as much as he used to. I'd love to hear from him about why. (At least I'd love it if he'd post links or crosspost.)
Replies from: Douglas_Knight, Brillyant
↑ comment by Douglas_Knight · 2014-11-06T17:52:21.304Z · LW(p) · GW(p)
Yvain doesn't post to LW because he finds it too stressful. Partly this is the stress of watching the karma number after posting, but mainly the uncertainty ahead of time about whether his posts are good enough or appropriate.
Added: I was probably thinking of this comment, but it doesn't mention karma.
Replies from: MathiasZaman
↑ comment by MathiasZaman · 2014-11-06T19:42:27.017Z · LW(p) · GW(p)
That seems analogous to the reasons EY stopped posting on LW, so maybe we can form a hypothesis about what drives highly visible posters away from LW?
Replies from: Viliam_Bur, Douglas_Knight
↑ comment by Viliam_Bur · 2014-11-07T07:47:23.975Z · LW(p) · GW(p)
Writing a good article is a lot of work. When you write it for LW, you risk that (a) at the end your hard work will get downvoted, or (b) the thread will be off-topic or otherwise bad and you will have no control over it.
A solution is to post your articles somewhere else, and only submit a link with a short summary on LW. The disadvantage of this solution is that the debate is now split in two places: your blog and LW.
↑ comment by Douglas_Knight · 2014-11-06T19:53:31.084Z · LW(p) · GW(p)
It sounds different to me. I thought Eliezer didn't like the comments.
↑ comment by MathiasZaman · 2014-11-06T19:40:30.595Z · LW(p) · GW(p)
I like reading SSC as much as most people here, but Yvain doesn't guide or lead conversations on LW. This is less true for the greater Aspiring Rationalist Community where SSC posts can cause plenty of ripples across the pond.
Can you (or someone else) list some other people they see as pillars? (Genuinely asking; I have trouble noticing users being frequently insightful because of the lack of avatars.)
↑ comment by the-citizen · 2014-11-05T12:21:02.294Z · LW(p) · GW(p)
What in particular was wrong with his handling of this incident? I'm not aware of all the details of his handling, so it's an honest question.
Replies from: gattsuru
↑ comment by gattsuru · 2014-11-05T17:48:57.399Z · LW(p) · GW(p)
Most obviously, the Streisand effect means that any effort used to silence a statement might as well have been used to shout it from the hilltops. The Basilisk is very heavily discussed despite its obvious flaws, in no small part because of the context of being censored. If we're actually discussing a memetic hazard, that's the exact opposite of what we want.
There are also some structural and community outreach issues that resulted from the effort and weren't terribly good. Yudkowsky's discussed the matter from his perspective here (warning: wall of text).
((On the upside, we don't have people intentionally discussing more effective memetic hazards in the open in contexts of developing stronger ones, nor trying to build intentional decision theory traps. There doesn't seem to be enough of a causative link to consider this a benefit to the censorship, though.))
Replies from: Vulture, the-citizen
↑ comment by Vulture · 2014-11-08T18:40:41.450Z · LW(p) · GW(p)
On the upside, we don't have people intentionally discussing more effective memetic hazards in the open in contexts of developing stronger ones, nor trying to build intentional decision theory traps.
I just realized that it has become pretty low-status to spend time talking about decision-theoretic memetic hazards around here, which might be a good thing.
↑ comment by the-citizen · 2014-11-08T11:42:00.781Z · LW(p) · GW(p)
Cheers, that all seems to make sense. I wonder if the Basilisk, with its rather obvious flaws, actually provides a rather superb illustration of how memetic hazard works in practice, and in doing so offers a significant opportunity to improve how we handle it.
↑ comment by solipsist · 2014-11-03T21:41:11.875Z · LW(p) · GW(p)
What is the Redacted Incident?
Replies from: CronoDAS, None
↑ comment by polymathwannabe · 2014-11-05T16:33:19.675Z · LW(p) · GW(p)
What was the Redacted Incident?
comment by sixes_and_sevens · 2014-11-03T13:21:38.161Z · LW(p) · GW(p)
Person A is an Olympic-level athlete. He can perform amazing physical feats. The limits of his ability can be scored against some sort of metric (lap time, distance jumped, etc.), and since he's working to improve on them, his own personal limits are known to him.
Person B is of average physical fitness.
Person C has a moderate chronic illness. He struggles to perform basic physical feats, but can function independently with some difficulty.
If all three of these people were secretly transplanted into an environment with lower oxygen levels and began to experience mild hypoxia, it seems that Persons A and C would both be more sensitive to this change than Person B. Person A would notice it because he would no longer be able to perform outstanding physical feats to the level he's accustomed to. Person C would notice it because he'd struggle to carry out basic activities.
[Edit for clarity: I'm not saying that Person B would never notice this, but that he would be less sensitive to it, because his performance is higher-variance and subject to less of a "state change", and doesn't have a fine, frequently-scrutinised boundary between what he can and can't do.]
Alternatively:
Person D is a voracious infovore with high reading comprehension. She's used to grappling with precise language.
Person E is an average-level reader.
Person F has some sort of reading-related disability.
It seems that Persons D and F will be more sensitive to badly-punctuated writing than Person E. For example, Person D might be able to parse a sentence in two or three plausible ways, while Person F might not be able to parse the sentence at all.
Both of these cases involve both ends of an ability distribution being more sensitive to degradation of the environment than central cases. Are there better examples? Is this a phenomenon we actually see in the real world? If so, does it have a name?
Replies from: Lumifer, Sarunas, Artaxerxes, othercriteria, brazil84, marchdown, Azathoth123
↑ comment by Lumifer · 2014-11-03T17:56:39.712Z · LW(p) · GW(p)
I think the basic difference is that people B and E just don't care and are less likely to notice these things because they're not interested in them.
Replace your person B with a person B' who is also of average fitness but recently started a new fitness regime and has been busy quantifying himself. He will notice his mild hypoxia as soon as A.
↑ comment by Sarunas · 2014-11-03T15:10:24.614Z · LW(p) · GW(p)
A and C are much closer to the limits of their respective physical bodies than B: A due to his high motivation (he is a maximizer, while B is a satisficer; if A didn't have a motivation to maximize his results, he could do the same as B, or better), and C due to his limited physical abilities. Therefore, they have a lower tolerance for a degraded environment. In other words, if we plot physical ability on the x-axis and the expected/desired result (i.e. the result they are motivated to achieve) on the y-axis, we would probably obtain a convex function whose graph lies below the line y=x (which corresponds to pushing the physical limits of the body).
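One way to make that picture concrete (an illustrative formalization in notation I'm introducing here, not the commenter's own):

```latex
% x = physical ability, f(x) = the result the person aims for, \delta = loss of effective ability
\[ f(x) \le x, \qquad f \ \text{convex}, \qquad f(x_A) \approx x_A, \qquad f(x_C) \approx x_C. \]
\[ \text{The degradation is noticed once } x - \delta < f(x), \ \text{i.e. once the slack } x - f(x) < \delta. \]
```

Since x - f(x) is concave and close to zero at both ends of the ability range, the slack is largest in the middle, which is why B needs a bigger degradation than A or C before noticing anything.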
↑ comment by Artaxerxes · 2014-11-03T14:47:44.144Z · LW(p) · GW(p)
I'm really not seeing either of your examples, unfortunately. What's stopping the average-fitness person from noticing their times aren't as good, in the same way the Olympic-level person would notice that their times aren't as good? What's stopping him from noticing that his morning jog or whatever is tougher?
Why wouldn't the average reader have more difficulty parsing badly punctuated writing as well? Why wouldn't they be able to parse it in different plausible ways too?
I'm just not seeing it.
Edit: To go into more detail:
Person A, B and C are secretly transplanted into a low oxygen universe.
Person A notices their 200m backstroke time is consistently 4 seconds slower than usual.
Person B notices they have to take walking breaks more often than usual on their morning jog, and took 5 minutes extra to complete their usual route.
Person C has more difficulty doing basic tasks.
Person D, E and F are each given 10 badly punctuated sentences to read.
Person D finds they can parse 4 of them in different plausible ways.
Person E finds they can parse 2 of them in different plausible ways, and can't parse 2 of them at all.
Person F can't parse 4 of them at all.
I see these examples as being just as plausible if not more than yours.
Replies from: NancyLebovitz, sixes_and_sevens
↑ comment by NancyLebovitz · 2014-11-03T15:05:03.014Z · LW(p) · GW(p)
I imagine person B as someone who doesn't do formal exercise at all. If their capacity for exercise goes down, they might think vaguely that they're a little sick or getting older, but they've never been concerned with the details of how much they can do, and it's going to take a good-sized change for them to notice it.
However, I'm not sure this pattern extends to reading-- some smart people who read easily seem to have the metaphorical proof-reading/copy-editing gene.
Replies from: Artaxerxes
↑ comment by Artaxerxes · 2014-11-03T15:16:06.524Z · LW(p) · GW(p)
Well then actual fitness level is basically irrelevant, and whether someone notices the effects of the environment change depends mostly on whether or not they do any exercise.
↑ comment by sixes_and_sevens · 2014-11-03T15:03:55.120Z · LW(p) · GW(p)
In the first example, the average-fitness person probably has a lot more variance and a lot less visibility on his physical performance than the Olympian. The Olympian presumably also has a selection of meta-skills surrounding his chosen discipline, and is capable of judging when he's off his game or when he falls short of his own standards.
In the second example, well...
I don't know if you've ever gotten into the timeless identi-discussion with someone who is literate, but refuses to learn how to adequately punctuate their sentences, because they don't see the difference. A lot of people decry poor standards of reading comprehension, and my pet hypothesis is that many readers don't actually parse and evaluate sentences, but just let their eyes suck in the words and have a good guess at what it's supposed to mean.
I've never explicitly stated that hypothesis, but now that I have, it seems like a case of attribute substitution: i.e. "Parsing this sentence is hard, so I'll round it off to the nearest sentiment and assume that's what it's saying".
Replies from: Emily, ChristianKl, wadavis
↑ comment by Emily · 2014-11-04T09:39:30.332Z · LW(p) · GW(p)
If you're interested in some actual research on that hypothesis, try Ferreira for a starting point. Any of the papers on her page with the phrase "good enough" in the title will be relevant.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-11-04T10:16:41.883Z · LW(p) · GW(p)
Thanks.
↑ comment by ChristianKl · 2014-11-04T00:22:33.596Z · LW(p) · GW(p)
A lot of people decry poor standards of reading comprehension, and my pet hypothesis is that many readers don't actually parse and evaluate sentences, but just let their eyes suck in the words and have a good guess at what it's supposed to mean.
If you read a legal contract it's important to understand every word. In most cases it isn't. If you focus too much on details you can also lose context.
Years ago, when I was a forum moderator, I read forum posts in a way that let me fairly reliably re-identify a banned member who had re-registered. On LW I don't seem to have that ability to the same extent anymore. My focus is elsewhere.
You trade different ways of reading against each other.
↑ comment by wadavis · 2014-11-03T15:48:41.247Z · LW(p) · GW(p)
refuses to learn how to adequately punctuate their sentences, because they don't see the difference
If someone wanted to check their map against the territory on this one, and to determine whether the problem was with their reading ability, the language itself, or the writing's author, what advice would you give them?
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-11-03T16:08:02.568Z · LW(p) · GW(p)
(I see what you did there, if you did what I think you did. If you didn't do it, please disregard this.)
I can see two cases you're plausibly asking about here:
The case of not being able to make sense of a sentence and wanting to know whether it's you failing to read something that is well-formed, the author failing to write something that is well-formed, or the language not being able to support this content in a well-formed manner.
The case of reading a lot, and apparently understanding it, but not being sure whether you're actually doing the necessary work to understand it, because the mechanisms of that work are inscrutable.
Which of these are you asking? Or are you asking something else?
Replies from: wadavis
↑ comment by wadavis · 2014-11-03T16:47:42.681Z · LW(p) · GW(p)
I'm asking about the case of not being able to make sense of a sentence and wanting to know whether it's you failing to read something that is well-formed, the author failing to write something that is well-formed, or the language not being able to support this content in a well-formed manner.
To add context, I frequently encounter writing in legalese that I can't break down into an if/else flow chart, even though the writing should be able to be broken down into a simple logical chain. If the author is using precise language to keep things as brief as possible and I'm unable to properly parse it, it means I have found a new blind spot. My old assumption was that the authors were constrained by word count and therefore settled for insufficient explanation, or were oblivious to the ambiguous wording, which fits Hanlon's Razor better.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-11-03T18:11:24.635Z · LW(p) · GW(p)
Legal documentation has its own conventions which I am not remotely qualified to advise upon. In the event of uncertainty in legal documentation, it seems prudent to consult someone with a background in law.
(Law is an area I'm both quite interested in and very ignorant about, but I've yet to find a good on-ramp for learning how to practically interact with legal institutions and processes. It seems like legal texts should be extremely unambiguous, because they are produced by institutions who are presumably aware of the problem of linguistic ambiguity, and otherwise how are legal documents supposed to do their job? Then I remember that the same argument can be made of technical documentation, and people who produce that for a living can do a spectacularly poor job of it. Other fields presumably also have people who are bad at their jobs.)
Replies from: wadavis, ChristianKl
↑ comment by wadavis · 2014-11-03T19:49:29.553Z · LW(p) · GW(p)
To be clear, I'm using the broadest definition of legalese; in this case: design guides, building codes. Technical material recognized by the authority having jurisdiction that is not intended to be ambiguous; it is just complex. Stuff where the advice is "consult your engineer" instead of "consult your lawyer".
Replies from: gwillen, sixes_and_sevens
↑ comment by gwillen · 2014-11-03T21:35:04.092Z · LW(p) · GW(p)
I find that often this sort of writing -- technical-ish, e.g. trying to describe a flowchart or a boolean circuit in casual text, as you see in law or documentation -- has various sorts of ambiguities (e.g. issues with associativity and quantifiers) that would be obvious if you tried to transcribe it into code.
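To illustrate with a made-up example (the clause and the helper names below are invented for illustration, not taken from any real contract or codebase): the English rule "access is allowed if the user is an employee and a manager or an administrator" hides an associativity choice that code forces you to make explicit.

```cpp
#include <iostream>

// Two incompatible readings of the same English clause.
bool readingA(bool employee, bool manager, bool admin) {
    return (employee && manager) || admin;   // "(employee and manager) or administrator"
}

bool readingB(bool employee, bool manager, bool admin) {
    return employee && (manager || admin);   // "employee and (manager or administrator)"
}

int main() {
    // A non-employee administrator is allowed under reading A but denied under reading B.
    std::cout << readingA(false, false, true) << " "
              << readingB(false, false, true) << "\n";   // prints "1 0"
}
```

Prose happily lets both readings coexist; a compiler does not.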
↑ comment by sixes_and_sevens · 2014-11-03T20:50:48.345Z · LW(p) · GW(p)
OK. Gotcha.
If I understand the vocabulary of a text but the syntax is unclear, I'll assume it's badly written. If it's composed of pathologically malformed sentences, I'll assume my English skills are better than the author's. If the vocabulary is unusual, or it's a subject I'm unfamiliar with, I'll give more weight to my own ignorance being the problem.
↑ comment by ChristianKl · 2014-11-04T00:16:24.500Z · LW(p) · GW(p)
It seems like legal texts should be extremely unambiguous, because they are produced by institutions who are presumably aware of the problem of linguistic ambiguity, and otherwise how are legal documents supposed to do their job?
That assumes that the linguistic ambiguity is bad for enough stakeholders.
A lobbyist who writes an amendment for a law doesn't always want the politicians who vote on the amendment to understand what it does.
An ambiguous version might also be a compromise that allows both sides of a conflict to avoid losing.
↑ comment by othercriteria · 2014-11-03T15:31:19.758Z · LW(p) · GW(p)
A not quite nit-picking critique of this phenomenon is that it's treating a complex cluster of abilities as a unitary one.
In some of the (non-Olympic!) distance races I've run, it's seemed to me that I just couldn't move my legs any faster than they were going. In others, I've felt great except for a side stitch that made me feel like I'd vomit if I pushed myself harder. And in still others, I couldn't pull in enough air to make my muscles do what I wanted. In the latter case, I'd definitely notice the lower oxygen levels but in the former cases, maybe I wouldn't.
So dial down my oxygen and ask to do a road race? Maybe I'll notice, maybe I won't. But ask me to do a decathlon, and some medley swimming, and a biathlon? I bet I'll notice the low oxygen on at least some of those subtasks, whichever of them that require just the wrong mix of athletic abilities.
For the reading one, I can believe this if I'm doing some light pleasure reading and just trying to push plot into my brain as fast as possible. But if I'm reading math research papers, getting the words and symbols into my head is not the rate-limiting step. If there are some typos in the prose, or even in the results or proofs, it doesn't make much of a difference. There might be some second-order effects--when I try to fill in details and an equation doesn't balance, I can be less certain that the error is mine--but these are minor.
So maybe sharpen your claim down to unitary(-ish) abilities?
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-11-03T15:52:08.271Z · LW(p) · GW(p)
Do you have any suggestions for such unitary(ish) abilities?
Replies from: othercriteria
↑ comment by othercriteria · 2014-11-03T17:57:41.031Z · LW(p) · GW(p)
No idea. Factor analysis is the standard tool to see that some instrument (fancy word for ability) is not unitary. It's worth learning about anyways, if it's not in your toolbox.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-11-03T18:23:59.277Z · LW(p) · GW(p)
It is already in my toolbox, but I'm not sure how it helps figure out if this phenomenon is present in the real world. It's still not obvious to me that, if the phenomenon does exist, it would survive when reduced to a unitary ability. I can think of a couple of mechanisms by which it may be more prevalent in a multivariable scenario.
↑ comment by brazil84 · 2014-11-05T21:38:11.733Z · LW(p) · GW(p)
In medical literature, it's not uncommon to read about "U-shaped curves."
Anyway, I am reminded of this:
http://www.youtube.com/watch?v=Y3lQSxNdr3c&t=0m24s
Although I suppose it's pretty similar to the "Goldilocks Principle."
↑ comment by Azathoth123 · 2014-11-04T05:05:12.424Z · LW(p) · GW(p)
The actual key difference in the second example is that people D and F are using their System 2, whereas person E is relying on his System 1.
comment by Dias · 2014-11-04T00:21:39.466Z · LW(p) · GW(p)
I recently asked about the ethics of writing articles explaining how people applied the dark arts in practice. Hopefully, such an article would help people resist those dishonest approaches more than it would aid people in employing them.
So here you go: How to Pitch a Growth Stock: Cognitive Bias Edition. I'm not sure of what LW thinks about cross-posting in general, so here is just a highlight:
Replies from: Gunnar_Zarncke
The key principle here is the conservation of conservativeness. You want an estimate for them that is both very large and sounds conservative. To do this, you take advantage of scope insensitivity and arbitrage between the TAM stage and the company-specific stage. By making the company-specific stages (market share, profit margin, valuation) sufficiently conservative sounding, you can get away with an aggressive TAM [Total Addressable Market] estimate while keeping the whole thing sounding conservative. Scope-insensitivity means you can increase the TAM estimate at a lower cost of conservativeness than you can the company-specific elements, so there are gains from trade.
So once you’ve multiplied your TAM, market share, profit margin and valuation, you come up with an estimate for what this company could be worth in the future. However, you now deny that this is an estimate. Instead, it’s just an idea of the size of the market – you don’t actually expect they’ll reach it. This explicit denial protects you against any accusations of over-optimism, but you’ve successfully primed your audience on a really high number. If market sentiment is a battle between greed and fear, you’ve helped the greed side.
And a crucial subtlety – that valuation that you didn’t make is what the stock might be worth in the future. Because of the time value of money, you would need to discount that back to get to a current valuation. Since it credibly might take 10 years for the market to mature, even with a moderate 10% discount rate your valuation should really take a 61% hit. But by denying it was a valuation, you’ve avoided this step.
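A quick back-of-the-envelope check of that arithmetic (a Python sketch; the TAM, share, margin, and multiple are made-up illustrative numbers, not from the linked post):

```python
# Toy version of the pitch: an aggressive TAM times "conservative" company-specific
# factors, followed by the discounting step the pitch quietly skips.
tam = 100e9            # hypothetical total addressable market, $100B
market_share = 0.05    # "conservative" 5% share
profit_margin = 0.10   # "conservative" 10% margin
pe_multiple = 15       # "conservative" valuation multiple

future_value = tam * market_share * profit_margin * pe_multiple
print(f"Implied future valuation: ${future_value / 1e9:.1f}B")

years, discount_rate = 10, 0.10
present_value = future_value / (1 + discount_rate) ** years
haircut = 1 - present_value / future_value
print(f"Present value: ${present_value / 1e9:.1f}B ({haircut:.0%} haircut)")  # ~61%, matching the post
```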
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-11-04T22:42:26.519Z · LW(p) · GW(p)
There are quite a few cross-posts - at least in Discussion. If it's on-topic and you clearly indicate the cross-post (linking to it), it appears to be OK. Note that I can't judge your post, as I'm neither deeply interested nor versed in this topic. Length and topic do seem OK though.
comment by palladias · 2014-11-03T14:33:28.001Z · LW(p) · GW(p)
I built my first arduino project this month! I was Alina Starkov, the Sun Summoner, for Halloween, so I built accelerometer controlled LED gauntlets so I could turn the lights at my wrists on and off with gestures.
The instructable I wrote is here.
I had an enormous amount of fun, and the arduino system (I was using LilyPad, since I needed it to be sewable) was very beginner friendly. Glad to answer questions/provide encouragement!
Oh, and here are pics of the final costume. (I ran into a HJPEV at my Halloween party)
comment by advancedatheist · 2014-11-03T14:58:16.585Z · LW(p) · GW(p)
You still have time to register for the END DEATH Cryonic Convention in Laughlin, Nevada, this coming weekend:
comment by Artaxerxes · 2014-11-03T12:51:41.433Z · LW(p) · GW(p)
A Story on MIRI in the Financial Times.
Luke wrote a post on MIRI's blog acknowledging the story and making a few clarifications.
FAI concerns seem to be getting more and more high profile lately. MIRI, too, seem more competent now than ever, especially when compared to how they were only a few years ago. Am I alone in thinking these kinds of thoughts? Do others feel like these trends will continue?
Replies from: None↑ comment by [deleted] · 2014-11-03T15:31:19.564Z · LW(p) · GW(p)
Let the hype curve for certain recent advances in computer technology flatten out; the references will vanish again.
Replies from: Artaxerxes↑ comment by Artaxerxes · 2014-11-03T15:36:50.732Z · LW(p) · GW(p)
Plausible, but which certain advances are you thinking of? Do you think what you're saying is likely? Does that mean next time there are advances, the references will start up again?
Replies from: None↑ comment by [deleted] · 2014-11-04T04:39:15.425Z · LW(p) · GW(p)
I was specifically thinking of the preliminary successes with autonomous vehicles Google has been having, a few high-profile walking robots, and some natural language parsers. Seeing as similar hype clusters have occurred in the past, I would expect them to recur in the future.
Replies from: None↑ comment by [deleted] · 2014-11-04T09:30:40.900Z · LW(p) · GW(p)
Why do you think these advances will "flatten out"?
Replies from: None, gattsuru↑ comment by [deleted] · 2014-11-05T14:48:22.555Z · LW(p) · GW(p)
I was referring to the hype about them. When something's new, it's the subject of all kinds of breathless pronouncements about how it will utterly change the world - then, when it enters actual use, people find all the pitfalls and limits that it has in practice but that the abstract concept of it does not, and get disaffected with it. Then it just kind of becomes part of the background, not really noticed.
↑ comment by gattsuru · 2014-11-05T17:56:40.165Z · LW(p) · GW(p)
Some of these advances are also nearing the end of low-hanging fruit, most obviously image recognition. We're quickly approaching human levels for simple problems, and while there's a massive amount of space for optimization and better training, these aren't likely to be newsworthy in the same way.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-05T19:37:19.518Z · LW(p) · GW(p)
The link still suggests that humans are much better.
I don't see how better-than-human image recognition won't provide newsworthy stories.
We are still a long way from security cameras in companies simply identifying every person who walks around via facial recognition.
Questions such as whether a school or university is allowed to track attendance rates via facial recognition software will produce social debates.
Evernote does a bit of image recognition for documents, but aside from that I haven't used any computer-guided image recognition for a while.
comment by Ritalin · 2014-11-08T23:34:54.911Z · LW(p) · GW(p)
It just occurred to me: is there such a thing as Friendship Research? There's a lot of research on eros, sexuality and romance, and on family relations, storge, and on general altruism, agape, but what about plain old Friendship/Phileo?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-11-09T05:20:53.512Z · LW(p) · GW(p)
Apparently, yes, there is:
http://amityjournal.leeds.ac.uk
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-09T15:37:36.947Z · LW(p) · GW(p)
I don't see how papers in the second journal with titles like "How partners’ temptation leads to their heightened commitment: The interpersonal regulation of infidelity threats", "Passion for life: Self-expansion and passionate love across the life span", "Young adult romantic relationships in Mainland China: Perceptions of family of origin functioning are directly and indirectly associated with relationship success " and "A dynamic state-space analysis of interpersonal emotion regulation in couples who smoke " are about plain friendship.
All those titles are from the latest issue, and none of them sounds to me like a focus on plain friendship.
On a further note, the fact that the first answer points to a journal that isn't about normal friendship illustrates that Ritalin's general question hits a deeper point.
Replies from: Ritalin↑ comment by Ritalin · 2014-11-10T01:27:21.560Z · LW(p) · GW(p)
Further research among the second journal's older issues shows that it treats at least two of the other "loves" (romance, family), and support networks in a general sense, but in five issues I've yet to find a single article about friendship as such. The one time it's mentioned in a title, it's in relation to romance: "Creating positive out-group attitudes through intergroup couple friendships and implications for compassionate love."
It's like they take friendship for granted!
comment by polymathwannabe · 2014-11-05T14:42:20.893Z · LW(p) · GW(p)
Years ago a friend let me try his Zelda videogame to have a chance to poke unabashed fun at my clumsiness with console controls, but had a funnier time with my in-world cluelessness. At one moment, I was trapped in a cave whose exit was too high for walking, and I was armed only with a ranged weapon. Atop the exit was a ladder. I was lost. Exasperated, my friend showed me how to get out of there. Never in a million years would I have deduced that I was supposed to shoot the ladder so it would fall to the floor and let me climb to the exit. I was about 23 years old.
More recently, I helped my boss install some applications on Android tablets. I had (and have) never received formal training in the Android OS, and after testing several applications I ran into the problem of how to close them. I ended up developing the habit of going into the task manager to manually terminate any program I wanted to close. It would never have occurred to me that going into the open programs list and swiping them off the screen would close them, which I only learned two years later by accidentally watching someone else do it.
Programmers and I have clearly different ideas on what is intuitively obvious and what is not, but maybe it's just me being a clueless 1980s dinosaur. Opinions?
Replies from: Lumifer, TheOtherDave, philh, hyporational, Baughn, ChristianKl, pianoforte611↑ comment by TheOtherDave · 2014-11-05T19:56:38.485Z · LW(p) · GW(p)
IME, "intuitively obvious" is a red herring for any halfway interesting interface. This was best captured by a coworker of mine decades ago about a feature: "It does what you expect, but you have to expect the right things."
What I strive for in an interface is consistency (both internal and with existing conventions), which maximizes the chances that someone will already have learned the relevant interface conventions (or trained their intuitions, if you like).
↑ comment by hyporational · 2014-11-05T15:14:18.067Z · LW(p) · GW(p)
I have an intuition that there's a trade-off between intuitiveness and efficiency. The most efficient ways, e.g. hotkeys, to use an application are usually not the most intuitive, but I'm glad that they exist in parallel with the more intuitive control features and the best apps employ both approaches.
I usually find apps that try to be both maximally efficient and maximally intuitive at the same time overly simplistic.
↑ comment by Baughn · 2014-11-06T01:33:19.608Z · LW(p) · GW(p)
Swiping applications off the screen doesn't actually close them. It just removes them from the list.
The Android OS transparently closes applications based on memory pressure. You're never supposed to have to do it yourself, and it'll transparently reopen them to their previous state based on persistence files.
It usually works. Of course, sometimes apps get it wrong.
Replies from: ChristianKl, bogus↑ comment by ChristianKl · 2014-11-06T14:20:33.185Z · LW(p) · GW(p)
It does seem to send a message to the app, which the app can listen for with "onTaskRemoved()", and the default behavior is closing the app. If something really wants to close an app, the app can do its cleanup in "onDestroy()". Because onDestroy() doesn't get called directly, it's up to the app to decide whether it wants to close.
My self-written app, which doesn't do anything specific to handle this case, closes automatically based on what the console says.
↑ comment by bogus · 2014-11-06T03:19:40.254Z · LW(p) · GW(p)
Swiping applications off the screen doesn't actually close them. It just removes them from the list.
I think this is not actually true. The OS can close applications behind your back, but swiping an application off does remove it from memory, quite reliably. Things are done this way because Android devices have comparatively limited RAM and no swap space, so the system must proactively avoid memory pressure. This state of things might improve somewhat as recent Linux versions have added things like swap-to-compressed-memory and volatile memory that help save on RAM usage.
↑ comment by ChristianKl · 2014-11-05T17:27:20.925Z · LW(p) · GW(p)
As far as the Zelda game goes, seeing the ladder means that if you can't get out of the cave, the ladder is probably the solution. From there you can think of possible ways to interact with it. Given the tools at your disposal, trying to shoot it with the bow becomes one thing to try.
As far as closing programs goes, the obvious first step is to search for a list of open programs. Then you try the various ways you can interact with them. What does swiping do? What does long-clicking do?
In general, there is no closing of applications in Android in the way that exists on Windows. The average user is not supposed to need that functionality. Android applications are supposed to be designed in a way that doesn't need the user to close them.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-11-05T20:05:31.467Z · LW(p) · GW(p)
I am highly resistant to mouse gestures. I prefer to use keyboard combinations for routine tasks.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-06T14:25:00.794Z · LW(p) · GW(p)
When it comes to routine tasks, it's often useful to Google around. If you don't find an easy answer, posting a question on Superuser.com usually gets you the answer very fast.
Googling "[program name] cheat sheet" is also useful for getting to know the keyboard shortcuts for various routine tasks.
↑ comment by pianoforte611 · 2014-11-06T18:44:50.261Z · LW(p) · GW(p)
You would probably find http://store.steampowered.com/app/26800/ either extremely frustrating or very rewarding, since all of the solutions are very opaque and require you to get pretty creative with things to try.
I don't think that swiping is obvious either, but I just googled "how to close an app" and voila.
Also, Portal 2 is another puzzle game with very cleverly hidden solutions.
comment by mesolude · 2014-11-03T21:17:36.174Z · LW(p) · GW(p)
I've recently begun to experiment with alcohol for entertainment. While intoxicated, I attempt to retain my mental control despite the handicaps, as a challenge in rationality. This has led me to observe my thinking patterns while sober more often--to a hypothetical superrational being, humans in the best scenario must seem at least as impaired as those beings would in their version of drunkenness. Some of the things I'm hoping to test are how much my abilities to analyze logical propositions, assign probabilities to various outcomes, or determine the choice that maximizes utility decline.
While testing my physical capabilities is straightforward (walking a line, raising one foot and counting), testing mental capabilities is much harder, and I'm struggling to think of tests that are simple enough to self-administer in a handicapped state and produce results that I can analyze then or later. This will help me answer the question: how scratched can the lens be before it can no longer see its flaws? Any suggestions for tests would be appreciated.
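One low-tech option (a hypothetical sketch, not a validated instrument): a short, fixed batch of timed mental-arithmetic items with the score and time logged to a file, so sober and intoxicated sessions can be compared later. In Python, something like:

```python
# Crude self-administered test: timed mental arithmetic with a logged score.
# Not a validated psychometric instrument; just a way to get comparable numbers
# across sober and intoxicated sessions.
import random
import time

def arithmetic_session(n_items=20):
    correct = 0
    start = time.time()
    for _ in range(n_items):
        a, b = random.randint(12, 99), random.randint(12, 99)
        answer = input(f"{a} * {b} = ")
        if answer.strip() == str(a * b):
            correct += 1
    return correct, time.time() - start

if __name__ == "__main__":
    correct, elapsed = arithmetic_session()
    # Append results with a timestamp so sessions can be compared later.
    with open("impairment_log.csv", "a") as f:
        f.write(f"{time.strftime('%Y-%m-%d %H:%M')},{correct},{elapsed:.1f}\n")
    print(f"{correct}/20 correct in {elapsed:.1f} s")
```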
Replies from: Lumifer, ChristianKl, hyporational↑ comment by ChristianKl · 2014-11-03T21:50:08.421Z · LW(p) · GW(p)
http://www.quantified-mind.com/ provides a good test suite for mental tests.
↑ comment by hyporational · 2014-11-04T14:30:27.706Z · LW(p) · GW(p)
I've seen worse excuses to get drunk :)
I recently did some sim racing while mildly intoxicated. I'm fairly consistent with certain car and track setups, so my performance doesn't vary by many tenths of a second per lap. Surprisingly, getting buzzed seemed to improve my performance by several tenths for about half an hour, and after that it plummeted to fairly embarrassing levels. I've felt this temporary performance boost in other tasks as well, but wouldn't have expected to measure it in a difficult motor task requiring quick reflexes and good eye-hand coordination.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-11-04T15:32:51.007Z · LW(p) · GW(p)
I played one of my best pinball games ever when I was mildly ill. I had enough energy to play, but not enough energy to foul myself up.
Replies from: hyporational↑ comment by hyporational · 2014-11-04T15:53:22.523Z · LW(p) · GW(p)
Hmmh. I definitely feel energetic at first while drinking alcohol. I wonder if being sick could be comparable to being intoxicated in the sense that both are emergencies for the body, and the body responds by releasing stress hormones, making you better at tasks that straightforwardly improve survival, like some motor tasks for instance. This would improve performance until you cross the threshold where the direct bad effects of being sick or poisoned win.
comment by iarwain1 · 2014-11-03T16:26:49.063Z · LW(p) · GW(p)
I enjoy taking long walks outside, but it's starting to get cold out. I'd like to continue my walks, but I need better protective gear.
People who live in cold climates: How do you dress up to stay warm for long periods outside when it's freezing / windy / snowing? What advice would you give for choosing appropriate clothing? Any specific brands you'd recommend? Any links to guides for choosing appropriate clothing?
My area (Baltimore, MD) doesn't usually get much colder than around 0 degrees Fahrenheit even with the wind chill, so I don't need the type of gear that really cold climates require.
Replies from: wadavis, Metus, zedzed, Lumifer, hyporational, ChristianKl, L29Ah, kalium, BrassLion, Strangeattractor, James_Miller↑ comment by wadavis · 2014-11-03T17:43:53.860Z · LW(p) · GW(p)
If you want to be comfortable for an extended period the key is to have insulation everywhere.
Find boots that keep your feet dry and warm, use thick socks. You can cheap out on everything but boots, buy good boots.
Get two layers on your legs, long underwear and pants might cut it, but ski pants make a night and day difference.
Find a coat that is not drafty, if it is not warm enough, layer sweaters and long sleeve shirts until it is.
Toque (I've heard them called beanies) and scarf. Always cover your head and neck.
Gloves or mitts: go overboard on these; nothing ruins your fun like cold hands. Wool is always warmer than it looks.
I touched on layering a few times; if you are not familiar with it, it is the secret to staying warm (and comfortable: if the day warms up you just shed layers). Layers trap warm air and become more than the sum of their parts. For your core you don't need high-quality attire, just something to block the wind and a few layers to trap air.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-11-04T11:28:33.060Z · LW(p) · GW(p)
Seconding much of this advice, especially high-quality attire for the feet and the head. See also Suvorov's famous saying: "keep your stomach hungry, your head cool, and your feet warm." Russian flappy "Ushanka" hats (especially from real fur) are amazing. I am still trying to figure out a good compromise between hand dexterity and warmth; probably some high-tech skiing gloves exist for this.
Qualifications: once visited relatives in Novosibirsk for Christmas.
edit: re: gloves: you know, it occurs to me that a simple thing you can do is have a kind of combination glove/mitten, where you have a regular thin glove but with an outer fur mitten pocket sewn around it, so you are free to take the glove out of there if you need to manipulate something, or keep the gloves in the mitten for warmth if you just need to grasp something, like ski poles or public transport handlebars.
I wonder if anyone invented that.
Or, you know, just get mittens and gloves.
Replies from: wadavis↑ comment by wadavis · 2014-11-04T15:12:07.499Z · LW(p) · GW(p)
These are the mittens you are looking for.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-11-04T15:23:05.423Z · LW(p) · GW(p)
Yup, thanks. This seemed like the kind of obvious thing someone would already patent.
↑ comment by Metus · 2014-11-03T16:34:13.066Z · LW(p) · GW(p)
0 degrees Fahrenheit
For our readers who like to use SI units: That is about -17.7°C
The trick to surviving in colder climates is layering. T-shirt plus shirt plus pullover plus a good winter jacket should do the trick. Some people like to layer trousers, but for me a good pair of jeans is enough. Look for good winter boots, as feet lose a lot of heat. Cover your head with a hat, wear a scarf. Experiment with these things, as there is comfort and aesthetics to be considered. Wear gloves or start getting used to walking with your hands in your pockets. If you do wear gloves, take them off to shake someone's hand; anything else is extremely rude.
Replies from: Luke_A_Somers, othercriteria↑ comment by Luke_A_Somers · 2014-11-03T19:47:28.489Z · LW(p) · GW(p)
Long underwear is less ridiculous than actually layering trousers.
Don't forget to layer your socks! Feet are really important.
Replies from: Username↑ comment by Username · 2014-11-03T22:58:50.560Z · LW(p) · GW(p)
Pajama pants are an effective, more socially normal substitute for long underwear underneath jeans or trousers.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2014-11-03T23:56:56.393Z · LW(p) · GW(p)
where do you live that long underwear would come up in such a situation as to be "socially abnormal"?
Replies from: BrassLion↑ comment by othercriteria · 2014-11-03T17:53:53.129Z · LW(p) · GW(p)
Some people like to layer trousers
A simple way to do this is flannel-lined jeans. The version of these made by L.L. Bean has worked well for me. They trade off a bit of extra bulkiness for substantially greater warmth and mildly improved wind protection. Random forum searches suggest that the fleece-lined ones are even warmer, but you lose the cool plaid patterning on the rolled-up cuffs.
Replies from: fezziwig↑ comment by zedzed · 2014-11-03T18:37:41.430Z · LW(p) · GW(p)
One approach is to acclimate. I live in central NY and walked home from school (~30 minutes) every day of the year in t-shirt and jeans. My body adapted to the cold so, even though my ears and, to a lesser extent, fingers were being frozen off, I was sufficiently warm.
The one thing you don't want to do is overdress so you wind up sweating. In particular, you're going to want to wear less than you would if you were just standing around outside. If you're going to be at different levels of activity, then layering is a must so you can layer down before the activity and layer up after. If not, layering is still a good idea.
Boots should be insulated, waterproof, and thick-soled. Mittens have less surface area than gloves and therefore keep hands warmer. Hats should cover ears. Pants should go most of the way down the boots and belong on the outside, so that if you walk through snow, it doesn't get caught on top of the boot and fall in; they also generally benefit from being waterproof. Avoid cotton (it absorbs and holds onto moisture and loses all insulative properties when wet).
This is likely overkill. You are going from indoors to plowed streets to indoors. There are things that you must do if you're going to survive multi-day winter camping trips, but "put on things to cover parts that are getting cold" is all you really need.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-11-03T19:48:59.624Z · LW(p) · GW(p)
The important thing about avoiding sweating is that you need to notice you're too warm and unbundle just a little before you get wet. Once you're sweating, it's too late and you're kind of screwed.
↑ comment by Lumifer · 2014-11-03T17:42:55.726Z · LW(p) · GW(p)
You're overthinking it -- just get a jacket.
If you want to geek out about stuff like that :-) the recommendations depend on how heavily you sweat, and so on how good your clothing must be at getting rid of excess moisture produced by your body. The usual recommendations here go along the lines of "don't wear cotton" (aka "cotton kills") and "have a breathable shell".
In the case of Baltimore, if you don't anticipate walks in the rain, get a fleece and a down jacket, zip up or unzip as needed. If you do, you'll need a waterproof shell (Gore-Tex or equivalent). Or an umbrella.
↑ comment by hyporational · 2014-11-04T15:11:21.966Z · LW(p) · GW(p)
Simple, durable and sufficiently stylish, good from -10 down to -40 Celsius if I'm moving:
- wool coat, long so you don't necessarily need long underpants
- thick wool sweater with buttons, t-shirt or shirt with collar underneath
- loose leather mittens with wool lining, fingerless gloves underneath
- wool cap that reaches the bottom of your ear lobes, wool scarf
- slightly thicker jeans than usual, no long underpants needed with long coat
- thick leather boots with plenty of room for your toes, wool socks on top of normal socks if really cold
↑ comment by ChristianKl · 2014-11-03T23:24:13.026Z · LW(p) · GW(p)
Dressing warm enough depends on your own thermoregulation. If you frequently feel cold, doing sports can help you in a natural way to feel warmer.
Feeling warm enough is quite valuable. In addition to simply feeling bad, feeling cold leads to bad body language that can make you come across as closed.
When it's really cold, I have found long underwear to be important. A jacket simply does nothing against losing heat through your legs.
↑ comment by L29Ah · 2014-11-04T21:39:10.459Z · LW(p) · GW(p)
Polartec as base/insulating layers all over the body; anything {wind,water}proof or +1 layer on top of them if I'm planning to go slow; a vapor-permeable (like, 15 l/m²/day) waterproof membrane if rain or powerful wind is probably an issue. It does the job better than wool as it doesn't get wet. Balaclava on my head. // Russia/St. Petersburg here.
↑ comment by kalium · 2014-11-04T05:43:07.851Z · LW(p) · GW(p)
Layering is good, but it's much easier to apply to the torso and arms than the legs. So a coat that goes at least down to your knees is very handy. I also recommend wool socks and mittens, since unlike many fibers wool is just as insulating when wet. Source: used to live in Boston.
↑ comment by BrassLion · 2014-11-04T04:32:39.966Z · LW(p) · GW(p)
American Northeast-proof: layers, as other posters have said. My strategy is, from bottom to top: sneakers (I don't have boots at the moment), thick socks, long underwear or pajama bottoms, pants, T-shirt, sweater or sweatshirt, waterproof winter jacket, balaclava, beanie or other warm and flexible hat, hood over the hat. This is enough to get you through the coldest day of the year almost everywhere people live, but since I mostly walk between heated buildings, this lets me strip down to long pants and a T-shirt if I need to.
↑ comment by Strangeattractor · 2014-11-06T23:10:53.256Z · LW(p) · GW(p)
If it's windy out, you might want to cover your face with something, such as a scarf.
When choosing clothing, pay attention to what type of fabric it is made of. You want something that is warm, but also that can dry quickly. If you get snow on you, it will melt at some point, even if you can brush most of it off. Twill is pretty good for pants and dries faster than denim. Knit fabrics like yoga pants aren't warm enough to be used in the freezing cold by themselves, but they can be ok as a layer underneath. Polartec is a nice fabric for hats. It is soft and warm and breathable and dries quickly. A waterproof and windproof outer layer on a jacket helps keep out the wind.
You might want to have more than one set of hat and gloves so that one set can dry while you use the other set. Having a place set to dry them helps, even if it's just a coat peg or hanger in a closet.
It helps to anticipate just what conditions you are going to face while you are out, and prepare for that. There won't necessarily be just one approach needed. If you are going to be out both during the day while it is sunny and at night while it is cold, you might need to take more clothes with you. It might be as simple as an extra set of gloves that are warmer, to put on at night.
Checking the weather prediction, and thinking through in your mind all of the environments you will encounter, and at what times of day, can help with clothing choice.
For footwear, traction matters, as does how much maintenance is required to deal with the salt from the roads.
If you will be spending time sitting outside, much warmer clothing is required. Snow pants are helpful for this.
Snow pants and ski jackets sold at ski hills, or at stores that have skiers as clientele, are usually warmer and of higher quality than ones found in department stores. They are designed for people who spend an entire day outside in the cold.
Replies from: kalium↑ comment by kalium · 2014-11-07T08:47:50.193Z · LW(p) · GW(p)
Twill is pretty good for pants and dries faster than denim.
Denim is just a specific type of twill that's made from cotton. Fiber type is generally more relevant than how it's woven.
Replies from: Strangeattractor↑ comment by Strangeattractor · 2014-11-07T14:49:59.063Z · LW(p) · GW(p)
Ah, thank you for pointing that out. I think maybe the word I meant was chino. Which is also a twill made from cotton, but using finer threads. Or so Google tells me.
↑ comment by James_Miller · 2014-11-03T23:49:54.197Z · LW(p) · GW(p)
Change yourself so you will better tolerate the cold. At the end of your showers lower the water temperature as much as you can stand for a few minutes. Keep doing this until you can handle a shower of just cold water.
comment by pan · 2014-11-03T15:31:46.949Z · LW(p) · GW(p)
I'm going to be graduating with my PhD in physics (theory) this coming spring and am beginning to look for jobs.
Any tips? Any mistakes you made when looking for jobs that you can tell me about? For those already with jobs in the technology industry: if you could go back in time what would you change about the way you searched for jobs?
Less likely but still worth asking: if you happen to know of a job in either the Baltimore/Washington/Virginia area or in the Bay area that I might be qualified for don't hesitate to tell me about it.
Replies from: Punoxysm, lmm, iarwain1, None↑ comment by Punoxysm · 2014-11-03T19:56:45.465Z · LW(p) · GW(p)
First of all, don't neglect your university's resources. Network like hell. Find out where other recent graduates ended up. Ask all professors who will give you the time of day if they have industry connections they would refer you to. Go to the career center. Go to career fairs. Print out tons and tons of resumes.
Speaking of resumes, what are your skills other than theoretical physics? And how wedded to doing physics in your job are you? If you can reasonably put R or MATLAB or even SQL on your resume, let alone proper programming languages or projects, you'll be opening up worlds of opportunities as an analyst or data scientist. Learn about how to use LinkedIn. Optimize your resume for visibility to keyword-based recruiters.
I strongly recommend a job search approach where you try to get as many responses as you can, THEN prune down. You'll get interviewing experience and you'll get to see some options you might not have considered.
Replies from: pan↑ comment by pan · 2014-11-04T01:24:44.577Z · LW(p) · GW(p)
I'm not at all wedded to doing physics in my next job, I'd be happy to switch to something more engineering/computer based or even (slightly less so) financial.
Skills wise I try to stress that I have multiple first authored publications (so I'm decent at writing) and several presentations at conferences and to funding agencies (good at speaking). Outside of that though I am very proficient at Mathematica and have what I'd call 'hobbyist' knowledge of python (I can write small scripts and programs, use libraries like SciPy).
This leaves me in a spot where I'm almost qualified for data science positions but not quite what they're looking for because I don't have enough programming experience.
Thanks for the tips, I hadn't thought about approaching other professors besides my advisor for networking purposes.
Replies from: Punoxysm↑ comment by Punoxysm · 2014-11-04T02:57:36.991Z · LW(p) · GW(p)
Okay. Sounds like you should consider finance/quant positions (distinguish the ones that expect C++ knowledge from those that are looking for the math background), technical writing, data science, and maybe McKinsey style consulting/analyst positions (lots of companies have internal positions like this, as do VC firms).
You have a while, so you could easily give yourself a crash course in SQL and bolster your python, which would put you into the "good at programming for a non-programmer" field in most people's estimation.
I mention consulting because it involves a lot of writing and presenting, you'll learn a lot about business, and it opens up tremendous career opportunities. If you have kind of a workaholic personality, it could be a good decision (but if travel and stress and unbreakable deadlines aren't your thing, maybe steer clear). Similar positions internal to companies are lower-stress but lower-opportunity. Your degree is definitely an asset in applying, too.
If you're a citizen and don't mind it, the department of defense consulting complex (MITRE is an interesting company) might be worth looking at.
↑ comment by lmm · 2014-11-03T19:39:04.476Z · LW(p) · GW(p)
My experience is that the salary for your first job is more important than you think, because future salaries are always anchored off your previous one. So for the first job in particular it's worth going for a less nice but higher paying job (also, hedonic treadmill).
Keep your applications targeted. I'll normally apply to less than 10 places, but each will be one where I've read about the company and tailored my cover letter. I've found this more effective than spamming out my CV to a lot of places.
I've had success with and without recruiters, so I can't really say one way or the other there. That said, the best return on effort when looking for tech industry jobs is definitely Hacker News' monthly "who's hiring" thread.
Some companies will be shockingly unreasonable. Just be prepared for that to happen, and respond appropriately; don't assume there's some kind of EMH in play where anything a company does has to be sensible.
Get two offers. Don't stop looking just because you have an offer or two. You have a lot more leverage when you have multiple offers.
Trust your gut. If a place feels unpleasant, it probably is unpleasant. If the person who interviews you rubs you the wrong way, the company culture probably will (this is not true for initial phone screens, which tend to be done by someone unrelated to anything). Make sure you speak to the person who's going to be your direct manager - how you get on with them is probably going to have a much bigger impact on your workplace happiness than anything else.
Replies from: None, Nornagest↑ comment by [deleted] · 2014-11-03T21:14:02.332Z · LW(p) · GW(p)
Oh my goodness are you doing it wrong. Your next employer will ask what your last salary was. Yes, of course they will ask. But woe to you if you actually answer! You have much to learn young padawan.
- Employer: "So how much did you make at your last job?"
- Candidate: "My salary expectation is __."
- Employer: "Oh I see what you did there. Well how about ..."
- (negotiation starts here)
↑ comment by gwillen · 2014-11-03T21:37:15.935Z · LW(p) · GW(p)
Unfortunately it's often not that simple, and it frequently requires Advanced Techniques. (E.g.: The recruiter is pushy and insists on a salary, and you have to use verbal judo of some kind. OR some piece of technology insists on a salary in a form, and won't let you proceed without entering one. Etc.)
Replies from: None↑ comment by [deleted] · 2014-11-03T23:55:03.401Z · LW(p) · GW(p)
This is true, although the lesson I would draw is to build up your personal network so that you're not going through recruiters to get jobs, and specialize enough that you're not directly comparable with typical other candidates.
By the way, funny having this conversation with you! See you on friday ;)
Replies from: gwillen↑ comment by Nornagest · 2014-11-03T19:51:40.461Z · LW(p) · GW(p)
It is possible to de-anchor salary expectations from your old job. You just need a good excuse. Moving to an area with a high cost of living -- like, say, Silicon Valley -- is a good excuse. So is "I was working for a startup and most of my job's expected value was locked up in equity".
"My last company systematically underpaid everyone" isn't, even if your last company systematically underpaid everyone. It may be true, but the people interviewing you won't usually bother to find out, and it sounds bad either way.
↑ comment by iarwain1 · 2014-11-03T15:38:46.605Z · LW(p) · GW(p)
There are problems getting a physics job around the Washington area? I'd think with NASA, NSA, DoD, several large research universities (including the Johns Hopkins Applied Physics Lab), and all the large government contractors (Lockheed Martin in Bethesda, MD, etc.) it would be relatively easy to find something.
Replies from: pan↑ comment by pan · 2014-11-03T15:54:58.837Z · LW(p) · GW(p)
The problem with many of the government labs is that they want post docs and not employees, and I'd rather just skip that and start as an actual employee somewhere.
In addition, many of the places I've applied to (many of which you listed) have very long application processes (months), which means I'm in the dark as to whether I'll get zero offers or an offer from every place to which I applied. Therefore I'd like to be cautious and cultivate as many options as possible.
Lastly, I tend to get into situations like these (ones with big decisions and many unclear options) and end up realizing in retrospect that there were more interesting opportunities available than the one I took, but that I panicked and didn't properly explore the options. So I'm trying to make a serious effort to look for and apply to 'out of the box' employment options.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-11-03T16:02:04.880Z · LW(p) · GW(p)
Have you thought about big bucks in Si-valley or finance? I kind of envy physicists because I feel like they have the kind of math skills that lets them figure out anything from first principles.
Replies from: pan, Lumifer↑ comment by pan · 2014-11-04T01:50:01.343Z · LW(p) · GW(p)
I'd be happy to work in Silicon Valley or finance, and I've applied to the big ones like Google and Microsoft, but it's kind of tough to find companies to apply to. Another commenter recommended the HN monthly hiring post, which is a good resource but very focused on programming.
↑ comment by Lumifer · 2014-11-03T17:46:38.132Z · LW(p) · GW(p)
I kind of envy physicists because I feel like they have the kind of math skills that lets them figure out anything from first principles.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-11-03T19:46:11.657Z · LW(p) · GW(p)
That's not quite what it feels like from the inside either. It's more like, "You're looking at this enormous noodle soup! Well, let's see what we can say with certainty, and let's poke around for useful approximations. If that doesn't work, I got nuffin'."
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-11-04T12:18:07.210Z · LW(p) · GW(p)
Physicists tend to have very good modeling chops. A fellow in the early 1900s (Ising) was trying to come up with a model for ferromagnetism and came up with Markov random fields, basically. That is amazing to me.
Meanwhile, in psychometrics: "I know, let's model intelligence by one number!"
edit: There is some controversy about how much of this was Ising vs Ising's advisor. This does not affect my point about physicists, though.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-11-04T14:08:03.994Z · LW(p) · GW(p)
Right, I kind of swept that under the rug as part of approximations - as in, 'try making a seemingly overly-simple model of the individual components and see if the relevant behavior emerges'. Could have been clearer on that.
comment by advancedatheist · 2014-11-03T15:06:48.194Z · LW(p) · GW(p)
My father, Wendell Potts, died on Friday:
http://www.wassonfuneralhome.com/obituaries/Wendell-Potts/#!/Obituary
He suffered from some sort of dementia, and like a lot of people in that situation he faded away physically as well as mentally.
Years ago, before Dad became ill, we had discussed cryonics, and he said he didn't want it. I respected his wishes, so my sister Michele and I agreed to have his remains cremated.
Yet this coming weekend I will help to run a cryonics convention in Laughlin, Nevada, for people who talk about "living forever," depending on how you define that phrase.
Replies from: None, hyporational↑ comment by [deleted] · 2014-11-03T21:15:33.484Z · LW(p) · GW(p)
I'm sorry for your loss.
Replies from: None↑ comment by [deleted] · 2014-11-04T04:28:14.790Z · LW(p) · GW(p)
Condolences seconded. There's only so much one can say...
Replies from: Evan_Gaensbauer↑ comment by Evan_Gaensbauer · 2014-11-15T10:55:38.370Z · LW(p) · GW(p)
More than saying something, we might be able to do something. Each of us, if so inclined, could donate in 'advancedatheist''s legal name, or not. Either way, he might appreciate it.
There is one Jewish custom associated with death that makes sense to me, which is contributing to charity on behalf of the departed. I am donating eighteen hundred dollars to the general fund of the Machine Intelligence Research Institute, because this has gone on long enough. If you object to the Machine Intelligence Research Institute then consider Dr. Aubrey de Grey's Methuselah Foundation, which hopes to defeat aging through biomedical engineering. I think that a sensible coping strategy for transhumanist atheists, to donate to an anti-death charity after a loved one dies. Death hurt us, so we will unmake Death. Let that be the outlet for our anger, which is terrible and just. I watched Yehuda's coffin lowered into the ground and cried, and then I sat through the eulogy and heard rabbis tell comforting lies. If I had spoken Yehuda's eulogy I would not have comforted the mourners in their loss. I would have told the mourners that Yehuda had been absolutely annihilated, that there was nothing left of him. I would have told them they were right to be angry, that they had been robbed, that something precious and irreplaceable was taken from them, for no reason at all, taken from them and shattered, and they are never getting it back.
-Eliezer Yudkowsky, "Yehuda Yudkowsky"
Alternatively, you could publicly write an open letter or an essay as Eliezer did. Reading this letter about his brother opened my mind, and steeled my resolve, that death is indeed horrible, that to accept it is a rationalization, and that it's worth increasing healthspan and lifespan, if not having the audacity to end death itself. Nothing else I've ever read, including other works by Eliezer, did more for me in this regard. I don't know how many, but I'm confident it's done the same for others. If we're able, with art, with writing, through presentation, or through explanation of the latest science, to do something similar, we can increase awareness of the topic and push the envelope forward.
To do so publicly requires courage. Right now, I don't have as much conviction as Eliezer had, and still has. I can't afford to donate, either, right now. However, that's because I've already donated my budget target for charitable donations this year, and I don't have savings earmarked for donations otherwise. So, to tell others to do this right now, in the name of Wendell Potts, when I won't, would be hypocrisy. So, I won't. However, it's a reminder for those who have the means to be more heroic than I. I strive to be able to give more in this regard as soon as I can.
↑ comment by hyporational · 2014-11-10T06:01:04.989Z · LW(p) · GW(p)
I suspect dementia is one of those cases where cryonics isn't of much help since much of the brain is long gone before the person is legally dead. It wouldn't have even occurred to me to suggest cryonics to my grandfather with whom I was very close and who died of Alzheimer's disease.
Sorry for your loss.
comment by Gavin · 2014-11-04T15:33:07.546Z · LW(p) · GW(p)
I was recently linked to this Wired article from a few months back on new results in the Bohmian interpretation of Quantum Mechanics: http://www.wired.com/2014/06/the-new-quantum-reality/
Should we be taking this seriously? The ability to duplicate the double slit experiment at classical scale is pretty impressive.
Or maybe this is still just wishful thinking trying to escape the weirdnesses of the Copenhagen and Many Worlds interpretations.
Replies from: pragmatist, Gunnar_Zarncke↑ comment by pragmatist · 2014-11-04T15:39:46.097Z · LW(p) · GW(p)
Bohmian mechanics and the Many Worlds interpretation make identical predictions (at least, as long as we ignore anthropic considerations). I haven't yet read the article, but if it is claiming that this experiment is some sort of vindication of Bohmian mechanics, then I suspect it is wrong.
Replies from: MrMind↑ comment by MrMind · 2014-11-05T09:01:17.196Z · LW(p) · GW(p)
Bohmian mechanics and the Many Worlds interpretation make identical predictions
Not exactly. Bohmian QM allows superluminal signalling in certain circumstances.
Replies from: pragmatist, ike, gjm↑ comment by pragmatist · 2014-11-05T12:54:45.674Z · LW(p) · GW(p)
Not sure what you mean by this. It is true that Bohmian particles can influence one another superluminally. However, if the experimenter's epistemic state is represented by the wave function (as Bohmian mechanics presupposes), then this superluminal influence can't be leveraged to transmit information superluminally.
Replies from: MrMind↑ comment by MrMind · 2014-11-06T09:10:59.545Z · LW(p) · GW(p)
then this superluminal influence can't be leveraged to transmit information superluminally.
Yes, theoretically it could, but not in an average sense. Standard quantum mechanics is recovered when the wave function is in equilibrium, but in out-of-equilibrium states you can have violations of relativity or of the Heisenberg uncertainty principle.
Replies from: pragmatist↑ comment by pragmatist · 2014-11-06T14:35:54.152Z · LW(p) · GW(p)
Well, quantum non-equilibrium (based on your Wikipedia link) violates the condition I specified ("the experimenter's epistemic state is represented by the wave function"). I had assumed that was a pre-supposition of Bohmian mechanics, but apparently it is not (at least for some proponents of Bohmianism).
Interesting, thanks.
↑ comment by ike · 2014-11-05T20:21:05.665Z · LW(p) · GW(p)
Well, have any differences been tested, and if not, why not?
Replies from: MrMind↑ comment by MrMind · 2014-11-06T09:07:46.036Z · LW(p) · GW(p)
Yes, they have been tested and no, so far no such effect has been found.
Replies from: ike↑ comment by Gunnar_Zarncke · 2014-11-04T23:13:51.131Z · LW(p) · GW(p)
I saw this some time ago when it was mentioned on Slashdot. By now there should be lots of nice videos illustrating this on YouTube. One is this.
What I really like about this is that it allows one to gain conflict-free intuitions about QM via macroscopic processes.
See also De Broglie–Bohm theory. I do not see a clear reason why MWI must be preferred. For me the deciding point is which can (better) be generalized relativistically. Apparently there are Bohmian-mechanics-based approaches.
ADDED: The latter article contains the interesting conclusion: "if Bohmian mechanics indeed cannot be made relativistic, it seems likely that quantum mechanics can’t either".
comment by FiftyTwo · 2014-11-04T02:51:03.074Z · LW(p) · GW(p)
Working on a near future hard sci fi story. What are plausible economic reasons to have a fair number of space stations? (generally earth orbit but can be further out)
Replies from: garabik, Lumifer, MrMind, ChristianKl, Vaniver, drethelin, jaime2000, Emile, Izeinwinter, ike, Lalartu, Torello, hyporational, polymathwannabe, TrE↑ comment by garabik · 2014-11-04T08:34:32.351Z · LW(p) · GW(p)
Arthur Clarke's idea of reduced gravity significantly prolonging human life. Sadly, the available evidence does not quite point in this direction. But for a sci-fi story it might be quite OK (e.g. it is discovered that microgravity prevents Alzheimer's).
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-11-04T12:37:34.065Z · LW(p) · GW(p)
Even though it could make for an interesting story, I would be very wary of making up scientific facts that future research may render obsolete.
↑ comment by Lumifer · 2014-11-04T05:28:07.198Z · LW(p) · GW(p)
Highly valuable technological processes that only work in zero g.
Replies from: FiftyTwo↑ comment by FiftyTwo · 2014-11-04T14:41:51.315Z · LW(p) · GW(p)
Such as?
Replies from: Lumifer, marchdown, DanielLC↑ comment by Lumifer · 2014-11-04T15:44:58.120Z · LW(p) · GW(p)
You're writing fiction, make it up :-) Off the top of my head, metals and alloys crystallize differently in microgravity. It's also easy to make perfect spheres. I'm sure that googling microgravity technology will give you further leads.
Replies from: None, ChristianKl↑ comment by [deleted] · 2014-11-05T14:41:49.526Z · LW(p) · GW(p)
From what I've seen, the ISS is doing very interesting work on plasma physics in space due to having free high vacuum available and the ability to inject highly visible tracer particles into a plasma chamber, where they don't settle out, also allowing interesting mixed particulate/plasma states.
↑ comment by ChristianKl · 2014-11-04T18:44:14.739Z · LW(p) · GW(p)
As far as applications for crystallization go, determining protein structures requires the proteins in crystallized form. If someone had a way to crystallize arbitrary proteins in zero g, that would be very valuable.
↑ comment by MrMind · 2014-11-04T08:11:52.178Z · LW(p) · GW(p)
Space factories for spaceships. It's much cheaper to build something heavy in space and have it launch from there. Of course, you would have to mine asteroids instead of sending construction materials from Earth.
Security concerns. If you want to test some form of nanotechnology, you'd better do that in space, where you can nuke the whole thing if needed (provided that nanotech is still extremely dangerous).
MIS (millionaires in space): once you live in outer space as a status signal, it's easier to befriend other rich weirdos in space.
Replies from: FiftyTwo, MrMind↑ comment by FiftyTwo · 2014-11-04T14:52:04.934Z · LW(p) · GW(p)
Space factories for spaceship.
You still need a strong economic reason for the spaceships if we're looking at a scarcity society with plausible tech. (Unless there's enough public and political will for exploration for its own sake, which would require its own explanation.)
↑ comment by MrMind · 2014-11-04T10:24:49.798Z · LW(p) · GW(p)
Huge computing facility! It's easier to dissipate heat in space.
Replies from: philh↑ comment by philh · 2014-11-04T11:26:07.478Z · LW(p) · GW(p)
Not an expert, but my understanding (from reading Heinlein, and I think other sources) is that it's hard to dissipate heat in space, because there's nothing to conduct it away.
Replies from: MrMind↑ comment by MrMind · 2014-11-04T11:39:07.553Z · LW(p) · GW(p)
But isn't space like a heat bath at -273° C? I think there's a finer point one or both of us is missing.
Replies from: Richard_Kennaway, MrMind↑ comment by Richard_Kennaway · 2014-11-04T14:03:21.101Z · LW(p) · GW(p)
Cooling is much easier on the ground.
In space you can only dissipate heat by radiation. In an atmosphere you can also transfer excess heat into matter that you can carry away and dump elsewhere, using conduction, convection, and forced circulation.
For a concrete example, consider Google's average 2011 power consumption of 258MW. What happens if they do all that in a huge server farm in space? Assume the exterior is a perfect black body and the interior is a perfect thermal conductor.
From the Stefan-Boltzmann law, for the equilibrium surface temperature to be at the boiling point of water, the surface area must be 235000 sq.m., or the area of a sphere of diameter 387m. Alternatively, if it was a large flat shape, edge-on to the Sun, it could be a 350m square.
Increasing the surface area with lots of large parallel fins, like on a heatsink, only works when immersed in a thermally conductive circulating medium. In space, the fins just block each others' view, and the effective radiative area is that of the convex hull.
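For anyone who wants to check that arithmetic, here is a short Python sketch (a perfect black body at 100 °C, ignoring absorbed sunlight, as in the edge-on flat plate case):

```python
# Radiating area needed to dump 258 MW as thermal radiation at 100 degrees C.
SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
power = 258e6            # W, the quoted average 2011 power consumption
T = 373.15               # K, boiling point of water

flux = SIGMA * T**4      # ~1.1 kW radiated per square metre of surface
area = power / flux
print(f"Required radiating area: {area:,.0f} m^2")   # ~235,000 m^2

# A flat plate radiating from both faces needs half that area per face.
side = (area / 2) ** 0.5
print(f"Square plate side (both faces radiating): {side:.0f} m")   # ~343 m, in line with the ~350 m figure
```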
Replies from: MrMind↑ comment by MrMind · 2014-11-05T08:41:13.116Z · LW(p) · GW(p)
Bummer!
But we could change that slightly (?): what about a process that produces an enormous amount of radiation? In space you can just dump those pesky photons in the backyard, while on Earth there's always someone who owns this or that piece of land.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-11-05T14:01:34.268Z · LW(p) · GW(p)
High-intensity computing generates waste heat, though. You can't turn waste heat into directed energy within the laws of thermodynamics.
↑ comment by ChristianKl · 2014-11-04T16:25:27.252Z · LW(p) · GW(p)
At the moment, asteroid mining is one of the best. Quantities of various rare metals are limited on Earth, and we have companies working on making asteroid mining a reality.
Earth has laws that prevent certain economic transactions from happening. Various biotech and genetic engineering projects might get outlawed on earth. Human cloning that happens in space stations might make for an interesting story.
There might be nanotech that needs very high precision. Today, an electron microscope is affected by a train braking a few kilometers away.
Nanotech could allow you to build cheap, very big mirrors that redirect solar energy from one point to another. If you concentrate it enough, you could have a laser weapon that takes very little time to hit targets. The energy from the mirrors could also be used for electricity generation, on Earth but also maybe on Mars.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-05T01:54:00.615Z · LW(p) · GW(p)
Earth has laws that prevent certain economic transactions from happening.
Wouldn't it be easier to bribe officials?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-05T13:13:57.714Z · LW(p) · GW(p)
If you want a cargo to pass through customs it might be enough to bribe a single official. On the other hand if you are doing something where a lot of people have knowledge of your illegal activity, preventing action can be more complicated.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-11-06T06:42:20.269Z · LW(p) · GW(p)
Depends on the country you are doing this in. Someplace like China or Russia, it shouldn't be too hard.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-06T13:41:55.235Z · LW(p) · GW(p)
With today's Russia you are right. As far as China goes, I think there are cases where you aren't.
China might outlaw companies that sell gender selection for babies in a way that isn't easily fought with monetary bribes. China, in contrast to the US, throws corrupt politicians into prison when it's in the interest of the party.
On the other hand, sci-fi isn't about today's world. It's about a future in which most of the powerful states could have functioning political systems that aren't easily bribed. Third-world countries where you can bribe officials might have little power and be subject to the dictates of the powerful countries.
↑ comment by Vaniver · 2014-11-05T23:55:45.553Z · LW(p) · GW(p)
What are plausible economic reasons to have a fair number of space stations?
The other country's is slightly bigger.
(If not clear from the choice of 'status competition' as the most plausible economic reason, I think that all reasonable economic activity in space will not involve humans in space.)
↑ comment by drethelin · 2014-11-04T05:16:48.737Z · LW(p) · GW(p)
Extremely wealthy libertarian separatists.
Replies from: FiftyTwo↑ comment by FiftyTwo · 2014-11-04T14:42:40.800Z · LW(p) · GW(p)
Assuming there is still land available on earth it would be orders of magnitude cheaper to stay groundside.
Replies from: gattsuru, drethelin↑ comment by gattsuru · 2014-11-04T16:39:45.001Z · LW(p) · GW(p)
The high cost of access could well be the point: if you can easily hire a boat to get to your private island, it's pretty simple for governments or other people to do the same, club you, and take your stuff. A hundred thousand bucks would cover invading you, and make a good return on investment.
By contrast, you'd have to have something of very high value to cover a rocket launch, and that something must be mobile enough to send down easily. (Or in extreme cases, you might be the only people who retain full knowledge of the manufacturing necessary to make the rockets, in some way that isn't easy to reverse engineer -- see the difficulty we have reproducing several engine designs as a guide here.)
Replies from: marchdown↑ comment by jaime2000 · 2014-11-04T03:57:40.280Z · LW(p) · GW(p)
Space stations? As in, stations with humans in them? Pretty much none. Your best bet is to postulate some sort of alternate history in which electronics and computers never took off. Or you can go in the other direction, and postulate tiny space stations which house computing hardware running uploaded humans.
Replies from: FiftyTwo, Emile, polymathwannabe↑ comment by Emile · 2014-11-07T21:04:59.157Z · LW(p) · GW(p)
As others mentioned: mining, special manufacturing exploiting microgravity.
A lot of competition and innovation in the area of data transfer protocols and encryption and localization and espionage increasing the need for engineers that can build, test and maintain new communications directly from orbit, which is cheaper than launching prototype after prototype.
A fad for having a marriage and honeymoon in space, making luxury space hotels commercially viable.
Companies having headquarters in space as the ultimate signal. Especially if it gives them an advantageous legal environment.
China wanting to outshine the US, so heavily subsidizing the stuff above for its citizens / companies.
Space junk becoming enough of a problem that specialized repair and disposal jobs become viable, mostly financed by the satellite insurance companies.
Some of the things above increasing the number of space flights, and so decreasing prices and making a few more uses become viable.
↑ comment by polymathwannabe · 2014-11-04T12:41:32.882Z · LW(p) · GW(p)
alternate history in which electronics and computers never took off
Please, no. The world already has a sickening amount of steampunk.
Replies from: marchdown, RowanE↑ comment by RowanE · 2014-11-04T16:40:24.020Z · LW(p) · GW(p)
I don't think that's meant to refer to a world without electricity, just keeping computers at 1950s-60s sizes and efficiencies. The linked page describes "rocketpunk" further up, and it's quite different from steampunk.
Replies from: jaime2000↑ comment by jaime2000 · 2014-11-04T18:08:42.510Z · LW(p) · GW(p)
Can confirm. I meant a rocketpunk setting in which combustion engines and simple vacuum tube electronics work, but human operators are still required to run space stations capable of monitoring the weather, handling international communications, or spying on enemy countries.
↑ comment by Izeinwinter · 2014-11-06T17:11:11.203Z · LW(p) · GW(p)
Manned ones? I can think of reasons to have lots and lots of industry and science in space, but reasons to have lots of bodies up there are harder....
Uhm. Someone develops a launch mechanism that is very, very cheap for small and durable components: a railgun-to-orbit kind of deal. This necessitates in-orbit assembly of anything that doesn't fit inside a launch shell, but it also drives way down the cost of keeping labor up there alive, because water, air and food fit just fine. Result: a big boom in space activity, lots of engineers and master craftsmen in low Earth orbit building probes, space radio telescopes and so on.
↑ comment by Torello · 2014-11-06T23:29:04.707Z · LW(p) · GW(p)
Rich people treat the space stations as cabins.
Alternately, an artist colony where the next generation of super-wealthy artists like Damien Hirst go for "artistic inspiration" (scare quotes due to Hansonian signaling).
Replies from: gwern, None↑ comment by gwern · 2014-11-08T04:02:37.945Z · LW(p) · GW(p)
for "artistic inspiration"
That's actually pretty plausible, given the overview effect: https://en.wikipedia.org/wiki/Overview_effect and http://www.fastcoexist.com/3036887/out-of-this-world-the-mysterious-mental-side-effects-of-traveling-into-space
↑ comment by [deleted] · 2014-11-07T03:18:00.916Z · LW(p) · GW(p)
I know that whenever I don't need the second monitor at my desk at work, or am at the lab bench next to said desk, I always put the perpetual ISS livestream on...
↑ comment by hyporational · 2014-11-04T19:12:47.723Z · LW(p) · GW(p)
- mining helium-3 for fusion power plants. (ETA: it seems a popular movie used this idea already)
- something bad happens, existential risk is suddenly taken seriously and demand for space survival skills increases
- rich people building their own colony with their own rules to escape government involvement
↑ comment by Lalartu · 2014-11-05T11:28:49.488Z · LW(p) · GW(p)
Mining helium-3 on the Moon does not make the slightest sense. It can be manufactured from lithium for a small fraction of that cost.
Replies from: hyporational, hyporational↑ comment by hyporational · 2014-11-05T13:42:34.779Z · LW(p) · GW(p)
Source?
Replies from: None↑ comment by [deleted] · 2014-11-05T14:12:36.738Z · LW(p) · GW(p)
The fact that bombarding lithium with neutrons produces tritium that decays into helium-3 (this reaction is actually responsible for a good fraction of the yield of many fusion warheads; see Castle Bravo). And the helium-3 on the Moon is, A, on the Moon, and, B, of such a low concentration that the total thermal energy you could get from fusing absolutely all the He-3 present per cubic meter of the top foot or so of lunar soil is about a quarter of that of the worst lignite coal on Earth. And lignite requires a hell of a lot less energy and infrastructure per unit mass to process. By my calculations, given the heat capacity of the minerals it has been blasted into by the solar wind, you need to use at least half that energy on site just to heat the rocks to the point where the helium gets driven off as a vapor. Then there's all the movement of mass, purifying it non-terrestrially out of all kinds of other vapors, ridiculous launch costs... it's not happening.
Replies from: hyporational↑ comment by hyporational · 2014-11-05T14:53:20.422Z · LW(p) · GW(p)
I've seen claims in several internet sources, a video documentary, and the Wikipedia article that some countries have had plans to mine helium-3 on the Moon. If what you're saying is true, I wonder whether the reporting is incompetent or the plans were made by incompetent optimists. How did this meme originate?
Replies from: None, Lalartu↑ comment by [deleted] · 2014-11-06T17:01:14.939Z · LW(p) · GW(p)
For what it's worth, I just redid my calculations with the best figures I could find.
The comparison of energy density to lignite came out just about the same as the calculations I remembered doing a year or two ago: an average thermal energy density, assuming complete fusion of the He-3 present in lunar soil, of about 6 megajoules per kilogram, as compared to 15 or so for average lignite. I found references to speculation that at the edges of cold traps (where sunlight never falls, right next to where it does reach) you might have levels six or seven times that.
Something was apparently wrong with my old heat capacity calculations, though. Heating the crustal minerals by 300 kelvin or so would only take about a quarter of a megajoule per kilogram. Though that's still a twelfth of the thermal energy, used in ONE step at the point of intake, assuming perfect capture and efficiency, and at least a quarter of the extractable energy you could get out of a heat engine driven by such a reaction. Regardless of the particular numbers, the mere fact that this means going after an energy source several times less dense than the worst of the fossil fuels, on another planet, requiring extraction and power-production infrastructure that has not been invented, has often made me wonder why the meme persists.
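For anyone who wants to poke at the arithmetic, here is a minimal back-of-the-envelope sketch. The ~6 MJ/kg and ~0.25 MJ/kg figures above are the commenter's; the specific inputs below (an assumed ~10 ppb He-3 mass fraction, 18.3 MeV per D-He3 reaction, a regolith heat capacity of ~0.8 kJ/(kg·K)) are illustrative assumptions, not values taken from the comment.
```python
# Rough check of the He-3 vs. lignite energy-density comparison.
# All inputs are assumed illustrative values, not authoritative figures.

MEV_TO_J = 1.602e-13   # joules per MeV
AVOGADRO = 6.022e23    # atoms per mole

# D + He-3 -> He-4 + p releases roughly 18.3 MeV per reaction.
energy_per_reaction_J = 18.3 * MEV_TO_J
he3_molar_mass_kg = 3.016e-3                     # kg per mole of He-3
energy_per_kg_he3 = energy_per_reaction_J * AVOGADRO / he3_molar_mass_kg

he3_mass_fraction = 10e-9                        # assumed ~10 ppb by mass in regolith
energy_per_kg_regolith_MJ = energy_per_kg_he3 * he3_mass_fraction / 1e6

lignite_MJ_per_kg = 15.0                         # rough heating value of poor lignite

heat_capacity_J_per_kg_K = 800.0                 # assumed silicate heat capacity
delta_T_K = 300.0                                # heating needed to drive off the gas
heating_MJ_per_kg = heat_capacity_J_per_kg_K * delta_T_K / 1e6

print(f"Fusion yield per kg of regolith: {energy_per_kg_regolith_MJ:.1f} MJ")
print(f"Lignite, for comparison:         {lignite_MJ_per_kg:.1f} MJ/kg")
print(f"Heating cost per kg of regolith: {heating_MJ_per_kg:.2f} MJ")
```
With those assumed inputs the sketch lands in the same ballpark as the comment (roughly 6 MJ per kilogram of soil against 15 for lignite, and about 0.25 MJ per kilogram just for heating), which is the substance of the objection.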
Lalartu has a good response below.
↑ comment by Lalartu · 2014-11-06T09:30:53.797Z · LW(p) · GW(p)
Nobody has plans (in the business sense of the term) to mine helium-3 on the Moon. This idea was first proposed by Gerald Kulcinski in 1989. He gave very optimistic predictions regarding D-He3 fusion, underestimated the cost of lunar mining, and ignored the possibility of specialized reactors for producing He-3. Since then the idea has been pushed by space advocates.
↑ comment by hyporational · 2014-11-05T13:13:37.153Z · LW(p) · GW(p)
Rude tone without a source to back it up, tsk tsk :)
↑ comment by DanielLC · 2014-11-04T19:21:04.682Z · LW(p) · GW(p)
But you can't mine that from Earth's orbit, so why would there be space stations there?
Replies from: hyporational↑ comment by hyporational · 2014-11-04T19:38:16.832Z · LW(p) · GW(p)
I'm not sure if he meant the Earth's orbit around the Sun or a geocentric orbit. I assumed the latter; the Moon is there and supposedly has a lot of helium-3.
Is it wrong to call a moon base a space station?
↑ comment by polymathwannabe · 2014-11-04T17:36:41.450Z · LW(p) · GW(p)
How many stations are you considering a "fair number?"
Replies from: FiftyTwo↑ comment by FiftyTwo · 2014-11-04T18:20:02.099Z · LW(p) · GW(p)
At least a dozen, with a few hundred people working spaceside full time
Replies from: None↑ comment by [deleted] · 2014-11-05T14:30:49.485Z · LW(p) · GW(p)
You could have human-crewed vehicles in orbit around various planets, tele-operating and managing things on their surfaces with a much faster turnaround time than the lightspeed delay to Earth allows. Depending on what's been found and what's being teleoperated, that could be worthwhile. Consider that our big $2 billion Mars rover moves something like 30 meters per day: with a 10+ minute turnaround time for signals, it's doing nothing most of the time, because the operators want to make sure that everything it does is something they can recover from if something goes wrong and they don't learn about it for fifteen minutes.
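To put rough numbers on that turnaround argument, here is a minimal sketch of the round-trip light delays; the Earth-Mars distances and the orbital altitude used below are assumed illustrative values, not figures from the comment.
```python
# Round-trip signal delay for teleoperation, using assumed distances.

C = 299_792_458  # speed of light, m/s

def round_trip_seconds(one_way_distance_m: float) -> float:
    """Round-trip light time in seconds for a given one-way distance."""
    return 2 * one_way_distance_m / C

earth_mars_close = 55e9   # ~55 million km near a close approach (assumed)
earth_mars_far = 400e9    # ~400 million km near conjunction (assumed)
low_mars_orbit = 400e3    # ~400 km orbital altitude (assumed)

print(f"Operator on Earth, Mars close:  {round_trip_seconds(earth_mars_close)/60:5.1f} minutes")
print(f"Operator on Earth, Mars far:    {round_trip_seconds(earth_mars_far)/60:5.1f} minutes")
print(f"Operator in low Mars orbit:     {round_trip_seconds(low_mars_orbit)*1000:5.1f} milliseconds")
```
Even at closest approach, an Earth-based operator waits minutes for every command cycle, while a crew in orbit works at effectively real-time latencies; that gap is the whole case for putting operators nearby.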
↑ comment by TrE · 2014-11-04T05:18:59.701Z · LW(p) · GW(p)
A valid reason would be the scarcity of resources. Further technological progress will be severely constrained by which chemical elements are available cheaply and which are not. Lots of interesting and useful chemical elements are not available in sufficiently concentrated ores, or they are rare in all of Earth's crust, having sunk into Earth's core during its formation.
These elements are thus produced only as by-products of other elements which are more concentrated in their ores. This holds not only for most of the lanthanides, but also for elements like indium, tellurium, gallium, germanium and the platinum-group metals.
Asteroids might hold rich deposits of these elements because the elements could not sink down into their cores, and even if they did, most asteroids are small enough.
So if we don't want to substitute that indium tin oxide in our smartphone touchscreens with cheaper elements, we'll have to mine asteroids.
Edit²: Here's a relevant review article (Vesborg, Jaramillo 2012)
comment by NancyLebovitz · 2014-11-06T10:52:35.416Z · LW(p) · GW(p)
I watched a video about a Baptist minister trying to find a Christian in Sweden. It's amusing but not too surprising (he finds a Christian, but she's not from there; he finds someone who believes in God, but he's a Muslim), but then he finds someone who believes in Kopimism, a religion in which copying is sacred.
Kopimism made simple:[9]
All knowledge to all;
The pursuit of knowledge is sacred;
The circulation of knowledge is sacred;
The act of copying is sacred.
According to the Kopimist constitution:[10]
Copying of information is ethically right;
Dissemination of information is ethically right;
Copymixing is a sacred kind of copying, more so than the perfect, digital copying, because it expands and enhances the existing wealth of information;
Copying or remixing information communicated by another person is seen as an act of respect and a strong expression of acceptance and Kopimistic faith;
The Internet is holy (Not generally accepted by churches run by the Maesters);
Code is law.
↑ comment by gjm · 2014-11-10T02:42:09.945Z · LW(p) · GW(p)
I will hazard a guess that actually some people he went up to in the street were Christians -- but they didn't make it into the actual programme. Sure, Sweden isn't very religious. But if the figures in Wikipedia are to be believed, about 18% of Swedish citizens believe in a god, whereas somewhere between 1% and 5% are Muslims (and I bet the Kopimists are way, way, way under 1%; I bet the Kopimist wasn't encountered in random vox-pop interviews).
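As a rough illustration of that guess (a minimal sketch: the ~18% believer figure is from the comment above, while the interview counts are assumed values for illustration):
```python
# Chance of meeting at least one believer in n random street interviews,
# assuming independence and the ~18% belief figure cited above.
# The interview counts are assumed values for illustration.

p_believer = 0.18

for n in (5, 10, 20):
    p_at_least_one = 1 - (1 - p_believer) ** n
    print(f"{n:2d} interviews: {p_at_least_one:.0%} chance of at least one believer")
```
Even a modest number of interviews should turn up believers, which is why selective editing looks more plausible than a genuinely fruitless search.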
comment by SodaPopinski · 2014-11-07T06:14:19.616Z · LW(p) · GW(p)
What is the current status of formalizing timeless decision theory? I am new to LW, have a mathematics background, and would like to work on decision theory (in the spirit of LW). However, all I can find is some old posts (2011) of Eliezer saying that write-ups are in progress, as well as a 120-page report by Eliezer from MIRI which mostly discusses TDT in words, along with the related philosophical problems. Is there a formal, self-contained definition of TDT out there?
Replies from: shminux, Artaxerxes↑ comment by Shmi (shminux) · 2014-11-07T07:24:57.606Z · LW(p) · GW(p)
Conveniently, So8res just posted a guide discussing this very issue in section 6.
↑ comment by Artaxerxes · 2014-11-07T07:10:57.084Z · LW(p) · GW(p)
Here is a page of all of MIRI's publications; if you click Decision Theory it will show all of the relevant papers. It might not be quite what you're looking for, but it might help with working out what MIRI are up to and what they have done in the area.
There might also be something in the rough workshop writeups.
If I were you, once I had exhausted online resources, I would consider contacting MIRI themselves (via email, perhaps) if I had any more questions; they're definitely the people to ask.
comment by Metus · 2014-11-04T23:11:21.314Z · LW(p) · GW(p)
I just went through a big pile of paper notes.
Only to find out that I had already digitized everything I deemed relevant into Evernote, resulting in a full trash can. On a more positive note, I gained some perspective: the stuff that was important or a big problem at the time isn't anymore, which gives some tranquility in the long run.
comment by Curiouskid · 2014-11-04T20:45:17.758Z · LW(p) · GW(p)
I'd appreciate suggestions for resources on open relationships / polyamory.
Replies from: fubarobfusco, ChristianKl, dspeyer↑ comment by fubarobfusco · 2014-11-04T23:13:16.099Z · LW(p) · GW(p)
Two standard texts are Easton and Hardy's The Ethical Slut and Taormino's Opening Up.
Replies from: marchdown↑ comment by marchdown · 2014-11-05T01:03:51.072Z · LW(p) · GW(p)
I would also mention Deborah Anapol's "Polyamory: the new love for the 21st century". I think about it as a survey of polyamorous practices, struggles, communities. It was crucial for me to get the sense of normality. Haven't read Taormino.
↑ comment by ChristianKl · 2014-11-04T22:45:06.918Z · LW(p) · GW(p)
What specifically do you want to know about them?
↑ comment by dspeyer · 2014-11-05T21:44:39.537Z · LW(p) · GW(p)
The Ferrett often has interesting things to say on the subject.
comment by Metus · 2014-11-06T01:54:50.324Z · LW(p) · GW(p)
Speaking of books, is there a place where I can buy a printed version of the Sequences? Reading on a screen is not as comfortable as a classical book for various reasons.
Replies from: Ixiel↑ comment by Ixiel · 2014-11-06T14:10:04.037Z · LW(p) · GW(p)
It's in progress, apparently, but I'm told money won't speed it up. If you can edit, you can help, I think.
Replies from: Metus↑ comment by Metus · 2014-11-06T16:00:05.330Z · LW(p) · GW(p)
I have absolutely no experience in editing.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-06T17:01:00.881Z · LW(p) · GW(p)
Take a look at http://lesswrong.com/lw/jc7/karma_awards_for_proofreaders_of_the_less_wrong/ for more information. Alex Vermeer seems to be responsible for the project and hopes that there will be a book by the end of the year.
Replies from: gattsuru↑ comment by gattsuru · 2014-11-06T19:11:50.390Z · LW(p) · GW(p)
Editing options do not appear to be available from the Youtopia list, and the only remaining Sequence-related tasks as of this post are related to reading and translation into foreign languages.
((Just trying to save folk time and account management.))
Replies from: Ixiel↑ comment by Ixiel · 2014-11-09T14:49:56.219Z · LW(p) · GW(p)
Oh great! If that's all that's left, the English book is just about ready to sell, then? I hope there's a big announcement, because I suspect demand is high and awareness less so (given that someone asked this both before and after me, and not all who wonder ask).
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-09T21:53:46.630Z · LW(p) · GW(p)
In the linked post Alex answered my question by saying that it's hopefully finished by the end of the year.
Replies from: Ixiel
comment by polymathwannabe · 2014-11-06T18:07:12.248Z · LW(p) · GW(p)
For most of my life, I've heard weather forecasts in the news described as infamously unreliable. Has there been any serious advance in this field? Or is the unreliability of weather forecasts just another of those too-popular memes about things we love to hate?
Replies from: NancyLebovitz, gattsuru, taelor, garabik, Salemicus, ChristianKl↑ comment by NancyLebovitz · 2014-11-06T19:44:16.389Z · LW(p) · GW(p)
For some of my life (till the 70s or 80s), weatherman jokes were a staple of humor, and the joke was simply that the weatherman was wrong. I haven't heard a weatherman joke for a long time.
The bizarre thing is that I'm sure the jokes existed, but I can't remember any of them.
Replies from: DanielLC↑ comment by gattsuru · 2014-11-06T19:21:40.736Z · LW(p) · GW(p)
Nate Silver (of 538) dedicates some space to this question in The Signal and the Noise. Randal Olson has reproduced some of that material on current-day abilities, which shows that we're currently able to give better-than-random results a few days in advance, but not much better after that. And, unsurprisingly, data beats expertise when it comes to accuracy.
A good deal of the data-collecting tools have been developed or implemented relatively recently, and that seems to correlate with improvements in short-term forecasting, to the point where a five-day forecast in 1991 was roughly as likely to be accurate as a three-day forecast in 1981.
They've improved enough that they can probably be trusted to determine whether you should bring an umbrella tomorrow, but the historical numbers and especially expertise-based numbers were inaccurate enough to explain the origin of the meme.
↑ comment by taelor · 2014-11-07T03:41:47.898Z · LW(p) · GW(p)
Nate Silver's The Signal and the Noise has a chapter about this. The short answer is yes, weather forecasting has gotten better, but commercial forecasts have a known "wet bias" in favor of predicting rain. The reason for this is that people get more upset at forecasters when they say it won't rain and it does than when they say it will rain and it doesn't. According to Silver, the National Weather Service's forecasts are the most reliable, followed by various large commercial services (e.g. weather.com, etc.), with local news forecasts being the least reliable.
↑ comment by garabik · 2014-11-07T08:11:18.656Z · LW(p) · GW(p)
Disclaimer: I am not a meteorologist, but a friend of mine is and I had discussed this with him some time ago. Paraphrasing from memory, I might make mistakes.
Only in the last decade or two have we had enough computational power to run huge weather models (such as Aladin) comfortably, with live updating on incoming data. However, the model is only as good as the input data: if the meteorological weather stations are positioned every few kilometers, the prediction is extremely good, but if they are more sparse, the errors increase. The complexity of the terrain plays a role too: on a flat plain, weather stations might reasonably be spaced tens of kilometers apart, but a small hill means they have to be spaced much more closely to get reasonable results.
There is also sometimes very helpful "local knowledge", e.g. whatever abrupt weather change hits Germany, you can be reasonably sure it will happen in Slovakia in about three days.
Taking this into account, professional weather forecasts are very reliable for a day or two, enough to decide whether to leave your umbrella at home. However, interpretation by news forecasters glosses over the finer details, so "sunny, high temperature, with a 10% chance of rain" gets reported as "sunny, high temperature, a little rain" and gives false signals.
↑ comment by Salemicus · 2014-11-06T21:51:46.044Z · LW(p) · GW(p)
The weather forecasts really were rubbish in the past.
To give one notorious example, a weatherman on Britain's main television station once mocked a member of the public's claim that the country was about to be struck by a hurricane. That very night, Britain was hit by one of the most severe hurricanes of modern times, killing 18 people.
↑ comment by ChristianKl · 2014-11-06T21:59:34.576Z · LW(p) · GW(p)
Given the incomplete reliability, I would really like to get weather data via an app that gives me confidence intervals of various sizes.
Replies from: garabik↑ comment by garabik · 2014-11-07T08:13:34.648Z · LW(p) · GW(p)
Such as SHMUdroid (for Android)? Unfortunately, these apps tend to be country specific.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-07T12:07:36.402Z · LW(p) · GW(p)
I don't speak that language, so it's hard for me to tell, but probably.
I'm from Germany.
comment by Artaxerxes · 2014-11-06T21:17:36.214Z · LW(p) · GW(p)
Dan Carlin talks about AI in the context of existential risks in a recent episode of his podcast Common Sense. The discussion on AI starts around 33 mins.
He does a pretty good job of introducing relevant concepts to his audience, and includes quotes from Stephen Hawking, Elon Musk and Nick Bostrom.
comment by chaosmage · 2014-11-05T10:18:53.748Z · LW(p) · GW(p)
I just got warm fuzzies from this video, where His Cleverness Elon Musk says (about the necessity of soliciting negative feedback when starting a company):
You should take the approach that you the entrepreneur are wrong. Your goal is to be less wrong.
He also says in this much longer interview at MIT that unfriendly AI is "probably" "our biggest existential threat".
comment by Ritalin · 2014-11-10T01:15:28.236Z · LW(p) · GW(p)
Understanding how rights work:
This topic still confuses me greatly. Let's take the example of the "Right to Life, Liberty and the Security of Person". Can a "Right to Cryogenic Treatment" be argued from there? Would that, in turn, simply entail that I get to sign up for cryogenic treatment without obstacles and cannot be forbidden from doing so (for instance, cryo is illegal in France), or could it be spun otherwise?
Replies from: shminux, ChristianKl↑ comment by Shmi (shminux) · 2014-11-10T02:37:12.615Z · LW(p) · GW(p)
A right is a shortcut in consequentialism (or another normative ethics) turned into a lost purpose. Different rights can and do contradict each other, since they were derived in different circumstances. Thus you can argue for or against anything, by stretching the domain of validity of a suitable right. It all depends on how connected, influential and persuasive you are. Hence lawyers.
Replies from: Ritalin↑ comment by Ritalin · 2014-11-10T03:52:50.774Z · LW(p) · GW(p)
What'd be the difference between that and an ethical injunction?
Replies from: ChristianKl, shminux↑ comment by ChristianKl · 2014-11-10T11:10:29.690Z · LW(p) · GW(p)
Ethical injunctions aren't things that are argued in front of courts. Courts argue about rights.
↑ comment by Shmi (shminux) · 2014-11-10T04:20:23.093Z · LW(p) · GW(p)
I am not an ethics expert by any stretch, so I can only guess. It seems like the opposite of a right, restricting what you can do rather than enabling.
Replies from: Ritalin↑ comment by Ritalin · 2014-11-10T16:06:15.459Z · LW(p) · GW(p)
They are two sides of the same coin. "The right to circulation" tells people "you can go wherever you want", and tells States "you can't demand a travel permit every time someone wants to move". "The right to live" tells people "you may go on living if you want" but also "you can't stop people from living if they don't consent to it". The freedom to do something restricts another person's ability to stop you from doing that.
Replies from: Azathoth123↑ comment by ChristianKl · 2014-11-10T11:10:39.155Z · LW(p) · GW(p)
Let's take the example of the "Right to Life, Liberty and the Security of Person". Can a "Right to Cryogenic Treatment" be argued from there?
The right to life is a pretty recent idea.
The US constitution, for example, has a right not to be deprived of life without due process. It has a right not to be tortured (cruel and unusual punishment) no matter what.
In the US, some cryonics folks have registered a religion that allows them their cryogenic treatment without interference. Freedom of religion is actually a constitutional right in the US. In the case of abortion, "pro-life" is also a position mainly argued from a religious standpoint.
The French state is strongly secular, and you don't get many exemptions just because you register a religion, especially one without a tradition, such as that of the cryonics folks. The French state doesn't allow any religion to block autopsies simply by claiming that they have a special burial ritual that forbids autopsies. That's why the situation concerning cryonics is different in France.
At the moment no court considers a cryopreserved person alive. If one did, then the whole scheme of financing cryonics with insurance contracts that pay out on the death of a person wouldn't work in the first place.
Replies from: Ritalin↑ comment by Ritalin · 2014-11-10T16:08:25.440Z · LW(p) · GW(p)
How about rephrasing it as a "right to a chance to live again"?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-10T16:29:23.977Z · LW(p) · GW(p)
That's not found in any constitution or international treaty. It would be possible to create laws that grant a right to cryonics, but at the moment we don't have them.
comment by A1987dM (army1987) · 2014-11-09T16:47:40.915Z · LW(p) · GW(p)
Nearly all of Ritalin's recent comments have been downvoted, most of them twice. Granted, some of them aren't exactly brilliant, but I don't think such a massive downvoting is warranted. What gives?
Replies from: jaime2000, ChristianKl↑ comment by jaime2000 · 2014-11-10T00:14:24.446Z · LW(p) · GW(p)
Most of Ritalin's recent comments have been on political subjects, namely the internet standards for undeveloped nations thread. Anybody who makes lots of political noise gets a downvote or two per comment; see, for example, Azathoth123 and advancedatheist.
↑ comment by ChristianKl · 2014-11-09T20:44:30.645Z · LW(p) · GW(p)
Quite a lot of posts have one or two downvotes. On LW single downvotes sometimes happen for reasons that are hard to understand. The reason those posts jump to your attention might rather be that nobody upvoted them and they are therefore in total negative karma.
Replies from: gjm↑ comment by gjm · 2014-11-09T23:49:24.066Z · LW(p) · GW(p)
army1987 isn't describing "single downvotes" but a pattern where just about everything one user posts is downvoted. Now, for sure, it might be that Ritalin's comments are almost all just about low-enough quality to get -1 or -2 (but not bad enough to be downvoted more heavily), but another obvious hypothesis is that someone's decided on a systematic let's-hurt-Ritalin campaign.
After a quick look through Ritalin's recent comment history, the second hypothesis seems somewhat more plausible to me.
(I hope it's wrong; I think that kind of behaviour is harmful to Less Wrong as well as unpleasant for the target.)
Replies from: shminux, ChristianKl↑ comment by Shmi (shminux) · 2014-11-09T23:58:21.903Z · LW(p) · GW(p)
Report to Viliam_Bur for investigating.
↑ comment by ChristianKl · 2014-11-10T00:02:09.342Z · LW(p) · GW(p)
Most posts on LW do get votes. It would surprise me to go through any user's history and see that all of their recent posts got no votes.
Things don't need to be generally low quality to get downvotes. Anything that's a bit controversial usually gets some downvotes on LW, even if it ends up with overall positive karma.
Replies from: gjm↑ comment by gjm · 2014-11-10T02:18:02.937Z · LW(p) · GW(p)
Most posts on LW do get votes.
That would be relevant if army1987's observation had been that almost all of Ritalin's recent comments have been voted on. It wasn't.
Anything that's a bit controversial usually gets some downvotes on LW even if it's overall in positive karma.
That would be relevant if all those comments were at, say, +4 -3. They aren't. They're mostly at +0 -1 or +0 -2 or +1 -1. Which is why the top two competing hypotheses are "consistently lowish perceived quality" and "someone on an anti-Ritalin campaign".
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-10T11:15:59.005Z · LW(p) · GW(p)
Which is why the top two competing hypotheses are "consistently lowish perceived quality"
If the post is at +0 -1, there are fewer people who consider the post to be negative than if the post is at +3 -4. If nobody votes up certain posts, then that has nothing to do with "someone on an anti-Ritalin campaign".
comment by Metus · 2014-11-04T23:49:23.274Z · LW(p) · GW(p)
There is a Superintelligence reading group going on, and I realised that I am actually a member of another reading group. Which made me wonder: is there a general website for reading groups? My reading list is long, and I could use the mild social incentive to read through it. Also, the discussion is usually very valuable.
comment by iarwain1 · 2014-11-04T19:19:53.410Z · LW(p) · GW(p)
What are excellent nonfiction books available in audio format? I'm especially looking for audio books in the following categories:
- Rationality = cognitive biases, thinking strategies, etc. (I've read / listened to all the popular books in CFAR's reading list, so don't include those.)
- Logic (beginner level)
- Philosophy (beginner / intermediate)
- Economics (beginner)
- Fundamental physics (beginner)
- Debating skills
- Research skills
Nothing that requires knowledge of calculus, please.
[Should I have posted this somewhere in the media thread?]
Replies from: hyporational↑ comment by hyporational · 2014-11-04T20:10:25.946Z · LW(p) · GW(p)
comment by Viliam_Bur · 2014-11-05T10:06:28.776Z · LW(p) · GW(p)
How often do you post on Twitter, on average?
[pollid:792]
I am asking this to improve my model of how Twitter is typically used, because I have seen people with wildly different patterns.
Replies from: None, garabik, drethelin, blacktrance, Lumifer, RowanE, CAE_Jones↑ comment by blacktrance · 2014-11-05T22:34:37.423Z · LW(p) · GW(p)
I hardly ever post (somewhere between one post per month and one post per year), but I read my feed almost daily.
↑ comment by CAE_Jones · 2014-11-05T10:26:35.466Z · LW(p) · GW(p)
I chose "one post per day", although in practice it's more like "a few posts a week". Had "one post per week" been a choice, I would have rounded to that instead.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-11-05T17:21:09.575Z · LW(p) · GW(p)
Did I skip the "week"? Oh, my bad. Unfortunately, polls can't be fixed later. Thanks for all the answers anyway; they help me get an idea of how often people use this thing.
comment by polymathwannabe · 2014-11-08T18:42:44.098Z · LW(p) · GW(p)
comment by Evan_Gaensbauer · 2014-11-07T11:20:31.173Z · LW(p) · GW(p)
On Facebook, I ran a quasi-experiment on whether it was possible to nerd-snipe philosophers. My conclusion is that it worked to a mild degree. Further conclusions on whether this is the sort of debate people should try to avoid are pending.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-11-07T11:31:52.548Z · LW(p) · GW(p)
Nerd-sniping suggests that you can stop what a person is currently doing to a significant extent. Giving someone another way to procrastinate on Facebook doesn't really fall under that category.
Replies from: Evan_Gaensbauer↑ comment by Evan_Gaensbauer · 2014-11-07T11:52:25.585Z · LW(p) · GW(p)
Hm, yeah, that's a good point. Facebook is sort of the ultimate confounding factor in this regard.