Open thread, 21-27 April 2014

post by Metus · 2014-04-21T10:54:16.422Z · LW · GW · Legacy · 349 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Thread started before the end of the last thread to encourage Monday as the first day.

349 comments

Comments sorted by top scores.

comment by fubarobfusco · 2014-04-25T16:09:37.890Z · LW(p) · GW(p)

If not rationality, then what?

LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: Obtain a better model of the world by updating on the evidence of things unpredicted by your current model. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.

Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, which enable goal-accomplishing actions. The way to have correct beliefs is to update your beliefs when their predictions fail.
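
To make the two maxims concrete, here is a minimal sketch of a single Bayesian update. The coin-flip setup and the numbers are purely illustrative, not anything from the post:

```python
# Minimal Bayesian update: are we flipping a fair coin or a biased one?
# (Illustrative hypotheses and numbers.)
priors = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.8}   # P(heads | hypothesis)

def update(priors, observed_heads):
    """Return P(hypothesis | observation) after one coin flip."""
    unnorm = {
        h: p * (p_heads[h] if observed_heads else 1 - p_heads[h])
        for h, p in priors.items()
    }
    total = sum(unnorm.values())
    return {h: u / total for h, u in unnorm.items()}

posterior = update(priors, observed_heads=True)
print(posterior)  # {'fair': ~0.385, 'biased': ~0.615} -- the observation shifts weight
```

The instrumental half of the maxim would then be to pick whichever action scores best under the updated model, not under the prior.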

Stating the advice this baldly makes me wonder about alternatives. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?


Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it. In the world of heroic myth, it is not oracles but rather heroes and villains who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Heroes and villains defy oracles, and come to their predicted triumphs or fates not through prediction, but in spite of it.

Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the world are relatively close to our priors, but our goals are not known to us initially, and are in fact very difficult to discover. We might consider this to be Buddha's world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. When we choose actions that cause bad effects, we aren't so much acting on faulty beliefs about the world as pursuing goals that are illusory or empty of satisfaction.

There are other models as well, which could be extrapolated from denying other premises (explicit or implicit) of Bayes' world. Each of these models would relate prediction, action, and goals in different ways. We might imagine Lovecraft's world, Qoheleth's world, or Nietzsche's world.


Each of these models of the world — Bayes' world, Cassandra's world, Buddha's world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes' world, what evidence might suggest that we are in Cassandra's or Buddha's world?


Edited lightly — In the first couple of paragraphs, I've clarified that I'm talking about epistemic and instrumental rationality as advice for humans, not about whether we live in a world where Bayesian math works. The latter seems obviously true.

Replies from: None, shminux, Tenoke, Lumifer, IlyaShpitser, Squark
comment by [deleted] · 2014-04-26T07:38:19.208Z · LW(p) · GW(p)

Replace religion with this dilemma and you have NS's Microkernel religion.

comment by shminux · 2014-04-25T18:12:28.532Z · LW(p) · GW(p)

I don't see these as alternatives, more like complements.

Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world

It's a memorable name, but it does not need to be called anything so dramatic, given that we live in this world already. For example, most of us make a likely correct prediction that if we procrastinate less, then we will be better off, yet we still waste time and regret it later.

Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals.

Why this AIXIsm? We are a part of the world, and the most important part of it for many people, so updating your model of self is very Bayesian. Lacking this self-update is what leads to a "Cassandra's world".

comment by Tenoke · 2014-04-25T17:55:12.505Z · LW(p) · GW(p)

I'd tell you what method I would use to evaluate the evidence to decide which world we are in, but it seems like you denied it in the premise. ;)

comment by Lumifer · 2014-04-25T16:44:59.437Z · LW(p) · GW(p)

That's an interesting post. Let me throw in some comments.

I am not sure about the Cassandra's world. Here's why:

  • Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not "enable goal-accomplishing actions" for him -- in the Bayes' world as well. Is the Cassandra's world defined by being powerless?

  • Heroes in myth defy predictions essentially by taking a wider view -- by getting out of the box (or by smashing the box altogether, or by altering the box, etc.). Almost all predictions are conditional and by messing with conditions you can affect predictions -- what will come to pass and what will not. That is not a low-level world property, that's just a function of how wide your framework is. Kobayashi Maru and all that.

As to the Buddha's world, it seems to be mostly about goals and values -- things on the subject of which the Bayes' world is notably silent.

Replies from: asr, fubarobfusco
comment by asr · 2014-04-25T17:06:31.754Z · LW(p) · GW(p)

Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not "enable goal-accomplishing actions" for him -- in the Bayes' world as well. Is the Cassandra's world defined by being powerless?

Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random and the best-possible predictions only give you a marginal improvement over the baseline. Or else perhaps the real limit is willpower, and the ability to take decisive action based on prediction is innate and cannot be easily altered. Put in other terms, "the world is divided into players and NPCs and your beliefs are irrelevant to which of those categories you are in."

I don't particularly think either of these is likely but if you believed the world worked in either of those ways, it would follow that optimizing your beliefs was wasted effort for "Cassandra World" reasons.

Replies from: Lumifer
comment by Lumifer · 2014-04-25T17:20:09.591Z · LW(p) · GW(p)

So then the Cassandra's world is essentially a predetermined world where fate rules and you can't change anything. None of your choices matter.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-04-26T01:06:42.581Z · LW(p) · GW(p)

Alternately, in such a world, it could be that improving your predictive capacity necessarily decreases your ability to achieve your goals.

Hence the classical example of Cassandra, who was given the power of foretelling the future, but with the curse that nobody would ever believe her. To paraphrase Aladdin's genie: "Phenomenal cosmic predictive capacity ... itty bitty evidential status."

Yes, a Zelazny or Smullyan character could find ways to subvert the curse, depending on just how literal-minded Apollo's "install prophecy" code was. If Cassandra had taken a lesson in lying from Epimenides, she mightn't have had any problems.

comment by fubarobfusco · 2014-04-25T19:57:30.392Z · LW(p) · GW(p)

You're right about the prisoner. (Which also reminds me of Locke's locked-room example regarding voluntariness.) That particular situation doesn't distinguish those worlds.

(I should clarify that in each of these "worlds", I'm talking about situations that occur to humans, specifically. For instance, Bayes math clearly works for abstract agents with predefined goals. What I want to ask is, to what extent does this provide humans with good advice as to how they should explicitly think about their beliefs and goals? What System-2 meta beliefs should we adopt and what System-1 habits should we cultivate?)

Heroes in myth defy predictions essentially by taking a wider view -- by getting out of the box (or by smashing the box altogether, or by altering the box, etc.).

I think we're thinking about different myths. I'm thinking mostly of tragic heroes and anti-heroes who intentionally attempt to avoid their fate, only to be caught by it anyway — Oedipus, Agamemnon, or Achilles, say; or Macbeth. With hints of Dr. Manhattan and maybe Morpheus from Sandman. If we think we're in Bayes' world, we expect to be in situations where getting better predictions gives us more control over outcomes, to drive them towards our goals. If we think we're in Cassandra's world, we expect to be in situations where that doesn't work.

As to the Buddha's world, it seems to be mostly about goals and values -- things on the subject of which the Bayes' world is notably silent.

That's pretty much exactly one of my concerns with the Bayes-world view. If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.

Replies from: Lumifer
comment by Lumifer · 2014-04-25T21:00:08.265Z · LW(p) · GW(p)

If we think we're in Bayes' world, we expect to be in situations where getting better predictions gives us more control over outcomes

No, not really. Bayes gives you information, but doesn't give you capabilities. A perfect Bayesian will find the optimal place/path within the constraints of his capabilities, but no more. Someone with worse predictions but better abilities might (or might not) do better.

If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.

Um, Bayes doesn't give you any promises, never mind guarantees, about your satisfaction. It's basically like classical logic -- it tells you the correct way to manipulate certain kinds of statements. "Satisfaction" is nowhere near its vocabulary.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-04-25T21:05:22.208Z · LW(p) · GW(p)

Um, Bayes doesn't give you any promises, never mind guarantees, about your satisfaction. It's basically like classical logic -- it tells you the correct way to manipulate certain kinds of statements. "Satisfaction" is nowhere near its vocabulary.

Exactly! That's why I asked: "To what extent does [Bayes] provide humans with good advice as to how they should explicitly think about their beliefs and goals?"

We clearly do live in a world where Bayes math works. But that's a different question from whether it represents good advice for human beings' explicit, trained thinking about their goals.

Edit: I've updated the post above to make this more clear.

comment by IlyaShpitser · 2014-04-26T10:08:26.798Z · LW(p) · GW(p)

Other than Bayes' world, which other worlds might we be living in?

A world with causes and effects. (Bayes' world as described is Cassandra's world, for the usual reasons of "prediction" not being what you want for choosing actions).


[ There was something else here, having to do with how it is hard to use causal info in a Bayesian way, but I deleted it for now in order to think about it more. You can ask me about it if interested. The moral is, it's not so easy to just be Bayesian with arbitrary types of information. ]

Replies from: fubarobfusco
comment by fubarobfusco · 2014-04-27T18:43:31.577Z · LW(p) · GW(p)

Hmm. I think I know what you're referring to — aside from prediction, you also need to be able to factor out irrelevant information, consider hypotheticals, and construct causal networks. A world where cause and effect didn't work a good deal of the time might still be predictable, but choosing actions wouldn't work very effectively.

(I suspect that if I'd read more of Pearl's Causality I'd be able to express this more precisely.)

Is that what you're getting at, at all?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-04-27T20:08:10.830Z · LW(p) · GW(p)

Well, when you use Bayes' theorem, you are updating based on a conditioning event. But with causal info, it is not a conditioning event anymore. I don't think it is literally impossible to be Bayesian with causal info, but it sounds hard. I am still thinking about it.

So I am not sure how practical this "be more Bayesian" advice really is. In practice we should be able to use information of the form "aspirin does not cause cancer", right?
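
A toy way to see why causal information is not just another conditioning event (a hypothetical simulation added for illustration; the numbers and the aspirin/illness setup are made up): with a confounder in play, the conditional P(cancer | aspirin) and the interventional P(cancer | do(aspirin)) come apart.

```python
import random
random.seed(0)

# Hypothetical toy model: a confounder (chronic illness) raises both the chance
# of taking aspirin and the chance of cancer; aspirin itself does nothing.
def draw(force_aspirin=None):
    ill = random.random() < 0.5
    if force_aspirin is None:
        aspirin = random.random() < (0.8 if ill else 0.2)
    else:
        aspirin = force_aspirin                         # causal surgery on the aspirin node
    cancer = random.random() < (0.3 if ill else 0.05)   # depends only on illness
    return aspirin, cancer

obs = [draw() for _ in range(100_000)]
p_given = sum(c for a, c in obs if a) / sum(1 for a, _ in obs if a)

forced = [draw(force_aspirin=True) for _ in range(100_000)]
p_do = sum(c for _, c in forced) / len(forced)

print(round(p_given, 3))  # ~0.25: conditioning on "took aspirin" picks up the confounder
print(round(p_do, 3))     # ~0.175: intervening shows aspirin is causally inert here
```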


[ I did not downvote the parent. ]

comment by Squark · 2014-04-25T20:34:59.302Z · LW(p) · GW(p)

For one thing, we already have strong evidence rationality is a useful idea: it's called science & technology.

Cassandra's world: Mythical predictions seem to be unconditional, whereas Bayesian predictions are conditional on your own actions and thus can be acted upon.

Buddha's world: Well, understanding your own values and understanding how to maximize them are two tasks, neither of which is redundant. I think rationality is useful in understanding your own values as well, for example by analyzing them through evolutionary psychology or cognitive neuroscience. Moreover, empirically, our understanding of our own values also improves when we learn epistemic facts and analyze hypothetical scenarios. Without rationality it is difficult to create sufficiently precise language for formulating the values.

comment by Vulture · 2014-04-23T18:49:40.528Z · LW(p) · GW(p)

Pure curiosity question: What is the general status of UDT vs. TDT among y'all serious FAI research people? MIRI's publications seem to exclusively refer to TDT; people here on LW seem to refer pretty much exclusively to UDT in serious discussion, at least since late 2010 or so; I've heard it reported variously that UDT is now standard because TDT is underspecified, and that UDT is just an uninteresting variant of TDT so as to hardly merit its own name. What's the deal? Has either one been fully specified/formalized? Why is there such a discrepancy between MIRI's official work and discussion here in terms of choice of theory?

Replies from: Wei_Dai, Manfred, gsastry, Douglas_Knight, IlyaShpitser
comment by Wei Dai (Wei_Dai) · 2014-04-25T04:31:24.311Z · LW(p) · GW(p)

MIRI's publications seem to exclusively refer to TDT

Why do you say that? If I do a search for "UDT" or "TDT" on intelligence.org, I seem to get about an equal number of results.

people here on LW seem to refer pretty much exclusively to UDT in serious discussion

This seems accurate to me. I think what has happened is that UDT has attracted a greater "mindshare" on LW, to the extent that it's much easier to get a discussion about UDT going than about TDT. Within MIRI it's probably more equal between the two.

that UDT is just an uninteresting variant of TDT so as to hardly merit its own name

As I recall, Eliezer was actually the one who named UDT. (Here's the comment where he called it "updateless", which everyone else then picked up. In my original post I never gave it a name but just referred to "this decision theory".)

Has either one been fully specified/formalized?

There have been a number of attempts to formalize UDT, which you can find by searching for variations on "formal UDT" on LW. I'm not aware of a similar attempt to formalize TDT, although this paper gives some hints about how it might be done. It's not really possible to "fully" specify either one at this time because both need to interface with a to-be-discovered solution to the problem of logical uncertainty, and at this point we don't even know the type signature of such a solution. In the attempts to formalize UDT, people either make a guess as to what the type signature is, or side-step the problem by assuming that all relevant logical facts can be deduced by the agent.

Replies from: Vulture
comment by Vulture · 2014-04-25T13:05:45.839Z · LW(p) · GW(p)

Thanks! This is exactly the kind of answer I was hoping for. A lot of it was what I had sort of deduced from looking at MIRI docs and stuff, but having it laid out explicitly seems to have clicked the missing elements into place and I feel like I understand it much better now.

Replies from: Adele_L
comment by Adele_L · 2014-04-26T21:35:50.542Z · LW(p) · GW(p)

You might also find this honors thesis by Daniel Hintze handy.

comment by Manfred · 2014-04-25T17:25:09.162Z · LW(p) · GW(p)

I'm not serious, but I'd say that there's little actual use of TDT because it requires us to solve the difficult problem of finding the right causal and logical structure of the problem - this can be handwaved in by the user, but doing that feels awkward. Folk-UDT ("just execute the best strategy") is sufficient for most purposes, both in application and in e.g. trying to understand logical uncertainty.

On the other hand, using causal structure is what lets us consider hypotheticals properly - so TDT will not have some issues that typical-UDT does with hypotheticals about its own actions. On the mutant third hand, TDT's solution of adding logical nodes to the causal structure might just be a simplification of something deeper, so it's not like we (us non-serious decision-theory dilettantes) should put all our eggs in one basket.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-04-25T19:55:25.897Z · LW(p) · GW(p)

What is an example of an issue that UDT has with hypotheticals that TDT does not?

Replies from: Manfred
comment by Manfred · 2014-04-25T20:48:46.255Z · LW(p) · GW(p)

The 5 and 10 problem is basically what happens when your agent asks "what are the logical implications if 5 is chosen?" rather than "If we do causal surgery such that 5 is chosen, what's the utility?"

There are other ways to avoid the 5 and 10 problem, but I think they're less general than using causality.
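
A deliberately crude sketch of that contrast (my own toy illustration; the real 5-and-10 problem concerns proof-based agents and spurious counterfactuals, which this does not capture): evaluating an action by what you know about cases where you actually take it breaks down when you never take it, while causal surgery just forces the action into the world model and reads off the payoff.

```python
# Hypothetical toy, not the formal proof-based 5-and-10 setup.
WORLD = {5: 5.0, 10: 10.0}        # true payoff of each action
HISTORY = [(10, 10.0)] * 100      # an agent that has only ever taken 10

def implication_style_value(action):
    """'What do I know about cases where my output is `action`?'
    With no such cases on record, there is no well-behaved answer."""
    seen = [u for a, u in HISTORY if a == action]
    return sum(seen) / len(seen) if seen else float("nan")

def surgery_style_value(action):
    """'Force `action` into the world model and read off the payoff.'"""
    return WORLD[action]

print(implication_style_value(5), surgery_style_value(5))  # nan 5.0
print(max(WORLD, key=surgery_style_value))                 # 10
```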

comment by gsastry · 2014-04-25T06:38:21.842Z · LW(p) · GW(p)

Has either one been fully specified/formalized?

Here's one attempt to further formalize the different decision procedures: http://commonsenseatheism.com/wp-content/uploads/2014/04/Hintze-Problem-class-dominance-in-predictive-dilemmas.pdf (H/T linked by Luke)

comment by Douglas_Knight · 2014-04-24T03:50:33.948Z · LW(p) · GW(p)

All the things you've heard are consistent and together they answer your final question by denying that there is a discrepancy in choice of theory, just in choice of name. (Not that I'm sure that all the things you've heard are true.)

Replies from: Vulture
comment by Vulture · 2014-04-24T15:19:13.567Z · LW(p) · GW(p)

That would make "TDT is underspecified" a rather odd thing for someone to say, though.

comment by IlyaShpitser · 2014-04-24T00:16:57.014Z · LW(p) · GW(p)

Good questions!

comment by Kaj_Sotala · 2014-04-21T20:09:42.172Z · LW(p) · GW(p)

I was feeling lethargic and unmotivated today, but so as not to do nothing at all, I got myself to at least read a paper on the computational architecture of the brain and summarize the beginning of it. Might be of interest to people; it also briefly touches upon meditation.

Whatever next? Predictive brains, situated agents, and the future of cognitive science (Andy Clark 2013, Behavioral and Brain Sciences) is an interesting paper on the computational architecture of the brain. It’s arguing that a large part of the brain is made up of hierarchical systems, where each system uses an internal model of the lower system in an attempt to predict the next outputs of the lower system. Whenever a higher system mispredicts a lower system’s next output, it will adjust itself in an attempt to make better predictions in the future.
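
A very small sketch of that scheme (my own toy illustration, not code from the paper): a higher level keeps an estimate of what the lower level will report next, and nudges that estimate by a fraction of the prediction error whenever it mispredicts.

```python
# Toy predictive-coding loop: one higher level tracking one lower-level signal.
# (Illustrative only; the paper describes many such levels stacked hierarchically.)
def track(signal, learning_rate=0.2):
    estimate = 0.0                                    # higher level's model of the lower level
    for observed in signal:
        prediction_error = observed - estimate
        estimate += learning_rate * prediction_error  # adjust only on misprediction
        yield estimate

lower_level_output = [1.0] * 10 + [4.0] * 10          # the lower level's reports jump at step 10
for step, est in enumerate(track(lower_level_output)):
    print(step, round(est, 3))  # estimate climbs toward 1.0, then re-adapts toward 4.0
```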

EDIT: Just realized, this model explains tulpas. Also has connections to perceptual control theory, confirmation bias and people's general tendency to see what they expect to see, embodied cognition, the extent to which the environment affects our thought... whoa.

Replies from: Vulture, Armok_GoB
comment by Vulture · 2014-04-24T15:31:36.673Z · LW(p) · GW(p)

This model explains tulpas

Could you elaborate? I haven't read the paper, but this connection doesn't seem obvious to me.

comment by Armok_GoB · 2014-04-27T00:33:38.843Z · LW(p) · GW(p)

O_O

This explains SO MUCH of things I feel from the inside! Estimating a small probability it'll even help deal with some pretty important stuff. Wish I could upvote a million times.

comment by april_flower · 2014-04-21T18:58:06.209Z · LW(p) · GW(p)

How strong is the evidence in favor of psychological treatment really?

I am not happy. I suffer from social anxiety. I procrastinate. And I have a host of other issues that are all linked, I am certain. I have actually sought out treatment, with absolutely no effect. On the recommendation of my primary care physician I entered psychoanalytic counseling and was appalled by the theoretical basis and practical course of "treatment". After several months without even a hint of success I aborted the treatment and looked for help somewhere else.

I then read David Burns' "Feeling Good", browsing through it, taking notes and doing the exercises for a couple of days. It did not help; of course, in hindsight, I wasn't doing the treatment long enough to see any benefit. But the theoretical basis intrigued me. It just made so much more sense to be determined by one's beliefs than by a fear of having one's balls chopped off, hating one's parents, and actively seeking out displeasure because that is what fits the narrative.

Based on the key phrase "CBT" I found "The Now Habit", and reading it actually helped me subdue my procrastination long enough to finish my bachelor's degree in a highly technical subject with grades in the highest quintile. Then I slipped back into a phase of relative social isolation, procrastination and so on.

We see these phenomena consistently in people. We also see them consistently in animals being held in captivity not suited to their species' specific needs. I am less and less convinced that this block of anxiety, depression and procrastination is a disease but a reaction to an environment in the broadest sense inherently unsuitable to humans.

The proper and accepted procedure for me would be to try counseling again, this time with a cognitive behavioral approach. But I am unwilling to commit that much time for uncertain results, especially now that I want to travel or do a year abroad or just run away from it all. (Suicide is not an option) What lowers my odds of success even more is that I never feel understood by people put in place to understand in various venues. So how could such a treatment help?

I am open to bibliotherapy. I don't think I am open to traditional or even medical therapy.

Replies from: jobe_smith, witzvo, Lumifer, ChristianKl, NancyLebovitz, James_Miller, MarkL, Tripitaka, drethelin, None, kalium, Eugine_Nier
comment by jobe_smith · 2014-04-25T17:39:26.036Z · LW(p) · GW(p)

I have suffered from social anxiety continuously and depression off and on since childhood. I've sought treatment that included talk therapy and medication. Currently I am doing EMDR therapy which may or may not end up being helpful, but I don't expect it to work miracles. Everyone in my immediate family has had similar issues throughout their lives. I feel your pain. Despite not being perfect and being in therapy, I feel like my life is going pretty well. Here is what has worked for me:

Acceptance: Not everyone can be or should be the life of the party. Being quiet or reserved or shy is a perfectly acceptable way to live your life. You can still work on becoming comfortable in more social situations, but you are fine right now. There are plenty of people who will like you just as you are, even if your social skills are far from perfect. Harsh self-judgement can make anxiety worse and lead to procrastination and depression. What I try to do as best I can is to just do whatever I feel like in the moment, and just let the world correct me. I try not to develop too many theories about how the world will react to me since I know from experience that those theories will be biased and pessimistic.

Decide what you want from the world: I guess this is somewhat generic life advice, but it has really worked for me. I decided fairly early on what I wanted to get from the social world. I wanted 3 things.

  • marriage
  • children
  • a good career

Deciding those things, I plugged away at getting them. I was completely incompetent at talking to women but with some help from e-harmony I found one who I was able to be comfortable with and who liked me. We got married 6.5 years ago and we have a 2 year old daughter and another child on the way. Professionally, I found a career that involves a minimum of politicking and no customer interaction. And yet it is both intellectually satisfying and highly remunerative. Even though neither my home life nor my professional life are perfect, achieving my basic life goals has given me a deep feeling of confidence and satisfaction that I can use to counter feelings of anxiety and depression as they come.

Each step I took along the path towards my goals gave me more confidence to move forward, but that confidence wasn't necessarily automatic. I have to periodically brag to myself about myself because otherwise I will naturally focus on my failures and weaknesses and start to feel like a loser. You should be very proud of your accomplishments in college. Most people could not do what you have done. Remind yourself of that. Feel good about yourself.

comment by witzvo · 2014-04-22T08:02:27.541Z · LW(p) · GW(p)

but a reaction to an environment in the broadest sense inherently unsuitable to humans.

So, can you say more about what aspect of your environment is bugging you? Captivity?? Do you want to try living somewhere more "outdoors"?

Replies from: Tripitaka, april_flower
comment by Tripitaka · 2014-04-26T08:57:07.503Z · LW(p) · GW(p)

I am imagining that some issues of depression/social anxiety might be a lot easier to resolve in an ancestral environment. Especially the social anxiety part.

comment by april_flower · 2014-04-22T22:24:30.311Z · LW(p) · GW(p)

It was mainly a thought that occurred to me to write down as the rest of the story wrote itself. My problem is more social anxiety, which of course pertains to the social environment. Moving, of course, will not help this anxiety one bit; it will more probably even amplify it.

comment by Lumifer · 2014-04-21T20:55:32.958Z · LW(p) · GW(p)

How strong is the evidence in favor of psychological treatment really?

I think the evidence shows that it works for some people, doesn't work for other people, and the spectrum of outcomes stretches all the way from "miraculously fixed everything" to "made everything worse" :-/

Oh, and "some people" and "other people" refers not just to the person being treated, but to a patient/psychotherapist pair. It is fairly common for people to have no success with a chain of therapists until they find "the one" who clicks and can effectively help with whatever the problem is.

Sorry, but there is really no answer to the question as posed.

Replies from: april_flower
comment by april_flower · 2014-04-22T22:26:52.947Z · LW(p) · GW(p)

So continue burning through therapists in the hope of being understood. Is there any shred of evidence that I should try psychoanalytic treatment again? From my impression the effect of it is similar to homeopathic treatment.

Sorry, but there is really no answer to the question as posed.

How can I restate it to get a more answerable question?

Replies from: Lumifer
comment by Lumifer · 2014-04-23T00:34:46.970Z · LW(p) · GW(p)

So continue burning through therapists in the hope of being understood.

I don't know. Note that this answer is different from "continue with what you were doing". One of the points here is that any advice has to be highly personalized and generic recommendations are quite useless.

As an aside, are you looking for a therapist to understand you, or to effect some change in you?

How can I restate it to get a more answerable question?

I don't think you can get a useful answer from strangers on the 'net.

comment by ChristianKl · 2014-04-22T14:05:53.767Z · LW(p) · GW(p)

We see these phenomena consistently in people. We also see them consistently in animals being held in captivity not suited to their species' specific needs. I am less and less convinced that this block of anxiety, depression and procrastination is a disease but a reaction to an environment in the broadest sense inherently unsuitable to humans.

What does it mean for a dog to be procrastinating?

Procrastination usually involves humans wanting to do things that are not natural.

I used to believe that procrastination was something very unique to me, but today I believe that nearly everyone struggles with it to some extent. Even someone like Tim Ferriss, who advises a dozen startups and writes a book at the same time, still deals with it. People who are productive simply have found strategies to still be productive despite being imperfect humans.

I am open to bibliotherapy.

You already read Burns. How about doing 15 minutes per day of his exercises for the next year?

Replies from: april_flower, Lumifer
comment by april_flower · 2014-04-22T22:30:55.654Z · LW(p) · GW(p)

Indeed I can try again. Though social cues are quite powerful in maintaining the routine.

Having options is nice. Also more varied experiences tend to stick better, like reading two different explanations of the same phenomenon.

comment by Lumifer · 2014-04-22T14:55:10.777Z · LW(p) · GW(p)

Procrastination usually involves humans wanting to do things that are not natural.

Not at all. Procrastination is letting near and immediate incentives overcome far and remote ones.

People procrastinate by browsing the 'net instead of going running -- which one is more "natural"?

Replies from: ChristianKl, drethelin
comment by ChristianKl · 2014-04-22T15:33:24.116Z · LW(p) · GW(p)

Going running for the sake of doing exercise isn't natural.

comment by drethelin · 2014-04-23T03:33:48.355Z · LW(p) · GW(p)

Browsing the net = being sedentary, saving energy, staying in a place you know is safe and has access to food and water. Running = wasting a shit ton of energy and putting yourself into the world and at risk for no immediate gain.

Seems obvious to me which you would be more naturally inclined to do.

comment by NancyLebovitz · 2014-04-22T11:13:39.919Z · LW(p) · GW(p)

We see these phenomena consistently in people. We also see them consistently in animals being held in captivity not suited to their species' specific needs. I am less and less convinced that this block of anxiety, depression and procrastination is a disease but a reaction to an environment in the broadest sense inherently unsuitable to humans.

I've heard the idea from Somatic Experiencing-- unfortunately, I haven't found anything that goes into detail about that particular angle, except that part of it seems to be about having a tribe-- it's not just about spending time out of doors.

I'll be keeping an eye out for information on the subject, but meanwhile, you might want to look into Somatic Experiencing and Peter A. Levine.

Replies from: april_flower
comment by april_flower · 2014-04-22T22:29:01.805Z · LW(p) · GW(p)

This touches on some things some popular people sometimes note: a feeling of being uprooted, of having no sense of belonging or meaning. Maybe this is a reason for the recent resurgence of religious organizations. Of course, if this vague shred of an idea has some truth to it, one should be able to create or find a tribe substitute.

I will look into it, thank you.

comment by James_Miller · 2014-04-23T00:04:13.664Z · LW(p) · GW(p)

Consider neurofeedback administered by a professional. In the U.S. it will cost between $50 and $200 a session. You probably need at least 20 sessions for permanent results, but you might be able to feel some effects during the first session.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-04-23T20:28:25.498Z · LW(p) · GW(p)

Source of information about effectiveness and duration?

Replies from: James_Miller
comment by James_Miller · 2014-04-23T20:45:12.446Z · LW(p) · GW(p)

None online. I have read several books on the topic and undergo it myself.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-04-23T21:10:29.987Z · LW(p) · GW(p)

If you don't mind, what were the books, and what changes have you noticed in yourself?

Replies from: James_Miller
comment by James_Miller · 2014-04-23T22:10:44.331Z · LW(p) · GW(p)

Protocol Guide for Neurofeedback Clinicians (very expensive but the best); The Neurofeedback Solution: How to Treat Autism, ADHD, Anxiety, Brain Injury, Stroke, PTSD, and More; and Getting Started with Neurofeedback (Norton Professional Books).

Neurofeedback has many different targets. I have used it to become more relaxed and focused. Most of what I learned came from talking to neurofeedback professionals. I strongly suggest you not experiment on yourself, but rather do so under the care of a professional.

comment by MarkL · 2014-04-22T10:26:05.676Z · LW(p) · GW(p)

How strong is the evidence in favor of psychological treatment really?

Existent. But psychological treatment is in its infancy. I am not a licensed mental health professional, but watch this:

https://www.youtube.com/watch?v=_V_rI2N6Fco

Now, go find a therapist who's at least 45 years old, preferably 50-plus, is not burned out, and loves what they do. It doesn't really matter what the therapeutic modality is. Don't go to a thirty-something CBT-weenie.

Edit: A bunch of recent posts on my blog are about therapy. May or may not be useful:

http://meditationstuff.wordpress.com/

comment by Tripitaka · 2014-04-26T08:52:55.621Z · LW(p) · GW(p)

Some personal anecdotes / data points from someone in a similar situation (social anxiety, depression, procrastination to the point of dropping out of uni, going abroad): I was lucky with my CBT psychotherapist; they helped me unravel that big knot of connected issues. I am still suffering, but now equipped to deal with it. That said, I decided to travel for 8 months (NZ), basically as a frontal override for some of my issues. Be aware that traveling with mental issues can terribly backfire; you are on your own without your usual escape strategies. Depending on your flavour of issues, strategies to get around that vary, but expect and be resolved to have the same bad days as at home. Having heaps of money to get your own room/room service/fast food/return tickets helps; ensure a really solid safety net from home (someone to lend you money, do minor services for you, people to call at strange times, people to call you regularly). Do not expect the condition "abroad" to change you quickly; you'll still find it harder to get to know people than others do. Expect lots of grass-is-greener fallacy; I caught my brain giving exactly the same reasons for going home early that it gave months ago for going away. That said, was going away a good decision? Yes. Was it the optimal decision? I am not sure.

comment by drethelin · 2014-04-23T03:40:12.893Z · LW(p) · GW(p)

Making friends is hard with social anxiety but I think it's your best bet.

comment by [deleted] · 2014-04-21T22:43:11.823Z · LW(p) · GW(p)

Who are you, what are your physical and social environments like, and do you do the obvious things like lifting weights (or at least similar if you're female) and eating "right"?

The only reason to pay someone for non-specific therapy is if you don't have any friends, and even then you can't be truly honest without risking being institutionalized.

Replies from: kalium
comment by kalium · 2014-04-22T02:30:43.478Z · LW(p) · GW(p)

The only reason to pay someone for non-specific therapy is if you don't have any friends, and even then you can't be truly honest without risking being institutionalized.

Disagree. Frequent discussion of one's anxieties can be a heavy burden on a friendship, and it's vulnerable to cascading failures. If I have four friends and spread my worries evenly between them, and one finds this exhausting and decides to spend less time with me, then I have three friends I can talk to, each of whom will suddenly find me even more stressful to be around.

Replies from: None
comment by [deleted] · 2014-04-22T03:52:52.531Z · LW(p) · GW(p)

Friends include family.

If you have a shitty relationship with your family, then that sucks. If you're male, suck it up and be a man. If you're female and not ugly, you have an unlimited number of guys to dump your feelings into. If you're female, ugly, and have a shitty relationship with your family, then you probably have similar friends and already share your feelings a lot with each other without fear of rejection, so you're good.

Unless you're just gaming the system for pills (which is fine, if you know what you're doing), then professional therapy for non-specific stuff is pointless.

comment by kalium · 2014-04-22T02:27:50.180Z · LW(p) · GW(p)

It's not useful to discuss whether or not anxiety, depression, or procrastination is a "disease." It either is or isn't a useful way to adapt to the current environment, and if it's not useful you want to change either your reaction or your environment.

comment by Eugine_Nier · 2014-04-27T04:34:48.910Z · LW(p) · GW(p)

But the theoretical basis intrigued me. It just made so much more sense to be determined by one's beliefs than by a fear of having one's balls chopped off, hating one's parents, and actively seeking out displeasure because that is what fits the narrative.

If by psychological treatment, you mean the Freudian kind, that's mostly BS.

comment by 2ZctE · 2014-04-22T01:20:56.422Z · LW(p) · GW(p)

I get confused when people use language that talks about things like "fairness", or whether people are "deserving" of one thing or another. What does that even mean? And who or what is to say? Is it some kind of carryover from religious memetic influence? An intuition that a cosmic judge decides what people are "supposed" to get? A confused concept people invoke to try to get what they want? My inclination is to just eliminate the whole concept from my vocabulary. Is there a sensible interpretation that makes these words meaningful to atheist/agnostic consequentialists, one that eludes me right now?

Replies from: fubarobfusco, Alsadius, IlyaShpitser, ChristianKl, Torello, iconreforged, Lumifer, michaelkeenan, Squark, Username, Eugine_Nier
comment by fubarobfusco · 2014-04-22T16:06:18.086Z · LW(p) · GW(p)

Here are some things people might describe as "unfair":

  • Someone shortchanges you. You buy what's advertised as a pound of cheese, only to find out at home that it's only four-fifths of a pound; the storekeeper had their thumb on the scale to deliberately mis-weigh it.
  • Someone passes off a poor-quality item as a good one. You buy a sealed box of cookies, only to find out that half of them are broken and crumbled due to mishandling at the store.
  • Someone entrusted with a decision abuses that trust to their advantage. The facilities manager of a company doesn't hire the landscaping company that makes the best offer to the company, but instead the one that offers the best kickback to the facilities manager.
  • Someone uses a position of power to take something that isn't theirs; especially when the victim can't do anything about it. A boy's visiting grandmother gives him $50 to buy a video game for his birthday; but as soon as the grandmother has left, the boy's mother takes the money away and uses it to buy liquor for herself.
  • Someone abandons a responsibility, leaving it to others to cover. Four people go out to dinner together; and the bill comes to $100. One person excuses himself "to go to the restroom," but doesn't come back, so the others have to pay his share of the bill as well as their own.
  • Someone takes advantage of a person's weak or ignorant position. A taxi driver, knowing that a tourist doesn't know the city, takes a deliberately circuitous route to run up the meter.
  • Someone uses asymmetrical information to deprive others of a stronger negotiating position. An employer tells each of her employees individually that they are poor performers, easily replaceable, and unlikely to get a raise; so that they do not realize that together they are not easily replaceable and that by collective bargaining they could negotiate for higher wages.
  • Someone breaks agreed-upon rules to take something of value. A poker player uses a trick to put a card into play that wasn't dealt to him — the classic "ace up the sleeve" — in order to win money that another player would have won.
  • Someone entrusted to do a good job instead does a bad job in order to gain an advantage some other way. A star sports player deliberately plays poorly so his team will lose a game they are strongly favored to win, allowing people who have bet against his team to win big.
  • Someone gets away with breaking the rules by making outside arrangements with those responsible for enforcing them. By donating to the "police charitable fund," you get a bumper sticker that makes it less likely the police will pull you over if you break the traffic laws.

What sorts of things do you see in common among these situations?

Replies from: Lumifer, blacktrance
comment by Lumifer · 2014-04-22T16:23:10.569Z · LW(p) · GW(p)

What sorts of things do you see in common among these situations?

Your list seems a bit... biased.

Let's throw in a couple more situations:

  • A homeless guy watches a millionaire drive by in a Lamborghini. "That's not fair!" he says.
  • An unattractive girl watches an extremely cute girl get all the guys she wants and twirl them around her little finger. "That's not fair!" she says.
  • A house owner learns that his house will be taken away from him under an eminent domain claim by the state which wants a developer to build a casino on the land. "That's not fair!" he says.
  • A union contractor is undercut on price by a non-union contractor. "That's not fair!" he says.

Replies from: blacktrance, fubarobfusco, Eugine_Nier
comment by blacktrance · 2014-04-22T17:32:42.501Z · LW(p) · GW(p)

While people say "That's not fair" in the above examples and in these, it seems there are two different clusters of what they mean. In the first group, the objection seems to be to self-serving deception of others, particularly violation of agreements (or what social norms dictate are implicit agreements). Your examples don't involve deception or violation of agreements (except perhaps in the case of eminent domain), and the objection is to inequality. I find it strange that the same phrase is used to refer to such different things.

Replies from: Kaj_Sotala, Lumifer
comment by Kaj_Sotala · 2014-04-23T07:24:43.260Z · LW(p) · GW(p)

I think you could say that in both groups, people are objecting because society is not distributing resources according to some norm of what qualities the resource distribution is supposed to be based on.

In the first group of examples, people are deceiving others and violating agreements, and society says that people are supposed to be rewarded for honest behavior and keeping agreements.

For the second group of examples:

  • The homeless person example is a bit tricky, since there are multiple different norms that they might be appealing to, but suppose that the homeless person used to be a hard worker before he got laid off and lost his home. The homeless person may then be objecting that society is supposed to reward a willingness to put in hard work, whereas he doesn't perceive the millionaire as having worked equally hard. Or, the homeless person may think that society should provide some minimum level of resources to everyone, and the fact that he has nothing while another person has millions demonstrates a particularly blatant violation of this rule.
  • There's a social ideal saying that people should be rewarded for their "internal" characteristics (like honesty) rather than "external" ones (like appearance), so the unattractive girl is objecting to the attractive girl being rewarded for something she's not supposed to be rewarded for.
  • The house owner is objecting because we usually think that people should be allowed to keep the property they have worked to have, and the eminent domain claim is violating that intuition.
  • The union contractor is complaining because he thinks that being unionized provides benefits for the profession as a whole, and that the non-union contractor is getting a personal benefit while defecting against the rest of the profession.

Regardless of what your ideal society looks like, creating it probably requires consistently maintaining some algorithm that rewards certain behaviors while punishing others. Fairness violations could be thought of as situations where the algorithm doesn't work, and people are being rewarded for things that an optimal society would punish them for, or vice versa.

You could also say that in both groups, there is actually an implicit agreement going on, with people being told (via e.g. social ideals and what gets praised in public) that "if you do this, then you'll be rewarded". If you buy into that claim, then you will feel cheated if you do what you think you should do, but then never get the reward.

Of course, the situation is made more complicated by the fact that there is no consistent, universally agreed-upon norm of what the ideal society should be, nor of what would be the optimal algorithm for creating it. People also have an incentive to push ideals which benefit them personally, whether as a conscious strategy or as an unconscious act of motivated cognition. So it's not surprising that people will have widely differing ideas of what "fair" behavior actually looks like.

comment by Lumifer · 2014-04-22T18:25:15.461Z · LW(p) · GW(p)

I find it strange that the same phrase is used to refer to such different things.

However looking at reality, the phrase is used in all these ways, isn't it?

Replies from: Eugine_Nier, blacktrance
comment by Eugine_Nier · 2014-04-22T23:42:54.547Z · LW(p) · GW(p)

As Bart Wilson mentions here, a century ago the word "fairness" referred exclusively to the first cluster. However, due to various political developments during the past century it has drifted and now refers to a confused mix of both.

comment by blacktrance · 2014-04-22T18:31:32.381Z · LW(p) · GW(p)

Indeed it is, which is evidence for the two different types of situations feeling similar to people.

comment by fubarobfusco · 2014-04-22T17:24:27.243Z · LW(p) · GW(p)

Your list seems a bit... biased.

That's odd ... I was specifically trying to choose examples that would be relatively uncontroversial — cases of cheating, betrayal of trust, abuse of power, and so on; as opposed to cases of mere inequality of outcome.

Replies from: Lumifer
comment by Lumifer · 2014-04-22T18:23:34.590Z · LW(p) · GW(p)

I was specifically trying to choose examples

That's a bias, isn't it? :-)

If you're choosing examples to construct a definition from, already having a definition in mind makes the exercise pointless.

If you choose examples of fraud and abuse of power you essentially force the definition of "unfair" be "fraud and abuse of power".

Replies from: fubarobfusco
comment by fubarobfusco · 2014-04-22T23:04:26.820Z · LW(p) · GW(p)

Wow, and here I thought I'd be dinged for including such mildly politicized examples as the police one and the collective-bargaining one. Instead, I get dinged for not including a bunch of stuff likely to provoke a political foofaraw about class, gender, or eminent domain? Weird.

Okay, this is getting excessively meta. I'm done here.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T00:36:21.779Z · LW(p) · GW(p)

Instead, I get dinged for not including a bunch of stuff likely to provoke a political foofaraw

Maybe you should have been more concerned with figuring out how stuff really works and less with the possibility of provoking a political foofaraw on an internet forum...

comment by Eugine_Nier · 2014-04-23T00:19:10.131Z · LW(p) · GW(p)

Nitpick: Your third example:

A house owner learns that his house will be taken away from him under an eminent domain claim by the state which wants a developer to build a casino on the land. "That's not fair!" he says.

Is similar to one of fubarobfusco's examples:

Someone uses a position of power to take something that isn't theirs; especially when the victim can't do anything about it. A boy's visiting grandmother gives him $50 to buy a video game for his birthday; but as soon as the grandmother has left, the boy's mother takes the money away and uses it to buy liquor for herself.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T00:41:39.624Z · LW(p) · GW(p)

There is a subtle, but important difference. Many people (here and elsewhere) would consider the exercise of eminent domain powers by the state to be ethical and correct application of state powers for the betterment of society -- a few suffer but for the greater good.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-04-23T00:45:46.457Z · LW(p) · GW(p)

Yes, and if the example had involved a road or other public works project, as opposed to immediately selling the land to a developer, your objection would have been appropriate.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T00:53:30.285Z · LW(p) · GW(p)

Oh, but the developer will provide jobs, and serve as an attractor for other businesses, and generally lift the area economically, and pay taxes into state coffers, and there will be gallivanting unicorns under the rainbows, and the people will look at the project and say "This is good".

If you believe what the state will tell you.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-04-23T03:13:16.493Z · LW(p) · GW(p)

So whether that example fits with the first set depends on whether the state's claim that the project is good is true, and thus whether this example is perceived as fitting with them depends on whether the perceiver believes the claim. Similarly, the Lamborghini example fits if one accepts the Marxist theory about the origin of income inequality.

Now we come to your example of the two girls. It's hard to make it an example of "fraud or abuse of power" (although it might be possible with enough SJ-style rhetoric about how beauty is an oppressive social construct). Notice that it is similar to the Lamborghini example otherwise, in particular it seems like the kind of thing that fits in the category whose archetypical member is the Lamborghini example.

So we can now reconstruct a history of the meaning of "unfair". Originally, i.e., about a century ago, it meant basically "fraud, cheating, or abuse of power". As Marxism became popular it expanded to include income inequalities, which fit that definition according to Marxist theory. Later as differences of income became one of the archetypical examples of "unfairness" and as the theory underlying its inclusion became less well-known, more things such as the two girls example came to be included in the category. See the history of verbs meaning "to be" in Romance Languages for another (less mind-killing) example of how semantic drift can produce these kinds of Frankencategories.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T04:14:16.249Z · LW(p) · GW(p)

I think it's simpler, without getting Marxism involved. The key word is "entitlement". If you feel entitled to something, then if you don't have it, someone is cheating you out of your right -- it's unfair! Doesn't really matter who, too -- nowadays people point at the universe and shout "Unfair!" :-/

comment by blacktrance · 2014-04-22T16:13:28.253Z · LW(p) · GW(p)

The general principle seems to be that there's an expectation of certain behavior, but one person acts deceptively in a way that harms the other people.

comment by Alsadius · 2014-04-22T04:19:14.170Z · LW(p) · GW(p)

It's not a theistic concept - if anything, it predates theology (some animals have a sense of fairness, for example). We build social structures to enforce it, because those structures make people better off. The details of fairness algorithms vary, but the idea that people shouldn't be cheated is quite common.

comment by IlyaShpitser · 2014-04-22T17:38:05.649Z · LW(p) · GW(p)

What does that even mean?

I am with Stanislaw Lem -- it's hard to communicate in general, not just about fairness. I find so many communication scenarios in life resemble first contact situations...

comment by ChristianKl · 2014-04-22T12:42:50.011Z · LW(p) · GW(p)

It's a cultural norm. If someone constantly defects in the prisoner's dilemma, he's violating the norm of fairness and deserves to be punished for doing so.
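
For reference, the standard prisoner's dilemma payoff structure (conventional textbook numbers, chosen here purely for illustration): defecting dominates for each player taken alone, yet mutual defection leaves both worse off than mutual cooperation, which is why constant defection reads as a norm violation.

```python
# Conventional prisoner's dilemma payoffs (illustrative textbook values).
C, D = "cooperate", "defect"
PAYOFF = {            # (row player's payoff, column player's payoff)
    (C, C): (3, 3),   # mutual cooperation
    (C, D): (0, 5),   # row is exploited
    (D, C): (5, 0),   # row exploits
    (D, D): (1, 1),   # mutual defection: worse for both than (C, C)
}

# Defection strictly dominates for the row player...
assert PAYOFF[(D, C)][0] > PAYOFF[(C, C)][0] and PAYOFF[(D, D)][0] > PAYOFF[(C, D)][0]
# ...yet both players prefer mutual cooperation to mutual defection.
assert PAYOFF[(C, C)] > PAYOFF[(D, D)]
```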

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-04-22T23:29:45.747Z · LW(p) · GW(p)

Except that in a lot of accusations of "unfairness" there is no obvious prisoner-dilemma-defection going on.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-23T13:07:46.946Z · LW(p) · GW(p)

Not lynching rich bankers means choosing to cooperate. Having a social landscape that's peaceful and without much violence isn't something to take for granted.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2014-04-23T14:42:49.725Z · LW(p) · GW(p)

Not lynching rich bankers means choosing to cooperate.

That is not a prisoner's dilemma.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-23T15:10:14.132Z · LW(p) · GW(p)

We sort of have an informal agreement of the proletarians not making a revolution and hanging the rich capitalists in return for society as a whole working in a way that makes everyone better off.

Rich bankers not fulfilling their side of working to make everyone in society better off is defecting from that agreement.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T15:16:10.974Z · LW(p) · GW(p)

We sort of have an informal agreement of the proletarians not making a revolution and hanging the rich capitalists in return for society as a whole working in a way that makes everyone better off.

No, we don't have anything of that sort.

Marx was wrong. He is still wrong.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-23T16:39:16.838Z · LW(p) · GW(p)

Marx was wrong.

Marx argued that a revolution is the only way to create meaningful social change. That's not what I'm saying in this instance.

Political power is justified in continental Europe through the social contract. Hobbes basically made the observation that every man can kill every other man in the state of nature, and that we need a sovereign to wield power to prevent this from happening.

Even British Parliamentary Style debate, which isn't continental in nature, usually doesn't put the same value on freedom as a political value as people in the US tend to do.

As far as the US goes the American dream is a kind of informal agreement. You had policies like the New Deal to keep everyone in society benefiting from wealth generation.

Then in the last 3 decades most of the new wealth went to the upper class instead of being distributed through the whole society as it had been in the decades before that point.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T16:50:05.385Z · LW(p) · GW(p)

Marx argued that a revolution is the only way to create meaningful social change.

Marx argued for a lot of things. The particular thing that I have in mind here is his position that the society consists of two classes -- a dispossessed ("alienated") proletariat and fat-cat capitalists, that these two classes are locked in a struggle, and that the middle class is untenable and is being washed out. This is the framework which your grandparent comment relied on.

It was wrong and is wrong.

Replies from: Douglas_Knight, ChristianKl
comment by Douglas_Knight · 2014-04-24T03:27:00.854Z · LW(p) · GW(p)

I don't think saying "That is not a prisoner's dilemma" is a useful way of communicating "those players don't exist."

Also, the topic at hand is what do people mean by "fair," not whether the situations they do or do not call fair are real situations.

comment by ChristianKl · 2014-04-23T20:23:12.704Z · LW(p) · GW(p)

The notion of "middle class" involves having more than two sides. People calling themselves "upper-middle class" is a very American thing to do. In the US ideal, a person of the middle class is supposed to own his own home and therefore own capital.

Workers do organize in unions and use their collective bargaining power to achieve political ends in the interests of their members. When a union makes a collective labor agreement with industry representatives you do have two clearly defined classes making an agreement with each other.

In the late 19th century a bunch of unions did support the communist ideal of revolution but most of them switched.

Groups like the US Chamber of Commerce do have political power. Capitalists' money funds a bunch of think tanks that determine a lot of political policy. Do you think that the Chamber of Commerce isn't representing the interests of a political class of capitalists?

Yes, individual people might opt out of being part of politics. We aren't like the Greeks, who punished people by death for not picking political sides.

Lastly, I would point out that I speak about political ideas quite freely and without much of an attachment. It might be that you take a point I'm making overly seriously.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T20:33:25.123Z · LW(p) · GW(p)

Lastly, I would point out that I speak about political ideas quite freely and without much of an attachment. It might be that you take a point I'm making overly seriously.

Ah. OK then.

comment by Eugine_Nier · 2014-04-24T01:32:48.024Z · LW(p) · GW(p)

How would you apply that to Lumifer's second example?

An unattractive girl watches an extremely cute girl get all the guys she wants and twirl them around her little finger. "That's not fair!" she says.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-24T09:41:08.965Z · LW(p) · GW(p)

The usual way groups of girls deal with this is to call the girl who actually twirls a lot of guys around her little finger a slut. The punishment isn't physical violence, but it's there.

comment by Torello · 2014-04-24T03:09:54.275Z · LW(p) · GW(p)

The sense of fairness evolved to make our mental accounting of debts (that we owe and are owed) more salient by virtue of being a strong emotion, similar to how the strong emotion of lust makes the reproductive instinct so tangible. This comes in handy because humans are highly social and intelligent and engage in positive-sum economic transactions, so long as both sides play fair... according to your adapted sense of what's fair. If you don't have a sharp sense of fairness, other people might walk all over you, which is not evolutionarily adaptive. See "The Moral Animal" or "Nonzero" by Robert Wright, or the chapter "Family Values" in Steven Pinker's "How the Mind Works."

This sense of fairness may have been co-opted at other levels, like a religious or political one, but it's quite instinctual. Very young children have a strong sense of fairness before they can reason their way to it, just as they can acquire language before they can explicitly/consciously reason from grammar rules to produce grammatical sentences. It's very ingrained in our mental structure, so I think it would take quite an effort to "wipe the concept."

comment by iconreforged · 2014-04-23T16:21:18.961Z · LW(p) · GW(p)

So, as I've heard Mike Munger explain it, fairness is evolution's solution to the equilibrium outcome selection problem. "Solution to the what?" you ask. This would be easy to explain if you're familiar with the Edgeworth box.

Consider a simplified economy consisting of two people and two goods, where the two people have some combination of different tastes and different initial baskets of goods. Suppose that you have 20 oranges and 5 apples, that I have 3 oranges and 30 apples, and that we each prefer a more even mix of fruits to either extreme. We can trade apples and oranges to make each of us strictly better off, but there's a whole continuum of possible trades that do so. And with your highly advanced social brain, you can tell that some of these trades are shit deals, like when I offer you 1 apple for 12 of your oranges. Even though we'd both mutually benefit, you'd be inclined to immediately counteroffer with something closer to the middle of the continuum of mutually beneficial exchanges, or a point that benefits you more as a reprimand for my being a jerk. Dealing fairly with each other skips costly repeated bargaining, and standing up to jerks who deviate from approximate fairness preserves the norm.

This is the sort of intuition that we're trying to test for in the Ultimatum game.
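Here is a minimal sketch in Python of the trade example above, under the assumption (mine, not part of the original framing) that each person's utility is min(apples, oranges), a crude stand-in for "preferring an even mix":

```python
# Toy model of the two-person, two-good economy above.  The utility
# function min(apples, oranges) is an assumption made for illustration.

def utility(apples, oranges):
    return min(apples, oranges)

you = {"apples": 5, "oranges": 20}
me = {"apples": 30, "oranges": 3}

def mutually_beneficial(apples_to_you, oranges_to_me):
    """True if my giving you `apples_to_you` apples in exchange for
    `oranges_to_me` of your oranges makes both of us strictly better off."""
    u_you_before = utility(you["apples"], you["oranges"])
    u_me_before = utility(me["apples"], me["oranges"])
    u_you_after = utility(you["apples"] + apples_to_you,
                          you["oranges"] - oranges_to_me)
    u_me_after = utility(me["apples"] - apples_to_you,
                         me["oranges"] + oranges_to_me)
    return u_you_after > u_you_before and u_me_after > u_me_before

# A whole region of trades is mutually beneficial, not a single "right" one.
good_trades = [(a, r) for a in range(me["apples"] + 1)
                      for r in range(you["oranges"] + 1)
                      if mutually_beneficial(a, r)]
print(len(good_trades), "mutually beneficial integer trades")

# The lopsided "1 apple for 12 oranges" offer is still in that region --
# both sides gain, but the gains are split very unevenly.
print(mutually_beneficial(1, 12))  # True
```

The "shit deal" intuition then corresponds to rejecting points near the edge of that region even though they technically pass the mutual-benefit test.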

comment by Lumifer · 2014-04-22T15:03:53.051Z · LW(p) · GW(p)

"Fairness" generally means one out of two things.

Either it's, basically, a signal of attitude -- to call something "fair" is to mean "I approve of it" -- or it is a rhetorical device in the sense of a weapon in an argument.

I think that people generally have gut ideas about what fairness entails, but they are fuzzy, bendable, and subject to manipulation, both by cultural norms and by specific propaganda/arguments.

comment by michaelkeenan · 2014-04-22T08:29:30.486Z · LW(p) · GW(p)

According to Moral Foundations Theory, fairness is one of the innate moral instincts.

According to Scott Adams, fairness was invented so children and idiots can participate in arguments.

I think we have a fairness instinct mostly so we can tell clever stories about why our desire for more stuff is more noble than greed.

comment by Squark · 2014-04-22T07:06:53.807Z · LW(p) · GW(p)

It might be that "fairness" is part of our ingrained terminal values. Of course it doesn't mean you shouldn't violate "fairness" when the violation is justified by positive utility elsewhere. However, beware of over-trusting your reasoning.

comment by Username · 2014-04-22T04:26:05.430Z · LW(p) · GW(p)

Tracing the memetic roots back, you could say that 'fairness' derives from the assumption that all humans have equal inherent worth, which I suppose you could link back to religious ideals. Natural rights follow from this same chain, but it's not obvious to me what concepts came first and caused the others (never mind what time they were formalized).

If you want to strike it from your thinking, keep in mind that fairness is a core assumption of our social landscape, for better or worse. It can be worth keeping solely because people might hate you if you don't.

comment by Eugine_Nier · 2014-04-22T07:42:53.563Z · LW(p) · GW(p)

The word "fairness" has been subject to a lot of semantic drift during the past century. Here is a blog post by Bart Wilson, describing the older definition, which frankly I think makes a lot more sense.

comment by Metus · 2014-04-21T11:16:04.535Z · LW(p) · GW(p)

Humans are diverse.

I mean this not only in the sense of them coming in all kinds of shapes, colours and sizes, having different world views and upbringings attached to them, but also in the sense of them having different psychological, neurological and cultural makeup. It does not sound like something that needs to be explicitly said, but apparently it needs to be said.

Of course, the first voices have realised that the usual population for studies is WEIRD, but the problem goes deeper and further. Even if the conscientious scientist uses larger populations, more representative of the problem at hand, the conclusions drawn tend to ignore human diversity.

One of the culprits is the concept of "average", or at least a misuse of it. The average person has an ovary and a testicle. Completely meaningless to say, yet we are comfortable hearing statements like "going to college raises your expected income by 70%" (number made up), and off to college we go. Statements like these suppress a great deal of relevant information, namely the underlying, inherent diversity in the population. Going to college may increase lifetime earnings, but the size of this effect might be highly dependent on some other factor like inherent cognitive ability and choice of major.
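To make that concrete, here is a toy simulation (all numbers invented, including the premium sizes) where the headline "average premium" describes neither subgroup well:

```python
import random

random.seed(0)

# Invented effect sizes: a large college premium for subgroup A,
# almost none for subgroup B.
PREMIUM = {"A": 0.9, "B": 0.05}
BASE = 1_000_000

def lifetime_earnings(subgroup, college):
    boost = BASE * PREMIUM[subgroup] if college else 0
    return BASE + boost + random.gauss(0, 50_000)  # plus some noise

def avg_premium(subgroups):
    with_college = sum(lifetime_earnings(s, True) for s in subgroups) / len(subgroups)
    without = sum(lifetime_earnings(s, False) for s in subgroups) / len(subgroups)
    return with_college / without - 1

population = ["A"] * 500 + ["B"] * 500
print(f"whole population: {avg_premium(population):.0%}")   # the headline figure
print(f"subgroup A only:  {avg_premium(['A'] * 500):.0%}")  # large effect
print(f"subgroup B only:  {avg_premium(['B'] * 500):.0%}")  # almost nothing
```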

Now that is obvious, you might say, but virtually all research shows that this is not the case. It was surprising to see that the camel has two humps, that is, one part of the population seems to be incapable of learning programming, while the other is. And this can be determined by the answer to a single question. Research on exercise and diet is massively convoluted with questions about endurance/strength and carbs/fats. Might this be because underlying biological factors are being ignored?

People are touting the coming age of personalised medicine as they see massively diminishing returns on generic medicine. Ever more diseases are hypothesised to have very specific causes in each person, necessitating ever more specialised treatment. The effects of psychedelic substances are found to be dependent on the exact psychological makeup, e.g. cannabis causing psychosis only in individuals already at risk for such episodes.

There is no exact point to this rant. Just the observation that ever more statements are similar to saying "having unprotected sex with your partner has a high probability of leading to pregnancy" to a homosexual man.

Replies from: fubarobfusco, Kaj_Sotala, IlyaShpitser, Punoxysm
comment by fubarobfusco · 2014-04-21T15:24:03.051Z · LW(p) · GW(p)

It was surprising to see that the camel has two humps, that is, one part of the population seems to be incapable of learning programming, while the other is.

The study you're probably thinking of failed to replicate with a larger sample size. While success at learning to code can be predicted somewhat, the discrepancies are not that strong.

http://www.eis.mdx.ac.uk/research/PhDArea/saeed/

The researcher didn't distinguish the conjectured cause (bimodal differences in students' ability to form models of computation) from other possible causes. (Just to name one: some students are more confident; confident students respond more consistently rather than hedging their answers; and teachers of computing tend to reward confidence).

And the researcher's advisor later described his enthusiasm for the study as "prescription-drug induced over-hyping" of the results ...

Clearly further research is needed. It should probably not assume that programmers are magic special people, no matter how appealing that notion is to many programmers.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2014-04-22T15:08:53.763Z · LW(p) · GW(p)

The failure to replicate was of their test, not of the initial observation. Specifically, it was considered interesting why the distribution of grades in CS (apparently typically two-humped) is different from e.g. mathematics (apparently typically one-humped). As far as I know this still remains to be explained.

comment by Kaj_Sotala · 2014-04-21T14:35:07.596Z · LW(p) · GW(p)

See also the comments of Yvain's What Universal Human Experiences Are You Missing Without Realizing It? for a broad selection of examples of how human minds vary.

Replies from: raisin
comment by raisin · 2014-04-21T15:53:30.254Z · LW(p) · GW(p)

Oh, now I realized the point of that article was the comments, not the article itself. Thanks for clarifying this!

comment by IlyaShpitser · 2014-04-21T14:28:45.383Z · LW(p) · GW(p)

There are three separate issues:

(a) The concept of averaging. There is nothing wrong with averages. People here like maximizing expected utility, which is an average. "Effects" are typically expressed as averages, but we can also look at distribution shapes, for instance. However, it's important not to average garbage.

(b) The fact that population effects and subpopulation effects can differ. This is true, and not surprising. If we are careful about what effects we are talking about, Simpson's paradox stops being a paradox.

(c) The fact that we should worry about confounders. Full agreement here! Confounders are a problem.


I think one big problem is just the lack of basic awareness of causal issues on the part of the general population (bad), scientific journalists (worse!), and sometimes even folks who do data analysis (extremely super double-plus awful!). Thus much garbage advice gets generated, and much of this garbage advice gets followed, or becomes conventional wisdom somehow.
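A minimal numerical illustration of point (b), with figures adapted from the well-known kidney-stone textbook example (any similar set of numbers would do):

```python
# A treatment that looks better within every subgroup but worse in the
# pooled data -- Simpson's paradox.

groups = {
    # group: (treated_successes, treated_total, control_successes, control_total)
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

pooled = [0, 0, 0, 0]
for name, (ts, tt, cs, ct) in groups.items():
    print(f"{name:6s}  treated {ts / tt:.2f}  control {cs / ct:.2f}")
    pooled = [p + x for p, x in zip(pooled, (ts, tt, cs, ct))]

ts, tt, cs, ct = pooled
print(f"pooled  treated {ts / tt:.2f}  control {cs / ct:.2f}")
```

The treated group wins in each subgroup yet loses in the pooled comparison, because the two arms contain different proportions of severe cases -- exactly the sort of confounding that point (c) warns about.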

Replies from: Lumifer
comment by Lumifer · 2014-04-21T17:37:18.684Z · LW(p) · GW(p)

There is nothing wrong with averages.

That depends. Mostly they are used as single-point summaries of distributions and in this role they can be fine but can also be misleading or downright ridiculous. The problem is that unless you have some idea of the distribution shape, you don't know whether the mean you're looking at is fine or ridiculous. And, of course, the mean is expressly NOT a robust measure.
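As a quick sketch of that last point, with invented numbers:

```python
import statistics

# Nine people earning 30k and one earning 10M: the mean suggests a
# "typical" income of about a million, the median stays at 30k.
incomes = [30_000] * 9 + [10_000_000]

print(statistics.mean(incomes))    # ~1,027,000
print(statistics.median(incomes))  # 30,000
```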

comment by Punoxysm · 2014-04-22T21:22:16.852Z · LW(p) · GW(p)

The Eurythmics said it best:

I travel the world and the seven seas

Everybody's looking for something

Some of them want to use you

Some of them want to get used by you

Some of them want to abuse you

Some of them want to be abused

comment by apeterson · 2014-04-21T12:19:16.282Z · LW(p) · GW(p)

I've been struggling with how to improve in running all last year, and now again this spring. I finally realized (after reading a lot of articles on lesswrong.com, and specifically the martial arts of rationality posts) that I've been rationalizing that Couch to 5k and other recommended methods aren't for me. So I continue to train in the wrong way, with rationalizations like: "It doesn't matter how I train as long as I get out there."

I've continued to run intensely and in short bursts, with little success, because I felt embarrassed to have to walk any, but I keep finding more and more people who report success with programs where you start slowly and gradually add in more running.

Last year, I experimented with everything except that approach, and ended up hurting myself by running too far and too intensely several days in a row.

It's time to stop rationalizing, and instead try the approach that's overwhelmingly recommended. I just thought it would be interesting to share that recognition.

Replies from: Nisan, niceguyanon, TylerJay, None, MathiasZaman
comment by Nisan · 2014-04-21T14:33:47.176Z · LW(p) · GW(p)

You might also want to work on eliminating embarrassment.

Replies from: raisin
comment by raisin · 2014-04-21T15:54:08.932Z · LW(p) · GW(p)

Any guides on how to do that?

Replies from: khafra, None, Nisan
comment by khafra · 2014-04-21T16:15:36.467Z · LW(p) · GW(p)

Rejection Therapy is focused in that direction.

Replies from: Error, Metus
comment by Error · 2014-04-21T22:39:23.392Z · LW(p) · GW(p)

That game is terrifying just to think about.

comment by Metus · 2014-04-22T00:56:47.608Z · LW(p) · GW(p)

Awesome, do you have more like that?

comment by [deleted] · 2014-04-22T01:29:28.744Z · LW(p) · GW(p)

Maximize embarrassment until you're no longer capable of feeling shame from the foibles and sensibilities of mere humans.

comment by Nisan · 2014-04-22T00:56:45.028Z · LW(p) · GW(p)

Psychological theories like IFS would recommend charitably interpreting the inclination to embarrassment as a friendly impulse to protect oneself by protecting one's reputation. For example, some people are embarrassed to eat out alone; a charitable interpretation is that part of their mind wants to avoid the scenario of an acquaintance of theirs seeing the lonely diner and concluding that they have no friends, and then concluding that they are unlikable and ostracizing them. Or a minor version of the same scenario.

Then one can assess just how many assets are at stake: realistically, nothing bad will happen if one eats out alone. Or one might decide that distant restaurants are safe. The anticipation of embarrassment might respond with further concerns, and by iterating one might arrive at a more coherent mental state.

comment by niceguyanon · 2014-04-21T16:02:51.502Z · LW(p) · GW(p)

It's time to stop rationalizing, and instead try the approach that's overwhelmingly recommended.

Have you considered not running as your primary exercise program? If you aren't specifically going for the performance of running, I would shelve it and instead cut calories (assuming you have extra weight to lose) and lift heavy things at the gym. Distance running is great for distance running.

I have been in multiple running groups and they are great for achieving goals like 26.2 miles, but after that, I wanted to optimize for looks and not for long distances (any more).

Replies from: apeterson
comment by apeterson · 2014-04-21T16:15:57.174Z · LW(p) · GW(p)

Unfortunately, I live in a rural area where gyms are hard to come by. I have enjoyed running for its own sake in the past; that's part of why I want to get back into running shape, but I will try to add some bodyweight exercises alongside my running.

Replies from: Lumifer, niceguyanon
comment by Lumifer · 2014-04-21T18:16:30.972Z · LW(p) · GW(p)

Unfortunately, I live in a rural area where gyms are hard to come by.

You don't need a gym to exercise. Google up "paleo fitness"; Crossfit is full of advice about how to build a basic gym in your garage, etc. etc.

comment by niceguyanon · 2014-04-21T16:49:09.363Z · LW(p) · GW(p)

I have enjoyed running for its own sake in the past

That's great, it would be such a problem to not like running and not live near a gym. Good luck.

comment by TylerJay · 2014-04-21T17:53:51.425Z · LW(p) · GW(p)

The best general advice I can give you is:

  1. Be honest with yourself when determining your current abilities. There's no shame in building slowly. It just means you get to improve even more.

  2. Not every day is a hard day. There are huge benefits to varying your workouts. If you're running about the same distance each day you run, you're doing it wrong. Some days should be shorter, more intense intervals broken up by very slow jogs or walks; others should be "active recovery" days of short, slow runs; and on other days you might go for distance and a sustained pace. Just to give an idea, even elite athletes will not usually do more than 2-3 hard (interval) days each week. You will want to start with 0 or 1.

  3. Watch your volume: Slowly increase your total miles / week over time. Make sure you start low enough not to get repetitive stress injuries.

I was once a fairly successful runner and have a lot of experience with designing training programs for both distance running and weightlifting. I'd be happy to help you design your running program or to look over your program once you do some research and put something together. Let me know!

Replies from: Lumifer
comment by Lumifer · 2014-04-21T18:18:34.675Z · LW(p) · GW(p)

A side question: from a joint-stress point of view, is it better to have a heavily cushioned running shoe, or is it better to go for minimal shoes and avoid heel striking and running on hard surfaces?

Replies from: TylerJay
comment by TylerJay · 2014-04-21T22:20:36.804Z · LW(p) · GW(p)

That's a tough question, and one I've actually struggled to answer myself.

If you ask anyone in the mainstream competitive running community, they'll tell you to get a good, cushioned running shoe, but also to work on your form to develop a good midfoot strike. Runners often do barefoot drills and other drills to develop a proper midfoot strike, but still run in cushioned running shoes. They'll also go running barefoot on the beach if they can to improve foot strength and form.

Repetitive stress injuries (shin splints, stress fractures, joint and tendon problems) are the single most common injury in runners and have taken me out of the game many times, even when actively trying to prevent them and with proper coaching. Proper shoes and good running form are both supposed to reduce these injuries.

However, there are a lot of successful barefoot runners and I do think there is something to learn from the ancestral health and fitness communities. There are a lot of runners who go completely barefoot and a lot who use minimalist footwear like Vibrams and don't report any issues. They claim that your body mechanics are better barefoot, and I have to agree that we were built to run barefoot. However, a lifetime of wearing shoes could definitely make a difference in whether or not running barefoot is still a good idea.

I suspect that you just have to be a lot more careful with barefoot running, and that it's probably not a good idea for your joints or back long-term to run barefoot or minimal with high volume for years. But honestly, I don't know if it's any worse than doing it with cushioned running shoes. Runners in proper shoes also have joint problems when they get older.

comment by [deleted] · 2014-04-23T04:58:55.736Z · LW(p) · GW(p)

I've continued to run intensely and in short bursts

Do you mean walk-run-walk-run in a single session? Or that you do short intense sessions with no walking?

Replies from: apeterson
comment by apeterson · 2014-04-23T12:35:30.748Z · LW(p) · GW(p)

I would just set up short runs around my apartment that were all "run" no walk and gradually increase my distance. But one of the problems was that I just wasn't out there very long. It was a convenient excuse when I was busy to just run a 15 minute loop instead of run/walking for 30 minutes+.

comment by MathiasZaman · 2014-04-21T14:44:12.764Z · LW(p) · GW(p)

Is there any specific reason why you've been avoiding those approaches (e.g. where you slowly increase)? You mention that you told yourself "It isn't for me," but haven't told us why.

because I felt embarrassed to have to walk any

Something I've had trouble with now that I'm starting to run is finding a running/jogging speed that takes as little energy as possible while still not being a walk. The last time I ran I finally found it and severely decreased the time I spend walking. It might be helpful to find that speed. I can guarantee you that it will feel very slow.

Replies from: apeterson
comment by apeterson · 2014-04-21T15:27:08.465Z · LW(p) · GW(p)

It's mostly just the contrast between how I learned running in high school cross country and what's actually recommended now. There were no real rest days: we ran five days a week and were supposed to run at least once on the weekends. We ran hill reps two days a week and long runs on the other days. We were all on the same training program regardless of where we started from.

What I've read recently is that about 4 days a week is a better way to do it, at least during your early progress, with a mixture of long slow runs and some interval workouts once you've reached a good level of fitness.

comment by NancyLebovitz · 2014-04-26T03:15:38.902Z · LW(p) · GW(p)

Research on mindfulness meditation

Mindfulness meditation is promoted as though it's good for everyone and everything, and there's evidence that it isn't-- going to sleep is the opposite of being mindful, and a mindfulness practice can make sleep more difficult. Also, mindfulness meditation can make psychological problems more apparent to the conscious mind, and more painful.

The difficulties which meditation can cause are known to Buddhists, but are not yet well known to researchers or the general public. The commercialization of meditation is part of the problem.

Replies from: ChristianKl, raisin, Tenoke
comment by ChristianKl · 2014-04-26T15:07:20.833Z · LW(p) · GW(p)

there's evidence that it isn't-- going to sleep is the opposite of being mindful, and a mindfulness practice can make sleep more difficult.

It's the opposite in some regards but not all of them. Both sleep and mindfulness meditation usually lead to very little beta wave activity in the brain.

I don't have a Zeo myself but it wouldn't surprise me if I could reach a state in meditation where I'm mindful but the Zeo labels me as sleeping.

As far as researching whether meditation improves how well you feel, I think that's hard. Five years ago, if you asked me how I was feeling, a real answer might have been "good" or "bad". Maybe even seven different points on a Likert scale. Today a full answer might take 5 minutes, because I have awareness of a lot of stuff that goes on inside myself. If you simply compare today's values with those from 5 years ago, I don't think that would tell you very much. There was a while when I tried to keep numbers about my daily happiness level, but after a while I simply gave up because it didn't seem to provide useful insight; the reference points aren't stable.

comment by raisin · 2014-04-26T13:04:58.272Z · LW(p) · GW(p)

After I started meditating mindfully, my anxiety got worse, a lot worse. I talked about this on meditation forums and they said it means that "I'm working on my problems" and that I should just keep doing it more and more and I would somehow overcome it. Well, I tried to, but my anxiety only got worse. Currently I'm taking a short break from meditating.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-26T13:57:44.355Z · LW(p) · GW(p)

After I started meditating mindfully

How do you know that you are meditating mindfully? If you ask that question on a meditation forum they have no way to know whether you are doing things right.

If you want help in this venue, it would help if you described exactly what you think you are doing when you are "meditating mindfully". It would also help to know exactly what you observed that makes you conclude that your anxiety got worse.

Replies from: raisin
comment by raisin · 2014-04-26T14:20:28.651Z · LW(p) · GW(p)

After I made that post I thought I should have put "tried" before "meditating mindfully", but then I forgot about it. You're right, I'm probably not doing it correctly.

I focus on my breath, but it's of course really hard for me and I don't know if I'm doing it properly. More specifically, I focus on the feeling when air goes in and out of my nose. The problem is that I can either focus on my breath and breathe forcefully, or I daydream and breathe naturally. This process feels like a cat chasing its tail. In "Mindfulness in Plain English" they said that I shouldn't control my breathing, but I don't know how to do that. It's really hard for me to focus on my breath without trying to control it.

What exactly did I observe? Usually I feel more tense and focus on myself more after I've meditated. I'm not sure if I can give more specific examples because I haven't kept a diary about this.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-26T15:42:15.304Z · LW(p) · GW(p)

I focus on my breath, but it's of course really hard for me and I don't know if I'm doing it properly. More specifically I focus on the feeling when air goes in and out of my nose.

As far as I understand, some traditional Buddhists do advocate feeling the air going in and out of your nose. I think that practice might make sense for people who aren't present in their head. For Western intellectuals who already spend a lot of time in their head, I think it makes more sense to feel the breath in the belly.

Here on LW we also don't meditate with the main purpose of seeking spiritual experiences. Opening the third eye isn't the point of the exercise for us, but it might be for some Buddhists who like focusing on the breath. If I do that, I feel that part of my attention is on the chakra generally called the third eye.

To speak in slightly more New Age language, focusing on your belly instead of your nose will make you more grounded.

From a more Western perspective, good German physiotherapy says that it's beneficial to breathe with the belly instead of breathing higher in the body.

My first meditation book was from Aikido master Koichi Tohei. Tohei advocates a type of meditation where one is focused on the tanden as the locus of attention while meditating. The tanden is a chakra about two finger-breadths below the belly button. Tohei also calls it the center of the body and the one-point.

After googling around a bit, the solar plexus might also be a good point, but you don't need to focus on a single point. The belly as an area is good enough.

If you are completely unable to be in a state where you neither control your breath nor daydream, start by focusing on deep, long breaths while staying focused on the belly, and go for maximum length of breaths.

It's unfortunate that I have to use words like chakra while speaking on LW, but those words have some use. You don't need to believe that chakras really exist. Just take them as crude approximations used by the kind of people who have experience with meditation. Unfortunately I also don't have good scientific evidence to back up what I said.

Usually I feel more tense and focus on myself more after I've meditated.

Meditation increases self-awareness. That's the point. The interesting thing would be whether you are also more tense by objective measures such as increased pulse or blood pressure. If you live together with other people, you might also have them rate the level of your tension.

I'm not sure if I can give more specific examples because I haven't kept a diary about this.

The Feeling Good Handbook gets frequently referenced on LW. In it Burns advocates that people who want to self-treat anxiety spend 5 minutes every week filling out a questionnaire that measures their anxiety levels.

If I were struggling with anxiety that I wanted to go away, I would make myself a Google form with Burns' anxiety checklist and answer it every Sunday to see whether I'm improving as time goes on.

Having a free text diary is also valuable.

Replies from: raisin, raisin
comment by raisin · 2014-04-27T13:01:39.140Z · LW(p) · GW(p)

I tried what you suggested. I sat in one position for 50 minutes and tried to focus on the feeling of breathing in my belly (see how I tabooed my earlier use of "meditating mindfully"?) Here's what I observed:

At first it was a bit hard to find the breathing; it's more subtle than the feeling in my nostrils. But I was able to occasionally focus, and my focus gravitated towards that region close to the belly button. It feels better to focus on my belly than on my nostrils. Focusing on the nostrils feels heavy and shallow, while focusing on the belly feels a bit more light and deep.

What surprised me most was that I felt like I was actually able to focus on the feeling in my stomach without trying to control my breathing as much. At least I was able to more easily convince myself that this was the case. It feels like the nostrils are so close to where the act of breathing happens, while my belly is more distanced from this thing that does the breathing. It feels more like focusing on an external object.

It was mostly fantasizing and daydreaming, and I was able to focus only for short periods of time, maybe a few seconds and just occasionally. I got obsessive-compulsive thoughts like "focus on your nostrils", but I tried to be mindful about those and mostly succeeded. I was a bit tense, and at least at one point I noticed my heartbeat was quite fast, which made me more anxious. Part of this tenseness was due to the fact that I chose a poor posture when I started. I decided not to change this posture along the way.

I feel more relaxed than when I started and I don't usually feel like that when I've meditated. So overall, a positive experience, placebo or not.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-27T16:54:39.801Z · LW(p) · GW(p)

Great. Thank you for sharing your experience. It sounds like you are moving in the right direction.

The fact that your heartbeat gets fast and an emotion comes up that makes you anxious is not a bad sign. If you stay present and your body processes the emotion, it's dealt with. After processing strong emotions my body usually feels more relaxed than before. In meditation, tension can rise to uncomfortable levels. Then the body recognizes the tension as unnecessary and the tension falls away.

I think 50 minutes is probably too much for you at your stage. Staying focused for 50 minutes is very hard and you are likely to lose your focus.

In your situation I would rather go for 10 or 15 minutes for meditating alone. Set an alarm clock. Once you reach the point where you feel like you can focus for longer periods of time you can increase the time you meditate.

If you want to spend more time on this, writing down what you experienced, like you just did, is very useful. It allows you to make sense of the experience. That's what diary writing is about.

(I'm also embarrassed if someone notices I'm keeping a diary which is of course really stupid and something I should work on).

I personally keep information like that in my own Evernote account and don't have a physical diary that could lie around for someone to notice. You don't need to talk about the fact that you have a diary with the kind of people who would look down on you for having one.

The point of writing things down in a diary is to refine your thinking. You force yourself to bring clarity into your thoughts. For me, writing a post on LW like the one above about why I recommend focusing on the belly instead of the nose refines my own thinking about meditation. Using you and LW as an audience instead of simply writing down my thoughts in a private journal has advantages and disadvantages. When writing for an LW audience I have to be more careful with terms like chakra than when I'm just writing for myself. Writing emails to friends can also be useful to refine your thoughts. You probably have a bunch of different friends with different perspectives on life and different levels of trust when it comes to sharing personal experiences.

All my writing still goes into my Evernote account. Meditation can lead to perceiving a bunch of new things that you never experienced before. If you don't want to become a mystic, putting cognitive labels on experience is important to keep your orientation and be able to navigate the world.

I'm not sure if I got anything else out of your post, but I will try to focus on my belly the next time I meditate.

That was the main point of the first half of my comment. At this point in time, understanding the "why" isn't that important.

It [my belly] feels more like focusing on an external object.

That's an interesting way of putting it. With time your belly won't feel like an external object anymore but will feel internal. At that point a lot of your anxiety issues will likely solve themselves.

comment by raisin · 2014-04-26T19:58:25.418Z · LW(p) · GW(p)

I'm not sure if I got anything else out of your post, but I will try to focus on my belly the next time I meditate. The chakra and third eye stuff didn't bother me, just maybe confused me a little, but I have a vague feeling of what they might describe. I've actually downloaded the Feeling Good Handbook, but reading the whole book is currently a pretty daunting task. That questionnaire seems easy, so it might just be something I could do. A diary is also something I've tried to keep, but akrasia has prevented me from doing it frequently (I'm also embarrassed if someone notices I'm keeping a diary which is of course really stupid and something I should work on).

Thanks for being kind, I expected a more hostile reply.

Replies from: moridinamael
comment by moridinamael · 2014-04-27T01:33:31.064Z · LW(p) · GW(p)

One piece of advice, sort of a shot in the dark but aimed at addressing a common failure mode. If you were trying to force yourself to meditate while sitting in an uncomfortable position or for excessive lengths of time, don't do that. All you're doing is training yourself to be pissed off and tense about meditating. Try just sitting comfortably in a chair and focusing on your breath for ten minutes, or even just five minutes at first if it's really that arduous.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-27T13:11:18.615Z · LW(p) · GW(p)

I agree. Just be sure that you sit in a stable position.

I personally can sit comfortably in lotus. It's a learned skill but it's not something you need to learn to be able to meditate and if you focus on it at the beginning you focus on the wrong thing.

comment by Tenoke · 2014-04-26T08:00:12.408Z · LW(p) · GW(p)

Thanks, this is one of the very few meditation papers that seem to be worth reading, since as they observe:

The important thing to understand about the report is that they were looking for active control groups, and they found that only 47 out of over 18,000 studies had them, which is pretty telling

comment by fire_alarm · 2014-04-23T15:44:49.322Z · LW(p) · GW(p)

How do I decide whether to get married?

  • My girlfriend of four years and I are both graduating college.
  • I haven't found employment yet, and she's returning home for work.
  • As near as I can tell, we're very compatible.

Pros

  • We are very fond of each other, get a lot of value out of each other's time.
  • We've been able to talk about the subject sanely.
  • Status
  • We agree on religion and politics.
  • Married guys make more on average, but the arrow of causality could point in either direction or come from something else.
  • Financial benefits

Cons

  • Negative Status associated with marrying young?
  • No jobs yet, no clear home or area to live in.
  • She sometimes gets mad at me for things I'm "just supposed to know" to do, not do, say, or not say. I'm not sure if she's right and I'm a jerk.

She has said that she doesn't want to marry me if she's just my female best friend that I sleep with. But I don't know how to evaluate what she's asking. There are a number of possibilities. Maybe I don't feel the requisite feelings and thus she wouldn't want to be married. Maybe I do have the feelings and I have no way to evaluate whether I do or not. Maybe I'm not ever going to feel some extra undetected thing X, ever, and so I should just go through the motions saying that I do, and our marriage prospects are entirely unchanged. Maybe this is just some signalling ritual we have to go through.

We are both concerned that I've not really had a relationship with anyone other than her, so there are no points of comparison for me to make.

Replies from: ChristianKl, Lumifer, ephion, shminux, Squark, army1987, None
comment by ChristianKl · 2014-04-23T20:23:57.651Z · LW(p) · GW(p)

In your list you didn't mention the topic of having children. If you marry someone with the intention of spending the rest of your life together with them, I think you should be on the same page with regard to having children before you marry.

comment by Lumifer · 2014-04-23T15:54:37.643Z · LW(p) · GW(p)

What exactly do you think/hope will change between the current situation (which I assume involves you two living together) and the situation if you were to marry?

comment by ephion · 2014-04-23T19:06:08.900Z · LW(p) · GW(p)

Don't get married unless there is a compelling reason to do so. There's a base rate of 40-50% for divorce, and at least some proportion of existing marriages are unhealthy and unhappy. Divorce is one of the worst things that can happen to you, and many of the benefits of marriage to happiness are because happier people are more likely to get married in the first place.

comment by shminux · 2014-04-23T21:30:46.706Z · LW(p) · GW(p)

She has said that she doesn't want to marry me if she's just my female best friend that I sleep with.

What are her feelings about you? Are you "just" her "male best friend that she sleeps with"? Your post comes across as rather asymmetric.

We are both concerned that I've not really had a relationship with anyone other than her, so there are no points of comparison for me to make.

Aren't you "both concerned" that she had too many relationships and so may decide that you are not for her precisely because she has these "points of comparison"? I suspect that she is the dominant partner in this relationship, possibly because she is more mentally mature, and this is often a warning flag.

She sometimes gets mad at me for things I'm "just supposed to know" to do, not do, say, or not say. I'm not sure if she's right and I'm a jerk.

Do you get mad at her for things she is just supposed to know to do, say or not say?

Anyway. DO NOT GET MARRIED YET until you figure out how to be an equal in this relationship (and if you think that you are, then you are fooling yourself).

comment by Squark · 2014-04-23T19:53:17.394Z · LW(p) · GW(p)

I don't know what the significance of marriage is for you, except symbolic. IMO the truly critical point is having kids. You probably want to have a stable income before that.

Regarding things you're "just supposed to know": the same thing happens to me with my wife. It hasn't stopped us from being together for 10 years and raising a 4-year-old son. Different people see things differently and have different assumptions about what is "obvious". The important thing is being mutually patient and forgiving (I know it's easier said than done, but it's doable).

Regarding the "extra feeling". Don't really know what to tell you. It is difficult to compare emotional experiences of different people. When our relationship started, it was mad, passionate infatuation. Now it's something calmer but it is obvious to me we love each other.

I had few relationships apart from my wife and virtually no serious relationships. Never bothered me.

comment by A1987dM (army1987) · 2014-04-26T20:18:34.195Z · LW(p) · GW(p)

Married guys make more on average, but the arrow of causality could point in either direction or come from something else.

And married women make less, so even assuming the arrow of causality is entirely from marital status to income it's not clear to me what would happen to your combined income.

Replies from: 9eB1
comment by 9eB1 · 2014-04-27T19:38:35.650Z · LW(p) · GW(p)

Even if your combined income decreases, your combined consumption probably increases, because many goods are non-rivalrous in a marriage situation. See here for a discussion.
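A toy example of the mechanism, with all numbers invented for illustration:

```python
# Even if the couple's combined income falls slightly, sharing a
# non-rivalrous good like housing can leave more money for everything else.
# All figures are made up.

single_incomes = [3000, 3000]       # two people living apart
married_incomes = [3200, 2600]      # combined income a bit lower

rent_alone = 1000                   # per person, separate apartments
rent_shared = 1400                  # one apartment covering both

left_when_single = sum(single_incomes) - 2 * rent_alone   # 4000
left_when_married = sum(married_incomes) - rent_shared    # 4400

print(left_when_single, left_when_married)
```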

Replies from: Eugine_Nier, army1987
comment by Eugine_Nier · 2014-05-03T03:14:42.397Z · LW(p) · GW(p)

your combined consumption probably increases,

I believe you meant decreases.

Replies from: gwern
comment by gwern · 2014-05-03T03:33:48.472Z · LW(p) · GW(p)

I think he means increases. If your consumption decreases, then your standard of living is falling and that doesn't sound good at all.

comment by A1987dM (army1987) · 2014-04-28T07:35:27.058Z · LW(p) · GW(p)

Good point, but doesn't that also apply to unmarried cohabitation?

EDIT: BTW, the bottom of your post says “[...] marriage makes family income go up via the large male marriage premium minus the small female marriage penalty”, which answers my question upthread.

Replies from: Lumifer, 9eB1
comment by Lumifer · 2014-04-28T19:59:42.910Z · LW(p) · GW(p)

but doesn't that also apply to unmarried cohabitation?

It also applies in interesting ways to communal living.

In fact, given the magnitude of the effect, the question becomes "Why would anyone ever live alone?". And the fact that a lot of people do this, by choice, leads into interesting directions...

comment by 9eB1 · 2014-04-28T19:22:21.507Z · LW(p) · GW(p)

Yes it does, so it's not really an argument for the act of marriage itself, but for marriage-like behaviors.

comment by [deleted] · 2014-04-26T07:51:11.864Z · LW(p) · GW(p)

As a starting point, run through this: http://www.justfourguys.com/female-divorce-risk-calculator/

Also, you should be the reluctant one, not her.

And, if neither of you is willing to live in an at least vaguely biblical marriage, then civilization would probably be better off with you just donating sperm to a sperm bank, keeping her from sleeping around, and encouraging her to marry someone who is stereotypically Christian and for whom she would be willing to convert.

She sometimes gets mad at me for things I'm "just supposed to know" to do, not do, say, or not say.

Well, you are. Pending more life experience, find the most un-politically-correct Game blogger you can stomach and start there.

comment by passive_fist · 2014-04-24T05:47:41.829Z · LW(p) · GW(p)

This isn't a question, just a recommendation: I recommend everyone on this site who wants to talk about AI familiarize themselves with AI and machine learning literature, or at least the very basics. And not just stuff that comes out of MIRI. It makes me sad to say that, despite this site's roots, there are a lot of misconceptions in this regard.

Replies from: Squark, ChristianKl, Nectanebo
comment by Squark · 2014-04-24T09:09:32.910Z · LW(p) · GW(p)

Not like I have anything against AI and machine learning literature, but can you give examples of misconceptions?

Replies from: Punoxysm, passive_fist
comment by Punoxysm · 2014-04-25T02:31:47.591Z · LW(p) · GW(p)

Not so much a specific misconception, but understanding the current state of AI research and understanding how mechanical most AI is (even if the mechanisms are impressive) should make you realize that being a "Friendly AI researcher" is a bit like being a unicorn tamer (and I mean that in a nice way - surely some enterprising genetic engineer will someday make unicorns).

Edit: Maybe I was being a little snarky - my meaning is simply this: Given how little we know about what actual Strong AI will look like (And we genuinely know very very little), any FAI effort will face tremendous obstacles in transforming theory into practice - both in the fact that the theory will have been developed without the guidance that real-world constraints and engineering goals provide, and the fact that there is always overhead and R&D involved in applying theoretical research. I think many people here underestimate this vast difference.

Replies from: ChristianKl, Squark, Risto_Saarelma, ChristianKl
comment by ChristianKl · 2014-04-25T20:11:49.452Z · LW(p) · GW(p)

both in the fact that the theory will have been developed without the guidance that real-world constraints and engineering goals provide, and the fact that there is always overhead and R&D involved in applying theoretical research. I think many people here underestimate this vast difference.

Some people might underestimate the difficulty. On the other hand, even if doing FAI research is immensely difficult, that doesn't mean we shouldn't do it. The stakes are too high to avoid doing the best we can.

comment by Squark · 2014-04-25T20:01:54.510Z · LW(p) · GW(p)

I think that if we only start friendliness research when we're obviously close to building an AGI, it will be too late.

Replies from: Punoxysm
comment by Punoxysm · 2014-04-25T20:03:48.216Z · LW(p) · GW(p)

I think that almost all research done before that will have to be thrown out. Maybe the little that isn't will be worth it given the risks, but it will be a small amount.

Replies from: Squark
comment by Squark · 2014-04-25T20:41:17.046Z · LW(p) · GW(p)

How did you reach that conclusion? To me it seems very unlikely. For example, it seems that there's a good chance the AGI will have something called a "utility function". So we can start thinking about what the correct utility function for an FAI is, even if we don't know how to build an optimizer around it. We can study problems like decision theory to better understand the domain of the utility function, etc.

Replies from: Punoxysm
comment by Punoxysm · 2014-04-25T20:44:55.681Z · LW(p) · GW(p)

It's not clear at all that AGI will have a utility function. But furthermore, bolting a complex, friendly utility function onto whatever AI architecture we come up with will probably be a very difficult feat of engineering, which can't even begin until we actually have that AI architecture.

Replies from: Squark
comment by Squark · 2014-04-25T21:17:36.584Z · LW(p) · GW(p)

It's not clear at all that AGI will have a utility function.

That's something I'm willing to take bets on. Regardless, it is precisely the type of question we better start studying right now. It is a question with high FAI-relevance which is likely to be important for AGI regardless of friendliness.

But furthermore, bolting a complex, friendly utility function onto whatever AI architecture we come up with will probably be a very difficult feat of engineering...

I doubt it. IMO an AGI will be able to optimize any utility function; that's what makes it an AGI. However, even if you're right, we still need to start working on finding that utility function.

Replies from: Punoxysm
comment by Punoxysm · 2014-04-26T01:33:18.893Z · LW(p) · GW(p)

I question both of these premises. It could be like you or I, in the sense that it simply executes a sequence of actions with no coherent or constant driving utility function (even long-term goals are often inconsistent with each other), and even if you could demonstrate to it a utility function that met some extremely high standards, it would not be persuaded to adopt it. Attempting to build in such a utility function could be possible, but not necessarily natural at all; in fact I bet it would be unnatural and difficult.

I understand your rebuttal to "friendliness research is too premature to be useful" is "It is important enough to risk being premature", but I hope you can agree that stronger arguments would put forward stronger evidence that the risk is not particularly large.

But let's leave that aside. I'll concede that it is possible that developing a strong friendliness theory before strong AI could be the only path to safe AI under some circumstances.

I still think that it is mistaken to try to ignore intermediate scenarios and focus only on that case. I wrote about this in a post before, How to Study AGIs safely which you commented on.

Replies from: Squark
comment by Squark · 2014-04-28T19:04:50.094Z · LW(p) · GW(p)

It could be like you or I, in the sense that it simply executes a sequence of actions with no coherent or constant driving utility function...

I doubt the first AGI will be like this, unless you count WBE as AGI. But if it is, that's very bad news, since it would be very difficult to make it friendly. Such an AGI is akin to an alien species which evolved under conditions vastly different from ours: it will probably have very different values.

comment by Risto_Saarelma · 2014-04-25T11:27:33.575Z · LW(p) · GW(p)

So for example when Stuart Russell is saying that we really should get more serious about doing Friendly AI research, it's probably because he's a bit naive and not that familiar with the actual state of real-world AI?

Replies from: asr, Punoxysm
comment by asr · 2014-04-25T16:48:51.628Z · LW(p) · GW(p)

I have updated my respect for MIRI significantly based on Stu Russell signing that article. (Russell is a prominent mainstream computer scientist working on related issues; as a result, I think his opinion has substantially more credibility here than the physicists'.)

Replies from: XiXiDu
comment by XiXiDu · 2014-04-25T19:44:17.758Z · LW(p) · GW(p)

I have updated my respect for MIRI significantly based on Stu Russell signing that article.

If you don't think that MIRI's arguments are convincing, then I don't see how one outlier could significantly shift your perception, if this person does not provide additional arguments.

I would give up most of my skepticism regarding AI risks if a significant subset of experts agreed with MIRI, even if they did not provide further arguments (although a consensus would be desirable). But one expert does clearly not suffice to make up for a lack of convincing arguments.

Also note that Peter Norvig, who coauthored 'Artificial Intelligence: A Modern Approach' with Russell, does not appear to be too worried.

comment by Punoxysm · 2014-04-25T19:01:47.481Z · LW(p) · GW(p)

I mean to say that if you understand the work of Russell or other AI researchers, you understand just how large the gap is between what we know and what we could possibly apply friendliness to. Friendliness research is purely aspirational and highly speculative. It's far more pie-in-the-sky than anti-aging research, even. Nothing wrong with Russell calling for pie-in-the-sky research, of course, but I think most people don't understand the gulf.

When somebody says something like "Google should be careful they don't develop Skynet" they're demonstrating the misunderstanding that we even have the faintest notion of how to develop Skynet (and happily that means AI safety isn't much of a problem).

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-04-25T22:35:45.754Z · LW(p) · GW(p)

I've read AIMA, but I'm not really up to speed on the last 20 years of cutting-edge AI research, which it addresses less. I don't have the same intuition about AGI concerns being significantly more hypothetical than anti-aging stuff. For me that would mean something like "any major AGI development before 2050 or so is so improbable that it's not worth considering", given how I'm not very optimistic about quick progress in anti-aging.

This would be my intuition if I could be sure the problem looks something like "engineer a system at least as complex as a complete adult brain". The problem is that an AGI solution could also be "engineer a learning system that will learn to behave at human-level intelligence or above, on a human lifespan's timescale or faster", and I have much shakier intuitions about what the minimal required invention is for that to happen. It's probably still a ways out, but I have nothing like the same certainty of it being a ways out as I have for the "directly engineer an adult human brain equivalent system" case.

So given how this whole thread is about knowing the literature better, what should I go read to build better intuition on how to estimate limits for the necessary initial complexity of learning systems?

comment by ChristianKl · 2014-04-25T10:31:59.996Z · LW(p) · GW(p)

What do you mean by the term "mechanical"?

Replies from: Nornagest
comment by Nornagest · 2014-04-25T19:42:27.487Z · LW(p) · GW(p)

I'm guessing Punoxysm's pointing to the fact that the algorithms used for contemporary machine learning are pretty simple; few of them involve anything more complicated than repeated matrix multiplication at their core, although a lot of code can go into generating, filtering, and permuting their inputs.

I'm not sure that necessarily implies a lack of sophistication or potential, though. There's a tendency to look at the human mind's outputs and conclude that its architecture must involve comparable specialization and variety, but I suspect that's a confusion of levels; the world's awash in locally simple math with complex consequences. Not that I think an artificial neural network, say, is a particularly close representation of natural neurology; it pretty clearly isn't.
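For what it's worth, here is roughly what "repeated matrix multiplication at their core" looks like: the forward pass of a tiny two-layer neural network (shapes and random weights chosen arbitrarily for illustration):

```python
import numpy as np

# A toy two-layer network: the whole forward pass is two matrix products
# with an elementwise nonlinearity in between.
W1 = np.random.randn(4, 8)   # input -> hidden weights
W2 = np.random.randn(8, 3)   # hidden -> output weights

def forward(x):
    hidden = np.maximum(0, x.dot(W1))  # matrix multiply + ReLU
    return hidden.dot(W2)              # another matrix multiply

x = np.random.randn(1, 4)              # a single 4-dimensional input
print(forward(x).shape)                # (1, 3)
```

The generating, filtering, and permuting of inputs that Nornagest mentions is where most real systems' complexity lives; the core math stays about this simple.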

Replies from: Punoxysm
comment by Punoxysm · 2014-04-25T20:07:33.921Z · LW(p) · GW(p)

I agree with you on both counts - in particular that most human cognition is simpler than it appears. But some of it isn't, and that's probably the really critical part when we talk about strong AI.

For instance, I think that a computer could write a "Turing Novel" that would be indistinguishable from some human-made fiction with just a little bit of human editing, and that would still leave us quite far from FOOMable AI (I don't mean this could happen today, but say in 10 years).

comment by passive_fist · 2014-05-05T08:45:29.229Z · LW(p) · GW(p)

OK. I've seen a lot of people here say that Eliezer's idea of a 'Bayesian intelligence' won't work or is stupid, or is very different from how the brain works. Those familiar with the machine intelligence literature will know that, in fact, hierarchical Bayesian methods (or approximations to them) are the state of the art in machine learning, and recent research suggests they very closely model the workings of the cerebral cortex. For instance, refer to the book "Data Mining: Concepts and Techniques, 3rd edition" (by Han and Kamber) and the 2013 review "Whatever next? Predictive brains, situated agents, and the future of cognitive science" by Andy Clark: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=8918803

The latter article has a huge number of references to relevant machine learning and cognitive science work. The field is far far larger and more active than many people here imagine.

Replies from: gwern
comment by ChristianKl · 2014-04-24T11:36:11.757Z · LW(p) · GW(p)

Do you have a recommendation for a resource that explains the basics in a decent manner?

comment by Nectanebo · 2014-04-24T13:38:04.527Z · LW(p) · GW(p)

What would you consider the "very basics"?

there are a lot of misconceptions in this regard.

What are some of the most blatant? Sorry to ask a question so similar to Squark's.

comment by [deleted] · 2014-04-21T21:21:27.479Z · LW(p) · GW(p)

A koan:

A monk came to Master Banzen and asked, "What can be said of universal moral law?"

Master Banzen replied, "Among the Tyvari of Arlos, all know that borlitude is highly frumful. For a Human of Earth, is quambling borl forbidden, permissible, laudable or obligatory?"

The monk replied, "Mu."

Master Banzen continued, "Among the Humans of Earth, all know that friendship is highly good. For a Tyvar of Arlos, is making friends forbidden, permissible, laudable or obligatory?"

The monk replied, "Mu," and asked no more.

Qi's Commentary: The monk's failure was one of imagination. His question was not foolish, but it was parochial.

Replies from: gjm, None, Nornagest, shminux
comment by gjm · 2014-04-22T19:03:11.648Z · LW(p) · GW(p)

Shouldn't Banzen's second question be something like "For a Tyvar of Arlos, is making friends frumful, flobulent, grattic, or slupshy?"?

comment by [deleted] · 2014-04-22T01:27:02.652Z · LW(p) · GW(p)

I don't really know anything about the Tyvar of Arlos, so I'm pretty confused on this front, but I'm fairly sure you're relating a Talmudic anecdote, not a Zen one ;-). "Forbidden, permissible, laudable, or obligatory" says to me that we're contemplating halachah.

Replies from: None
comment by [deleted] · 2014-04-22T10:32:33.862Z · LW(p) · GW(p)

I would hope you don't know anything about them—they were made up on the spot. ^_^

And yes, I suppose the style here might well have been influenced from more than one place.

comment by Nornagest · 2014-04-21T23:42:36.214Z · LW(p) · GW(p)

Sounds to me like the master's jumping to more conclusions than the student is, here. His response makes sense if he wanted to break a sufficiently specific deontology (at least at interspecies scope), but there are a lot of more general things you could say about morality that aren't yet ruled out by the student's question.

comment by shminux · 2014-04-21T23:33:28.574Z · LW(p) · GW(p)

How is this a failure of imagination? Why is the question parochial?

Replies from: None
comment by [deleted] · 2014-04-22T10:29:36.280Z · LW(p) · GW(p)

Parochial because he mistook a local property of mindspace for a global one; unimaginative because he never thought of frumfulness when considering what things a mind might value. "Good" is no more to a Tyvar than "frumful" to Clippy or "clipful" to a human.

Replies from: drethelin
comment by drethelin · 2014-04-23T03:26:53.744Z · LW(p) · GW(p)

This is silly. Good is a quite useful concept that easily stretches to cover entities with different preferences, but even if it does not, it's STILL meaningful, and your Clippy example shows us exactly why. The meaning of clipful, something like "causes there to be more paperclips" or whatever, is perfectly clear to, if not really valued by, humankind.

Replies from: fubarobfusco, None
comment by fubarobfusco · 2014-04-23T14:55:23.372Z · LW(p) · GW(p)

Is "good" what many sorts of intelligent beings strive to do? Then "good" is such things as self-improvement, rationality, survival of one's values, anti-counterfeiting of value, personal survival, and resource acquisition. For any intelligent being that does not expend energy to survive will be washed away by entropy. And so, "good" is universal. (The sage Omohundro does not call it "good", though; that is a novice's word.)

Is "good" the noise that one group of one species of social creatures say when they comfort and praise their tribemates? Then "good" is such things as singing with a regular melody and rhythm, or setting up certain sorts of economic deals among tribemates and others; or leading the tribe's warriors to dismember the others instead of being dismembered themselves; and it is parochial.

comment by [deleted] · 2014-04-23T05:49:43.380Z · LW(p) · GW(p)

Ah, I see I was unclear. By "is no more to a Tyvar" I meant "is no more significant to a Tyvar" rather than "is no more comprehensible to a Tyvar." Sorry; my fault.

comment by ChristianKl · 2014-04-23T23:56:58.031Z · LW(p) · GW(p)

How good is the case for taking adderall if you struggle with a lot of procrastination and have access to a doctor to give you a prescription?

Replies from: Vulture
comment by Vulture · 2014-04-24T01:40:16.663Z · LW(p) · GW(p)

It worked reasonably well for me.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-24T10:33:10.412Z · LW(p) · GW(p)

For what kind of timeframe? Do the effects stay the same over time? Are there meaningful side effects?

Replies from: Vulture
comment by Vulture · 2014-04-24T12:41:27.957Z · LW(p) · GW(p)

Disclaimer: This stuff varies from person to person. I had already tried a number of similar medications before going on generic adderall, all without success. I've been on it now for almost a year, and it's had a noticeable effect on my ability to concentrate on tasks and feel motivated to complete them. Often when I am struggling to focus on something to the point of not getting anything done, I'll suddenly realize that I didn't take my pill that morning. As far as I can tell these effects have been pretty consistent since the first week or so that I started taking it, although it's possible that there was a "ramping up" period that I've since forgotten about. In terms of side effects, I didn't need to take caffeine for the first few weeks while I acclimated to the drug, and was sort of jittery in an irritating way. That died down, obviously, although it remains stupendously unwise for me to take the pill at 10:30 or so or later, since it then keeps me up the following night.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-04-24T18:45:58.499Z · LW(p) · GW(p)

What other drugs did you try?

Replies from: Vulture
comment by Vulture · 2014-04-25T02:51:13.138Z · LW(p) · GW(p)

Strattera and Focalin, and possibly another one that I'm forgetting.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-04-25T04:39:40.785Z · LW(p) · GW(p)

When you say "without success" do you mean that these drugs did nothing useful, or just that they weren't good enough? I don't know strattera, but I think of methylphenidate (focalin) as very similar to amphetamine (adderall). Certainly methylphenidate is weaker than amphetamine, but I'd expect it to be a pretty good predictor of whether the amphetamine would work. So I am very surprised that I think you are saying that the one worked and the other didn't, which is why I'm asking for clarification.

Replies from: Vulture
comment by Vulture · 2014-04-25T13:15:17.324Z · LW(p) · GW(p)

Strattera was actually quite a while ago (sadly I don't remember the generic name) but I'm pretty sure it had no noticeable effect. I should probably clarify that the focalin actually did have a noticeable effect, but it was very weak and it had the same sleep-disturbing side effects as adderall, so it was not really worth it.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-04-25T19:33:35.585Z · LW(p) · GW(p)

Thanks!

comment by Metus · 2014-04-22T22:59:18.696Z · LW(p) · GW(p)

So the quantified self (QS) community has existed for a while. Just as bodybuilding groups should be excellent test beds for which exercises and chemicals yield the best results, the QS community should be able to yield a preferably small, low-cost set of measures you should track about yourself. Does such a set exist? It could include any blood measure, rhythm, time, psychological value, net worth ...

Replies from: ChristianKl, iconreforged
comment by ChristianKl · 2014-04-22T23:05:33.110Z · LW(p) · GW(p)

There's no standardized list.

Basically it turns out that it's really hard to get people to measure specific stuff, and it's often a lot more useful if people measure values that they care about.

Replies from: RomeoStevens
comment by RomeoStevens · 2014-04-23T00:29:46.436Z · LW(p) · GW(p)

Agreed. QS seems most helpful for providing people tools to attack problems they are having (sleep, weight, etc.) rather than for making a normal person superhuman.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-23T17:37:47.135Z · LW(p) · GW(p)

I wouldn't frame it that way. Talking of "problems" indicates that you're comparing yourself to the average person. There's no reason why you have to do that. You can also track a variable where you are already above average and work on improving on that variable.

On the other hand you have to care about improving on that variable.

comment by iconreforged · 2014-04-23T16:49:19.743Z · LW(p) · GW(p)

I think that Quantified Mind provides some high-value tests. So long as you're willing to sit down and take a test, you can get data on:

  • Reaction Time
  • Visuo-spatial memory
  • Executive Function
  • Working Memory
  • Verbal learning
  • Motor function

Also, looking at what Gwern tracks, it seems helpful to have long-run data on subjective mood and energy. I randomly sample myself on that with PACO. PACO can allow you to poll yourself on any kind of thing you can imagine, like whether you're sitting, standing, or walking, or whether you're in public or private.

Edit: Added detail on QM.

comment by free_rip · 2014-04-22T08:02:20.477Z · LW(p) · GW(p)

I've been reading about maximizers and satisficers, and I'm interested to see where LessWrong people fall on the scale. I predict it'll be significantly on the maximizer side of things.

A maximizer is someone who always tries to make the best choice possible, and as a result often takes a long time to make choices and feels regret for the choice they do make ('could I have made a better one?'). However, their choices tend to be judged as better, e.g. maximizers tend to get jobs with higher incomes and better working conditions, but to be less happy with them anyway. A satisficer is someone who tries to make a 'good enough' choice - they tend to make choices faster and be happier with them, despite the choices being judged (generally) as worse than those of maximizers.
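For the programmers here, the distinction is roughly the following - a toy sketch with made-up ratings, nothing more: a maximizer evaluates every option and takes the best, while a satisficer stops at the first option that clears a "good enough" bar.

```python
def maximize(options, score):
    """Evaluate every option and return the highest-scoring one."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Return the first option whose score clears the threshold (or the last one seen)."""
    choice = None
    for choice in options:
        if score(choice) >= good_enough:
            return choice
    return choice

# Illustrative restaurant ratings, invented for the example.
ratings = {"noodle bar": 3.9, "diner": 4.2, "bistro": 4.6}
print(maximize(ratings, ratings.get))        # scans everything -> "bistro"
print(satisfice(ratings, ratings.get, 4.0))  # stops early -> "diner"
```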

If you want, take this quiz

And put your score into the poll below: [pollid:682]

Replies from: gjm, ChristianKl, daenerys, philh
comment by gjm · 2014-04-22T18:45:11.802Z · LW(p) · GW(p)

I wonder what the person who submitted the number 1488 was thinking. (Maximizing their answer, perhaps.)

comment by ChristianKl · 2014-04-22T21:34:48.025Z · LW(p) · GW(p)

The quiz seems to be aimed at people who are different from me. I don't watch TV, so it's hard for me to give an answer about channel surfing. I don't listen to the radio. The same goes for renting videos.

comment by daenerys · 2014-04-22T21:41:55.314Z · LW(p) · GW(p)

That quiz looks like it could use an update to fit modern society. It was hard to answer questions about "channel surfing" or "renting videos" in the modern era of Hulu, Netflix, and Amazon Prime. Also, thinking back to the days of actual video rental stores, it was much easier to choose a movie there than it is to choose one on Netflix. (Possibly because the Netflix selection tends towards second-rate movies I've never heard of OR first-rate movies that I've already watched or am not interested in.)

Anyways, I am a natural maximizer, which causes lots of stress towards decisions, so I've trained myself towards being a satisficer. I often try to think of decisions in the framework of "it doesn't matter that much WHAT I decide to do here, so long as I just make a decision and move forward with it".

I think about research where they show that the hardest decisions are the least important (if it was obvious which option was significantly better, then it wouldn't be a hard decision.) I think about research where they show that people are happier with decisions when they can't back out of them, so don't second-guess them. I think about cost-benefit analysis and how maximizing that particular decision probably isn't worth the time or stress.

A specific example: I tend to have trouble deciding what to order at restaurants. Knowing that whatever they serve at a restaurant is going to be relatively good, it's not that important what I decide. So when the waitress asks if everyone is ready to order I say "yes", even though I'm not ready, knowing that I will have to choose SOMETHING when it gets to me, and in reality I would be happy with any of the options.

comment by philh · 2014-04-22T12:48:43.237Z · LW(p) · GW(p)

Giving neutral answers to every question is 'maximizer tendencies', which seems odd.

Replies from: Vulture
comment by Vulture · 2014-04-22T17:10:16.135Z · LW(p) · GW(p)

You mean alternately picking 3 and 4? I was momentarily puzzled because seven is an odd number but I assume that's what you mean. If so, hmm, that is odd.

Replies from: gjm
comment by gjm · 2014-04-22T18:46:25.628Z · LW(p) · GW(p)

Neutral would mean 4 for each one. (123 4 567.)

It's not necessarily odd for neutral answers to count as "maximizing tendencies" -- perhaps most people lean distinctly towards satisficing in the situations described by the questions.

Replies from: Vulture
comment by Vulture · 2014-04-23T02:38:13.697Z · LW(p) · GW(p)

Derp derp derp. Clearly I need to review the difference between odd and even numbers.

A good point about the maximisation tendencies, too, although it strikes me as a little implausible that this was deliberate on the part of the quiz's designer(s).

comment by iarwain1 · 2014-04-24T16:27:48.499Z · LW(p) · GW(p)

I'm an Orthodox Jew, and I'd be interested to connect with others on LW who are also Orthodox. More precisely, I'm interested in finding other LWers who (a) are Jewish, (b) are still religious in some form or fashion, and (c) are currently Orthodox or were at some point in the past.

Replies from: shminux, noonehomer
comment by shminux · 2014-04-25T06:25:50.432Z · LW(p) · GW(p)

You must have excellent compartmentalization skills.

comment by noonehomer · 2014-04-25T00:29:26.328Z · LW(p) · GW(p)

I'm an Orthodox Jew (Modern Orthodox). Since Mr. Yudkowsky's work is - obviously - apikorsus of the highest level, I read it l'havin ul'horos, mostly, but enjoy the thinking in it anyway.

Replies from: gjm
comment by gjm · 2014-04-25T09:00:30.289Z · LW(p) · GW(p)

In case anyone else is curious, it appears that:

"apikorsus" has a range of meanings including "heretic", "damned person", "unbeliever"; the term may or may not be derived from the name of Epicurus.

[EDITED to add: As pointed out by kind respondents below, I was sloppy and mixed up "apikores" (which has the meanings above) and "apikorsus" (which means something more like "the sort of thing an apikores says"). My apologies.]

"l'havin ul'horos" means "to understand and to teach", as opposed to "to agree" or "to practice" or whatever. In the Bible, when the Israelites invade Canaan they are told not to learn to do as the natives do, and there's some famous commentary that says "but you are allowed to learn in order to understand and to teach".

[EDITED to add: I am not myself Jewish, nor do I know more than a handful of Hebrew words; if I have got the above wrong then I will be glad to learn.]

Replies from: iarwain1, noonehomer
comment by iarwain1 · 2014-04-25T14:28:12.670Z · LW(p) · GW(p)

More accurately: Apikores = heretic in modern parlance; apikorsus = heretical views.

As an aside, Maimonides is the medieval Jewish authority generally associated with the view that the term apikores is not derived from the name Epicurus. Maimonides was a world-class Aristotelian philosopher and quotes Epicurus several times in his works. Since the words apikores and Epicurus have identical spellings in medieval Hebrew, the fact that Maimonides proposes a different etymological theory begs for an explanation. Maimonides' theory is that the term is from the Aramaic "apkeirusa" (this is hard to translate, especially in the way Maimonides seems to be using it; I think it implies something like "people doing whatever they feel like instead of listening to authority figures"). I've long felt that this derives from the fact that the Talmud's discussion of the term doesn't have anything to do with dogma or heretical beliefs but rather with belittling authority figures. Maimonides himself, however, converts the term in his other works into the current usage of referring to heretical beliefs. Based on this, I strongly suspect that Maimonides thought that the original term does stem from Epicurus (who held precisely those beliefs that Maimonides identifies as heretical), but that the rabbis of the Talmud borrowed the term and used it as a sort of Aramaic-Greek pun to refer to belittling authority figures.

Also in case anybody else is curious, Modern Orthodox is as opposed mainly to Ultra-Orthodox (also known as "hareidi" or "frum"). Hassidim are their own sub-group of Ultra-Orthodox.

As an interesting intellectual challenge, try steelmanning some of the hareidi sociopolitical positions, such as their extreme opposition to the Israeli draft law. And it does need steelmanning - I personally know several very well-thought-out, very smart, very well-meaning, very knowledgeable rabbis who strongly agree with the hareidi positions.

Replies from: Username, noonehomer
comment by Username · 2014-04-27T21:33:11.859Z · LW(p) · GW(p)

As an interesting intellectual challenge, try steelmanning some of the hareidi sociopolitical positions, such as their extreme opposition to the Israeli draft law. And it does need steelmanning - I personally know several very well-thought-out, very smart, very well-meaning, very knowledgeable rabbis who strongly agree with the hareidi positions.

I think that actually, if you accept a certain basic worldview, they have a rather strong case. I strongly disagree with that worldview, but that's a different matter.

Let's lay it out:

Axiom 1: Everything happens according to God's will.

Axiom 2: If we behave righteously, God's will will be favourable.

Example: Again and again in the past, this has happened. "בכל דור ודור עומדים עלינו לכלותינו והקדוש ברוך הוא מצילנו מידם" [Rough translation: "In each and every generation our foes have tried to destroy us, and each time the Holy One Blessed Be He saves us from them."]

Corollary 1: If we are righteous, we can expect this to carry on in the present and the future.

Axiom 3: The most righteous thing to be doing is to be studying the Holy Texts.

Lemma: We need to have as large a number of people as possible studying in yeshiva as their day-to-day occupation.

Proof of the lemma: Follows from Corollary 1 and Axiom 3.

Proposition: "Much as it pains us, we acknowledge that not everyone has it in them to spend all day studying Torah. We don't want to force people who don't want to study in yeshiva to do so (much as it aches the very bottoms of our souls), but at least you can let those who want to do so get on with it, and not waste their time on your secular 'army' nonsense, which has nothing to do with our defense, as our only true defense is God."

Proof of the proposition: The lemma says that we need lots of yeshiva bochurs, so let's provide them! If you don't have the proper כונה we cannot effectively force you to study Torah (even if the hareidim had the political power), but at least we can take the masses of willing hareidi young men and allow them to do their job for the defense of our people, in order to protect what fragment of spiritual defense we still have.

Corollary 2: The state of Israel shouldn't draft the yeshiva bochurs. Doing so removes our only true line of defense, and so is tantamount to the genocide of the Jewish people.

If you accept the three axioms, it leads invariably to the Proposition, and so to Corollary 2.

Q.E.D.

Replies from: noonehomer
comment by noonehomer · 2014-04-27T22:13:21.542Z · LW(p) · GW(p)

That would work... but the Chareidim don't actually believe in their defenses (they flee places getting bombed and leave the soldiers to defend people's lives), nor are these defenses backed up in any way by halacha (they've misinterpreted the one text they use as a source). Also, they don't allow anyone in their community to go into the army. Ever. And they don't let non-Chareidim join them in their learning for defense, either.

Replies from: iarwain1, Username
comment by iarwain1 · 2014-04-27T23:24:06.077Z · LW(p) · GW(p)

I suspect noonehomer's correct in part and that the chareidim don't actually believe everything that Username says.

Also, I don't think it's true they don't let anyone go to the army (or at least it didn't use to be), just that it's discouraged.

If anyone's interested in my own thoughts, I posted them in a comment here. Just look for the comment by iarwain. Sorry, you may need to understand some Hebrew terms to understand it. But then again, you'll need to understand Hebrew terms to read Username's comment as well.

comment by Username · 2014-04-29T12:18:35.007Z · LW(p) · GW(p)

the Chareidim don't actually believe in their defenses

Yes.

What I wrote was a steelman of their positions, and must be taken as such. They themselves do not have such sophisticated mental models of the world. The answer to why they hate the IDF and the state of Israel is simply one of tribal affiliation.

[Edit: Also see point 3 in iarwain1's linked comment. It explains the hareidi attitude to all this.]

comment by noonehomer · 2014-04-27T21:15:07.614Z · LW(p) · GW(p)

I don't know about "frum". Badly educated and mistakenly chumradik is more like it.

comment by noonehomer · 2014-04-27T21:13:40.370Z · LW(p) · GW(p)

The hardest part of reading things l'havin ul'horos is that I can't recommend them to anyone else because it's assur for non-learned people to read them (possibly even non-Jews, in this case). And yes, iarwain1 is correct that apikorsus is a thing and an apikores is a person. But thank you for translating.

Replies from: gjm
comment by gjm · 2014-04-27T21:48:12.287Z · LW(p) · GW(p)

it's assur for non-learned people to read them

Can you recommend such things to other people considered learned? (And: is there an important distinction between "assur" and "forbidden"? A little googling suggests that "assur" is less emphatic somehow; is that right?)

apikorsus ... apikores

Yup, inexcusably sloppy of me. Thanks.

Replies from: noonehomer
comment by noonehomer · 2014-04-27T22:10:13.595Z · LW(p) · GW(p)

Almost certainly I can. But right now I'm in high school, so I don't know that many people who qualify.

Um... assur means you can't do it. It's not less severe than "forbidden", I don't think. It literally means "bound". It's important to note that it doesn't mean something's morally wrong, but in this case, independent of the prohibition (non-literal translation of the noun form, issur) the act of reading foreign philosophy without knowledge of the corresponding arguments in one's own can cause stupid questions, not smart ones, and is considered to be wrong, not just forbidden (in my father's circles, anyhow).

Replies from: iarwain1
comment by iarwain1 · 2014-04-28T14:39:07.894Z · LW(p) · GW(p)

I commend you for your self-control in not telling other people about these issues. I'd also add that for many people who aren't the intellectual type, you'd be doing them a major disservice by exposing them to arguments that can easily cause them massive psychological stress. As I know from personal experience with people to whom that happened.

It might be worth thinking about switching to a different high school where there are more intellectual-type people around. Also, if you go to Yeshiva University for college you'll find plenty of smart people, both staff and students, who are quite educated in foreign philosophies.

comment by Jayson_Virissimo · 2014-04-23T16:39:45.579Z · LW(p) · GW(p)

Tyler Cowen talks with Nick Beckstead about x-risk here. Basically he thinks that "people doing philosophical work to try to reduce existential risk are largely wasting their time" and that "a serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons."

My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.

Replies from: Richard_Kennaway, gwern, Squark, ChristianKl, Douglas_Knight, Lumifer
comment by Richard_Kennaway · 2014-04-24T08:42:54.843Z · LW(p) · GW(p)

My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.

A "serious" MIRI would operate in absolute secrecy, and the "public" MIRI would never even hint at the existence of such an organisation, which would be thoroughly firewalled from it. Done right, MIRI should look exactly the same whether or not the secret one exists.

comment by Squark · 2014-04-23T19:34:50.908Z · LW(p) · GW(p)

Hackers / assassins would at best postpone the catastrophe, not avoid it.

comment by ChristianKl · 2014-04-23T20:49:41.711Z · LW(p) · GW(p)

My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.

If your idea of being serious is to train a team of hacker-assassins, that might indicate that your project is doomed from the start.

parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons."

As far as I know there are still nuclear weapons in the post-collapse Soviet Union.

Replies from: knb
comment by knb · 2014-04-24T03:19:52.810Z · LW(p) · GW(p)

As far as I know there are still nuclear weapons in the post-collapse Soviet Union.

Pretty clear that he meant the "loose nukes" that went unaccounted for in the administrative chaos after the Soviet collapse.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-24T14:47:58.188Z · LW(p) · GW(p)

How many nuclear weapons did get neutralized in that way?

Replies from: knb
comment by knb · 2014-04-24T21:46:41.730Z · LW(p) · GW(p)

Most of this information isn't being released to the public. It is known that the entire Kazakhstan arsenal was left unguarded after the fall of the Soviet Union, and it was eventually secured by the US.

Replies from: ChristianKl
comment by ChristianKl · 2014-04-24T23:54:12.213Z · LW(p) · GW(p)

How do you know?

The official story that the Kazakhstani tell seems to be:

Kazakhstan followed this move with an even more historic initiative when we voluntarily renounced the world's fourth largest nuclear arsenal, which we inherited on the break-up of the Soviet Union. No country has done more to bring the goals of the NPT closer.

US official history as retold by the Council of Foreign relations seems to be:

The former Soviet republics of Ukraine, Belarus, and Kazakhstan—where the Soviets based many of their nuclear warheads—safely returned their Soviet nuclear weapons to post-communist Russia in the 1990s, but all three countries still have stockpiles of weapons-grade uranium and plutonium.

comment by Douglas_Knight · 2014-04-24T03:44:33.985Z · LW(p) · GW(p)

"the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons."

What is he talking about? Sam Nunn?

comment by Lumifer · 2014-04-23T16:51:08.186Z · LW(p) · GW(p)

training a team of hacker-assassins

A team of slightly more sophisticated Terminators, right?

Oh, wait... :-D

comment by Stabilizer · 2014-04-22T17:24:10.347Z · LW(p) · GW(p)

I have trouble with the statement "In the end, we're all insignificant." I mean I get the sentiment, which is of awe and aims to reduce pettiness. I can get behind that. But I have trouble if someone uses it in an argument, such as: "Why bother doing X; we're all insignificant anyway."

Because, if you look closely, "significance" is not simply a property of objects. It is, at the very least, a function of objects, agents and scales. For example you can say that we're all insignificant on the cosmic scale; but we're also all insignificant on the microscopic scale. We're also insignificant for some trees in the middle of the rainforest or an alien in another galaxy. We're almost completely insignificant to some random person in the past, present or future, but much more significant to the people around us.

Replies from: Squark
comment by Squark · 2014-04-22T18:57:40.433Z · LW(p) · GW(p)

To put it differently: given two actions A & B with expected utilities U & V, you should choose A over B iff U > V. Only the relative ordering of U & V is meaningful, not the absolute difference (the utility function can be scaled arbitrarily anyway).
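Spelled out, since this is the standard argument for why utilities are only defined up to positive affine transformations:

```latex
U' = aU + b, \qquad V' = aV + b, \qquad a > 0
\quad\Longrightarrow\quad
U > V \iff aU + b > aV + b \iff U' > V'.
```

So adding a constant or multiplying by a positive factor changes nothing about which action gets chosen; only the ordering survives.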

Replies from: Vulture
comment by Vulture · 2014-04-23T19:00:45.797Z · LW(p) · GW(p)

Good point. I guess you could rephrase some of the existential angst over insignificance as despairing at the tiny amounts of utility we can manipulate given a utility function scaled to the entire world/universe/whatever.

comment by Manfred · 2014-04-21T19:35:46.992Z · LW(p) · GW(p)

Can anyone share the story behind the Future of Life Institute?

There are a lot of famous people on their list, and presumably FLI is behind the recent article in the Huffington Post, but how much does this indicate that said famous people are on board with the claims in the article? The top non-famous person on their list studies Monte Carlo methods and volunteers for CFAR - is this an indication that they're bringing on someone to do actual work? Or does Alan Alda being at the top of their list of advisors mean they're going to focus on communications?

Replies from: Manfred
comment by UmamiSalami · 2014-04-24T02:00:08.853Z · LW(p) · GW(p)

Sorry if this topic has been beaten to death already here. I was wondering if anyone here has seen this paper and has an opinion on it.

The abstract: "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."

Quite simple, really, but I found it extremely interesting.

http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf

Replies from: gwern, Squark, None
comment by Squark · 2014-04-24T09:14:56.217Z · LW(p) · GW(p)

The argument falls apart once you use UDT instead of naive anthropic reasoning: http://lesswrong.com/lw/jv4/open_thread_1117_march_2014/aoym

Replies from: UmamiSalami
comment by UmamiSalami · 2014-04-25T02:18:43.917Z · LW(p) · GW(p)

Maybe I am unfamiliar with the specifics of simulated reality. But I don't understand how it is assumed (or even probable, given Occam's Razor) that if we are simulated then there are copies of us. What is implausible about the possibility that I'm in a simulation and I'm the only instance of me that exists?

Replies from: Squark
comment by Squark · 2014-04-25T19:58:26.676Z · LW(p) · GW(p)

In the Tegmark IV multiverse all consistent possibilities exist so there is always a universe in which you are not in a simulation. The only meaningful question is what universes you should pay more attention to.

See also this.

comment by [deleted] · 2014-04-24T02:19:48.098Z · LW(p) · GW(p)

I've seen it. It seemingly ignores the possibility that humanity will not go extinct [EDIT: in the near future, possibly into the tens of megayears] but will also never reach a 'posthuman state' capable of doing arbitrary ancestor simulations.

Replies from: RowanE
comment by RowanE · 2014-04-25T11:37:06.041Z · LW(p) · GW(p)

I think "extinct before reaching a "posthuman" stage" covers that also.

Replies from: None
comment by [deleted] · 2014-04-27T05:32:40.273Z · LW(p) · GW(p)

True - I guess I was reading it in the context of the usual singulatarian assumptions of quick take-off.

comment by NancyLebovitz · 2014-04-23T23:04:32.033Z · LW(p) · GW(p)

Honey badger intelligence

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2014-04-24T00:00:47.247Z · LW(p) · GW(p)

When I was a kid, our cats used a similar tactic to escape the laundry room with a closed door. One would sit on the dryer and turn the handle with both paws and the other would push against the door with their head.

comment by [deleted] · 2014-04-22T01:20:29.327Z · LW(p) · GW(p)

Since LW is the place where I found out about App Academy... I started working through their sample problems today, and at what level of perceived difficulty / what number of stupid mistakes should I give up? Both in the sense of giving up on working toward getting into App Academy specifically [because I doubt I think fast enough / have a good enough memory to pass the challenges -- the first four problems in their second problem-set took me over an hour, and I had to look a few things up despite having gone through the entire Codecademy Ruby course] and in the sense of giving up on programming as an at-least-short-term job plan?

Not sure how much of this is lack of practice (maybe implementation / avoiding stupid errors would get better with practice, but designing the algorithms takes me a while, and I'm not new to programming at all), how much is overconfidence / unrealistically high expectations wrt skill (but they say the code challenges are supposed to take 45 minutes each) and how much is that I really don't have the talent to get into that particular program, or to not fail miserably at the job, or to develop the skills to be able to even get a programming job...

Replies from: TylerJay, Barry_Cotter
comment by TylerJay · 2014-04-22T22:50:31.213Z · LW(p) · GW(p)

Hey, I have good news for you. I just tried those practice problems and timed myself to see if I could give you something to compare to (and for fun). I completed the first four in about an hour and 10 minutes (though I am a bit out of practice). Those practice problems are not trivial; they take some thought. I didn't have to use any outside resources, but I did have to test quite a few things out in the terminal as I was coding it.

For background: I am self-taught, but I've been programming for almost 2 years. I have done freelance Rails programming. I have built multiple Rails apps from the ground up by myself. One of these is still in use by a multimillion-dollar company as a part of their client onboarding program. I've been offered a job as a Rails developer, though I didn't end up taking it as I had a higher-paying offer on the business end of things.

So I say don't worry if you have a bit of trouble with it. If you felt like you were looking things up all the time, then you just need some more practice. For the algorithm design part (especially the mathy ones), look into Project Euler. It's a great list of problems to get practice and you can use whatever language you want to find the answer, so use Ruby. Practice taking the problems apart into pieces, using helper functions, and writing the pseudocode before you actually code anything. That will make this style of thinking feel more natural.
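To show what I mean by taking a problem apart into helper functions and pseudocode (Python here rather than Ruby, and using the classic "sum the multiples of 3 or 5 below 1000" warm-up purely as an example):

```python
# Pseudocode first:
#   1. decide whether a single number counts (helper)
#   2. collect all the numbers below the limit that count (helper)
#   3. sum them (the actual answer)

def counts(n):
    """A number counts if it is a multiple of 3 or 5."""
    return n % 3 == 0 or n % 5 == 0

def numbers_that_count(limit):
    """All counting numbers below the limit."""
    return [n for n in range(limit) if counts(n)]

def solve(limit=1000):
    return sum(numbers_that_count(limit))

print(solve())
```

Each piece is small enough to test in the terminal on its own, which is most of the point.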

Feel free to PM me if you want to talk more.

comment by Barry_Cotter · 2014-04-22T08:59:57.798Z · LW(p) · GW(p)

Use the try harder Luke.

What do you mean by "not new to programming at all"? How many hours programming have you done? How many projects have you completed? Because unless you've had a job as a programmer before or you did CS as a college degree your previous experience will be utterly swamped by App Academy. If you feel insecure about algorithms specifically practice them specifically. If you want more practice with Ruby maybe do Hartl's book. The Codecademy Ruby course is not the end of the world. If programming appeals to you prepare, apply and let App academy do the judging.

Edit: Remember, many people who have had jobs as programmers can't do FizzBuzz if asked to in an interview. Retain hope.

comment by Stabilizer · 2014-04-21T19:01:01.563Z · LW(p) · GW(p)

Please recommend some good sources/material (books, blogs, also advice from personal experience) for techniques of self-analysis and introspection? Basically, I'm looking for things to keep in mind while I attempt to find patterns of behavior in myself and ways for changing them. I realize that this is a very broad category. But roughly, material akin to Living Luminously.

Replies from: PECOS-9
comment by PECOS-9 · 2014-04-22T00:14:43.479Z · LW(p) · GW(p)

The Feeling Good Handbook. It focuses specifically on Depression and Anxiety, but could probably be useful for anyone.

comment by Ben Pace (Benito) · 2014-04-22T22:01:40.387Z · LW(p) · GW(p)

I'd like to gauge interest in posting bulleted, section-by-section, non-fiction book summaries, with the intention of some discussion. I think that it would be of high utility to those who want knowledge but haven't the time to read a book, and for me who wants to read a book and work through the ideas more thoroughly. The first two books I have in mind are Understanding Uncertainty which has been recommended by Lukeprog, and The Moral Animal which has been recommended by EY.

It could be chapter by chapter, perhaps in weekly open threads, or the whole book in a discussion post. The summary would mostly consist of select quotes with commentary to summarise longer passages.

The poll is just for interest; comment if you have a strong preference about the books I choose.

[pollid:683]

comment by Metus · 2014-04-21T20:55:48.260Z · LW(p) · GW(p)

Something that keeps nagging me in my mind: A young college graduate comes up to you and asks "Where should I look for what kind of work to have the highest living standard?"

Remember, a lower nominal wage in a country where that wage has higher purchasing power may actually serve this individual better. Naively I might say the US or Switzerland, but something tells me I am overlooking a gigantic hole.

Replies from: Dagon, RomeoStevens, mare-of-night, drethelin
comment by Dagon · 2014-04-21T23:39:35.014Z · LW(p) · GW(p)

For someone skilled enough to choose their location and who thinks long-term enough to live very cheaply for a number of years, higher nominal wages mean higher absolute savings.

Live somewhere expensive when you're getting started, and move somewhere cheap when you're slowing down.
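A worked example with invented numbers, just to show the arithmetic:

```python
def annual_savings(salary, living_cost):
    return salary - living_cost

# Hypothetical figures, not real data.
expensive_city = annual_savings(salary=100000, living_cost=60000)  # 40000 saved per year
cheap_city = annual_savings(salary=50000, living_cost=25000)       # 25000 saved per year
print(expensive_city, cheap_city)
```

Even though the cheap city eats a smaller fraction of the salary, the expensive one leaves a bigger absolute pile to take somewhere cheap later.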

comment by RomeoStevens · 2014-04-23T00:27:47.973Z · LW(p) · GW(p)

Cost of living is an overblown statistic because dumb people spend their money poorly. You can live in expensive areas on the cheap without that much effort. This isn't to say that living in the bay area isn't more expensive than many other areas, but it certainly isn't as expensive as the cost of living calculations would make it seem.

Replies from: Lumifer
comment by Lumifer · 2014-04-23T00:43:32.689Z · LW(p) · GW(p)

You can live in expensive areas on the cheap without that much effort.

Yes, provided you're young, healthy, and childless.

Replies from: Vulture
comment by Vulture · 2014-04-23T18:38:44.790Z · LW(p) · GW(p)

young, healthy, and childless

What makes youth a necessary condition independent of overall health?

Replies from: Lumifer
comment by Lumifer · 2014-04-23T19:17:04.012Z · LW(p) · GW(p)

Mostly risk and stress tolerance.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-04-24T06:28:00.641Z · LW(p) · GW(p)

...but also less established social ties. And less settled long-term investments (though this correlates with the risk part).

comment by mare-of-night · 2014-04-22T13:41:20.682Z · LW(p) · GW(p)

In some fields, doing freelance work for clients in a country with low purchasing power while living in one with high purchasing power is an option.

comment by drethelin · 2014-04-23T03:30:11.350Z · LW(p) · GW(p)

Living standard as quantified is not particularly helpful to the individual. Someone might be comparatively far better off living in Malaysia with a long-distance, high-paying freelance programming job, but I think you'll find that being around cultural compatriots is not to be ignored.

comment by Clamwinds · 2014-04-23T20:47:37.508Z · LW(p) · GW(p)

I do not know if this is the best place, but I have lurked here and on OB for roughly a year, and have been a fellow traveler for many more. Specifically, I want to talk to any members that have ADHD about how they go about treating their disorder. On the standard anti-akrasia topics, the narrative is that if you have anxiety, depression, or the like, you should treat that first, but there don't seem to be many members here who have ADHD. Going to other forums to talk about things like which medication is "better" means wading through a lot of bad epistemology, bad conclusions, people faking their disorder, and much more. Do any other members have it and want to talk about it? I was hoping there could be a general discussion thread for people with it, if enough people have it. I've pored through studies and journals, but it is difficult to do alone.

Replies from: Clamwinds, hamnox
comment by Clamwinds · 2014-05-10T03:40:02.843Z · LW(p) · GW(p)

Alright, I'm going to get enough karma and just start this myself until someone stops me. I also kind of need this, so I don't destroy my life through some other unspecified means.

comment by hamnox · 2014-04-24T03:06:11.109Z · LW(p) · GW(p)

I was diagnosed non-hyperactive ADD as a kid, though I haven't done meds for that since middle school. It's been suggested that it was a misdiagnosis for asperger's.

comment by beoShaffer · 2014-04-27T04:20:48.550Z · LW(p) · GW(p)

Does anyone have suggestions for Android self-tracking/quantified-self apps? I just got an Android phone and am hoping to begin tracking my diet, exercise, etc., as well as various outcomes, and to try to find correlations.

Replies from: jaime2000, None
comment by jaime2000 · 2014-04-27T13:44:12.149Z · LW(p) · GW(p)

LifeTracking

Replies from: beoShaffer, beoShaffer
comment by beoShaffer · 2014-04-28T01:07:04.530Z · LW(p) · GW(p)

I was able to get it installed, but get a message saying "Unfortunately, LifeTracking has stopped" whenever I try to go past the first page.

comment by beoShaffer · 2014-04-27T18:49:25.158Z · LW(p) · GW(p)

The marketplace link doesn't work. I tried searching for LifeTracking but only found LifeTrack, are they the same thing?

Replies from: jaime2000
comment by jaime2000 · 2014-04-27T22:35:15.770Z · LW(p) · GW(p)

Probably not, though I have never had access to the Android marketplace, so I'm not sure. Have you tried installing the app directly from the downloadable .apk file?

Replies from: beoShaffer
comment by beoShaffer · 2014-04-28T00:20:08.780Z · LW(p) · GW(p)

That seems to have worked.

comment by [deleted] · 2014-04-27T05:28:28.624Z · LW(p) · GW(p)

Sleep as Android is what I use on a tablet under my pillow to keep track of how long I actually spend trying to sleep, as well as whether my sleep seems to contain coherent deep-to-not-deep cycles.

comment by FiftyTwo · 2014-04-25T14:06:04.513Z · LW(p) · GW(p)

Brienne Strohl mentioned on Facebook/Twitter that she was reading "Robby's re-sequencing of Eliezer's Sequences"; can anyone link me to it?

comment by hamnox · 2014-04-25T13:49:00.395Z · LW(p) · GW(p)

Hi, CFAR alumni here. Is there something like a prediction market run somewhere in discussion?

Going mostly off of Gwern's recommendation, it seems like PredictionBook is the go-to place to make and calibrate predictions, but it lacks the "flavour" that the one at CFAR had. CFAR (in 2012, at least) had a market where your score was based on how much you updated the previous bet towards the truth. I really enjoyed the interactional nature of it.
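I don't remember the exact formula, but the standard way to score "how much you moved the previous estimate towards the truth" is a log market scoring rule: each bettor is credited with the change in log probability assigned to whatever actually happened. A toy sketch (Python; my own illustration rather than whatever CFAR actually ran):

```python
from math import log

def update_score(previous_p, new_p, outcome):
    """Credit a bettor with how much they moved the market's probability
    towards the realized outcome (log market scoring rule)."""
    p_prev = previous_p if outcome else 1 - previous_p
    p_new = new_p if outcome else 1 - new_p
    return log(p_new) - log(p_prev)

# Moving a true claim from 60% to 90% earns points; moving it back down loses the same amount.
print(update_score(0.6, 0.9, outcome=True))  # positive
print(update_score(0.9, 0.6, outcome=True))  # negative, equal magnitude
```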

What would it take to get such a thread going online? I believe one of the reasons it worked so well at minicamp was because we were all in the same area for the same period of time, so it was simple to restrict bets to relevant things we could all verify. Even if most of the posts wind up being relevant only to the local meetups, it would be nice to have them up in the same place for unofficial competition. Is that something you would use?

comment by John_Maxwell (John_Maxwell_IV) · 2014-04-23T03:29:21.576Z · LW(p) · GW(p)

I haven't been following LW discussions of Löb's theorem etc. very much at all but this guide to the m4 macro language (a standard Unix tool) seemed to have the same character, especially this section. Dunno if this is interesting to people who are interested in Löb's theorem.

comment by sixes_and_sevens · 2014-04-23T00:17:03.559Z · LW(p) · GW(p)

Fairly off-topic question, but I imagine there'll be suitable people to answer it on LW. Any recommendations for cheap and cheerful VPS hosting? Just somewhere to park a minimum CentOS install. It's for miscellaneous low-priority personal projects that I might abandon shortly after starting, so I'm hesitant to pay top dollar for a quality product that I might end up not using. On the other hand, I want to make sure I get what little I'm paying for.

I promise I'm not a stingy unfriendly AI looking for a new home.

Replies from: NancyLebovitz
comment by listic · 2014-04-21T12:42:27.101Z · LW(p) · GW(p)

I would like to learn drawing.

I would like to be able to have fun expressing myself via art. How long does it take to learn to draw, from zero to good enough not to be embarrassed by one's work?

What techniques are useful? Is there any sense in e.g. Drawing on the Right Side of the Brain?

Replies from: raisin, JayDee
comment by raisin · 2014-04-21T14:07:48.508Z · LW(p) · GW(p)

Drawing from real life is especially useful for someone who is learning to draw. It teaches you that drawing is not simply about holding a pen and drawing the correct lines, but it's also about seeing and thinking correctly. We tend to think in terms of shapes, outlines and symbols, but such things don't represent reality very well. You should be thinking in terms of form and contour.

Here's a good video about it.

I think this post is a good start:

So forget drawing humans for a while and start painting simple primitives. Cylinders, spheres, spheres with a section sliced off, cylinders which end with a sphere, stuff like that. Don't even color them, just work in grayscale and try to get the lighting right. Use real cylinders or photos of cylinders as reference. Buy some clay, sculpt some simple primitives and draw those. Do it over and over again while looking at what you see instead of drawing what you think you see. The more you do it, the faster you'll get better.

And when you feel comfortable with basic forms, you can start combining them into more elaborate forms. When you use reference, try to see the basic shapes your reference form consists of. A human head is like a sphere, with both sides sliced off, attached to a cylinder. Of course, there's a LOT more to human heads. Keep it simple until you're comfortable with drawing tens, if not hundreds, of primitive contours together, because that's what heads consist of.

If you're drawing from reference (which you should be doing), try flipping the reference image upside down before drawing a single stroke. This can trick your brain into actually looking at form instead of sticking with the symbolic shapes which are deeply ingrained with how you look at the world. Crop the reference image to a tiny area, then try to replicate it as closely as possible. Then do it again even better. And again and again etc.

So draw a lot, draw from real life and draw from reference, and begin to think in 3D.

I think Drawing on the Right Side of the Brain is probably pretty effective because one of its main points is the above - that you should just draw what you see and not think in terms of symbols when you draw. The underlying idea about the brain hemispheres is pseudoscience, but that doesn't mean it can't still teach useful lessons.

Replies from: None
comment by [deleted] · 2014-04-23T05:55:23.048Z · LW(p) · GW(p)

Drawing on the Right Side is great for this reason. The hemisphere stuff is quite tangential to the book's utility.

If you want to see examples of "visual symbols", look at the drawings of children. In particular, look at drawings of the human face. The prototypical symbols for something like an eye, just don't look that much like a human eye. This sounds obvious, but it's very hard to just draw what you see, and not draw what you "think you ought" to see.

For example, imagine a face lit from one side. Visually, the illuminated side of the face will show the "expected" details: You'll see the folds in both lids of the eye, and the fine curves of the face and ear. But the dark side of the face will look nothing like this. You'll only see broad dark areas and broad light areas. However, most people who'd identify as "bad at drawing", will draw the same details on both sides of the face, and will be genuinely unaware that this isn't what they really "see".

This isn't to say that artists don't make use of visual symbols, etc, but skill is the ability to take both approaches.

I'd actually advance this as an example of the fundamental analysis of one type of "talent". The "good at drawing" people grokked the connection between seeing and drawing, and the "bad at drawing" people didn't.

I've wondered for some time if something similar isn't present in musical talent, where the basic "mindset" has to do with some connection of sound to expression, rather than a connection between sound and physical ritual.

Replies from: raisin
comment by raisin · 2014-04-26T13:20:14.510Z · LW(p) · GW(p)

I looked at those links JayDee posted below, namely

http://lesswrong.com/lw/8i1/drawing_less_wrong_observing_reality/

and this is what was said about Edwards' book:

Later on, neuroscientists learned that while the two processing centers are real, they are not neatly divided between brain hemispheres. The modern edition of the book uses the terms "left mode" and "right mode" to distinguish between the modes of thought.

Since she recognized this, it seems my critique about the hemisphere stuff is not meaningful anymore.

comment by JayDee · 2014-04-22T04:26:16.209Z · LW(p) · GW(p)

There's an (unfinished) set of posts about rationality and drawing written by Raemon, Drawing LessWrong p2 p3 p4 p5 that might answer your questions (in the articles or comments.)

comment by [deleted] · 2014-04-26T17:29:44.258Z · LW(p) · GW(p)

What's the current policy on bare downvoting, as in downvoting a comment/post without giving at least a short explanation for why one did so? I've had some comments downvoted recently, and without explanations it's frustrating and a poor feedback mechanism.

Replies from: Lumifer, Vladimir_Nesov, shminux, Squark, Eugine_Nier
comment by Lumifer · 2014-04-26T20:14:58.556Z · LW(p) · GW(p)

What's the current policy on bare downvoting, as in downvoting a comment/post without giving at least a short explanation for why one did so?

There ain't no policy. People up- and down-vote as they please.

comment by Vladimir_Nesov · 2014-04-26T18:52:33.650Z · LW(p) · GW(p)

If the alternative is no feedback at all, downvoting without explanation is a better option.

comment by shminux · 2014-04-28T22:24:25.285Z · LW(p) · GW(p)

This is a common question from new participants. First, there is no policy on downvoting. There can't be, because there is no enforcement mechanism. There are, however, recommendations, like "downvote something you would like to see less of", which is often mixed up with "downvote everything I disagree with", or worse, with "downvote every comment by a user I dislike, regardless of content, to force them to post less". At least one prominent regular has been accused of this last one. Second, commenting on why you downvote tends to result in the comment being downvoted, which discourages such comments very effectively.

it's frustrating and a poor feedback mechanism.

Yes, but only in the beginning. Once you have a few hundred karma, a downvote is just an indication that someone disliked your post, nothing more. And if all your comments are universally liked, you must be doing something wrong.

Replies from: None
comment by [deleted] · 2014-04-29T02:57:11.798Z · LW(p) · GW(p)

I've been here since the beginning of LW, off and on, actually. (This is sort of an alt account.) I just recall discussion on such a policy a while ago, but didn't see a wikipage giving such recommendations.

It was frustrating because it was on the order of 5 or 15 downvotes, without a single reply. My initial reactions were surprise and then disappointment at the community. I'd rather not be disappointed, so I thought re-focusing on more beneficial norms would be more productive.

If the reply is thoughtful, then it's much less discouraging (and if you, for rhetorical purposes, claim the downvote was from someone else, then do so), e.g. "This was probably downvoted because X and Y; what do you mean by N? Also, here are some relevant resources/links, A and B." I guess it's a lot more work than just downvoting, but it's hardly discouraging if done non-patronizingly.

Replies from: drethelin
comment by drethelin · 2014-04-30T14:07:38.207Z · LW(p) · GW(p)

The goal of downvotes is to be discouraging.

comment by Squark · 2014-04-26T19:34:54.879Z · LW(p) · GW(p)

I'd say that even more important than giving an explanation is not downvoting merely because you disagree. The signal transmitted by downvoting is "I don't want to hear this" or, in simpler language, "shut up". This should be reserved to fight content which is offensive, spam, trolling, rampant crackpottery, blatant off-topic etc. Mistakes made in good faith don't deserve a downvote. I'd say it is an extension of the "Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever." rule. The alternative is death spirals, blue-green politics and plainly ruining the community experience for everyone.

I personally made a rule of upvoting any content with net negative score which doesn't deserve a downvote, even if I disagree, especially when it's a comment by a person I'm currently arguing against. I want arguments that are discussions in which both sides are trying to arrive at the truth, not fights or two-people-showing-off-how-smart-they-are (is there a name for it?).

Replies from: Vladimir_Nesov, army1987, Nornagest, Tenoke, None
comment by Vladimir_Nesov · 2014-04-26T20:24:37.621Z · LW(p) · GW(p)

This should be reserved to fight content which is offensive, spam, trolling, rampant crackpottery, blatant off-topic etc.

Not if you aim to enforce a level of discussion higher than mere absence of pathology. I like for there to be places that distance themselves from (particular kinds of) mediocrity...

I personally made a rule of upvoting any content with net negative score which doesn't deserve a downvote

...which is made more difficult by egalitarian instincts.

I'd say it is an extension of the "Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."

It's not. Punishment is different enough from deciding who to talk with. See also Yvain on safe spaces.

Replies from: Squark, Squark
comment by Squark · 2014-04-27T06:57:45.451Z · LW(p) · GW(p)

Not if you aim to enforce a level of discussion higher than mere absence of pathology.

Downvotes are not the way to achieve it. The way to achieve it is by positive personal example and upvoting content which is exemplary. Why are downvotes bad? Because:

  • We want to allow "mediocre" people (some of which have an unrealized potential to be excellent) that want to learn from excellent people (I hope you agree). Such people can make innocent mistakes. There's no reason to downvote them as long as they're willing to listen and aren't arrogant in their ignorance. Downvoting will only drive them away.

  • Even smart people occasionally say foolish things. Downvoting sends such a strong negative signal that it discourages even people that get many more upvotes than downvotes. By "discourages" I don't mean "discourages from saying foolish things", I mean discourages from participating in the community in general.

  • Most content is not voted upon by most of the community, so statistical variance is large. Again, since the discouragement of downvotes is not cancelled out by the encouragement of upvotes, you get much more discouragement than you want.

  • Downvotes transform arguments into sort of arena fights where the people in the crowd are throwing spoiled vegetables on the players they don't like. The emotional aura this creates is very bad for rationality. It's excellent for blue-green politics (downvote THEM!) and death spirals.

Punishment is different enough from deciding who to talk with.

If you don't want to talk to someone, don't upvote her and don't reply to her. The psychological impact of downvoting is equivalent to punishment.

See also Yvain on safe spaces.

This is completely different. "Safe spaces" are about banning content which might offend someone's sensibilities. My suggestion is about "banning" less content.

Replies from: Vladimir_Nesov, Tenoke, ChristianKl
comment by Vladimir_Nesov · 2014-04-27T11:06:40.741Z · LW(p) · GW(p)

Why are downvotes bad? Because:

I agree with enough of this. I know there are immediate downsides and hypothetical dangers. But the upsides seem indispensable. The argument needs to consider the balance of the two.

If you don't want to talk to someone, don't upvote her and don't reply to her.

They remain in the fabric of the forum, making it less fun to read. Not upvoting doesn't address this issue.

"Safe spaces" are about banning content which might offend someone's sensibilities. My suggestion is about "banning" less content.

Things that are not fun (for a certain sense of "fun") offend my sensibilities (for a certain sense of "offend"). My suggestion is to discourage them by downvoting. (This is the intended analogy, which is strong enough to carry over a lot of Yvain's discussion, even if the concept "safe spaces" doesn't apply in detail, although I think it does to a greater extent than I think you think it does.)

Replies from: Squark
comment by Squark · 2014-04-28T19:34:58.401Z · LW(p) · GW(p)

Let me rephrase. I suggest downvoting a comment only when it makes you think "I don't want this person in this community". Don't downvote comments which might be reasonably attributed to an OK person making an honest mistake.

Replies from: drethelin
comment by drethelin · 2014-04-30T14:14:58.736Z · LW(p) · GW(p)

This sabotages any chance of using karma to find and sort good comments from bad in the future. I want good content to be differentiated from bad regardless of source. I upvote known trolls when they say smart shit and I downvote Eliezer when he's being a douchebag.

Replies from: Squark
comment by Squark · 2014-05-05T09:53:06.684Z · LW(p) · GW(p)

I wasn't at all suggesting to upvote / downvote on an ad hominem basis. When someone is being a douchebag, downvote her by all means. When someone is stating an opinion you consider to be wrong while doing it in an honest and respectful manner, don't downvote. If you want to express your disagreement, reply and (politely) explain why you disagree.

comment by Tenoke · 2014-04-27T07:16:44.289Z · LW(p) · GW(p)

We want to allow "mediocre" people (some of whom have an unrealized potential to be excellent) who want to learn from excellent people (I hope you agree).

I have no problem with that; my problem is with the opposite - people learning from mediocre (or worse) folk, because they don't realize that their content is flawed (which downvotes signal).

Replies from: Squark
comment by Squark · 2014-04-28T19:28:49.814Z · LW(p) · GW(p)

IMO on the Light side you learn from something when you can tell it's correct, not when someone tells you it's correct, much less when someone anonymous tells you it's correct.

comment by ChristianKl · 2014-05-05T13:43:42.504Z · LW(p) · GW(p)

We want to allow "mediocre" people (some of whom have an unrealized potential to be excellent) who want to learn from excellent people (I hope you agree).

To some extent yes, but we don't want eternal September either. There is concern about the average IQ reported in the LW census dropping over time.

Downvoting sends such a strong negative signal that it discourages even people that get much more upvotes than downvotes.

If we had fewer downvotes in general, then every single downvote would create a much stronger negative signal than it does at the moment.

Replies from: Squark
comment by Squark · 2014-05-05T15:31:10.873Z · LW(p) · GW(p)

Hi Christian, thx for commenting!

We want to allow "mediocre" people (some of whom have an unrealized potential to be excellent) who want to learn from excellent people (I hope you agree).

To some extent yes, but we don't want eternal September either. There is concern about the average IQ reported in the LW census dropping over time.

I'm not that concerned about average IQ. The crucial questions here are what purpose you see in LW and how you envision its future. If you want LW to be an elitist discussion forum for high-IQ people comfortable with a relatively aggressive / competitive environment, then it makes sense for you to use downvotes relatively liberally.

I think that the greatest potential value in LW lies elsewhere. I think LW can become a community and a cultural movement that promotes rationality and humanist values. A movement that has the power to steer history in a direction more to our liking. If you accept this vision, then you should be aiming at a much broader group (while making sure the widening circle doesn't water down our spirit and values). I envision LW as a place where people come to connect to other people who share a similar worldview and values, not necessarily all of them being in the top IQ percentile. The "spiritual leadership" of the movement should consist predominantly of highly intelligent people everyone can learn from, but that is not a necessary requirement for every member.

If we had fewer downvotes in general, then every single downvote would create a much stronger negative signal than it does at the moment.

This effect is only significant for people who spend sufficient time on the forum to get used to the "downvote background". Moreover, I think it is far from strong enough to cancel the reduction in downvotes.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2014-05-05T15:59:23.588Z · LW(p) · GW(p)

The LessWrong brand is not optimized for reaching a broad public. To the extent that's the goal, "effective altruism" is a more effective label under which to operate.

In my view the goal of LessWrong is to provide a forum for debating complex intellectual ideas. Specifically ideas about how to improve human thinking and the FAI problem. Having a good signal-to-noise ratio matters for that purpose.

comment by Lumifer · 2014-05-05T15:46:39.363Z · LW(p) · GW(p)

I think LW can become a community and a cultural movement that promotes rationality and humanist values. A movement that has the power to steer history in a direction more to our liking.

Steer history?

When you said "cultural movement", did you really mean "social and political movement" for it is those which steer history?

And what gives you the idea that LW could become massively popular, anyway? There's nothing here particularly interesting for hoi polloi.

comment by Squark · 2014-04-26T20:32:06.493Z · LW(p) · GW(p)

What do you mean by "fighting mediocrity"? Should I interpret it literally as "I don't like mediocre people"? Or as "I want to reward excellence"? If it is the latter you are aiming at, use upvotes, not downvotes (for ideal rational agents the two might be symmetric, but for people they aren't: the emotional signal from getting a downvote is very different from the emotional signal of not getting an upvote).

Replies from: Vladimir_Nesov, Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2014-04-26T20:46:04.926Z · LW(p) · GW(p)

the emotional signal from getting a downvote is very different from the emotional signal of not getting an upvote

Exactly, and this is a reason why downvoting is important (and shouldn't be systematically countered): it allows scaring away people who are not of our tribe. A forum culture that merely abstains from upvoting is worse at scaring people away than one that actively downvotes.

comment by Vladimir_Nesov · 2014-04-26T20:37:35.597Z · LW(p) · GW(p)

(Sorry, I heavily edited the grandparent since the first revision.)

comment by Vladimir_Nesov · 2014-04-26T20:43:15.179Z · LW(p) · GW(p)

Should I interpret it literally as "I don't like mediocre people"? Or as "I want to reward excellence"?

Neither, it's not about what I like (in the sense of emotional response), or about what other people experience, but about what to encourage on the forum to make it a better place.

(Right now it's not particularly relevant, at least as an intervention on the level of social norms, because the main current issue seems to be that too little meaningful discussion is happening lately, and that doesn't seem fixable by changing/maintaining voting attitudes.)

comment by A1987dM (army1987) · 2014-04-27T07:08:25.634Z · LW(p) · GW(p)

"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."

The same person who said that also said this, so I guess he meant something narrower by “bullet” than you think.

Replies from: Squark
comment by Squark · 2014-04-28T19:26:41.645Z · LW(p) · GW(p)

Upvoted for making an interesting point.

However: I was not appealing to Eliezer's authority. I was just making a parallel with a similar (but more extreme) phenomenon.

Regarding well-kept gardens. Let me put things in perspective. If you see a comment along the lines of "jesus is our lord" or "rationality is wrong because the world is irrational" or "a machine cannot be intelligent because it has no soul", by all means downvote. However, if you see two people debating e.g. whether there will be an AI foom or whether consequentialism is better than deontology or whether AGI will come before WBE, don't downvote someone just because you disagree. Downvote when the argument is so moronic that you're confident you don't want this person in our community.

Replies from: Vaniver, army1987
comment by Vaniver · 2014-04-29T00:05:25.188Z · LW(p) · GW(p)

Downvote when the argument is so moronic that you're confident you don't want this person in our community.

People change. People change even faster when you give them feedback. I downvote things I don't want to see from people I like and respect the same way I would frown at a friend if they did something I didn't want them to do.

So instead of 'I'm confident I don't want you in our community,' I view a downvote more as 'shape up or ship out.'

Replies from: Squark
comment by Squark · 2014-04-29T19:13:15.290Z · LW(p) · GW(p)

People change. People change even faster when you give them feedback.

It depends what you mean by "feedback". If "feedback" is a polite, respectful reply explaining the mistake, then yes, it is something the other party can learn from. If "feedback" is a downvote, chances are it is only going to hurt the other party and possibly make her even more entrenched in her position out of anger. When you argue respectfully, the other party can admit her mistake at a small emotional cost. If you call her an idiot, admitting the mistake will become much more difficult for her (since it will become emotionally equivalent to admitting being an idiot).

I downvote things I don't want to see from people I like and respect the same way I would frown at a friend if they did something I didn't want them to do.

First, you can allow yourself more with friends because they are friends. Second, a downvote is a sort-of public humiliation; it is much worse than a frown. Imagine that a person you would like and respect makes one of her first comments on the forum and gets downvoted. She might become so upset she won't return here again.

Replies from: Vaniver, Lumifer
comment by Vaniver · 2014-04-29T20:51:47.144Z · LW(p) · GW(p)

There are several points here that seem entangled, but I'll try listing them separately.

First, it is a desirable quality to be able to work out what one did wrong from minimal evidence, or repeated experimentation.

Second, it seems to me that rationality is strengthened by the ability to joyfully accept contradictions and corrections. A view that sees a downvote as a sort-of public humiliation is probably too sensitive.

Third, politeness is costly, in several ways. Most relevant to the others is the time cost of writing a reply. It often takes much longer to instill clarity than it takes to display confusion.

Fourth, as the benefits mostly accrue to the corrected, and the costs mostly accrue to the corrector, it is not clear why we should expect such correction to be the norm instead of virtuous on the part of the corrector.

She might become so upset she won't return here again.

LWers differ in how hard they want LW to be on its new users. I tend to be softer than, say, Lumifer, but I am not certain that this is a bug instead of a feature. There are people we don't want discussing things here on LW, and that sort of reaction may be a decent filter.

Replies from: Lumifer
comment by Lumifer · 2014-04-30T14:35:15.591Z · LW(p) · GW(p)

LWers differ in how hard they want LW to be on its new users. I tend to be softer than, say, Lumifer

I don't want to set up a hazing ritual to weed out the misfits from among the newbies.

What I want to avoid is LW evolving towards being victim-centric where the main concern is the possibility of giving offence.

comment by Lumifer · 2014-04-29T19:48:36.279Z · LW(p) · GW(p)

...it is only going to hurt the other party... a downvote is a sort-of public humiliation

Oh, dear. HTFU already. People who think of downvotes as hurtful and public humiliation really shouldn't venture into the wilds of 'net forums.

comment by A1987dM (army1987) · 2014-04-29T16:54:14.612Z · LW(p) · GW(p)

don't downvote someone just because you disagree.

Agreed, but...

Downvote when the argument is so moronic that you're confident you don't want this person in our community.

Nope. Sometimes otherwise-okay people make moronic arguments because they're mind-killed, they're tired, etc.

Replies from: drethelin, Squark
comment by drethelin · 2014-04-30T14:11:13.844Z · LW(p) · GW(p)

THE WHOLE POINT OF DOWNVOTES IS TO HAVE LESS BAD STUFF AND MORE GOOD STUFF. This applies not just to making people leave but also to making people who stay post things of higher quality.

If you don't downvote "otherwise-okay" people when they say dumb shit, how are they supposed to learn? Downvote the comment, not the person.

Replies from: MugaSofer, army1987
comment by MugaSofer · 2014-07-03T16:18:30.738Z · LW(p) · GW(p)

I think the point is that you shouldn't conclude "that you're confident you don't want this person in our community" just because "the argument is so moronic".

(Because there's too much noise with individual arguments to deduce a person's general competence.)

In other words, yes, downvote the comment - not the person.

comment by A1987dM (army1987) · 2014-04-30T15:47:43.346Z · LW(p) · GW(p)

Er... That was my point.

comment by Squark · 2014-04-29T19:15:25.948Z · LW(p) · GW(p)

This is exactly why you shouldn't downvote such comments: they hurt good people and discourage them from participating in the community. Also, consider the possibility your own judgement is affected by tiredness or mind-murder.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-04-29T20:28:08.995Z · LW(p) · GW(p)

Also, consider the possibility your own judgement is affected by tiredness or mind-murder.

I guess you are talking of conditions in which someone makes a downvoting decision. But then underconfidence is also possible, and also a pathology, making one unable to act on a correct judgement. This point might be a reason that The Sin of Underconfidence is a prerequisite for Well-Kept Gardens Die By Pacifism.

Replies from: Squark
comment by Squark · 2014-05-05T09:46:50.070Z · LW(p) · GW(p)

I agree that both overconfidence and underconfidence are possible, but the potential damage from downvoting is larger than the potential damage from not downvoting. Therefore, let's err on the side of not downvoting.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-05-05T17:21:39.149Z · LW(p) · GW(p)

the potential damage from downvoting is larger than the potential damage from not downvoting.

This is what I disagree with.

comment by Nornagest · 2014-04-28T20:23:14.987Z · LW(p) · GW(p)

The signal transmitted by downvoting is "I don't want to hear this" or in simpler language "shut up". This should be reserved to fight content which is offensive, spam, trolling, rampant crackpottery, blatant off-topic etc.

I think you're drawing a false equivalence here. While a downvote does carry the meaning of "I don't want to hear this", most of the meaning of "shut up" is connotation, not denotation, and those connotations don't necessarily carry over.

Mere disagreement generally isn't enough to justify a downvote, no. But we want to see well-reasoned disagreement: it signifies a chance to teach or to learn, even if it's unpleasant in the moment. On the other hand, there are plenty of things short of Time Cube or cat memes that one might legitimately not want to see here, even if posted in good faith; restricting the option to those most extreme cases robs it of most of its power to improve discussion.

comment by Tenoke · 2014-04-27T06:10:22.072Z · LW(p) · GW(p)

I personally made a rule of upvoting any content with net negative score which doesn't deserve a downvote, even if I disagree

I downvoted you, because you seem to use upvotes in a way that diminishes the value of the karma system in my eyes - an undeserved downvote is as bad as an undeserved upvote.

I've seen a lot of low quality posts getting some karma, and coming back to positive scores without a good reason - and now I know the behaviour that is partially responsible.

(and the above comes from someone with a mass downvoter after him, who gets a downvote on every single comment he makes)

Replies from: Squark
comment by Squark · 2014-04-27T07:11:00.401Z · LW(p) · GW(p)

I downvoted you, because you seem to use upvotes in a way that diminishes the value of the karma system in my eyes - an undeserved downvote is as bad as an undeserved upvote.

Downvotes and upvotes are not symmetric, see my reply to Vladimir.

comment by [deleted] · 2014-04-26T20:00:23.754Z · LW(p) · GW(p)

It shouldn't matter why you downvote something; just give an explanation for why you did so. Ideally the same goes for upvotes, where you should explain why you upvoted (if your explanation is any more valuable than "This.").

Trying to define what an upvote or downvote "means" or "shouldn't mean" is futile and beside the point.

Replies from: drethelin, Squark
comment by drethelin · 2014-04-30T14:16:43.510Z · LW(p) · GW(p)

No no no no: the beauty of votes is that they give us a very quick and easy way of knowing comment quality without flooding the forum with "good post!" or countless explanations of things people already know.

comment by Squark · 2014-04-26T20:27:34.223Z · LW(p) · GW(p)

Trying to define what an upvote or downvote "means" or "shouldn't mean" is futile and beside the point.

Why? What is "the point"? For me, the point is creating a community that is fun, useful and lives up to its ideals of rationality and humanist virtue (whatever the latter means for you, be it utilitarianism, effective altruism etc).

Replies from: None
comment by [deleted] · 2014-04-26T20:48:47.734Z · LW(p) · GW(p)

The point is for commenters (and the audience for that matter) not to have to wonder why they got downvoted/upvoted; in other words, for the meaning of that particular upvote/downvote to be made explicit by the upvoter/downvoter.

Replies from: Lumifer, Squark
comment by Lumifer · 2014-04-26T22:36:41.310Z · LW(p) · GW(p)

The point is for commenters (and the audience for that matter) not to have to wonder about why they got downvoted/upvoted

And why not? Some introspection does a body good...

Replies from: None
comment by [deleted] · 2014-04-26T23:45:20.600Z · LW(p) · GW(p)

...

It would do good to encourage more explaining of upvotes and downvotes. We're not at the point where there's "too much" of it. And, if there was "just the right" amount of it, then we wouldn't be having this discussion.

Replies from: Lumifer
comment by Lumifer · 2014-04-27T00:00:52.570Z · LW(p) · GW(p)

And, if there was "just the right" amount of it, then we wouldn't be having this discussion.

For a diverse population of people there is no such thing as "just the right amount". Even if you set it at some kind of a central measure (mean, weighted mean, median, etc.), the left tail would complain it's too little and the right tail would complain it's too much.

Speaking personally, most of my downvotes are because the post seemed to me either stupid or dickish. I am not sure LW will gain much if I start posting dick ASCII art as an explanation for downvotes... X-D

Replies from: None
comment by [deleted] · 2014-04-27T00:23:33.509Z · LW(p) · GW(p)

Well, if you're adamant about it not being systemic, then (if you or someone reading this would be so kind) help me understand my own case, of a few of my comments before this conversation being severely downvoted. I was surprised at the responses, and without any replies, I'm still in the dark. If you could show me the light, then I'd be grateful.

Replies from: Lumifer
comment by Lumifer · 2014-04-27T02:07:19.339Z · LW(p) · GW(p)

Please provide links, as it's hard to see comments at -5 and below. The only strongly downvoted comment of yours that I see itself says "hard downvote for stupendous arrogance", so I'm not sure why you are surprised...

Replies from: None
comment by [deleted] · 2014-04-27T03:39:35.945Z · LW(p) · GW(p)

In response to someone wholesale dismissing an entire area of scientific study without having had any experience in it, "stupendous arrogance" is both accurate and tame. I guess "stupendous" kind of sounds like "stupid", but that's probably not why people downvoted the comment.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2014-04-27T05:32:57.215Z · LW(p) · GW(p)

is both accurate and tame

I thought you were interested in why people downvoted you and not in justifying your comments...?

Replies from: None
comment by [deleted] · 2014-04-27T05:41:33.281Z · LW(p) · GW(p)

I'm interested; that's why I'm dissecting the post to try and find the reason that it was downvoted. My conclusion is that it was downvoted because the phrase you quoted sounds unnecessarily harsh out of context, and not because of anything regarding facts or offense.

comment by ChristianKl · 2014-05-05T13:55:16.617Z · LW(p) · GW(p)

Basically you are engaging in an ad hominem argument and not making a decent argument for your position.

Asking people on a public forum whether they have experience with illegal drugs is also a big no.

Replies from: None
comment by [deleted] · 2014-05-05T17:27:17.190Z · LW(p) · GW(p)

Psychonautics is entirely about the "hominem" and inner experience; it can't not be relevant. I'm not sure what you're getting at.

And, depending on where you live, I wouldn't worry about revealing anything, especially if you don't deal, especially if you can feign not currently using it. There are plenty of places on the internet where people talk about psychedelic drug usage openly, and they've been around for a while and not been shut down. To worry at all would be insanely paranoid.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-05T21:27:08.381Z · LW(p) · GW(p)

Psychonautics is entirely about the "hominem" and inner experience, it can't not be relevant. I'm not sure what you're getting at.

LW is a place where people know their fallacies and pattern match to them. You will get downvotes for things like that. That's simply the kind of place that LW happens to be.

As far as your argument goes, you haven't made clear why someone can't get knowledge about psychonautics by reading what other people who have the experiences write about psychonautics. On LW you do have a burden to make that argument in more depth if you want to get away with ad hominem.

And, depending on where you live, I wouldn't worry about revealing anything, especially if you don't deal, especially if you can feign not currently using it. There are plenty of places on the internet where people talk about psychedelic drug usage openly, and they've been around for a while and not been shut down. To worry at all would be insanely paranoid.

If you want a security clearance in the US then you need to answer questions about past drug use. If you say on that form that you haven't used LSD in the past but there's a record of you on the internet admitting to LSD usage, that might bring you into major trouble if someone finds out. The same goes for other jobs. Basic courtesy is to allow others the freedom to choose whether or not to reveal information like that about themselves, and therefore not to put others in a situation where they are obliged to reveal it.

Replies from: None
comment by [deleted] · 2014-05-06T04:06:23.609Z · LW(p) · GW(p)

LW is a place where people know their fallacies and pattern match to them.

God I hope not; that's like not having heard of the Disagreement Hierarchy. The "central point" was about inner experience, so pattern matching towards "DH6" is the more "lesswrong" thing to do than to pattern match towards "ad hominem". Pattern matching towards "ad hominem" is an example of the "standard rationality" thing that Eliezer spent the entire Sequences attempting to deconstruct and improve upon. If LW has degenerated back to that, then maybe we need another read-through of the Sequences.

If you want a security...

If you actually use your real name for everything you say online, then it's your own fault when you get in such a bind. Basic courtesy is to know when to use your real name and when not to, and to not let that shit happen.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-06T13:09:45.346Z · LW(p) · GW(p)

In reality, rationality is about accepting that the world is the way it is and not the way you want it to be. In this case it seems like you don't want to accept it the way it is. It is always useful to keep your audience in mind, and if you are making some far-off point about psychonautics then you have to be extra careful or accept that you will get downvoted.

If you actually use your real name for everything you say online, then it's your own fault when you get in such a bind. Basic courtesy is to know when to use your real name and when not to, and to not let that shit happen.

Stylometry is pretty good these days. At the 29C3 there was a talk that demonstrated a 72% successful author attribution rate for some underground online forums. Underground meaning forums where illegal goods were sold, so the participants are interested in being anonymous. The idea that you can reasonably protect your anonymity by using a nickname is naive.
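
To make the stylometry point concrete, here is a minimal sketch of the general technique (character n-gram features feeding a linear classifier), assuming scikit-learn and invented toy data; it is not the method from the 29C3 talk:

```python
# Toy stylometric author attribution: character n-gram frequencies
# fed into a linear classifier. Real studies use far larger corpora
# and richer feature sets (function words, syntax, etc.).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts whose authors are known.
train_posts = [
    "I'd say it is an extension of the rule about counterarguments.",
    "Downvotes and upvotes are not symmetric, emotionally speaking.",
    "Not if you aim to enforce a higher level of discussion here.",
    "They remain in the fabric of the forum, making it less fun to read.",
]
train_authors = ["A", "A", "B", "B"]

# Character n-grams pick up punctuation habits, spelling quirks, etc.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_posts, train_authors)

# Guess the most likely known author of an "anonymous" post.
print(model.predict(["I would say this is an extension of that old rule."]))
```

With enough text per author, even crude features like these can be surprisingly discriminative, which is why a nickname alone is weak protection against an adversary willing to compare writing samples.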

Replies from: asr
comment by asr · 2014-05-06T13:48:41.585Z · LW(p) · GW(p)

The idea that you can reasonably protect your anonymity by using a nickname is naive.

I think not so naive as all that. The effectiveness of a security measure depends on the threat. If your worry is "employers searching for my name or email address" then a pseudonym works fine. If your worry is "law enforcement checking whether a particular forum post was written by a particular suspect," then it's not so good. And if your worry is "they are wiretapping me or will search my computer", then the pseudonym is totally unhelpful.

I think in most LW contexts -- including drug discussions -- the former model is a better match. My impression is that security clearance investigations in the United States involve a lot of interviews with friends and family, but, at the present time, don't involve highly sophisticated computer analysis.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-06T14:18:41.563Z · LW(p) · GW(p)

I think in most LW contexts -- including drug discussions -- the former model is a better match. My impression is that security clearance investigations in the United States involve a lot of interviews with friends and family, but, at the present time, don't involve highly sophisticated computer analysis.

Given the way the NSA works, I would highly doubt that they don't check information in their databases and run highly sophisticated computer analysis when handing out a security clearance. The actual capabilities of those programs are going to be classified. The NSA doesn't want people to know about the capabilities they have.

In addition, the internet doesn't forget. NSA computer programs might not be good enough at present to catch it, but they might be in five years. The whole Snowden episode especially encouraged the NSA to invest a lot more effort into gathering data about possible leakers and into computer programs that analyse the behavior of people with a security clearance.

comment by Squark · 2014-04-27T07:00:28.239Z · LW(p) · GW(p)

Is that a terminal goal? Or is it an instrumental goal serving to achieve something else?

Replies from: None
comment by [deleted] · 2014-04-27T07:17:39.367Z · LW(p) · GW(p)

Both/neither? It's a reasonable norm and would also help alleviate some personal frustrations. (Sidenote: invoking "Terminal" anything is usually dangerous and unnecessary, cf. this.)

comment by Eugine_Nier · 2014-04-27T05:16:37.285Z · LW(p) · GW(p)

What's the current policy on bare downvoting, as in downvoting a comment/post without giving at least a short explanation for why one did so?

Well, Eliezer's policy tends towards "replying to downvote-worthy comments tends to start flame wars and is thus discouraged".

Replies from: None
comment by [deleted] · 2014-04-27T05:19:59.607Z · LW(p) · GW(p)

Right, but then we invented "Tapping out" so that wouldn't become an issue.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-04-27T05:23:06.572Z · LW(p) · GW(p)

"Tapping out" can be interpreted as conceding and is thus low status.

Replies from: None
comment by [deleted] · 2014-04-27T05:29:54.333Z · LW(p) · GW(p)

If you're that worried, link to the wikipage which defines away that connotation, like "I'm tapping out.".

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-04-27T05:36:01.194Z · LW(p) · GW(p)

Signaling doesn't work that way. I'd think someone who reads Game blogs would know that.

Replies from: None
comment by [deleted] · 2014-04-27T06:03:45.812Z · LW(p) · GW(p)

Call it something else then, or be more direct and paraphrase the wikipage, or take it into PMs, whatever you fancy. The point is that you shouldn't feel guilty replying to a comment just because it was downvoted.