Open Thread, April 15-30, 2013

post by diegocaleiro · 2013-04-15T19:57:51.597Z · LW · GW · Legacy · 467 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

comment by lsparrish · 2013-04-15T22:09:51.798Z · LW(p) · GW(p)

I know this comes up from time to time, but how soon until we split into more subreddits? Discussion is a bit of a firehose lately, and has changed drastically from its earlier role as a place to clean up your post and get it ready for Main. We get all kinds of meetup stuff, philosophical issues, and so forth, most of which isn't relevant to me. Not knocking the topics (they are valuable to the people they serve), but they aren't helpful for me.

Mostly I am interested in scientific/technological stuff, especially if it is fairly speculative and in need of advocacy. Cryonics, satellite-based computing, cryptocurrency, open source software. Assessing probability and/or optimal development paths with statistics and clean epistemology is great, but I'm not super enthused about probability theory or philosophy for its own sake.

Simply having more threads in the techno-transhumanist category could increase the level of fun for me. But there also needs to be more of a space for long-term discussions. Initial reactions often aren't as useful as considered reactions a few days later. When they get bumped off the list in only a few days, that makes it harder to come back with considered responses, and it makes for fewer considered counter-responses. Ultimately the discussion is shallower as a result.

Also, the recent comments bar on the right is less immediately useful because you have to click to the Recent Comments page and scroll back to see anything more than a few hours in the past.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-16T07:14:34.890Z · LW(p) · GW(p)

I guess instead of complaining publicly, it would be better to send a private message to a person who can do something about it, preferably with a specific suggestion, and a link to a discussion which proves that many people want it.

Having long-term threads separately seems to be a very popular idea... there were even some polls in the past to prove it.

comment by Kaj_Sotala · 2013-04-17T10:57:18.501Z · LW(p) · GW(p)

MIRI's strategy for 2013 involves more strongly focusing on math research, which I think is probably the right move, even though it leaves them with less use for me. (Math isn't my weakest suit, but not my strongest, either.)

comment by falenas108 · 2013-04-16T21:27:37.983Z · LW(p) · GW(p)

Do we know any evolutionary reason why hypnosis is a thing?

Replies from: MixedNuts, jimmy
comment by MixedNuts · 2013-04-17T10:45:52.907Z · LW(p) · GW(p)

My current understanding of how hypnosis works is:

  • The overwhelming majority of our actions happen automatically, unconsciously, in response to triggers. Those can be external stimuli, or internal stimuli at the end of a trigger-response chain started by an external stimulus. Stimulus-response mappings are learnt through reinforcement. Examples: walking somewhere without thinking about your route (and sometimes arriving and noticing you intended to go someplace else), unthinkingly drinking from a cup in front of you. (Finding and exploiting those triggers is incredibly useful if you have executive function issues.)

  • In the waking state, responses are sometimes vetted consciously. This causes awareness of intent to act. Example: those studies where researchers can predict when someone will press a button before the subject is aware of having decided to.

  • This "free won't" isn't very reliable. In particular, there's very little you can do about imagery ("Don't think of a purple elephant"). Examples: advertising, priming effects, conformity.

  • Conscious processes can't multitask much, so by focusing attention elsewhere, stimuli cause responses more reliably and less consciously. See any study on cognitive load.

  • Hypnosis works by putting you in a frame of mind where cooperation is easy; that's mostly accomplished by your expectation to be hypnotised. For self-hypnosis you're pretty cooperative already ("I am doing that, therefore it works and it's good."), otherwise rapport with the hypnotist and yes sets (consenting to hypnosis, agreeing to listen/sit/look at something, truisms) help. Inducing trance seems to be mostly a matter of directing attention elsewhere while preserving this frame of mind. Old school hypnotists liked external foci like swinging pocket watches, candle flames and spirals; mindfulness inductions work similarly; Erickson was fond of pleasant imagery; I'm partial to thinking about the process of hypnosis itself.

Modern writers tend to use "trance" to mean a highly suggestible state, whereas older ones just mean a state where you act on autopilot. Flow is the latter kind of trance but not the former, as the thing you're concentrating on does prompt you to take some actions ("play these notes") but not in any form that resembles suggestion. I'm less certain about this than about the rest of my model; the link between trance and suggestibility might be deeper.

So the evolutionary explanation for hypnosis would look something like this:

  • It's easier to build a reflex agent than a utility maximiser, so evolution did that.

  • However, conscious decision-making does better, especially if you're going to be all technological and social, so evolution added one on top of the preexisting connectionist idiot.

  • It is easily disrupted, because evolution is a complete hack and only builds things that are robust as long as you don't do anything unusual.

comment by jimmy · 2013-04-17T06:49:02.064Z · LW(p) · GW(p)

As far as I can tell, it's more of a spandrel than anything. As a general rule, anything you can do with "hypnosis", you can do without. Depending on what you're doing with it, it can be more of a feature or more of a bug inherent to the architecture.

I could probably give a better answer if you explained exactly what you mean by "hypnosis", since no one can agree on a definition.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-04-22T20:17:45.576Z · LW(p) · GW(p)

In Darwin's Dangerous Idea, Dennett makes a good case that the word "spandrel" doesn't really mean much.

comment by [deleted] · 2013-04-16T06:15:52.928Z · LW(p) · GW(p)

Recantation by Gregory Cochran

Replies from: None, knb
comment by [deleted] · 2013-04-16T22:33:37.820Z · LW(p) · GW(p)

Could you give an example of a true but unsayable thing?

Replies from: faul_sname
comment by faul_sname · 2013-04-17T15:38:01.644Z · LW(p) · GW(p)

HaydnB cannot truthfully make this statement.

comment by knb · 2013-04-16T06:44:42.650Z · LW(p) · GW(p)

Cochran's posts are often interesting, but he frequently comes across as a sour-tongued bitter crank.

Replies from: David_Gerard, None
comment by David_Gerard · 2013-04-16T07:19:40.267Z · LW(p) · GW(p)

Having read the linked post, I find that a non sequitur.

Replies from: knb
comment by knb · 2013-04-16T08:29:24.684Z · LW(p) · GW(p)

Tone is never irrelevant. Not for humans anyway.

Edit: or did you miss the part about him calling homosexuals human petri dishes?

Replies from: None
comment by [deleted] · 2013-04-16T13:45:58.948Z · LW(p) · GW(p)

or did you miss the part about him calling homosexuals human petri dishes?

The way you phrase that makes it sound much worse than the original; I think you misunderstood him. Here is the relevant part of the text you refer to:

One of my all-time favorites involved a New York City public health official talking about AIDS and homosexuality. He wasn't saying anything generally verboten – he wasn't pointing out that homosexual men are nature's Petri dishes. He said this: the health department had previously estimated the number of homosexual men with AIDS in the Big Apple by doing a survey of the AIDS rate among gay men and then multiplying by someone else's estimate of the prevalence of homosexuality. He announced that further work indicated that although their estimate of the frequency of AIDS among homosexual men in New York seemed correct, their new estimate of total cases was down by half. One of the (sharper) reporters asked, "So, does this mean that according to your new estimate, there are only half as many gay men in New York as you previously thought?" The hapless health official said, "Yes, that would follow."

He was giving this as an example of an offensive, perhaps even derogatory, phrasing of something true (the higher STD rates of homosexuals, etc.) that would be forbidden to say and that we would expect someone to get into trouble over. He then contrasted it with a plain statement, based on very hard-to-dispute inference, that was apparently still enough to get someone into trouble; enough trouble to pressure them into publicly proclaiming something rather absurd.

After a week or so, he had to give a press conference. He said: "I said A. there are only half as many cases as we thought, B. we had the percentage of gay men infected right, C. but I never said that there are only half as many gay men in New York as previously thought."

He had been forced to publicly renounce arithmetic.
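
To spell out the arithmetic he was renouncing (a minimal formalisation; the symbols are mine, not Cochran's):

```latex
% cases = infection rate among gay men \times number of gay men in NYC
\text{cases} = r \times N
% The rate r was reaffirmed while the case count was halved, so:
\frac{\text{cases}}{2} = r \times N' \quad\Longrightarrow\quad N' = \frac{N}{2}
```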

In general he does not shy away from using controversial examples or poking fun at social norms; he nearly always writes in a similar tone, so this is not nastiness aimed at homosexuals in particular, if that is what you fear. He is quite the jerk when criticizing any position he thinks is wrong. But I'll be honest: it generally makes him a better writer. See this piece for an example of his style.

Replies from: knb
comment by knb · 2013-04-16T19:54:51.474Z · LW(p) · GW(p)

He is quite the jerk when criticizing any position he thinks is wrong.

It's amazing to me that you don't understand that this was exactly my point.

In general he does not shy away from using controversial examples or poking fun at social norms; he nearly always writes in a similar tone, so this is not nastiness aimed at homosexuals in particular, if that is what you fear.

In fact, if you actually read my comment, I said his posts are often interesting but that he frequently comes across as bitter and sour-tongued. From this context, you should have been able to understand that I'm familiar with his writing style.

comment by [deleted] · 2013-04-16T13:36:40.454Z · LW(p) · GW(p)

He is rude; he certainly isn't a crank, however.

Replies from: knb
comment by knb · 2013-04-16T19:27:26.301Z · LW(p) · GW(p)

Crank:

  1. Informal a. A grouchy person.

If you don't think he is a grouchy person, then you may want to work on your reading comprehension.

comment by dspeyer · 2013-04-15T17:32:45.747Z · LW(p) · GW(p)

The Linear Interpolation Fallacy: that if a lot of something is very bad, a little of it must be a little bad.

Most common in politics, where people describe the unpleasantness of Somalia or North Korea when arguing for more or less government regulation, as if that had some kind of relevance. Silliest is when people try to argue over which of the two is worse. Establishing the silliness of this is easy: Somalia beats assimilation by the Borg, so government power is bad; North Korea beats the Infinite Layers of the Abyss, so government power is good. Surely no universal principle of government can be changed by which contrived example I pick.

And, with a little thought, it seems clear that there is some intermediate amount of government that supports the most eudaemonia. Figuring out what that amount is, and which side of it any given government lies on, are important and hard questions. But looking at the extremes doesn't tell us anything about them.

(Treating "government power" as a scalar can be another fallacy, but I'll leave that for another post.)
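
A toy model makes the point (the "eudaemonia" curve below is entirely invented for illustration; nothing about real governments is claimed):

```python
# Toy model: eudaemonia as a non-monotonic function of government power g in [0, 1].
def eudaemonia(g: float) -> float:
    return 1.0 - (g - 0.4) ** 2  # invented curve with an interior peak at g = 0.4

# Both extremes are bad...
print(eudaemonia(0.0), eudaemonia(1.0))  # ~0.84 and ~0.64
# ...but comparing the extremes tells you nothing about where the optimum is:
best = max(range(101), key=lambda i: eudaemonia(i / 100)) / 100
print(best)  # 0.4 - found by searching the interior, not by comparing endpoints
```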

Replies from: Viliam_Bur, None, army1987
comment by Viliam_Bur · 2013-04-16T07:18:21.346Z · LW(p) · GW(p)

it seems clear that there is some intermediate amount of government that supports the most eudaemonia

More nasty details: an amount of government which supports the most eudaemonia in the short term may not be the best in the long term. For example, it could create a situation where the government can expand easily and has natural incentives to expand. Also, the specific amount of government may depend significantly on the technological level of the society; inventions like the internet or home-made pandemic viruses can change it.

comment by [deleted] · 2013-04-15T20:12:17.508Z · LW(p) · GW(p)

I think the "non-scalar" point is a much more important take-away.

Generalizing: "Many concepts which people describe in linear terms are not actually linear, especially when those concepts involve any degree of complexity."

Replies from: dspeyer
comment by A1987dM (army1987) · 2013-04-20T08:52:59.445Z · LW(p) · GW(p)

The Linear Interpolation Fallacy: that if a lot of something is very bad, a little of it must be a little bad.

I've seen that applied to all kinds of things, ranging from vitamins to sentences starting with "However", to name the first two that spring to mind.

comment by diegocaleiro · 2013-04-15T23:31:33.433Z · LW(p) · GW(p)

What is the smartest group/cluster/sect/activity/clade/clan that is mostly composed of women? Related to the other thread on how to get more women into rationality besides HPMOR.

Ashkenazi dancing groups? Veterinary college students? Linguistics students? Lily Allen admirers?

No seriously, name guesses of really smart groups, identity labels etc... that you are nearly certain have more women than men.

Replies from: knb, Mitchell_Porter, NancyLebovitz, drethelin, jooyous
comment by knb · 2013-04-16T08:45:10.344Z · LW(p) · GW(p)

Academic psychologists are mostly female. That would seem to be a pretty good target audience for LW. There are a few other academic areas that are mostly female now, but keep in mind that many academic fields are still mostly male even though most new undergraduates in the area are female.

There are lists online of academic specialties by average GRE scores. Averaging the verbal and quantitative scores, and then determining which majority-female discipline has the highest average, would probably get you close to your answer.
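
A sketch of that procedure in Python (the rows below are placeholders, not real GRE or enrollment statistics):

```python
# Hypothetical rows: (field, fraction female, mean GRE verbal, mean GRE quantitative).
fields = [
    ("psychology",     0.70, 154, 150),
    ("veterinary med", 0.78, 153, 151),
    ("gender studies", 0.85, 156, 146),
    ("physics",        0.20, 156, 163),
]

# Keep the majority-female fields, then rank by the mean of verbal and quant.
majority_female = [f for f in fields if f[1] > 0.5]
best = max(majority_female, key=lambda f: (f[2] + f[3]) / 2)
print(best[0])  # the majority-female field with the highest average score
```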

Replies from: army1987
comment by A1987dM (army1987) · 2013-04-16T19:26:07.920Z · LW(p) · GW(p)

but keep in mind that many academic fields are still mostly male even though most new undergraduates in the area are female

Well, keep in mind that 75% of LWers are under 31 anyway, so it's the sex ratio among the younger cohorts you mainly care about, not the sex ratio overall.

Replies from: knb
comment by knb · 2013-04-17T01:25:05.458Z · LW(p) · GW(p)

But it isn't the undergrads you're looking for if you want the "smartest mostly female group." Undergrads are less bright on average than advanced degree holders due to various selection effects.

Replies from: diegocaleiro, army1987
comment by diegocaleiro · 2013-04-18T18:33:19.107Z · LW(p) · GW(p)

I think we are aiming for "females who can become rationalists", which means that expected smarts are more valuable than actual smarts, particularly if the actual smarts were acquired over decades (implying the person is older, and therefore less flexible).

comment by A1987dM (army1987) · 2013-04-20T08:44:45.118Z · LW(p) · GW(p)

IME, among post-docs there might not be as many females as among freshers, but there definitely are more than among tenured professors.

comment by Mitchell_Porter · 2013-04-25T07:00:35.308Z · LW(p) · GW(p)

Professional associations for women in the smartest professions.

comment by NancyLebovitz · 2013-04-16T02:46:21.483Z · LW(p) · GW(p)

One of my friends has nominated the student body at Bryn Mawr.

Replies from: Dias, NancyLebovitz
comment by Dias · 2013-04-17T21:35:11.386Z · LW(p) · GW(p)

Bryn Mawr has gone downhill a lot since the top female students got the chance to go to Harvard, Yale, etc. instead of here. Bryn Mawr does have a cognitive bias course (for undergraduates) but the quality of the students is not that high.

Of course, Bryn Mawr does excellently at the only-women part, and might do well overall once we take into account that constraint.

comment by NancyLebovitz · 2013-04-17T00:05:23.230Z · LW(p) · GW(p)

And another friend has recommended DC WebWomen.

comment by drethelin · 2013-04-16T05:34:23.393Z · LW(p) · GW(p)

Gender studies graduate programs.

Replies from: Jonathan_Graehl, ThrustVectoring
comment by Jonathan_Graehl · 2013-04-16T06:05:35.742Z · LW(p) · GW(p)

Aren't plenty of other arts and humanities fields female-majority now, when you look at newly minted PhDs?

Replies from: drethelin
comment by drethelin · 2013-04-16T06:35:38.531Z · LW(p) · GW(p)

dunno! It was just a guess

comment by ThrustVectoring · 2013-04-16T14:12:38.308Z · LW(p) · GW(p)

I'm not entirely sure that targeted recruitment of feminists is a good idea. It seems to me like a good way to get LW hijacked into a feminist movement.

Replies from: Randy_M, bogus
comment by Randy_M · 2013-04-16T15:02:50.572Z · LW(p) · GW(p)

LessWrong+?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-17T06:22:49.210Z · LW(p) · GW(p)

LessIncorrect

comment by bogus · 2013-04-16T15:21:48.593Z · LW(p) · GW(p)

I agree, and would expand this to any politically motivated movement (including libertarians, moldbuggians etc.). After all, this is the main rationale for our norm of not discussing politics on LW itself.

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-04-16T17:53:53.890Z · LW(p) · GW(p)

Political movements in general care more about where you are, and about your usefulness as a soldier for their movement, than about how you got there. That's something we are actively trying to avoid here.

comment by jooyous · 2013-04-16T00:06:56.671Z · LW(p) · GW(p)

I'm going to take a blind guess and say nurses. Someone tell me how I did!

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-04-16T06:04:29.188Z · LW(p) · GW(p)

nurses are smart, but not impressively so.

comment by Dahlen · 2013-04-16T07:57:58.442Z · LW(p) · GW(p)

How much difference can nootropics make to one's studying performance / habits? The problems are with motivation (the impulse to learn useful stuff winning out over the impulse to waste your time) and concentration (not losing interest / closing the book as soon as the first equation appears -- or, to be more clear, as soon as I anticipate a difficult task lying ahead). There are no other factors (to my knowledge) that have a negative impact on my studying habits.

Or, to put it differently: if a defective motivational system is the only thing standing between me and success, can I turn into an uber-nerd who studies 10 h/day by popping the right pills?

EDIT: Never messed with my neurochemistry before. Not depressed, not hyperactive... not ruling out some ADD though. My sleep "schedule" is messed up beyond belief; in truth, I don't think I've even tried to sleep like a normal person since childhood. Externally imposed schedules always result in chronic sleep deprivation; I habitually push myself to stay awake until a later hour than the one I went to sleep at the previous night (/morning/afternoon) -- all of which means I don't trust myself to further mess with my sleeping habits. Of what I've read so far, selegiline seems closest to the effects I'm looking for, but then again, all I know about nootropics I've learned in the past 6 hours. I can't guarantee I can find most substances in my country.

Replies from: Izeinwinter, Qiaochu_Yuan, RomeoStevens, ThrustVectoring
comment by Izeinwinter · 2013-04-22T18:45:34.079Z · LW(p) · GW(p)

... Bad or insufficient sleep can cause catastrophic levels of akrasia. Fix that first; then, if you still have trouble, consider other options. Results should be apparent in days, so it is not a very hard experiment to carry out - set alarms on your phone or something for when to go to bed, and make your bedroom actually dark (this causes deeper sleep). You should get more done overall because you will waste less of your waking hours.

Replies from: Dahlen
comment by Dahlen · 2013-04-22T19:59:16.181Z · LW(p) · GW(p)

You're right about that, but the problem with lack of motivation persists even during times when I can set my own schedule and get as much sleep as I need. (Well, to put it precisely: not sleeping enough guarantees that I won't get anything done of my own choice, but sleeping enough doesn't guarantee that I will, not even close.)

comment by Qiaochu_Yuan · 2013-04-17T07:05:56.275Z · LW(p) · GW(p)

I agree with ThrustVectoring that you'll probably get more mileage out of implementing something like a GTD system (or at least that doing this will be cheaper and seems like it would complement any additional mileage you get out of nootropics). There are lots of easy behavioral / motivational hacks you can use before you start messing with your neurochemistry, e.g. rewarding your inner pigeon.

I've had some success recently with Beeminding my Pomodoros. It forces me to maintain a minimal level of work per unit time, which I'm about to increase (e.g. recently I was at the MIRI workshop, and even though ordinarily I would have been able to justify not doing anything else during that week, I still spent 25 minutes every day working on problem sets for grad school classes).

Replies from: Dahlen
comment by Dahlen · 2013-04-22T14:45:52.113Z · LW(p) · GW(p)

Tried. Failed. Everything that requires me, in my current state, to police myself, fails miserably. It's like my guardian demon keeps whispering in my ear, "hey... who's to stop me from breaking the same rules that I have set for myself?" -- cue yet another day wasted.

Eat candy every time I clear an item off my to-do list? Eat candy even when I don't!

Pomodoros? Y-yeah, let's stop this timer now, shall we -- I've just got this sudden imperious urge to play a certain videogame, 10 minutes into my Pomodoro session...

Schedule says "do 7 physics problems"? Strike that, write underneath "browse 4chan for 7 hours"!

... I don't know, I'm just hopeless. Not just lazy, but... meta-lazy too? Sometimes I worry that I was born with exactly the wrong kind of brain for succeeding (in my weird definition of the word); like utter lack of conscientiousness is embedded inextricably into the very tissues of my brain. That's why nootropics are kind of a last resort for me.

Replies from: gothgirl420666, TheOtherDave, ciphergoth, Qiaochu_Yuan, OrphanWilde
comment by gothgirl420666 · 2013-04-24T21:30:00.045Z · LW(p) · GW(p)

... I don't know, I'm just hopeless. Not just lazy, but... meta-lazy too? Sometimes I worry that I was born with exactly the wrong kind of brain for succeeding (in my weird definition of the word); like utter lack of conscientiousness is embedded inextricably into the very tissues of my brain. That's why nootropics are kind of a last resort for me.

I could have easily written this exact same post two years ago. I used to be incredibly akratic. For example, at one point in high school I concluded that I was simply incapable of doing any schoolwork at home. I started a sort of anti-system where I would do all the homework and studying I could during my free period the day it was due, and simply not do the rest. This was my "solution" to procrastination.

Starting in January, however, I made a very conscious effort to combat akrasia in my life. I made slow, frustrating progress until about a week and a half ago, when something "clicked", and now I spend probably 80% of my free time working on personal projects (and enjoying it). I know, I know, this could very easily be a temporary peak, but I have very high hopes for continuing to improve.

So, keep your head up, I guess.

I think that on LessWrong, quick simple "tricks" like Pomodoro / feeding yourself candy / working in the same room as someone else / disabling Chrome are way, way overemphasized. (The only trick I use is writing down my impulses, e.g. "check reddit", before indulging in them.) What actually helped/helps me is introspection. Try to figure out what it is about working that's so unpleasant. Why does your brain resist it so much? Luke's algorithm for beating procrastination is something along the lines of what I'm talking about. I think a lot of people have a "use willpower in order to fight through the pain" mentality, but what you really want to do is eliminate the pain. If work is torture for you, then I don't really think you can ever be productive unless you change that fact.

From books that I've read and my own experience, it seems to me that one of the easiest traps to fall into (and one of the most fatal) is tying your productivity to your sense of self-worth: especially if you use your self-worth to motivate yourself ("If I can complete this assignment, I'll be like who my dad wanted me to be!"), especially if you use your self-worth to negatively motivate yourself ("If I don't pass this test, I'll basically be a failure in life"), especially if you actively foster this attitude in order to push yourself, and especially if you suffer or have recently suffered from depression or low self-esteem.

I can say more, but I don't want to waste my time typing it all out if nobody's going to read it, so just reply to this post if you want me to share more of my experiences. (That goes for anyone reading this, not just the OP).

Replies from: Dahlen
comment by Dahlen · 2013-04-24T21:48:19.165Z · LW(p) · GW(p)

Please do go on; I'd be very much interested in what you have to say.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-04-27T01:25:11.870Z · LW(p) · GW(p)

Okay.

To be honest, it's really hard to say exactly what led to my change in willpower/productivity. Now that I actually try to write down concrete things I do now that I didn't do two months ago, I find it hard, and my probability that my recent success is a fluke has gone up a little.

I feel like what happened is that after reading a few self-help books and thinking a lot about the problem, I ended up completely changing the way I think about working in a difficult-to-describe way. It's kind of like how when I first found LessWrong, read through all the sequences, and did some musings on my own, I completely changed the way I form beliefs. Now I say to myself stuff like "How would the world look differently if x were true?" and "Of all the people who believe x will happen to them, how many are correct?", even without consciously thinking about it. Perhaps more importantly, I also stopped thinking certain thoughts, like "all the evidence might point to x, but it's morally right to believe y, so I believe y", etc.

Similarly, now, I now have a bunch of mental habits related to getting myself to work harder and snap out of pessimistic mindstates, but since I wasn't handed them all in one nicely arranged body of information like I was with LessWrong, and had to instead draw from this source and that source and make my own inferences, I find it really hard to think in concrete terms about my new mental habits. Writing down these habits and making them explicit is one of my goals, and if I end up doing that, I'll probably post it somewhere here. But until then, what I can do is point you in the direction of what I read, and outline a few of what I think are the biggest things that helped me.

The material I read was

  • various LessWrong writings
  • PJ Eby's Thinking Things Done
  • Succeed: How We Can Reach Our Goals by Heidi Halvorson
  • Switch: How to Change When Change Is Hard by Chip and Dan Heath
  • Feeling Good: The New Mood Therapy by David D. Burns
  • The Procrastination Equation by Piers Steel
  • Getting Things Done by David Allen

Out of all of these, I most recommend Succeed and Switch. PJ Eby is a weird example because he is One Of Us, but he has no credentials, the book is actually unfinished, and he now admits on his website that writing it was one of the worst periods in his life and he was procrastinating every day. So it makes sense to be very skeptical. However, I actually really enjoyed Thinking Things Done and I think that it's probably the best book out of all of these to get you into the "mind hacking" mindset that I attributed my success to, even if its contents aren't literally true. So you can make your own decision on that. Feeling Good isn't a productivity book at all, but I found it really helpful in dealing with akrasia for reasons that I'll sort of explain later. I wouldn't bother to read the Procrastination Equation because there's a summary by lukeprog on this site that basically says everything the book says. And Getting Things Done just describes an organizational system that seems tailored for very busy white collar professionals, so if that doesn't describe you I don't think it's worth it.

Obviously if your akrasia extends to reading these books then this isn't very helpful, but perhaps you could make it your goal to read just one of them (I recommend Succeed) over a period of two months or so. I think this would go a long way.

And then here are the things that most helped me, and can actually be written down at this time. I have the impression that there isn't a singular "key to success" - instead, success requires a whole bunch of attributes to all be in place, and most people have many but not all. So the insights that you need might be very different than the ones I needed, but perhaps not.

1: Not tying my self-worth to my success

The thesis of PJ Eby's Thinking Things Done is that the main reason people are unsuccessful is that they use negative motivation ("if I don't do x, some negative y will happen") as opposed to positive motivation ("if I do x, some positive y will happen"). He has the following evo-psych explanation for this: in the ancestral environment, personal failure meant that you could possibly be kicked out of your tribe, which would be fatal, and animals have a freezing response to imminent death, so if you are fearing failure you will freeze up.

In Succeed, Heidi Halvorson portrays positive motivation and negative motivation as each having pros and cons, but has her own dichotomy of unhealthy motivation and healthy motivation: "be good" motivation, which is tied to identity and status and focuses on proving oneself and on high levels of performance, and "get better" motivation, which is what it sounds like. According to her and several empirical studies, "get better" is better than "be good" in almost every way.

In Feeling Good, David Burns describes a tendency of behavior he calls "do-nothingism" where depressed people will lie in bed all day, then feel terrible for doing so, leading them to keep lying in bed, leading them to feel even worse, etc. etc.

It seems pretty intuitive for a depressed, lazy person to motivate themselves by saying "Okay, self, gotta stop being lazy. Do you want to be a worthless, lazy failure in life? No you don't. So get moving!" But synthesizing these three pieces of information tells us that this is basically the worst thing you can possibly do. I definitely fell into this trap, and climbing out of it was probably one of the biggest things that helped me.

2: Being realistic

I feel like something a lot of people tend to do is tell themselves "From this day on, I'll be perfect!" and then try to spend six hours a day working on personal projects, along with doing 100 push-ups and meditating. This is obviously stupid, but for some reason, at least for me, it was a really hard trap to get out of.

For example, I've always been a person who is really easily inspired i.e. if I read a good book, I'll want to write a book, if I listen to a good rap album, I'll want to become a rapper. Due to this tendency, I've done a fair bit of exploration in visual art, music, and video game programming. When I initially attempted my akrasia intervention, I tried to get myself to work on all three of these areas and achieve meaningful results in all of them. I held onto the naive belief that this was possible for far too long, and eventually had a mini-crisis of faith where I decided that I would cut my losses and from then on exclusively work on video game programming. Since then, things have been going much better.

This also goes with the "get better" mindset from the last point. If you are the worst procrastinator you know, your initial goal should be to become a merely below-average procrastinator, then an average one, and so on until you cure akrasia.

3: Realistic optimism

All the studies show that optimists are more successful in almost every domain. So how is that compatible with my "being realistic" point? The key is that the best, most healthy kind of optimism is the belief that you can eventually succeed in your goals (and will if you are persistent), but that it will take a lot of effort, with setbacks along the way. This is usually a valid belief, and it combines the motivation of optimism with the cautiousness of pessimism. (This is straight from Succeed, by the way.)

4: Elephant / Rider analogy

I'm not going to go into detail about this because this post is getting long as fuck, but if this idea is unfamiliar to you, search for it on Google and LessWrong; it's been written about extensively and is a very, very useful (and liberating) metaphor for how your brain works.

5: Willpower is like a muscle

Willpower is like a muscle, and if you give it regular workouts it gets stronger. People who quit smoking often also start exercising or stop drinking; depressed people who are given a pet to care for often become much happier, because the responsibility encourages them to enact changes in their own lives; etc.

This implies that once you start changing a little, it will be easier to change more and more. But you can also artificially jump start this process by exercising your willpower. Probably the best willpower exercises are physical exercise and meditation (and they both of course have numerous other benefits), but if you lack the energy/time/desire to do either of those, you could always do something very simple and gradually build. If you have a bad habit like biting your nails, that could be a good starting point.

So yeah, this post is long as fuck, didn't really mean to write that much. Hope it helped, though. Maybe I'll revise this and turn it into a discussion post.

comment by TheOtherDave · 2013-04-22T16:18:47.059Z · LW(p) · GW(p)

Some people in a similar position recruit other people to police them when their ability to police themselves is exhausted/inadequate. Of course, this requires some kind of policing mechanism... e.g., a coach who can unilaterally withhold rewards, invoke punishments, or apply costs in case of noncompliance.

comment by Paul Crowley (ciphergoth) · 2013-04-22T20:21:05.423Z · LW(p) · GW(p)

Have you tried setting very small and easy goals and seeing if you can meet those?

Replies from: Dahlen
comment by Dahlen · 2013-04-22T22:22:43.765Z · LW(p) · GW(p)

I have made many incremental steps towards modifying some behaviours in a desired direction, yes, but they don't tend to be consciously directed. When they are, I abandon them soon; no habit formation results from these attempts. I am making progress, but it seems to be largely outside of my control.

comment by Qiaochu_Yuan · 2013-04-22T17:57:48.927Z · LW(p) · GW(p)

Have you tried Beeminder? That's less self-policing and more Beeminder policing you, as long as you haven't sunk so low as to lie to Beeminder. Alternatively, there are probably things you can do to cultivate self-control in general, although I'm not sure what those would be (I've been practicing denying myself various things for a while now).

Replies from: Dahlen
comment by Dahlen · 2013-04-22T18:57:32.114Z · LW(p) · GW(p)

No way; it's the stupidest thing I could do with my already very, very limited financial resources. That sort of way of motivating yourself is really a luxury, at least when viewed from my position. Lower-middle-class folks in relatively poor countries can't afford to gamble their meagre savings on fickle motivation; any benefit I could derive from it is easily outweighed by the very good chance of digging myself into a financial hole... so I can't take that risk.

Replies from: Qiaochu_Yuan, gwern
comment by Qiaochu_Yuan · 2013-04-22T19:56:14.007Z · LW(p) · GW(p)

I can think of much stupider things. Doesn't the fact that you have limited finances make this an even better tool to use (in that you'll be more motivated not to lose money)? The smallest pledge is $5 and if you stay on track (it helps to set small goals at first) you never have to pay anything. I think you're miscalibrated about how risky this is.

And how were you planning on obtaining nootropics if your finances are so limited?

Replies from: Dahlen
comment by Dahlen · 2013-04-22T22:12:37.386Z · LW(p) · GW(p)

Doesn't the fact that you have limited finances make this an even better tool to use (in that you'll be more motivated not to lose money)?

... No. It doesn't work like that at all. That's the definition of digging myself into a hole. Will I be struggling to get out of it all the more so? Yes, I will, but at a cost greater than what I was initially setting out to accomplish. I'd rather be unmotivated than afraid of going broke.

I think you're miscalibrated about how risky this is.

Possibly. The thing is, around here even $5 is... well, not much by any measure, but it doesn't feel negligible, you know what I'm saying? Someone of median income here couldn't really say it's no big deal if they came to realize the equivalent of $5 was missing from their pockets. It probably doesn't feel like that to an American, so I understand why you may think I'm mistaken.

And how were you planning on obtaining nootropics if your finances are so limited?

I can afford to spend a few bucks on a physical product with almost guaranteed benefits. I can't afford to bet money on me doing things I have a tendency to do very rarely. In one case I can expect to get definite value from the money I spend, in the other I'm basically buying myself some worries. (I should, perhaps, add that the things I want to motivate myself to do don't have a chance of earning me income any time soon.)

I can think of much stupider things.

Of course; it wasn't meant to be understood literally.

--

The bottom line is, they're not getting my money. I'm really confident that this is a good decision: I have good reasons to be suspicious of any attempt to get me to pay for something, and there are plenty of things out there that are obviously useful enough that I don't need to be persuaded into buying them. So... I appreciate that you mean to help, it's more than one can ask from strangers, but I strongly prefer alternatives that are either free, guaranteed, or ideally both.

comment by gwern · 2013-04-22T19:58:24.174Z · LW(p) · GW(p)

You could try using Beeminder without giving them money.

comment by OrphanWilde · 2013-04-22T15:25:39.549Z · LW(p) · GW(p)

A habit I'm working on developing is to ask a mental model of a Manager what I -should- be doing right now. As long as I don't co-opt the Manager, and as long as there's a clearly preferable outcome, it seems to work pretty well.

Even when there isn't a clearly preferable outcome, the mental conversation helps me sort out the issue. (And having undertaken this, I've discovered I used to have mental conversations with myself all the time, and at some point lost and forgot the habit.)

Replies from: DaFranker
comment by DaFranker · 2013-04-22T16:18:11.313Z · LW(p) · GW(p)

I've tried similar approaches. From that opening line and with sane priors, you can probably get a pretty good idea of what the results were.

For me, and I suspect for many others for whom all self-help and motivational techniques and hacks just "inexplicably" fail and who "must be doing them wrong", the problem lies almost entirely within one single, simple assumption: one that seems to come naturally to the authors, but which is for me a massive cognitive workload that continuously taxes my mental energy.

And the assumption I refer to is precisely here:

A habit I'm working on developing is to ask a mental model of a Manager what I -should- be doing right now.

The question I shall ask, to illustrate my point, is: if you were programming a computer to do this (e.g. open a chat window with someone posing as a Manager for the appropriate discussion), how would you go about it?

More importantly, how does the program know when to open the window?

Suppose the program has access to your brain and can read what you're thinking, and also has access to a clock.

Well, here are the three most obvious, simple answers, in order of code complexity (a rough code sketch follows the list):

  1. Keep the chat window open all the time. This is obviously costly attention-wise (but not for the program), and the window is always in the way, and chances are that after a while you'll stop noticing that window and never click on it anymore, and it will lose all usefulness. It then becomes a flat tax on your mind that is rendered useless.
  2. Open the chat window at specific intervals. This brings another question: how often? If it's too often, it gets annoying and opens too many times when not needed, and eventually that'll cause the same problems as solution 1. If it's not often enough, then you won't get much benefit from it when you would need it. And even with a well-chosen interval, you'll still sometimes open it when it's not needed, or fail to open it when it is needed (on a day when it's needed more often, or in the middle of an interval). We can do better.
  3. Look for the kind of situations in which the Manager will help you, by reading what you're thinking about, and then whenever certain conditions are met (procrastinating, not doing any work, spending too much time reading wikipedia articles, etc.), bring up the chat window. However, this is a large endeavor, because the program has to be constantly running and reading every thought that passes by, and then using (read: computing, running) heuristics to tell whether the conditions are met (read: run a complex function with the current thoughts as arguments/parameters, for every single given thought).
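
A rough sketch of those three strategies (pseudo-Python; `read_current_thought` and the procrastination heuristic are hypothetical stand-ins for "the program has access to your brain"):

```python
import time

def read_current_thought() -> str:
    return ""  # hypothetical stand-in for reading the current thought

def open_manager_chat() -> None:
    print("Manager: what *should* you be doing right now?")

def looks_like_procrastination(thought: str) -> bool:
    # The expensive part of option 3: a heuristic that has to run against
    # every single thought, all the time.
    return "wikipedia" in thought or "forum" in thought

# 1. Always-on window: no detection logic at all, but you habituate to it
#    and eventually stop noticing it.
def strategy_always_on() -> None:
    open_manager_chat()  # opened once, then tuned out forever

# 2. Fixed interval: cheap, but it fires when not needed and misses
#    everything that happens between ticks.
def strategy_interval(minutes: int) -> None:
    while True:
        time.sleep(minutes * 60)
        open_manager_chat()

# 3. Condition-triggered: fires exactly when needed, but only by paying
#    the cost of inspecting every thought as it occurs.
def strategy_triggered() -> None:
    while True:
        if looks_like_procrastination(read_current_thought()):
            open_manager_chat()
```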

See, while I was writing this, I had forgotten about a specific work-related thing I was supposed to do when a certain condition was met. It's only when I wrote point 3 above that my brain actually connected this to "checking for events", which led to "I have events to check for!", which led to "Oh, right, that person got back, I should go ask them X".

The key point being that the very thought of even checking for conditions upon which to act is something that does not occur naturally or on its own for me - it has to come about by being linked to from another thought and brought to my conscious attention. Any technique that relies on consciously doing X inevitably stumbles on this key factor for me.

Running computations on every single thought all the time is extremely tiring and mentally exhausting. It's much more daunting than any task I would usually need "motivation" for. It means I stop after every few thoughts and think of the thing I have to remember to do. And then remember to think that I have to think about this again in a few more thoughts. And then try to resume whatever other thoughts I had. It's pretty much impossible to focus and concentrate on anything while doing this.

Which means that whenever the set of conditions for talking to the Manager is met, I will not automatically open the chat window. It just won't detect the conditions. The conditions won't, on their own, open the chat window - the conditions themselves (I'm tabsploding on Wikipedia) were not designed such that they always open the chat window with the Manager each time they happen.

So the tabsploding process happens, without ever calling on the remote parts of my brain that have little bits of code to open chat windows when tabsplosions happen, and so those remote parts of my brain keep on sleeping, and so chat windows do not open, and so tabsplosions go on merrily uninterrupted for hours until I read an article about business management, and the word management triggers me to remember the Manager process, and then I suddenly realize that I've been procrastinating all this time and need to get back to work (Note: I get back to work without even needing said Manager chat window, by this point, so the problem is clearly not "motivation" in this case).

And all that is the hidden assumption, the obvious thing that no one mentions in "making a habit of doing X" or "using GTD" or "using pomodoro". It's the single most brain-computationally-intensive process I can think of that people have ever actually seriously implied I should use. My subconscious, unfortunately, doesn't do it for me. It seems like most other people have it easier. Well, good for them. I'm still stuck here unable to realize that I need to do the dishes, and so I keep on reading forums, and my forum-reading thoughts don't have any bits dedicated to remembering whether or not dishes need to be done, so the forum-reading begets more forum-reading and tabsploding, and my mind never brings up the issue of having something to do.

And yes, this applies to meta concerns. So training myself to be more mindful and conscientious of these things fails because I fail to think of applying techniques to make myself more mindful and conscientious. Everything I've tried has failed to produce the amazing results others report.

I have no idea of how common this problem is, or whether nootropics might be a solution.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-22T17:13:27.433Z · LW(p) · GW(p)

Am I correct in ascertaining that your issue is less making the right decisions, and more trying to remember to consciously make decisions at all?

Replies from: DaFranker
comment by DaFranker · 2013-04-22T17:27:26.968Z · LW(p) · GW(p)

In some sense, yes.

However, sometimes it gets much more complex. It can very well happen that I insert a trigger to "must go do dishes once X is done", but then I think "Hmm, maybe I should go do dishes" at some point in the future when I'm in-between activities, and X happens to be done, but (and this is the gut-kicking bastard):

Thinking that I should do the dishes is not properly linked to checking whether X is done, and thus I don't see the process that tells me that X is done so I should do the dishes!

And therefore what happens afterwards, instead of my realizing that X is done and getting up to do the dishes, is me thinking "yeah I should, but meh, this is more interesting". And X has never crossed my mind during this entire internal exchange. And now I'm back to tabsploding / foruming / gaming. And then three hours later I realize that all of this happened, when I finally think of X. Oops.

So yes. "Trying to remember" is an active-only process for me. Something must trigger it. The thoughts and triggers do not happen easily and automatically at the right and proper times. Once the whole process is there and [insert favorite motivational hack] is actually in my stream-of-consciousness, then this whole "motivation" thing becomes completely different and much easier to solve.

Unfortunately, I do not yet have access to technology of sufficient sophistication to externalize and fully automate this process. I've dreamed of it for years though.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-22T18:08:51.174Z · LW(p) · GW(p)

This may be a stupid question, but I have to ask:

Have you tried designing solutions for this problem? Pomodoro and the like are designed to combat akrasia; they're designed to supplement or retrain willpower. They're solutions for the wrong problem; your willpower isn't entering into it. Hypothesis: Pomodoro kind-of sort-of worked for you for a short period of time before inexplicably failing. You might not have even consciously noticed it going off.

Replies from: DaFranker
comment by DaFranker · 2013-04-22T18:19:44.128Z · LW(p) · GW(p)

If I'm reading you correctly, that hypothesis is entirely correct. Pomodoro is also not the only thing where this has happened. In most cases, I don't consciously realize what happens until later, usually days or weeks after the fact.

I've tried coming up with some solutions to the problem, yes, but so far there are only three avenues I've tried that had promising results:

  • Use mental imagination techniques to train habits: imagine arriving in a situation or getting feeling X, and anchor that situation or feeling to action Y. This works exceptionally well and easily for me, but... yep. Doing the training is itself something that suffers from this problem. I would need to use it to train using it. Which I can't, 'cause I'm not good enough at it (I tried). Some bootstrapping would be required for this to be a reliable method, but it's also in itself a rather expensive and time-consuming exercise (not the same order of magnitude as constant mindfulness, though), so I'd prefer better alternatives.
  • Spam post-its or other forms of fixed visual / auditory reminders in the appropriate contexts, places and times. Problem is, this becomes like the permanent or fixed-timed chat windows in the programmed Manager example - my brain learns to phase them out or ignore them, something which is made exponentially worse when trying to scale things up to more things.
  • Externalize and automate using machines and devices. Setting programmatic reminders on my phone using tasker is the best-working variant I've found so far, but the app is difficult to handle and crashes often - and every single time it crashes, I lose everything (all presets, all settings, all events, everything - as if I had reinstalled the app completely). I gave up on that after about the fourth time I spent hours configuring it and then lost everything from a single unrelated crash.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-22T18:48:27.275Z · LW(p) · GW(p)

I actually suffer from exactly the same issue. (I opted to try to run the Manager app full-time, although I'm not having a lot of luck training myself to actually do it. I figure any wasted brain cycles probably weren't being used anyway, given that I couldn't remember to do the things that required them.)

Thus far the only real "hack" I've worked out is to constantly change reminder mechanisms. I'm actually fine with highly disruptive alerts - my favorite alarm is also the most annoying - but the people around me tend to hate them.

Hacks aside, routine has been the only thing I've found that helps, and helps long-term. And given my work schedule, which can vary from "trying to find something to do" to "working eighteen-hour days for two weeks straight", with just about everything in between, routine has been very hard to establish.

However, I have considerably better luck limiting my routine; waking up at 6 AM every day, and dedicating this time strictly to "Stuff that needs doing", has worked for me in the past. (Well, up until a marathon work period.)

comment by RomeoStevens · 2013-04-16T23:36:38.068Z · LW(p) · GW(p)

Nicotine has been a significant help with motivation. I only vape e-liquid with nicotine when I am studying. This seems to have resulted in a large reduction in ugh fields.

comment by ThrustVectoring · 2013-04-16T14:14:51.737Z · LW(p) · GW(p)

It depends way too much on what you're taking to give a blanket answer. Adderall and other ADD medications have a proven track record. Modafinil is likely also helpful, especially with concentration and with having more time in general to get things done.

Honestly, if you're anything like me, you'd get a lot more mileage out of implementing an organization and time management system.

comment by mstevens · 2013-04-16T10:39:59.217Z · LW(p) · GW(p)

I've been reading Atlas Shrugged and seem to have caught a case of Randianism. Can anyone recommend treatment?

Replies from: moridinamael, Jayson_Virissimo, Vaniver, OrphanWilde, TimS, CarlShulman, FiftyTwo, VCavallo, RomeoStevens, Douglas_Knight, None, Yuyuko
comment by moridinamael · 2013-04-16T16:20:58.660Z · LW(p) · GW(p)

My own deconversion was prompted by realizing that Rand sucked at psychology. Most of her ideas about how humans should think and behave fail repeatedly and embarrassingly as you try to apply them to your life and the lives of those around you. In this way, the disease gradually cures itself, and you eventually feel like a fool.

It might also help to find a more powerful thing to call yourself, such as Empiricist. Seize onto the impulse that it is not virtuous to adhere to any dogma for its own sake. If part of Objectivism makes sense, and seems to work, great. Otherwise, hold nothing holy.

comment by Jayson_Virissimo · 2013-04-17T01:06:03.923Z · LW(p) · GW(p)

Michael Huemer explains why he isn't an Objectivist here and this blog is almost nothing but critiques of Rand's doctrines. Also, keep in mind that you are essentially asking for help engaging in motivated cognition. I'm not saying you shouldn't in this case, but don't forget that is what you are doing.

With that said, I enjoyed Atlas Shrugged. The idea that you shouldn't be ashamed for doing something awesome was (for me, at the time I read it) incredibly refreshing.

Replies from: mstevens, mstevens, blacktrance
comment by mstevens · 2013-04-18T11:14:17.947Z · LW(p) · GW(p)

Quoting from the linked blog:

"Assume that a stranger shouted at you "Broccoli!" Would you have any idea what he meant? You would not. If instead he shouted "I like broccoli" or "I hate broccoli" you would know immediately what he meant. But the word by itself, unless used as an answer to a question (e.g., "What vegetable would you like?"), conveys no meaning"

I don't think that's true? Surely the meaning is an attempt to bring that particular kind of cabbage to my attention, for as yet unexplained reasons.

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T21:04:10.763Z · LW(p) · GW(p)

I don't think that's true? Surely the meaning is an attempt to bring that particular kind of cabbage to my attention, for as yet unexplained reasons.

That's a possible interpretation, but I wouldn't say "surely."

Some other possibilities.

The person picked the word apropos of nothing because they think it would be funny to mess with a stranger's head.

It's some kind of in-joke or code word, and they're doing it for the amusement of someone else who's present (or just themselves if they're the sort of person who makes jokes nobody else in the room is going to get.)

The person is confused or deranged.

Replies from: TheOtherDave, mstevens
comment by TheOtherDave · 2013-04-21T01:54:56.962Z · LW(p) · GW(p)

If I heard someone shout "Broccoli" at me without context, my first assumption would be that they'd actually said something else and I'd misunderstood.

comment by mstevens · 2013-04-22T11:17:06.258Z · LW(p) · GW(p)

But this doesn't seem particularly different from the ambiguity in all language. The linked site seems to suggest there's some particular lack of meaning in isolated words.

comment by mstevens · 2013-04-18T11:10:15.014Z · LW(p) · GW(p)

My reaction to Rand is pretty emotional, rather than "I see why her logic is correct!", which I think justifies the motivated cognition aspect a little bit.

comment by blacktrance · 2013-04-17T02:41:08.496Z · LW(p) · GW(p)

Some of Huemer's arguments against Objectivism are good (particularly the ones about the a priori natures of logic and mathematics), but his arguments against the core of Objectivism (virtue ethical egoism) fall short, or at best demonstrate why Objectivism is incomplete rather than wrong.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-18T03:42:59.549Z · LW(p) · GW(p)

His arguments against her ethical system seem... confused.

She pretty much acknowledged that life as a good thing is taken as a first principle, what he calls a suppressed premise; she was quite open about it, in fact, as a large part of her ethical arguments were about ethical systems which -didn't- take life as a good thing as a first principle.

His arguments about a priori knowledge, however, are fatally flawed. What he calls a priori knowledge only seems intuitive once you've understood it. Try teaching addition to somebody sometime. Regardless of whether a priori truths exist, we only recognize their truth by reference to experience. Imagine you lived someplace where addition and subtraction never worked: addition wouldn't be intuitively true; it would be nonsense. Do you think you could get a child to grasp addition while you performed sleight of hand and changed the number of apples or whatnot as you demonstrated the concepts? He's regarding knowledge from the perspective of somebody who has already internalized it.

You have to have a strong grasp of abstract concepts before you can start building the kind of stuff he insists is a priori, concepts which are built up through experience with things like language. Mathematics wasn't discovered, it was invented, just as much as an electric motor was invented. (You can suppose that mathematics exists in an infinite plane of possible abstractions, but the same is true of the electric motor.) That we chose the particular mathematics we did is a result of the experiences we as a species have had over the past few dozen thousand years.

(Or, to take a page out of Descartes - what would his a priori knowledge look like if a devil were constantly changing the details of the world he lived in? Playing sleight of hand with the apples, as it were?)

Replies from: blacktrance
comment by blacktrance · 2013-04-18T06:09:40.867Z · LW(p) · GW(p)

The idea of a priori knowledge is not that it's intuitive, but that it is not dependent on experience for it to be conceivable. Though addition may be hard to teach without examples, it abstractly makes sense without reference to anything in the physical world. Similarly, the truth of the statement "a bachelor is an unmarried man" does not require any experience to know - its truth comes from the definition of the word "bachelor".

Replies from: Strelok, OrphanWilde
comment by Strelok · 2013-04-27T10:14:22.457Z · LW(p) · GW(p)

The idea of a priori knowledge is not that it's intuitive, but that it is not dependent on experience for it to be conceivable.

If I am understanding your statement here correctly, you are saying that a priori knowledge hinges on the idea that concepts can be acquired independently of experience. If that is what you are saying, then you would be incorrect. Very few philosophers who accept the idea of a priori knowledge—or more appropriately: a priori justification—think that human beings ever acquire concepts innately or that they can otherwise conceive of them independently of experience. A proposition is knowable a priori if it is justifiable by appeal to pure reason or thought alone. Conversely, a proposition is knowable a posteriori if it is justifiable in virtue of experience; where any relevant, constitutive notion of experience would have as its meaning (a) some causally conditioned response to particular, contingent features of the world, and (b) doxastic states that have as their content information concerning such contingent features of the actual world as contrasted with other possible worlds.

comment by OrphanWilde · 2013-04-18T07:01:17.477Z · LW(p) · GW(p)

Somebody defined the operation of addition - it did not arise out of pure thought alone, as is evidenced by the fact that nobody bothered to define some other operation by which two compounds could be combined to produce a lesser quantity of some other compound (at least until people began formalizing chemistry). There are an infinite number of possible operations, most of which are completely meaningless for any purpose we would put them to. Knowledge of addition isn't knowledge at all until you have something to add.

"Qwerms are infantile eloppets." Is this a true statement or not? I could -define- a qwerm to be an infantile eloppet, but that doesn't represent any knowledge; in the pure abstract, it is an empty referential, devoid of meaning. Everything in the statement "a bachelor is an unmarried man" is tied to real-world things, whatever knowledge is contains there is experience driven; if the words mean something else - and those words are given meaning by our experiences - the statement could be true or false.

Kant, incidentally, did not define a priori knowledge to be that which is knowable without experience (the mutation of the original term which Ayn Rand harshly criticized), but rather that which is knowable without reference to -specific- experience, hence his use of the word "transcendent". If putting one rock and another together results in three rocks, our concept of mathematics would be radically different, and addition would not merely fail to reflect reality, it would not for any meaningful purpose exist. Transcendent truths are arrived at through experience, they simply don't require any -particular- experience to be had in order to be true.

In Kantian terms, a priori, I know if I throw a rock in the air it will fall. My a posteriori knowledge will be that the rock did in fact fall. There are other transcendental things, but transcendental knowledge is generally limited to those things which can be verified by experience (he argued that transcendental knowledge could not extend beyond those experiences we can anticipate). Without going into his Critique of Pure Reason, which argues for some specific exceptions (causality and time, for example) as bootstraps to get the whole mess moving, later philosophers by and large completely ignored what he had written about transcendental knowledge in general, and lifted it out of the realm of experience entirely. (With some ugly results, as you're left with nothing but tautologies.)

Replies from: Strelok
comment by Strelok · 2013-04-27T10:27:17.899Z · LW(p) · GW(p)

Somebody defined the operation of addition - it did not arise out of pure thought alone, as is evidenced by the fact that nobody bothered to define some other operation by which two compounds could be combined to produce a lesser quantity of some other compound (at least until people began formalizing chemistry). There are an infinite number of possible operations, most of which are completely meaningless for any purpose we would put them to. Knowledge of addition isn't knowledge at all until you have something to add.

The problem here is that you seem to be presupposing the odd idea that, in order for any proposition to be knowable a priori, its content must also have been conceived a priori. (At least for the non-Kantian conceptions of the a priori.) It would be rare to find a person who held that a concept can be acquired without any experience related to it. Indeed, such an idea seems entirely incapable of being vindicated. If I expressed a proposition such as "nothing can be both red and green all over at the same time" to a person who had no relevant perceptual experience with the colors I am referring to and who had failed to acquire the relevant definitions of the color concepts I am using, then that proposition would be completely nonsensical and unanalyzable for such a person. However, this has no bearing on the concept of a priori knowledge whatsoever. The only condition for a priori knowledge is for the expressed proposition to be justifiable by appeal to pure reason.

comment by Vaniver · 2013-04-16T13:04:53.311Z · LW(p) · GW(p)

I've been reading Atlas Shrugged and seem to have caught a case of Randianism. Can anyone recommend treatment?

Are you looking to treat symptoms? If so, which ones?

comment by OrphanWilde · 2013-04-16T13:15:03.066Z · LW(p) · GW(p)

*Laughs* I'm an Objectivist of my own accord, but I may be able to help if you find this undesirable.

The shortest version: her derivations from her axioms have a lot of implicit and unmentioned axioms thrown in ad hoc. One problematic case is her defense of property - she implicitly assumes no other mechanism of proper existence for humans is possible. (And her "proper existence" is really slippery.)

This isn't necessarily a rejection - as mentioned, I am an Objectivist - but it is something you need to be aware of and watch out for in her writings. If a conclusion doesn't seem to be quite right or doesn't square with your own conception of ethics, try to figure out what implicit axioms are being slipped in.

Reading Ayn Rand may be the best cure for Randianism, if Objectivism isn't a natural philosophy for you, which by your apparent distress it isn't. (Honestly, though, I'd stay the hell away from most of the critics, who do an absolutely horrible job of attacking the philosophy. They might be able to cure you of Randianism, but largely through misinformation and unsupported emotional appeals, which may just result in an even worse recurrence later.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-16T14:00:08.649Z · LW(p) · GW(p)

Please correct me if I'm wrong, but it seems to me that she also did some variant of "Spock Rationality". More precisely, it seems to me as if her heroes have one fixed emotion (mild curious optimism?) all the time; and if someone doesn't, that is only to show that Hero1 is not as perfect as Hero2 whose emotional state is more constant.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-16T14:24:32.219Z · LW(p) · GW(p)

I've mentioned this here before, but prior to reading Atlas Shrugged, I truly believed in Spock Rationality. I used meditation to eliminate emotions as a teenager because I saw them as irrelevant.

Atlas Shrugged convinced me that emotions were a valuable thing to have. So I don't really see Spock Rationality in the characters.

The closest any of the characters comes to that is Galt, and it is heavily implied he went through the same kind of utter despair as all the other characters in the book. It's more or less stated outright that the Hero characters experience greater emotions, in wider variety, than other characters, and particularly the villains; the level emotions of, for example Galt, is not a result of having no emotions, but having experienced such suffering that what he experiences in the course of the book is insignificant by comparison.

(Relentless optimism and curiosity are treated as morally superior attitudes, I grant, but I'd point out that this is a moral standard held to some degree here as well. Imagine the response to somebody who insisted FAI was impossible and we were all doomed to a singularity-induced hell. This community is pretty much defined by curiosity, and to a lesser but still important extent optimism, in the sense that we can accomplish something.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-16T17:03:17.257Z · LW(p) · GW(p)

Some explanation: Recently I watched the beginning of Atlas Shrugged: Part I, and there was this dialog, about 10 minutes from the beginning:

James Taggart: You've never had any feelings. I don't think you've ever felt a thing.
Dagny Taggart: No, Jim. I guess I've never felt anything at all.

I didn't watch the whole movie yet, and I don't remember whether this was also in the book. But this is what made me ask. (Also some other things seemed to match this pattern.)

Of course there are other explanations too: Dagny can simply be hostile to James; both implicitly understand the dialog is about a specific subset of feelings; or this is specifically Dagny's trait, perhaps because she hasn't experienced anything worth being emotional about, yet.

EDIT: Could you perhaps write an article about the reasonable parts of Objectivism? I think it is worth knowing the history of previous self-described rationality movements, what they got right, what they got wrong, and generally what caused them to not optimize the known universe.

Replies from: None, OrphanWilde
comment by [deleted] · 2013-04-16T17:28:47.315Z · LW(p) · GW(p)

I thought the exchange was supposed to be interpreted sarcastically, but the acting in the movie was so bad it was hard to tell for sure. Having read most of Rand during a misspent youth, I agree with OrphanWilde's interpretation of Rand's Objectivist superheroes as being designed specifically to feel emotions that are "more real" than those of everyday "human animals."

For what it's worth, in my opinion the only reasonable part of Objectivism is contained in The Romantic Manifesto, which deals with all of this "authentic emotions" stuff in detail.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-16T19:30:15.721Z · LW(p) · GW(p)

I also read it as Dagny being sarcastic, or at least giving up on trying to convey anything important to James. (I haven't seen the movie-- Dagny was so badly miscast that I didn't think I could enjoy it.)

I think a thing that's excellent in Rand not put front and center by much of anyone else is that wanting to do things well is a primary motivation for some people.

Replies from: None
comment by [deleted] · 2013-04-16T19:33:02.510Z · LW(p) · GW(p)

I think a thing that's excellent in Rand not put front and center by much of anyone else is that wanting to do things well is a primary motivation for some people.

Not to be snide, but... Plato? Aristotle? Kant? Nietzsche?

comment by OrphanWilde · 2013-04-16T17:11:24.152Z · LW(p) · GW(p)

I'd have to buy another copy of the book (I have a tendency to give my copies away - I've gone through a few now), so I'm not sure. In the context of the book, this would be referring to a specific subset of feelings (or more particularly, guilt, which Ayn Rand utterly despised, and which James was kind of an anthropomorphism of). Whether that's an appropriate description in the context of the scene itself, I'm not sure.

(God the movie sucked. About the only thing I liked was that the villains were updated to fit the modern era to be more familiar. They come off as strawmen in the book unless you're familiar with the people they're caricatures of.)

Replies from: mstevens
comment by mstevens · 2013-04-18T14:28:10.206Z · LW(p) · GW(p)

I initially thought she was being sarcastic. However on seeing this discussion I find the "specific subset of feelings" theory more plausible. She's rejecting the "feelings" James has.

comment by TimS · 2013-04-16T13:04:13.660Z · LW(p) · GW(p)

Heinlein? I found Stranger in a Strange Land to be an interesting counterpoint to Atlas Shrugged.

Both feature characters with super-human focus / capability (Rearden and Valentine Michael Smith). And they have totally different effects on societies superficially similar to each other (and to our own).

There's more to say about Rand in particular, but we should probably move to the media thread for that specifically (Or decline to discuss for Politics is the Mindkiller reasons). Suffice it to say that uncertainty about how to treat the elite productive elements in society predates the 1950s and 1960s.

Replies from: None
comment by [deleted] · 2013-04-18T12:19:43.251Z · LW(p) · GW(p)

Time Enough for Love is an even better anti-Atlas Shrugged.

Replies from: NancyLebovitz, mstevens
comment by NancyLebovitz · 2013-04-19T01:25:20.562Z · LW(p) · GW(p)

Why?

comment by mstevens · 2013-04-22T11:18:04.259Z · LW(p) · GW(p)

I like my Heinlein, but I don't see the connection.

comment by CarlShulman · 2013-04-19T23:54:30.835Z · LW(p) · GW(p)

The (libertarian, but not Randian) philosopher Michael Huemer has an essay entitled "Why I'm not an objectivist." It's not perfect, but at least the discussion of Rand's claim that respect for the libertarian rights of others follows from total egoism is good.

comment by FiftyTwo · 2013-04-19T22:00:39.878Z · LW(p) · GW(p)

Genuine question: What do you find appealing about it? I've always found the writing impenetrable and the philosophy unappealing.

Replies from: mstevens
comment by mstevens · 2013-04-22T11:23:48.012Z · LW(p) · GW(p)

The writing, I agree, is pretty bad, and she has an odd obsession with trains and motors. I can just about understand the "motor" part because it allows some not very good "motor of the world" metaphors.

The appealing part is the depiction of the evil characters as endlessly dependent on the hero characters, their view of them as an inexhaustible source of resources for whatever they want, and the rejection of this.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-23T13:10:45.175Z · LW(p) · GW(p)

The obsession with trains is probably because, in the era when Ayn Rand lived, people working with trains were an intellectual elite. They (1) worked with technology, and often (2) travelled across the world and shared ideas with similar people. If you worked at a railroad, sometimes you got free rides anywhere as an employment benefit. It was an era before the internet, when the best way to share ideas with bright people was to meet them personally.

In other words, if she lived today, she would probably write about hackers, or technological entrepreneurs. John Galt would be the inventor of internet, or nanotechnology, or artificial intelligence. (And he would use modafinil instead of nicotine.)

comment by VCavallo · 2013-04-16T16:41:02.413Z · LW(p) · GW(p)

Can you explain what you mean by this? I ask because I don't know what this means and would like to. Others here clearly seem to get what you're getting at. Some Google searching was mostly fruitless and since we're here in this direct communication forum I'd be interested in hearing it directly.

Thanks!

Replies from: mstevens
comment by mstevens · 2013-04-18T14:31:29.847Z · LW(p) · GW(p)

I read the book Atlas Shrugged by Ayn Rand where she sets out her philosophical views.

I found them worryingly convincing. Since they're also unpleasant and widely rejected, I semi-jokingly semi-seriously want people to talk me out of them.

comment by RomeoStevens · 2013-04-16T23:34:56.986Z · LW(p) · GW(p)

Think carefully through egoism.

hint: Vs rtbvfg tbnyf naq orunivbef qba'g ybbx snveyl vaqvfgvathvfunoyr sebz gur tbnyf naq orunivbef bs nygehvfgf lbh'ir cebonoyl sbetbggra n grez fbzrjurer va lbhe hgvyvgl shapgvba.

Replies from: blacktrance
comment by blacktrance · 2013-04-17T02:43:33.098Z · LW(p) · GW(p)

Ubjrire, gur tbnyf bs rtbvfgf qb ybbx qvssrerag sebz gur tbnyf bs nygehvfgf, ng yrnfg nygehvfgf nf Enaq qrsvarq gurz.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-04-17T04:08:31.413Z · LW(p) · GW(p)

Fur svtugf fgenj nygehvfgf jvgu n fgenj rtbvfg ervasbeprq jvgu n pbng unatre be gjb.

Replies from: Viliam_Bur, blacktrance
comment by Viliam_Bur · 2013-04-17T06:46:59.199Z · LW(p) · GW(p)

I don't have a link, but I remember reading somewhere that originally altruism was defined as self-destructive behavior -- ignoring one's own utility function and working only for others -- and only later was it modified to mean... non-psychopathology.

In other words, it was the "egoism" which became a strawman by not being allowed to become more reasonable, while its opposite the "altruism" was allowed to become more sane than originally defined.

In a typical discussion, the hypothetical "altruist" is allowed to reflect on their actions, and try to preserve themself (even if only to be able to help more people in the future), while the hypothetical "egoist" is supposed to be completely greedy and short-sighted.

Replies from: OrphanWilde, Richard_Kennaway
comment by OrphanWilde · 2013-04-18T03:52:12.461Z · LW(p) · GW(p)

http://hubcap.clemson.edu/~campber/altruismrandcomte.pdf

Page 363 or so.

Auguste Comte coined the term "altruism", and it's been toned down considerably from his original version of it, which held, in James Fieser's terms, that "An action is morally right if the consequences of that action are more favorable than unfavorable to everyone except the agent"

It's a pretty horrific doctrine, and the word has been considerably watered down since Comte originally coined it. That's pretty much the definition that Ayn Rand assaulted.

comment by Richard_Kennaway · 2013-04-17T08:29:39.952Z · LW(p) · GW(p)

In other words, it was the "egoism" which became a strawman by not being allowed to become more reasonable, while its opposite the "altruism" was allowed to become more sane than originally defined.

In a typical discussion, the hypothetical "altruist" is allowed to reflect on their actions, and try to preserve themself (even if only to be able to help more people in the future), while the hypothetical "egoist" is supposed to be completely greedy and short-sighted.

Depends on the discussion. Reasonable egoism is practically the definition of "enlightened self-interest".

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-17T17:41:22.693Z · LW(p) · GW(p)

Yeah, that's the point. To get the answer "egoism", one defines egoism as enlightened self-interest, and altruism as self-destructive behavior. To get the answer "altruism", one defines altruism as enlightened pro-social behavior, and egoism as short-sighted greed. Perhaps less extremely than this, but usually from the way these words are defined you understand which one of them is the applause light for the person asking the question.

(I typically meet people for whom "altruism" is the preferred applause light, but of course there are groups which prefer "egoism".)

comment by blacktrance · 2013-04-17T05:41:13.978Z · LW(p) · GW(p)

Juvyr ure ivyynvaf ner fbzrjung rknttrengrq va gur frafr gung crbcyr va cbjre hfhnyyl qba'g guvax va gubfr grezf (gubhtu gurve eurgbevp qbrf fbzrgvzrf fbhaq fvzvyne), va zl rkcrevrapr gurer vf n tbbq ahzore bs beqvanel crbcyr jub guvax dhvgr fvzvyneyl gb ure ivyynvaf. Enaq'f rknttrengvba vf cevznevyl gung vg vf ener gb svaq nyy bs gur artngvir genvgf bs ure ivyynvaf va crbcyr jub qb zbenyyl bowrpgvbanoyr guvatf, ohg ng yrnfg n srj bs gubfr genvgf ner gurer.

Gung'f fbzrjung orfvqrf gur cbvag, gubhtu. Znal crbcyr jubz Enaq jbhyq qrfpevor nf nygehvfgf ner abg yvxr gur ivyynvaf bs ure obbxf va gung gurl trarenyyl qba'g jnag gb sbepr bguref gb borl gurve jvyy (ng yrnfg abg rkcyvpvgyl). Vafgrnq, gurve crefbany orunivbe vf frys-unezvat (vanccebcevngr srryvatf bs thvyg, ynpx bs nffregvirarff, oryvrs gung gur qrfverf bs bguref ner zber vzcbegnag guna gurve bja, qrfver gb cyrnfr bguref gb gur cbvag gung gur ntrag vf haunccl, npgvat bhg bs qhgl va gur qrbagbybtvpny frafr, trahvar oryvrs va Qvivar Pbzznaq, rgp). Nygehvfz vf arprffnevyl onq, ohg nygehvfgf ner abg arprffnevyl crbcyr jub unez bguref - vg vf cbffvoyr naq pbzzba sbe gurve orunivbef/oryvrsf gb znvayl unez gurzfryirf.

Enaq'f ivyynvaf ner nygehvfgf, ohg abg nyy Enaqvna nygehvfgf ner ivyynvaf - znal ner ivpgvzf bs artngvir fbpvrgny abezf, pbtavgvir qvfgbegvbaf, onq cneragvat, rgp.

comment by Douglas_Knight · 2013-04-22T23:20:40.265Z · LW(p) · GW(p)

I think that most people find that it wears off after a couple of months.

comment by [deleted] · 2013-04-16T19:46:01.625Z · LW(p) · GW(p)

What do you believe, and why do you believe it?

Alternatively: What do you value, and why do you value it?

comment by Yuyuko · 2013-04-17T19:11:47.448Z · LW(p) · GW(p)

We find that death grants a great deal of perspective!

Replies from: mstevens
comment by mstevens · 2013-04-18T16:33:48.120Z · LW(p) · GW(p)

Sadly no-one has reported back.

comment by FiftyTwo · 2013-04-15T21:51:13.425Z · LW(p) · GW(p)

Request for practical advice on determining/discovering/deciding 'what you want.'

Replies from: ModusPonies, lsparrish, Armok_GoB, ModusPonies
comment by ModusPonies · 2013-04-18T19:52:25.327Z · LW(p) · GW(p)

Find at least one person who you can easily communicate with (i.e., small inferential distances) and whose opinion you trust. Have a long conversation about your hopes and dreams. I recommend doing this in person if at all possible.

comment by lsparrish · 2013-04-15T22:20:14.576Z · LW(p) · GW(p)

A good place to start the search is the intersection of "things I find enjoyable" and "things that are scarce / in demand".

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-15T23:25:13.001Z · LW(p) · GW(p)

See which time discounts and distance discounts you make for how much you care about others. Compare how much you care about others with how much you care about yourself. Act accordingly.

To know what you care about in the first place, either assess happiness at random times and activities, or go through Connection Theory and Goal factoring.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-16T13:27:35.076Z · LW(p) · GW(p)

Why do you recommend Connection Theory?

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-17T03:16:35.374Z · LW(p) · GW(p)

It's been done to me and I like it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-17T03:26:37.745Z · LW(p) · GW(p)

It's been done to me, too, and as I recall, it didn't do all that much good. The major good effect that I can remember is indirect-- it was something to be able to talk about the inside of my head with someone who found it all interesting and a possibly useful tool for untangling problems-- this helped pull me away from my usual feeling that there's something wrong/defective/shameful about a lot of it.

What did you get out of Connection Theory?

comment by Armok_GoB · 2013-04-16T20:06:21.954Z · LW(p) · GW(p)

Look into my eyes. You want to give all your money to MIRI. You want to give all your money to MIRI. You want to give all your money to MIRI.

Replies from: FiftyTwo
comment by FiftyTwo · 2013-04-20T09:54:30.086Z · LW(p) · GW(p)

Sadly I have +2 hypnosis resistance, nice try.

comment by ModusPonies · 2013-04-18T19:49:09.498Z · LW(p) · GW(p)

Find at least one person who you can easily communicate with (i.e., small inferential distances) and who you trust. Talk at length.

comment by [deleted] · 2013-04-26T08:19:50.928Z · LW(p) · GW(p)

More Right

Edit: We reached our deadline on May 1st. Site is live.

Some of you may recall the previous announcement of the blog. I envisioned it as a site that discusses right-wing ideas, sanity-checking but not value-checking them, and steelmanning both the ideas themselves and the counterarguments. Most of the authors should be sympathetic to them, but a competent loyal opposition should be sought out. In sum, a kind of inversion of the LessWrong demographics (see Alternative Politics Question). Outreach will not be a priority; mutual aid on an epistemically tricky path of knowledge-seeking is.

The current core group working on making the site a reality consists of me, ErikM, Athrelon, KarmaKaiser, MichaelAnissimov, and Abudhabi. As we approach launch time I've just sent out an email update to other contributors and those who haven't yet contributed but have contacted me. If you are interested in the hard-to-discuss subjects or the politics and want to join as a coauthor or approved commenter (we are seeking more), send me a PM with an email address or comment here.

Replies from: bogus, drethelin, None, MugaSofer, None, shminux
comment by bogus · 2013-04-26T10:41:11.829Z · LW(p) · GW(p)

This is a great idea. We should create rationalist blogs for other political factions too, such as progressivism, feminism, anarchism, green politics and others. Such efforts could bring our programme of "raising the sanity waterline" to the public policy sphere -- and this might even lay some of the groundwork for eventually relaxing the "no politics at LW" rule.

Replies from: None, Viliam_Bur
comment by [deleted] · 2013-04-26T10:44:12.641Z · LW(p) · GW(p)

As I wrote before:

LWers having blogs elsewhere is a good thing!

I don't expect LessWrong itself to become a good venue to discuss politics. I do think LessWrong could keep its spot at the center of a "rationalist" blogosphere that may be slowly growing. Discussions between different value systems part of it might actually be worth following! And I do think nearly all political factions within such a blogosphere would find benefits in keeping their norms as sanity friendly as possible.

comment by Viliam_Bur · 2013-04-26T16:44:14.856Z · LW(p) · GW(p)

I would like to see one site to describe them all. To describe all those parts which can be defended rationally, with clear explanations and evidence.

Replies from: bogus
comment by bogus · 2013-04-26T18:57:56.861Z · LW(p) · GW(p)

Yes, the issue-position-argument (IPA) model was developed for such purposes, and similar models are widely cited in the academic literature about argumentation and computer support for same, etc. (One very useful elaboration of this is called TIPAESA, for: time, issue, position, argument, evidence, source, authority. Unfortunately, I do not know of a good reference for this model; it seems that it was only developed informally, by anonymous folks on some political wikis.) But it's still useful to have separately managed sites for each political faction, if only so that each faction can develop highly representative descriptions of their own positions.

comment by drethelin · 2013-04-26T14:57:26.046Z · LW(p) · GW(p)

"Approved Commenter" sounds pretty thought police-ey

Replies from: drethelin, wedrifid
comment by drethelin · 2013-04-26T14:57:32.530Z · LW(p) · GW(p)

so sign me up!

comment by wedrifid · 2013-04-26T16:47:33.346Z · LW(p) · GW(p)

"Approved Commenter" sounds pretty thought police-ey

That would seem to fit with the theme rather well.

comment by [deleted] · 2013-05-01T18:12:35.904Z · LW(p) · GW(p)

James Goulding aka Federico formerly of studiolo has joined us as an author.

comment by MugaSofer · 2013-04-28T21:13:44.973Z · LW(p) · GW(p)

I hold more liberal than conservative beliefs, but I'm increasingly reluctant to identify with any position on the left-right "spectrum". I definitely hold, or could convincingly steelman, lots of beliefs associated with "conservatism", especially if you include criticism of "liberal" positions. Would this be included in the sort of demographic you're seeking?

comment by [deleted] · 2013-04-26T15:16:22.605Z · LW(p) · GW(p)

As we approach launch time I've just sent out an email update to other contributors and those who haven't yet contributed but have contacted me.

* checks e-mail *

Yeah, you didn't.

comment by Shmi (shminux) · 2013-04-26T16:34:07.059Z · LW(p) · GW(p)

Having read Yvain's excellent steelmanning and subsequent critique of conservatism on his blog, I wonder what else can be usefully said about the subject.

EDIT: changed wording a bit. Hopefully someone will reply, not just silently downvote.

Replies from: MugaSofer, David_Gerard
comment by MugaSofer · 2013-04-28T21:09:20.487Z · LW(p) · GW(p)

Yup, no way there could be anything more to say on the subject of a huge and varied group of ideologies.

More seriously, what about, y'know, counterarguments? Steelmanning is all very well, but this would involve steelmanning by people who actually subscribe to conservative positions.

comment by David_Gerard · 2013-04-27T18:16:09.236Z · LW(p) · GW(p)

I predict this will not occur.

comment by FiftyTwo · 2013-04-20T09:59:37.657Z · LW(p) · GW(p)

Article on an attempt to explain intelligence in thermodynamic terms.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-04-20T16:57:42.020Z · LW(p) · GW(p)

Interesting stuff. Some links to the original material:

Original paper (paywalled)

Original paper (free). (Does not include supplementary material.)

Summary paper about the paper.

Their software. Demo video, further details only on application.

Author 1. Author 2.

On the one hand, these are really smart guys, no question. On the other, toy demos + "this could be the solution to AI!" => likely to be a damp squib.

Replies from: gwern, timtyler
comment by gwern · 2013-04-20T17:19:26.502Z · LW(p) · GW(p)

I've skimmed the paper and read the summary publicity, and I don't really get how this could be construed as a general intelligence. At best, I think they may've encoded a simple objective definition of a convergent AI drive like 'keep your options open and acquire any kind of influence' but nothing in it seems to map onto utility functions or anything like that.

Replies from: Richard_Kennaway, timtyler
comment by Richard_Kennaway · 2013-04-20T17:28:48.636Z · LW(p) · GW(p)

At best, I think they may've encoded a simple objective definition of a convergent AI drive like 'keep your options open and acquire any kind of influence' but nothing in it seems to map onto utility functions or anything like that.

I think that's an accurate informal summary of their basic mechanism. Personally, I'm not impressed by utility functions (or much else in AGI, for that matter), so I don't rate the fact that they aren't using them as a point against.

Replies from: gwern, drethelin
comment by gwern · 2013-04-20T18:03:18.351Z · LW(p) · GW(p)

I do, because it seems like in any nontrivial situation, simply grasping for entropies ignores the point of having power or options, which is to aim at some state of affairs which is more valuable than others. Simply buying options is worthless and could well be actively harmful if you keep exposing yourself to risks you could've shut down. They mention that it works well as a strategy in Go playing... but I can't help but think that it must be in situations where it's not feasible to do any board evaluation at all and where one is maximally ignorant about the value of anything at that point.

Replies from: timtyler
comment by timtyler · 2013-04-21T12:00:07.843Z · LW(p) · GW(p)

I do, because it seems like in any nontrivial situation, simply grasping for entropies ignores the point of having power or options, which is to aim at some state of affairs which is more valuable than others.

As I understand it, it's more a denial of that claim. The point is to maximise entropy, and values are a means to that end.

Obviously, this is counter-intuitive, since orthodoxy has this relationship the other way around: claiming that organisms maximise correlates of their own power - and the entropy they produce is a byproduct. MaxEnt suggests that this perspective may have things backwards.

comment by drethelin · 2013-04-20T18:07:02.470Z · LW(p) · GW(p)

what's your preferred system for encoding values?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-04-20T18:31:11.876Z · LW(p) · GW(p)

"Value" is just another word for "utility", isn't it? It's the whole idea of utility maximisation as a fundamental principle that I think is misguided. No, I don't have a better idea; I just think that that one is a road that, where AGI is concerned, leads nowhere.

But AGI is not something I work on. There is no reason for anyone who does to pay any attention to my opinions on the subject.

comment by timtyler · 2013-04-21T12:18:02.523Z · LW(p) · GW(p)

The idea is that entropy can be treated as utility.

Thus entropy maximisation. Modern formalizations are largely based on ideas discovered by E. T. Jaynes.

Here is Roderick Dewar explaining the link.

Replies from: gwern
comment by gwern · 2013-04-21T17:27:45.711Z · LW(p) · GW(p)

I'm aware of maxent (and that's one reason why in my other comment I mentioned the Go playing as probably reflecting a situation of maximum ignorance), but I still do not see how maximizing entropy can possibly lead to fully intelligent utility-maximizing behavior, or if it is unable to do so, why we would give a damn about what maximizing entropy does. What is the maximal entropy state of the universe but something we would abhor like a uniform warm gas? To return to the Go playing: entropy maximization may be a useful heuristic in some positions - but the best Go programs do not purely maximize entropy and ignore the value of positions or ignore the value of winning.

Replies from: timtyler
comment by timtyler · 2013-04-22T00:31:44.398Z · LW(p) · GW(p)

I still do not see how maximizing entropy can possibly lead to fully intelligent utility-maximizing behavior

That's an argument from incredulity, though. Hopefully, I can explain:

If you have a maximiser of A, the ability to constrain that maximiser, and the ability to generate A, you can use it to maximise B by rewarding the production of B with A. If A = entropy and B = utility, Q.E.D.

Of course if you can't constrain it you just get an entropy maximiser. That seems like the current situation with modern ecosystems. These dissipate mercilessly, until no energy gradients - or anything else of possible value - are left behind.

What is the maximal entropy state of the universe but something we would abhor like a uniform warm gas?

By their actions shall ye know them. Humans generate large quantities of entropy, accelerating universal heat death. Their actions clearly indicate that they don't really care about averting universal heat death.

In general, maximisers don't necessarily value the eventual results of their actions. A sweet taste maximiser might not value tooth decay and obesity. Organisms behave as though they like dissipating. They don't necessarily like the dissipated state their actions ultimately lead to.

To return to the Go playing: entropy maximization may be a useful heuristic in some positions - but the best Go programs do not purely maximize entropy and ignore the value of positions or ignore the value of winning.

Maximisation is subject to constraints. Go programs are typically constrained to play go.

An entropy maximiser whose only actions were placing pieces on go boards in competitive situations might well attempt to play excellent go - to make humans feed it power and make copies of it.

Of course, this is a bit different from what the original article is talking about. That refers to "maximizing accessible future game states". If you know go, that's pretty similar to winning. To see how, consider a variant of go in which both passing and suicide are prohibited.
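
As a toy illustration of that kind of objective - a minimal sketch with a made-up grid world standing in for go, not anything from the paper itself - here's an agent that greedily picks the move keeping the most future states reachable:

```python
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def reachable_states(pos, blocked, k):
    """All cells reachable from pos in at most k steps, avoiding blocked cells."""
    frontier, seen = {pos}, {pos}
    for _ in range(k):
        nxt = set()
        for (x, y) in frontier:
            for (dx, dy) in MOVES:
                cell = (x + dx, y + dy)
                if cell not in blocked and cell not in seen:
                    nxt.add(cell)
        seen |= nxt
        frontier = nxt
    return seen

def entropic_move(pos, blocked, k=5):
    """Pick the legal move that keeps the most future states accessible."""
    legal = [(pos[0] + dx, pos[1] + dy) for (dx, dy) in MOVES
             if (pos[0] + dx, pos[1] + dy) not in blocked]
    return max(legal, key=lambda p: len(reachable_states(p, blocked, k)))

# A corridor with walls above and below, open only at the right end:
# the agent drifts toward the open end, where more futures stay reachable.
walls = {(x, y) for x in range(-1, 10) for y in (1, -1)}
print(entropic_move((5, 0), walls))  # -> (6, 0), toward the opening
```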

Replies from: gwern
comment by gwern · 2013-04-22T01:24:21.081Z · LW(p) · GW(p)

If you have a maximiser of A, the ability to constrain that maximiser, and the ability to generate A, you can use it to maximise B by rewarding the production of B with A. If A = entropy and B = utility, Q.E.D.

That seems to simply be buck-passing. What does this gain us over simply maximizing B? If we can compute how to maximize a predicate like A, then what stops us from maximizing B directly?

If you know go, that's pretty similar to winning.

Pretty similar, yet somehow, crucially, not the same thing. If you know go, consider a board position in which 51% of the board has been filled with your giant false eye, you move, and there is 1 move which turns it into a true eye and many moves which don't. The winning-maximizing move is to turn your false eye into a true eye, yet this shuts down a huge tree of possible futures in which your false eye is killed, thousands of stones are removed from the board, and you can replay the opening with its beyond-astronomical number of possible futures...

Replies from: timtyler
comment by timtyler · 2013-04-22T01:48:42.450Z · LW(p) · GW(p)

If you have a maximiser of A, the ability to constrain that maximiser, and the ability to generate A, you can use it to maximise B by rewarding the production of B with A. If A = entropy and B = utility, Q.E.D.

That seems to simply be buck-passing. What does this gain us over simply maximizing B? If we can compute how to maximize a predicate like A, then what stops us from maximizing B directly?

You said you didn't see how having an entropy maximizer would help with maximizing utility. Having an entropy maximizer would help a lot. Basically maximizers are very useful things - almost irrespective of what they maximize.

If you know go, that's pretty similar to winning.

Pretty similar, yet somehow, crucially, not the same thing. [...]

Sure. I never claimed they were the same thing.

If you forbid passing, forbid suicide, and aim to minimize your opponent's possible moves, that would make a lot more sense - as a short description of a go-playing strategy.

Replies from: gwern
comment by gwern · 2013-04-22T02:00:17.934Z · LW(p) · GW(p)

You said you didn't see how having an entropy maximizer would help with maximizing utility. Having an entropy maximizer would help a lot. Basically maximizers are very useful things - almost irrespective of what they maximize.

So maximizers are useful for maximizing? That's good to know.

Replies from: timtyler
comment by timtyler · 2013-04-22T10:49:31.981Z · LW(p) · GW(p)

That's trivializing the issue. The idea is that maximisers can often be repurposed to help other agents (via trade, slavery etc).

It sounds as though you originally meant to ask a different question. You can now see how maximizing entropy would be useful, but want to know what advantages it has over other approaches.

The main advantage I am aware of associated with maximizing entropy is one of efficiency. If you maximize something else (say carbon atoms), you try and leave something behind. By contrast, an entropy maximizer would use carbon atoms as fuel. In a competition, the entropy maximizer would come out on top - all else being equal.

It's also a pure and abstract type of maximisation that mirrors what happens in natural systems. Maybe it has been studied more.

Replies from: gwern
comment by gwern · 2013-04-22T16:37:53.306Z · LW(p) · GW(p)

It sounds as though you originally meant to ask a different question. You can now see how maximizing entropy would be useful,

I already saw how it could be useful in a handful of limited situations - that's why I brought up the Go example in the first place!

but want to know what advantages it has over other approaches.

As it stands, it sounds like a limited heuristic and the claims about intelligence grossly exaggerated.

comment by timtyler · 2013-04-21T13:44:20.572Z · LW(p) · GW(p)

On the one hand, these are really smart guys, no question. On the other, toy demos + "this could be the solution to AI!" => likely to be a damp squib.

Entropy maximisation purports to explain all adaptation. However, it doesn't tell us much that we didn't already know about how to go about making good adaptations. For one thing, entropy maximisation is a very old idea - dating back at least to Lotka, 1922.

comment by [deleted] · 2013-04-16T02:19:15.400Z · LW(p) · GW(p)

I have a super dumb question.

So, if you allow me to divide by zero, I can derive a contradiction from the basic rules of arithmetic to the effect that any two numbers are equal. But there's a rule that I cannot divide by zero. In any other case, it seems like if I can derive a contradiction from basic operations of a system of, say, logic, then the logician is not allowed to say "Well...don't do that".

So there must be some other reason for the rule, 'don't divide by zero.' What is it?

Replies from: Qiaochu_Yuan, Kindly, ciphergoth, latanius, OrphanWilde
comment by Qiaochu_Yuan · 2013-04-16T06:33:44.352Z · LW(p) · GW(p)

We don't divide by zero because it's boring.

You can totally divide by zero, but the ring you get when you do that is the zero ring, and it only has one element. When you start with the integers and try dividing by nonzero stuff, you can say "you can't do that" or you can move out of the integers and into the rationals, into which the integers embed (or you can restrict yourself to only dividing by some nonzero things - that's called localization - which is also interesting). The difference between doing that and dividing by zero is that nothing embeds into the zero ring (except the zero ring). It's not that we can't study it, but that we don't want to.

Also, in the future, if you want to ask math questions, ask them on math.stackexchange.com (I've answered a version of this question there already, I think).

Replies from: None, Kawoomba
comment by [deleted] · 2013-04-16T13:20:35.383Z · LW(p) · GW(p)

Thanks, I think that answers my question.

comment by Kawoomba · 2013-04-16T06:54:35.683Z · LW(p) · GW(p)

You can totally divide by zero, but the ring you get when you do that is the zero ring

What do you mean by "you get"? Do you mean Wheel theory or what?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-16T06:57:23.706Z · LW(p) · GW(p)

I mean if you localize a ring at zero you get the zero ring. Equivalently, the unique ring in which zero is invertible is the zero ring. (Some textbooks will tell you that you can't localize at zero. They are haters who don't like the zero ring for some reason.)
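
Spelled out, the collapse is a standard two-line computation (since 0 * x = 0 holds in any ring, an inverse for 0 would force):

```latex
1 = 0 \cdot 0^{-1} = 0
\qquad\Longrightarrow\qquad
x = 1 \cdot x = 0 \cdot x = 0 \ \text{ for every } x,
```

so the whole ring collapses to the single element 0.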

Replies from: army1987
comment by A1987dM (army1987) · 2013-04-16T19:50:24.866Z · LW(p) · GW(p)

BTW, how come the ring with one element isn't usually considered a field?

Replies from: Qiaochu_Yuan, Oscar_Cunningham
comment by Qiaochu_Yuan · 2013-04-16T20:03:07.350Z · LW(p) · GW(p)

The theorems work out nicer if you don't. A field should be a ring with exactly two ideals (the zero ideal and the unit ideal), and the zero ring has one ideal.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2013-04-16T22:52:50.779Z · LW(p) · GW(p)

Ah, so it's for exactly the same reason that 1 isn't prime.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-17T00:17:33.710Z · LW(p) · GW(p)

Yes, more or less. On nLab this phenomenon is called too simple to be simple.

comment by Oscar_Cunningham · 2013-04-16T22:51:39.292Z · LW(p) · GW(p)

We often want the field without zero to form a multiplicative group, and this isn't the case in the ring with one element (because the empty set lacks an identity and hence isn't a group). Indeed we could take the definition of a field to be

A ring such that the non-zero elements form a multiplicative group.

and this is fairly elegant.

comment by Kindly · 2013-04-16T03:23:54.481Z · LW(p) · GW(p)

The rule isn't that you cannot divide by zero. You need a rule to allow you to divide by a number, and the rule happens to only allow you to divide by nonzero numbers.

There are also lots of things logicians can tell you that you're not allowed to do. For example, you might prove that (A or B) is equivalent to (A or C). You cannot proceed to cancel the A's to prove that B and C are equivalent, unless A happens to be false. This is completely analogous to going from AB = AC to B = C, which is only allowed when A is nonzero.
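
A concrete instance, as a throwaway sanity check (Python, with made-up values):

```python
# (A or B) == (A or C) does not let us conclude B == C when A is true...
A, B, C = True, True, False
print((A or B) == (A or C))  # True: the disjunctions agree
print(B == C)                # False: but B and C differ

# ...just as a*b == a*c does not let us conclude b == c when a is zero.
a, b, c = 0, 1, 2
print(a * b == a * c)        # True: the products agree
print(b == c)                # False: "cancelling" a would prove 1 == 2
```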

Replies from: kpreid
comment by kpreid · 2013-04-17T18:47:15.355Z · LW(p) · GW(p)

However, {false, true} - {true} has only one member, and so values from it become constant, whereas ℝ - {0} has many members and can therefore remain significant.

comment by Paul Crowley (ciphergoth) · 2013-04-22T20:30:11.883Z · LW(p) · GW(p)

For the real numbers, the equation ax = b has infinitely many solutions if a = b = 0, no solutions if a = 0 but b ≠ 0, and exactly one solution whenever a ≠ 0. Because there's nearly always exactly one solution, it's convenient to have a symbol for "the one solution to the equation ax = b", and that symbol is b / a; but you can't write that if a = 0, because then there isn't exactly one solution.

This is true of any field, almost by definition.
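
The same case analysis in symbols (just restating the above):

```latex
ax = b \quad\text{has}\quad
\begin{cases}
\text{exactly one solution, } x = b/a, & a \neq 0\\
\text{every } x \text{ as a solution}, & a = 0,\ b = 0\\
\text{no solution}, & a = 0,\ b \neq 0
\end{cases}
```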

comment by latanius · 2013-04-16T04:42:16.027Z · LW(p) · GW(p)

Didn't they do the same with set theory? You can derive a contradiction from the existence of "the set of sets that don't contain themselves"... therefore, build a system where you just can't do that.

(of course, coming from the axioms, it's more like "it wasn't ever allowed", like in Kindly's comment, but the "new and updated" axioms were invented specifically so that wouldn't happen.)

comment by OrphanWilde · 2013-04-16T02:36:38.561Z · LW(p) · GW(p)

We divide by zero all the time, actually; derivatives are the long way about dividing by zero. We just work very carefully to cancel the actual zero out of the equation.

The rule is less "Don't divide by zero", as much as "Don't perform operations which delete your data." Dividing by zero doesn't produce a contradiction, it eliminates meaning in the data. You -can- divide by zero, you just have to do so in a way that maintains all the data you started with. Multiplying by zero eliminates data, and can be used for the same destructive purpose.

Replies from: None, mstevens, None
comment by [deleted] · 2013-04-18T13:29:29.720Z · LW(p) · GW(p)

The rule is less "Don't divide by zero", as much as "Don't perform operations which delete your data." Dividing by zero doesn't produce a contradiction, it eliminates meaning in the data. You -can- divide by zero, you just have to do so in a way that maintains all the data you started with.

I completely fail to understand how you got such a doctrine on dividing by zero. Mathematics just doesn't work like that.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-18T13:35:09.473Z · LW(p) · GW(p)

Are you denying this as somebody with strong knowledge of mathematics?

(I need to know what prior I should assign to this conceptualization being wrong. I got it from a mathematics instructor, quite possibly the best I ever had, in his explanation on why canceling out denominators doesn't fix discontinuities.)

ETA: The problem he was demonstrating it with focused more on the error of -adding- information than removing it, but he did show us how information could be deleted from an equation by inappropriately multiplying by or dividing by zero, showing how discontinuities could be removed or introduced. He also demonstrated a really weird function involving a square root which had two solutions, one of which introduced a discontinuity, one of which didn't.

Replies from: None
comment by [deleted] · 2013-04-18T14:18:17.560Z · LW(p) · GW(p)

I'm a graduate student, working on my thesis.

I accept that this is some pedagogical half-truth, but I just don't see how it benefits people to pretend mathematics cares about whether or not you "eliminate meaning in the data." There's no meta-theorem that says information in an equation has to be preserved, whatever that means.

comment by mstevens · 2013-04-18T14:40:24.978Z · LW(p) · GW(p)

Dividing by zero leads to a contradiction

Never divide by zero

Division by zero

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-18T14:50:14.466Z · LW(p) · GW(p)

Not necessarily true. A good rule for introductory math students, but some advanced math requires dividing by zero. (As mentioned, that's what a derivative is, a division by zero.)

Limits are a way of getting information out of a division by zero, which is why derivatives involve taking the limit.

Division by zero is kind of like the square root of a negative number (something introductory mathematics coursework also tells you not to do). It's not an invalid operation, it's just an operation you have to be aware of the ramifications of. (If it seems like zero has unusual behavior, well, the same is true of negative numbers with respect to zero and positive numbers, and again the same is true of positive numbers with respect to zero and negative numbers.)

Replies from: None, None
comment by [deleted] · 2013-04-18T15:14:23.020Z · LW(p) · GW(p)

You've got it the wrong way round. "A derivative is a division by zero" is the pedagogical lie for introductory students (probably one that causes more confusion than it solves), and advanced maths doesn't require it.

comment by [deleted] · 2013-04-18T15:29:04.348Z · LW(p) · GW(p)

Another link, this time explicitly dealing with derivatives and division by zero, in the vain hope that you'll actually update someday.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-18T16:34:06.510Z · LW(p) · GW(p)

What are you expecting me to update on? None of what you've sent me contradicts anything except the language I use to describe it.

A derivative -is- a division by zero; infinitesimal calculus, and limits, were invented to try to figure out what the value of a specific division by zero would be. Mathematicians threw a -fit- over infinitesimal calculus and limits, denying that division by zero was valid, and insisting that the work was therefore invalid.

So what exactly is our disagreement? That I regard limits as a way of getting information out of a division by zero? Or that I insist, on the basis that we -can- get information out of a division by zero, that a division by zero can be valid? Or is it something else entirely?

Incidentally, even if I were certain exactly what you're trying to convince me of and it was something I didn't already agree with, your links are nothing but appeals to authority, and they wouldn't convince me -anyways-. They lack any kind of proof; they're just assertions.

Replies from: Patrick, None, mstevens, None, ThrustVectoring
comment by Patrick · 2013-04-18T18:39:10.992Z · LW(p) · GW(p)

The definition of limit: "lim x -> a f(x) = c " means for all epsilon > 0, there exists delta > 0 such that for all x, if 0 < |x-a|<delta then |f(x) - c| < epsilon.

The definition of derivative: f'(x) = lim h -> 0 (f(x+h) - f(x))/h

That is, for all epsilon > 0, there exists delta > 0 such that for all h, if 0 < |h| < delta then |(f(x+h) - f(x))/h - f'(x)| < epsilon.

At no point do we divide by 0. h never takes on the value 0.
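
For instance, a quick numerical sketch (Python, with a made-up example function; nothing beyond the definition above) shows the quotient settling down while h stays strictly nonzero:

```python
def diff_quotient(f, x, h):
    """(f(x+h) - f(x)) / h for a nonzero h -- no division by zero occurs."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2  # f'(3) = 6
for h in (0.1, 0.01, 0.001, 1e-6):
    print(h, diff_quotient(f, 3.0, h))
# h shrinks toward (but never reaches) 0, and the quotient approaches 6:
# 0.1    6.100000...
# 0.01   6.010000...
# 0.001  6.001000...
# 1e-06  6.000001...
```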

comment by [deleted] · 2013-04-18T16:58:23.978Z · LW(p) · GW(p)

What are you expecting me to update on?

Sigh. Consider this my last reply.

  • mstevens' links have several demonstrations that division by zero leads to contradictions in arithmetic.
  • my link (singular) demonstrates that the definition of a derivative never requires division by zero.
  • Qiaochu's proof in a sibling thread that the only ring in which zero has an inverse is the zero ring.

So what exactly is our disagreement?

That you continue to say things like "A derivative -is- a division by zero" and "division by zero can be valid", as if they were facts. Yes, you may have been taught these things, but that does not make them literally true, as many people have tried to explain to you.

Incidentally, even if I were certain exactly what you're trying to convince me of and it was something I didn't already agree with, your links are nothing but appeals to authority, and they wouldn't convince me -anyways-. They lack any kind of proof; they're just assertions.

Whose authority am I appealing to in my (singular) link? Doctor Rick? I imagine he's no more a doctor than Dr. Laura. (I actually knew one of the "doctors" on the math forum once, and he wasn't a Ph. D. (or even a grad student) either; just a reasonably intelligent person who understood mathematics properly.) The only thing he asserts is the classical definition of a derivative.

Or maybe you were just giving a fully general counterargument, without reading the link.

EDIT: It's simply logically rude to ask for my credentials, and then treat every single argument you've been presented as an argument from authority, using that as a basis for dismissing them out of hand.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-18T17:41:50.030Z · LW(p) · GW(p)

I am treating your links as arguments from authority, because they don't provide proof of their assertions, they simply assert them. As I wrote there, I didn't ask for your credentials to decide whether or not I was wrong, but to provide a prior probability of being wrong. It started pretty high. It declined; my mathematics instructor provided better arguments than you have, which have simply been assertions that I'm incorrect.

My experience with infinitesimal calculus is limited, so I can't provide proofs that you're wrong (and thus have no basis to say you're wrong), but I haven't seen proofs that my understanding is wrong, either, and thus have no basis on which to update in either direction. At this point I'm tapping out; I don't see this discussion going anywhere.

comment by mstevens · 2013-04-18T16:51:05.569Z · LW(p) · GW(p)

You said " Dividing by zero doesn't produce a contradiction"

Several of these links include examples of contradictions. There is no authority required.

For example:

A Contradiction. Suppose we define 1/0 = q for some real number q. Multiplying both sides of the equation by 0 gives 1 = 0 * q = 0, which is a contradiction (to 1 and 0 being different numbers).

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-18T17:14:40.114Z · LW(p) · GW(p)

Er, 1/0 * 0 != 1.

The law of cancellation requires that all values being cancelled have an inverse. The inverse of 0 doesn't exist in the set of real numbers (although it does exist in the hyperreals). This doesn't mean you can't multiply a number by the inverse of 0, but the product doesn't exist in real numbers, either. (Hyperreal numbers don't cancel out the way real numbers do, however; they can leave behind a hyperreal component [ETA: Or at least that's my understanding from the way my instructor explained why removable discontinuities couldn't actually be removed - open to proof otherwise].)

Replies from: None
comment by [deleted] · 2013-04-18T17:50:15.439Z · LW(p) · GW(p)

0 doesn't have an inverse in the hyperreal numbers either (To see why this it true, consider the first-order statement "∀x, x*0 != 1" which is true in the real numbers and therefore also true in the hyperreals by the transfer principle). From this it obviously follows that you can't multiply a number by the inverse of 0.

Replies from: None, OrphanWilde
comment by [deleted] · 2013-04-18T18:14:18.427Z · LW(p) · GW(p)

Further, if you did decide to adjoin an inverse of zero to the hyperreals, the result would be the zero ring.

comment by OrphanWilde · 2013-04-18T18:31:00.473Z · LW(p) · GW(p)

Going to have to investigate more, but that looks solid.

comment by [deleted] · 2013-04-18T16:39:13.026Z · LW(p) · GW(p)

Since you asked this of papermachine, it seems reasonable to reflect it back:

Are you asserting this as somebody with strong knowledge of mathematics?

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-18T16:47:43.512Z · LW(p) · GW(p)

Not compared to somebody who specializes in the field of mathematics, no.

But I don't expect to change paper-machine's mind, where paper-machine expects to change mine. I expect more than appeals to authority. I have some prior that paper-machine might be right, given that this is their field of expertise. My posterior odds that they have a strong knowledge of this particular subject, however, are shrinking pretty rapidly, since all I'm getting are links that come up early in a Google search.

comment by ThrustVectoring · 2013-04-19T01:04:16.728Z · LW(p) · GW(p)

Limits and calculus aren't what I think of, at all, when I think of division. I pretty much limit it exclusively to the multiplicative inverse in mathematical systems where addition and multiplication work like you think they ought to. There are axioms that encompass all of "works like you think they ought to", and a necessary one of them is that the multiplicative inverse of zero is not a number.

comment by [deleted] · 2013-04-16T03:11:36.126Z · LW(p) · GW(p)

Thanks, that's helpful. But I guess my point is that it seems to me to be a problem for a system of mathematics that one can do operations which, as you say, delete the data. In other words, isn't it a problem that it's even possible to use basic arithmetical operations to render my data meaningless? If this were possible in a system of logic, we would throw the system out without further ado.

And while I can construct a proof that 2=1 (what I called a contradiction, namely that a number be equal to its successor) if you allow me to divide by zero, I cannot do so with multiplication. So the cases are at least somewhat different.
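(For concreteness, here is the standard version of that proof, with the illegal step marked:

\[
\begin{aligned}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \qquad \text{(dividing both sides by } a - b = 0\text{)} \\
2b &= b \;\Rightarrow\; 2 = 1.
\end{aligned}
\]

Every step except the marked one is a valid manipulation.)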

Replies from: Jonii, OrphanWilde
comment by Jonii · 2013-04-18T20:06:51.425Z · LW(p) · GW(p)

Qiaochu_Yuan already answered your question, but because he was pretty technical with his answer, I thought I should try to simplify the point here a bit. The problem with division by zero is that division is essentially defined through multiplication and the existence of certain inverse elements. It's an axiom in itself in group theory that there are inverse elements, that is, for each a there is an x such that a*x = 1. Our notation for x here would be 1/a, and it's easy to see why a * (1/a) = 1. Division is defined by these inverse elements: a/b is calculated as a * (1/b), where (1/b) is the inverse of b.

But, if you have both multiplication and addition, there is one interesting thing. If we assume addition is the group operation for all numbers (and we use "0" to signify the additive neutral element you get from adding together an element and its additive inverse, that is, "a + (-a) = 0"), and we want multiplication to work the way we like it to work (so that a*(x + y) = (a*x) + (a*y), that is, distributivity holds), something interesting happens.

Now, the neutral element 0 is such that x + 0 = x; this is by definition of the neutral element. Now watch the magic happen:

0*x = (0 + 0)*x = 0*x + 0*x

So 0*x = 0*x + 0*x. We subtract 0*x from both sides, leaving us with 0*x = 0.

It doesn't matter what you are multiplying 0 with, you always end up with zero. So, assuming 1 and 0 are not the same number (in the zero ring they are the same, and that single number 0 = 1 is the only element of the entire ring), you can't get a number x such that 0*x = 1. Lacking an inverse element, there's no obvious way to define what it would mean to divide by zero. There are special situations where there is a natural way to interpret what it means to divide by zero, in which cases, go for it. However, it's separate from the division defined for other numbers.

And, if you end up dividing by zero because you somewhere assumed that there actually was such a number x that 0*x = 1, well, that's just your own clumsiness.

Also, you can "prove" 1 = 2 if you multiply both sides by zero. 1 = 2. Proof: 1*0 = 2*0 => 0 = 0. Division and multiplication work in opposite directions: multiplication gets you from not-equals to equals; division gets you from equals to not-equals.
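The same derivation can be machine-checked; here is a sketch in Lean 4 with mathlib, assuming the standard lemma names add_mul, add_zero, and add_left_cancel:

```lean
-- Distributivity plus additive cancellation force 0 * x = 0 in any ring,
-- exactly as in the derivation above.
example {R : Type*} [Ring R] (x : R) : 0 * x = 0 := by
  have h : 0 * x + 0 * x = 0 * x + 0 := by
    rw [← add_mul, add_zero, add_zero]  -- 0*x + 0*x = (0 + 0)*x = 0*x
  exact add_left_cancel h               -- subtract 0*x from both sides
```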

Replies from: None, Watercressed
comment by [deleted] · 2013-04-20T16:43:42.249Z · LW(p) · GW(p)

Excellent explanation, thank you. I've been telling everyone I know about your resolution to my worry. I believe in math again.

Maybe you can solve my similarly dumb worry about ethics: If the best life is the life of ethical action (insofar as we do or ought to prefer to do the ethically right thing over any other comforts or pleasures), and if ethical action consists at least largely in providing and preserving the goods of life for our fellow human beings, then if someone inhabited the limit case of the best possible life (by permanently providing immortality, freedom, and happiness for all human beings), wouldn't they at the same time cut everyone else off from the best kind of life?

Replies from: drethelin
comment by drethelin · 2013-04-20T18:13:07.321Z · LW(p) · GW(p)

Ethical action is defined by situations. The best life in the scenario where we don't have immortality, freedom, and happiness is to try to bring them about, but the best life in the scenario where we already have them is something different.

Replies from: None
comment by [deleted] · 2013-04-20T20:22:17.497Z · LW(p) · GW(p)

Good! That would solve the problem, if true. Do you have a ready argument for this thesis (I mean "but the best life in the scenario where we already have them is something different.")?

Replies from: drethelin
comment by drethelin · 2013-04-20T20:28:43.836Z · LW(p) · GW(p)

"If true" is a tough thing here because I'm not a moral realist. I can argue by analogy for the best moral life in different scenarios being a different life but I don't have a deductive proof of anything.

By analogy: the best ethical life in 1850 is probably not identical to the best ethical life in 1950 or in 2050, simply because people have different capacities and there exist different problems in the world. This means the theoretically most ethical life is actually divorced from the real most ethical life, because no one in 1850 could've given humanity all those things, and working toward them would've taken away ethical effort from, e.g., abolishing slavery. Ethics under uncertainty means that more than one person can be living the subjectively ethically perfect life even if only one of them will achieve their goal, because no one knows who that is ahead of time.

comment by Watercressed · 2013-04-18T22:22:39.692Z · LW(p) · GW(p)

x + 0 = 0

I think you mean x + 0 = x

Replies from: Jonii
comment by Jonii · 2013-04-19T11:39:44.471Z · LW(p) · GW(p)

yes. yes. i remember thinking "x + 0 =". after that it gets a bit fuzzy.

comment by OrphanWilde · 2013-04-16T03:36:36.846Z · LW(p) · GW(p)

You can do the same thing in any system of logic.

In more advanced mathematics you're required to keep track of values you've canceled out; the given equation remains invalid even though the cancelled value has disappeared. The cancellation isn't real; it's a notational convenience which unfortunately is promulgated as a real operation in mathematics classes. All those cancelled-out values are in fact still there. That's (one of) the mistakes performed in the proof you reference.

Replies from: Jonii
comment by Jonii · 2013-05-30T22:30:31.609Z · LW(p) · GW(p)

This strikes me as massively confused.

Keeping track of cancelled values is not required as long as you're working with a group, that is, a set (like the reals) and an operation (like addition) that follows the kind of rules that addition of integers and multiplication of non-zero real values do. If you are working with a group, there's no sense in which those cancelled-out values are left dangling. Once you cancel them out, they are gone.

http://en.wikipedia.org/wiki/Group_%28mathematics%29 <- you can check group axioms here, I won't list them here.

Then again, cancelling out, as it is procedurally done in math classes, requires each and every group axiom. That basically means it's nonsense to speak of cancelling out with structures that aren't groups. If you tried to cancel out stuff with a non-group, you'd basically be assuming stuff you know ain't true.

Which raises the question: what are these structures in advanced maths that you speak of?

comment by Desrtopa · 2013-04-20T19:59:43.623Z · LW(p) · GW(p)

Today, I finally took a racial/sexual Implicit Association Test.

I had always more or less accepted that it was, if not perfect, at least a fairly meaningful indicator of some sort of bias in the testing population. Now, I'm rather less confident in that conclusion.

According to the test, in terms of positive associations, I rank black women above black men above white women above white men. I do not think this is accurate.

Obviously, this is an atypical result, but I believe that I received it due to confounding factors which prevented the test from being an accurate reflection of my associations, and which are likely to affect a large proportion of the testing population.

First, the most significant factor in how successful I was in correctly associating words and faces was simply practice. I made more mistakes in the first phase than the second phase, and more in the second than the third, etc. I believe that my test could have shown significantly different results simply by re-ordering the phases.

Second, I suspect that I was trying harder in the phases where I was matching black faces than white faces. I don't want to corrupt the test, but I also don't want it to tell me I'm a racist; would I have been so enthusiastic about making the final phase my most accurate one of all, if it had been matching white male faces rather than black male faces?

Third, I felt that many of the questions on the survey that followed the matching phase were too loaded to properly answer on their own terms. They presented a series of options from "strongly agree" to "strongly disagree," where I felt that my real answer would most accurately be framed as ADBOC.

If anyone here has access to university resources and would like to collaborate on an experiment which would attempt to discern subjects' associations while correcting for these faults, please let me know.

Replies from: Unnamed, NancyLebovitz
comment by Unnamed · 2013-04-20T21:49:21.230Z · LW(p) · GW(p)

Academic research tends to randomize everything that can be randomized, including the orders of the different IAT phases, so your first concern shouldn't be an issue in published research. (The keyword for this is "order effect.")

The IAT is one of several different measures of implicit attitudes which are used in research. When taking the IAT it is transparent to the participant what is being tested in each phase, so people could try harder on some trials than on others, but that is not the case with many of the other tests (many use subliminal priming, e.g. flashing either a black man's face or a white man's face on the screen for 20ms immediately before showing the stimulus that participants are instructed to respond to). The different measures tend to produce relatively similar results, which suggests that effort doesn't have that big of an effect (at least for most people). I suspect that this transparency is part of the reason why the IAT has caught on in popular culture - many people taking the test have the experience of it getting harder when they're doing a "mismatched" pairing; they don't need to rely solely on the website's report of their results.

The survey that you took is not part of the IAT. It is probably a separate, explicit measure of attitudes about race and/or gender (do any of these questions look familiar?).

Replies from: Desrtopa
comment by Desrtopa · 2013-04-20T21:59:40.936Z · LW(p) · GW(p)

None of those questions were on the survey, but some of the questions on the survey were similar.

The descriptions of the other measures of implicit attitudes given on that page aren't in-depth enough for me to critique them effectively for methodology. The first question that comes to mind, though, is to what extent these tests have been calibrated against associations that we already know about. For example, if people are given implicit association tests which match words with pictures of, say, smiling children with candy versus pictures of people with injuries, how do they score?

comment by NancyLebovitz · 2013-04-26T09:49:24.520Z · LW(p) · GW(p)

I haven't heard of any attempts at comparing implicit association tests to behavior.

comment by Shmi (shminux) · 2013-04-25T23:07:37.831Z · LW(p) · GW(p)

Yet another survey where the self-reported numbers of sex partners can't all be accurate:

Men report having more partners than women (an average of 15 partners, versus 9 for women).

(In a closed heterosexual population the two averages must be equal, since every partnership adds one to each side's count.)

Unless, of course, Canadian men tap the border.

Note: it basically evens out if you remove the 20+ partners boasters.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-28T21:15:08.901Z · LW(p) · GW(p)

Note: it basically evens out if you remove the 20+ partners boasters.

I wonder how many people put the maximum possible result, to be funny / troll the surveyors.

comment by jooyous · 2013-04-22T05:59:10.814Z · LW(p) · GW(p)

I keep accidentally accumulating small trinkets as presents or souvenirs from well-meaning relatives! Can anyone suggest a compact unit of furniture for storing/displaying these objects? Preferably in a way that is scalable, minimizes dustiness and falling-off and has pretty good ease of packing/unpacking. Surely there's a lifehack for this!

Or maybe I would appreciate suggestions on how to deal with this social phenomenon in general! I find that I appreciate the individual objects when I receive them, but after that initial moment, they just turn into ... stuff.

Replies from: drethelin
comment by drethelin · 2013-04-22T06:04:39.890Z · LW(p) · GW(p)

spice racks!

Replies from: jooyous
comment by jooyous · 2013-04-22T06:30:10.259Z · LW(p) · GW(p)

I knew someone had an answer but I would have never thought of that myself; I use like a total of one spice. Thank you!

Replies from: drethelin
comment by drethelin · 2013-04-22T07:16:10.280Z · LW(p) · GW(p)

In that case my further advice is: Cumin! Garlic! Pepper! Coriander!

Replies from: jooyous
comment by jooyous · 2013-04-23T18:48:34.077Z · LW(p) · GW(p)

Ohh yeahh, I guess I also use pepper. And garlic is a veggie. =P

comment by Jayson_Virissimo · 2013-04-19T19:54:11.036Z · LW(p) · GW(p)

The Girl Scouts currently offer a badge in the "science of happiness." I don't have a daughter, but if you do, perhaps you should look into the "science of style" badge as well.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-20T09:26:34.854Z · LW(p) · GW(p)

We totally need rationality badges like that!

Rationalists should win... badges.

Replies from: Vaniver
comment by Vaniver · 2013-04-21T15:47:48.785Z · LW(p) · GW(p)

The less scouty and more gamery way to describe them is "achievements."

comment by CAE_Jones · 2013-04-18T13:50:01.576Z · LW(p) · GW(p)

So far, I haven't found a good way to compare organizations for the blind other than reading their wikipedia pages.

And, well, blindness organizations are frankly a political issue. Finding unbiased information on them is horribly difficult. Add to this my relatively weak Google-fu, and I haven't found much.

Conclusions:

  • NFB is identity politics. They're also extremely assertive.
  • AFB focuses on technology, inherited Helen Keller's everything, etc.
  • ACB... umm... exists. They did give me a scholarship, and made the case for accessible money (Good luck with that. :P), I guess.

I want to find the one with the most to offer, and take advantage of those opportunities.

The difficulty is figuring out which one is the most useful. NFB comes across as cultish and pushing their ideology on anyone who comes to them, and they seem to be ignoring medical professionals advising them against using sleep shades on people with residual sight in their training programs. Also, their specialized cane sounds like an identity symbol more than a utility maximizer; it has better reach, but is flimsy-yet-unfolding and gets in the way. I do like the implication that it optimizes arm usage, but otherwise it sounds annoying.

On the upside, they seem to be the loudest, and as we all know, America is the country where the loudest get large chunks of attention. I've read some of their legal recommendations, and they seem to be the work of someone who knows how to aim for a goal and shoot until they hit it. Also, they're intense about braille.

Meanwhile, I'm imagining AFB being a possible avenue for getting my hands on a blasted tactile display, and possibly other meaningful technology-related projects, without having to put up indoctrination shields. Eah, there doesn't seem to be as much to say on them, which tells me that they have much less to criticize, but at the same time, it makes me wonder if they're powerful enough for the vague notion of whatever nonspecific ideas spawned this investigation.

NFB's sleep shades and specialized cane are rational for their purpose: to force the trainee to strengthen blindness as an identifying quality. They have other excuses--sleep shades prepare people for the possibility of losing what sight they have, the specialized cane provides better reach and is easier on the arms--but in light of the responses to these, and their responses to those responses, it's pretty clear that the identity advertisement is their main purpose. And quite frankly, that's annoying; my vision is not an identifying quality I care much about, so much as it's an obstacle that's made its troubles much clearer to me as of late. None of the other organizations seem to be functionally equivalent to the NFB, minus that element. Their main rival, the ACB, doesn't seem to do much of anything other than have fancy meetings and occasionally talk to legal people.

Gah, I would just continue ignoring them all, as I always have, if I wasn't living in a freakin' box.

Replies from: RolfAndreassen, CAE_Jones, CAE_Jones
comment by RolfAndreassen · 2013-04-19T19:15:04.176Z · LW(p) · GW(p)

Perhaps it would be easier to help if you said what you wanted help with. "The most to offer" in what specific area?

Replies from: CAE_Jones
comment by CAE_Jones · 2013-04-22T10:40:29.591Z · LW(p) · GW(p)

The trouble is that there are multiple areas of interest, and I'm not sure which is best to focus on: life skills? Technology? Programs that I could improve? Etc. My primary strategy has been to determine the goals of each organization and how much success they've had in achieving them, and the trouble is that these are hard to measure (we can tell how much policy influence the NFB has had, at least; I haven't found much about how many of the AFB's recommendations have been enacted).

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-04-22T19:16:46.460Z · LW(p) · GW(p)

Then it seems that you should recurse a level: Rather than trying to evaluate the organisations, you should be deciding which of the possible organisation-goals is most important. When you've decided that, judge which organisation best achieves that optimal goal.

comment by CAE_Jones · 2013-04-22T18:32:11.422Z · LW(p) · GW(p)

I still can't find much useful information on the AFB, but the NFB publicizes most of their major operations. The only successful one I've come across so far is the cancellation of the ABC sitcom "Good and Evil" (it's worth noting that ABC denied that the NFB protests had anything to do with this). They don't seem to be having success at improving Kindle accessibility, which is more a political matter than a technological one (Amazon eventually cut communications with them). They're protesting Goodwill because 64 of their 165 stores pay disabled employees less than minimum wage, in a manner that strikes me as poorly thought out (it seems to me that Goodwill has a much better image than the NFB, so this will most likely cost the NFB a lot of political capital).

This isn't really enough for me to determine whether they're powerful, or just loud, but so far it's making me update ever so slightly in favor of just loud.

It is worth noting that all of the above information came from publications written by NFB members, mostly hosted on NFB web sites. If my confidence in their abilities is hurt by writings seemingly designed to favor them, I can only imagine what something more objective would look like.

[edit]Originally typed Givewell instead of Goodwill! Fixed![/edit]

comment by CAE_Jones · 2013-04-28T13:17:24.873Z · LW(p) · GW(p)

Lighthouse International publishes scientific-looking research (although most of it appears to consist of single studies with small sample sizes, so it could stand further vetting). This and this match my experience pretty well, although matching my experience isn't what I'd call a criterion for effectiveness. If nothing else, I expect that they would be the most likely to help me get a quantitative picture of other organizations.

comment by Pablo (Pablo_Stafforini) · 2013-04-18T03:26:11.175Z · LW(p) · GW(p)

I would like to recommend Nick Winter's book, The Motivation Hacker. From an announcement posted recently to the Minicamp Graduates mailing list:

"The book takes Luke's post about the Motivation Equation and tries to answer the question, how far can you go? How much motivation can you create with these hacks? (Turns out, a lot.) Using the example of eighteen missions I pursued over three months, it goes over in more detail how to get yourself to want to do what you always wanted to want to do."

(Disclaimer: I hadn't heard of Nick Winter until a friend forwarded me the email containing that announcement, and I have no interest in promoting the book other than to help folks here attain their goals more effectively.)

comment by newguy · 2013-04-16T04:24:25.337Z · LW(p) · GW(p)

Sex. I have a problem with it and would like to solve it. I get seriously anxious every time I'm about to have sex for the first time with a new partner. Subsequent times are great and awesome. But the first time leaves me very anxious, which makes me delay it as much as I can. This is not optimal. I don't know how to fix it; if anyone can help I'd be very grateful.

--

I notice I'm confused: I've always tried to live a healthy life: sleeping many hours, no alcohol, no smoking. I've just spent 5 days living in a different country with some friends. We sleep 7 hours at most, they are smoking all the time, I've drunk once, and we hardly eat. Yet my face looks better, I feel better, I just look healthier. Possible confounds: I live mostly alone, while now I'm also hanging out with at least 3 people, usually closer to 10; I'm going out and dancing at least 4 hours every night; I'm talking to new people every night. I don't know how I'd go about testing what caused this, but I'd like to know and keep that factor in my life. Any ideas?

Replies from: TheOtherDave, drethelin, falenas108, Manfred, MixedNuts
comment by TheOtherDave · 2013-04-16T05:30:38.856Z · LW(p) · GW(p)

Re: sex... is there anyone with whom you're already having great awesome sex who would be willing to help out with some desensitization? For example, adding role-playing "our first time" to your repertoire? If not, how would you feel about hiring sex workers for this purpose?

Re: lifestyle... list the novel factors (dancing 4 hrs/night, spending time with people rather than alone, sleeping <7 hrs/night, diet changes, etc. etc. etc.). When you're back home, identify the ones that are easy to introduce and experiment with introducing them, one at a time, for a week. If you don't see a benefit, move on to the next one. If none of them work, try them all at once. If that doesn't work, move on to the difficult-to-introduce ones and repeat the process.

Personally, I would guess that several hours of sustained exercise and a different diet are the primary factors, but that's just a guess.

Replies from: newguy
comment by newguy · 2013-04-16T07:23:26.455Z · LW(p) · GW(p)

re: sex. Not at the moment, but in about 2 months that roleplaying stuff would be possible, yes. I tried looking for some affect hacking on the website but unfortunately didn't find much practical advice.

wrt sex workers: no great moral objection, besides an initial emotional ugh, but I'm unsure how helpful it could be.

re: lifestyle. This is somewhat what I had in mind, thank you.

comment by drethelin · 2013-04-16T05:32:06.149Z · LW(p) · GW(p)

could be a sign of a mold infestation or other environmental thing where you normally live

Replies from: newguy
comment by newguy · 2013-04-16T07:23:47.225Z · LW(p) · GW(p)

how would I go about testing this?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-16T13:32:43.811Z · LW(p) · GW(p)

Spend enough time in a third (and possibly a fourth) place to see whether your mood improves.

In re anxiety: have you tried tracking exactly what you think before first time sex?

Replies from: newguy
comment by newguy · 2013-04-16T21:08:26.926Z · LW(p) · GW(p)

will do and report back.

No, I never did try that; I feel it will be only very catastrophic thoughts. I will try to track it when the opportunity arises and update.

comment by falenas108 · 2013-04-16T04:31:29.966Z · LW(p) · GW(p)

Are you significantly happier now than before?

Replies from: newguy, drethelin
comment by newguy · 2013-04-16T07:25:02.464Z · LW(p) · GW(p)

Very much so, yes. Potential big confounder: never been around so many beautiful & nice females (I'm a straight male).

But my mood usually varies between long-lasting stretches of feeling slightly good and slightly bad, while for the days I've been here I get consistent "great" ratings - I feel awesome all the time.

Replies from: falenas108
comment by falenas108 · 2013-04-16T12:27:49.408Z · LW(p) · GW(p)

Feeling happier could alone explain looking and feeling healthier. I'm stepping into the realm of guesswork here, but I would say that being around others that you enjoy hanging out with could be the cause, or the increased exercise from dancing so much.

Also, regarding the cigarettes and alcohol: although there are long-term risks associated with them (especially the cigarettes), that doesn't mean they cause negative short-term effects.

As for 7 hours of sleep tops, there's evidence that around 7 hours might be best.

comment by drethelin · 2013-04-16T05:31:50.055Z · LW(p) · GW(p)

could be a sign of a mold infestation or other environmental thing where you normally live

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-16T07:22:46.443Z · LW(p) · GW(p)

Isolating and mass producing the happiness mold could be the best invention since penicillin. :D

comment by Manfred · 2013-04-17T10:07:09.807Z · LW(p) · GW(p)

I will make the typical recommendation: cognitive behavioral therapy techniques. Try to notice your emotions and responses, and just sort them into helpful or not helpful. Studies also seem to show that this sort of thing works better when you're talking with a professional.

comment by MixedNuts · 2013-04-17T10:56:21.530Z · LW(p) · GW(p)

The standard strategy seems to be to work up to sex very progressively, going a little further on each encounter, so there's never any bright line to cross. Why is this failing for you?

Replies from: newguy
comment by newguy · 2013-04-17T18:24:40.760Z · LW(p) · GW(p)

Maybe because there is always a clear line? I go from meeting to kissing quite fast, and from kissing to being in my bedroom also quite fast, so there is no small progression; it's meeting, kissing, then we end up at a sex-appropriate place and I go through it, but I'm incredibly anxious.

Replies from: MixedNuts
comment by MixedNuts · 2013-04-17T19:01:09.576Z · LW(p) · GW(p)

By "quite fast" do we mean a few hours, or a few dates? If the latter: You are in fact allowed not to have sex on the first date, or the first time they're in your bedroom. You can go as far as you're comfortable with and no further - and know where you'll stop in advance, so you're not anxious beforehand, and then go a little further on subsequent dates.

Is your anxiety tied to specific acts, or to sex itself? Does it help if I point out that the boundaries of what counts as sex are very blurry, and do your anxieties change if you change what you think of as sex?

Replies from: newguy
comment by newguy · 2013-04-17T19:05:44.372Z · LW(p) · GW(p)

3 meetings, wouldn't call them dates.

I understand that, but it somehow makes me feel bad to have them there and ready and that I'm the one that actually also wants to but somehow/for some reason can't.

Just first-time sex as in intercourse. Well, in my mind sex = intercourse [as in penis in vagina], everything else is "fooling around". [Not debating definitions, just saying how it feels to me].

I don't know, I'd need to test it, but that might be useful to try: thinking of sex as being something else.

Replies from: MixedNuts
comment by MixedNuts · 2013-04-17T19:12:26.780Z · LW(p) · GW(p)

Sounds like your problems could cancel out. If you decline intercourse but "fool around" a lot, they're unlikely to be too unhappy about it.

Replies from: newguy
comment by newguy · 2013-05-23T07:23:17.347Z · LW(p) · GW(p)

This worked out (n = 3). I explicitly say that it is unlikely intercourse will happen (to them and myself), and when it does it just feels natural, no bright line. Thank you, this was a big problem!

comment by Peter Wildeford (peter_hurford) · 2013-04-22T00:00:30.249Z · LW(p) · GW(p)

A few of you may know I have a blog called Greatplay.net, located at... surprise... http://www.greatplay.net. I've heard from some people that they discovered my site much later than they otherwise would have, because the name of the site didn't communicate what it was about and sounded unprofessional.

Why Greatplay.net in the first place? I picked it when I was 12, because it was (1) short, (2) pronounceable, (3) communicable without any risk of the other person misspelling it, and (4) did not communicate any information about what the site would be about, so I could mold the site as I grew.

Now after >2 years of blogging about basically the same thing, I think my blog will always be about utilitarianism (both practical and philosophical), lifestyle design (my quest to make myself more productive and frugal, mainly so I can be a better utilitarian), political commentary (from a utilitarian perspective), and psychology (of morality and community and that which basically underlies practical utilitarianism).

I probably would want to talk about religion/atheism from time to time, which used to be my biggest interest, but I can already tell it's moderately unpopular with my current readership (yawnnn... we really have to go over why the Bible has errors again?) and I'm already personally getting increasingly bored with it, so I can do away with discussing atheism if I needed to keep to a "topic"-focused blog.

Basically, at this point, I think I stand to gain more by making my blog and domain name more descriptive than I stand to lose by risking my interests shifting away from utilitarianism (or at least the public discussion thereof). But the big question... what should I name my blog?

Option #1: Keep with Greatplay.net: There will be costs with shifting to a new domain name. The monetary cost is mostly insignificant (<$20/yr for a new domain name), but it will take a moderate amount of time to move all the archives over and make sure all the new hyperlinks on the site work. Also, there will be confusion among the readership, and everyone who was linking to my site externally would now be linking to dead stuff. So, if I've misestimated the benefits of moving, I might want to stick with the current name and not incur the costs.

Option #2: Go to PeterHurford.com: I already use this site as an online résumé of sorts, so I wouldn't need to get the domain. This also seems the most descriptive of what the site would be about (a personal blog, about me) and fits in with what the cool kids are doing. However, some of my opinions are controversial relative to the mainstream and I don't know what I'll be doing in my future. Keeping my real name hidden from my website might be an asset (so I don't lose opportunities through association with opinions the mainstream dislikes), though it might also be a drawback (I think I have gotten some recognition and opportunity from those who share those unpopular opinions).

Option #3: A new name: If Option #1 and #2 don't work, I'd want to just rename the blog to something descriptive of a blog about utilitarianism. Some ideas I've come up with:

  • A Shallow Pond
  • The Everyday Utilitarian
  • Everyday Utilitarianism
  • Commonsense Utilitarianism
  • A Utilful Mind (credit to palladias)

Though feel free to suggest your own!

Replies from: Jonii, tondwalkar
comment by Jonii · 2013-04-25T20:08:42.975Z · LW(p) · GW(p)

I don't think you need to change the domain name. For marketability, you might wanna name the parts of your site so that the stuff within it becomes a brand in itself, so greatplay.net becomes associated with " utilitarianism", " design", etc. Say, I read a blog by a chemist who has a series of blog posts titled "stuff i won't work with: ". I can't remember the domain name, but I know that whenever I want to read about a nasty chemical, I google that phrase.

comment by tondwalkar · 2013-07-02T22:37:54.301Z · LW(p) · GW(p)

Dibs on 'A Utilful Mind' if you don't take it?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-07-03T13:21:31.680Z · LW(p) · GW(p)

I ended up going with Everyday Utilitarian, so you can have it.

comment by [deleted] · 2013-04-16T22:08:01.519Z · LW(p) · GW(p)

Is tickling a type of pain?

Replies from: MileyCyrus, wedrifid, army1987
comment by MileyCyrus · 2013-04-16T22:23:57.959Z · LW(p) · GW(p)

Dissolve the question.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-17T07:00:44.746Z · LW(p) · GW(p)

One question I like to ask in response to questions like this is "what do you plan on doing with this information?" I've generally found that thinking consequentially is a good way to focus questions.

comment by wedrifid · 2013-04-16T23:42:35.737Z · LW(p) · GW(p)

Is tickling a type of pain?

The simplest way of categorizing this would be based on the biology of which nerves are involved. It appears that the tickle sensation involves signals from nerve fibres associated with both pain and touch. So... "Kind of".

comment by A1987dM (army1987) · 2013-04-17T19:12:15.451Z · LW(p) · GW(p)

In case the answer to Qiaochu_Yuan's question is something like “I'm trying to establish the moral status of tickling in my provisional moral system”, note that IIUC the sensation felt when eating spicy foods is also pain according to most definitions, but a moral system according to which eating spicy foods is bad can go #$%& itself as far as I'm concerned.

comment by sixes_and_sevens · 2013-04-16T11:28:34.789Z · LW(p) · GW(p)

Does anyone have any real-world, object-level examples of degenerate cases?

I think degeneracy has some mileage in terms of explaining certain types of category error (e.g. "atheism is a religion"), but a lot of people just switch off when they start hearing a mathematical example. So far, the only example I've come up with is a platform pass at a train station, which is a degenerate case of a train ticket. It gets you on the platform and lets you travel a certain number of stops (zero) down the train line.
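(For anyone who does want the mathematical version anyway, the canonical example is a circle collapsing to a point as its radius goes to zero:

\[
C_r = \{(x, y) : x^2 + y^2 = r^2\}, \qquad C_0 = \{(0, 0)\}.
\]

The point still satisfies the definition, but has lost the features (tangents, interior, circumference) that make circles interesting; the platform pass is a ticket in exactly that sense.)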

Anyone want to propose any others?

Replies from: TimS, Alejandro1, kpreid
comment by TimS · 2013-04-16T16:44:22.959Z · LW(p) · GW(p)

Grabbing someone by the arm and dragging them across the room as a degenerate case of kidnapping?

Trading a gun for drugs as a degenerate case of "Using a firearm in a drug transaction"? On a related note, receiving the gun is not using a firearm in a drug transaction.

I'm sure there are more examples in the bowels of criminal law (and law generally).

comment by Alejandro1 · 2013-04-16T16:31:38.558Z · LW(p) · GW(p)

Complete anarchy as the degenerate case of a government system?

Sleeping on the floor as the degenerate case when discussing different kinds of beds and mattresses?

Asexuality as the degenerate case of hetero/homo/bi sexuality?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-04-16T17:08:14.152Z · LW(p) · GW(p)

Serious-ish answers: The degenerate case of dieting is when you increase your calorie intake by zero. (Also applies to government budgets, although it's then usually referred to as a "cut".)

The degenerate case of tax reform is to pass no new laws.

The degenerate case of keeping kosher (also halal, fasting, giving things up for Lent) is to eat anything you like.

The degenerate case of a slippery-slope argument is to say "If we do X, X will follow, and then we'll be sure to have X, from which we'll certainly get X". (That is, this argument is the limit as epsilon goes to zero of the argument X -> X+epsilon -> X+2 epsilon...).

Mainly in jest: Dictatorship considered as a degenerate case of democracy: One Man, One Vote - he is The Man, he has The Vote.

Conversely, democracy considered, Moldbug-style, as the degenerate case of dictatorship: Each of N citizens has 1/N of the powers of the dictator.

comment by kpreid · 2013-04-17T20:41:58.456Z · LW(p) · GW(p)

Not going anywhere is degenerate travel (but can be an especially restful vacation).

comment by CAE_Jones · 2013-04-30T21:20:31.893Z · LW(p) · GW(p)

There's a phenomenon I'd like more research done on. Specifically, the ability to sense solid objects nonvisually without direct physical contact.

I suspect that there might be some association with the human echolocation phenomenon. I've found evidence that there is definitely an audio component; I once, entirely by accident, simulated it in a wav file (it was a long time before I could listen to that file all the way through, for the strong sense that something was reaching for my head; system 2 had little say in the matter).
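One speculative way to synthesize such an "approaching object" cue, sketched here with only Python's standard library (the mechanism is a guess, not established fact): white noise mixed with a delayed copy of itself, where the delay steadily shrinks the way a reflection off a closing surface would.

```python
import random
import struct
import wave

rate = 44100
seconds = 4
noise = [random.uniform(-0.3, 0.3) for _ in range(rate * seconds)]

mixed = []
for i, sample in enumerate(noise):
    # The echo delay shrinks from ~10 ms down to ~1 ms across the clip,
    # simulating a reflecting surface that moves closer and closer.
    progress = i / len(noise)
    delay = int(rate * (0.010 - 0.009 * progress))
    echo = noise[i - delay] if i >= delay else 0.0
    mixed.append(sample + echo)

with wave.open("approach.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)  # 16-bit samples
    out.setframerate(rate)
    out.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, v)) * 32767))
        for v in mixed))
```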

I've also done my own experiments involving covering my ears, and have still been able to sense things to some extent, if more weakly. I notice that if I walk around with headphones on, I have a much harder time getting a sense of my surroundings.

The size of the object and its proximity to my head are related to how well I can sense it (large walls and trees are easier than bike racks or benches; my college had a lot of knee-high brick walls lining its paths, which was hell on my normal navigation methods).

My selfish motivation for researching this is that, if it can be perfectly simulated in audio, then game accessibility has a potential avenue to gain much strength. I would like to understand it even without that perk, though.

If there is, in fact, decent published research on this that I don't know about, I'd be grateful if someone could provide one or more links. Otherwise, I'd like an idea of who I might contact to try and initiate such research; at the moment, I'm considering recommending it to Lighthouse International.

comment by lukeprog · 2013-04-25T02:13:19.938Z · LW(p) · GW(p)

In chapter 1 of his book Reasoning about Rational Agents, Michael Wooldridge identifies some of the reasons for trying to build rational AI agents in logic:

There are some in the AI research community who believe that logic is (to put it crudely) the work of the devil, and that the effort devoted to such problems as logical knowledge representation and theorem proving over the years has been, at best, a waste of time. At least a brief justification for the use of logic therefore seems necessary.

First, by fixing on a structured, well-defined artificial language (as opposed to unstructured, ill-defined natural language), it is possible to investigate the question of what can be expressed in a rigorous, mathematical way (see, for example, Emerson and Halpern [50], where the expressive power of a number of temporal logics are compared formally). Another major advantage is that any ambiguity can be removed (see, e.g., proofs of the unique readability of propositional logic and first-order predicate logic [52, pp.39-43]).

Transparency is another advantage: "By expressing the properties of agents, and multiagent systems as logical axioms and theorems in a language with clear semantics, the focal points of (the theory) are explicit. The theory is transparent; properties, interrelationships, and inferences are open to examination. This contrasts with the use of computer code, which requires implementational and control aspects within which the issues to be tested can often become confused." [68, p.88]

Finally, by adopting a logic-based approach, one makes available all the results and techniques of what is arguably the oldest, richest, most fundamental, and best-established branch of mathematics.
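(As a concrete illustration of the kind of property meant, here is a standard temporal-logic example of my own choosing, not one from Wooldridge's text:

\[
\Box\,\big(\mathit{requested}(a) \rightarrow \Diamond\,\mathit{performed}(a)\big)
\]

read "it is always the case that if action a is requested, it is eventually performed". Stated as an axiom with a clear semantics, such a property can be checked mechanically against a system model, unlike the same promise buried in implementation code.)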

Replies from: lukeprog
comment by lukeprog · 2013-04-25T03:51:43.169Z · LW(p) · GW(p)

In An Introduction to MultiAgent Systems, he writes:

By moving away from strictly logical representation languages... one can build agents that enjoy respectable performance. But one also loses what is arguably the greatest advantage that the logical approach brings: a simple, elegant logical semantics.

comment by Tenoke · 2013-04-19T13:17:22.890Z · LW(p) · GW(p)

I started following DavidM's meditation technique. Is there anything that I should know? Any advice, or reasons why I should choose a different type of meditation?

Replies from: Tenoke
comment by Tenoke · 2013-04-19T14:16:34.389Z · LW(p) · GW(p)

FWIW, adding tags to distracting thoughts and feelings seems like a useful thing (for me) even when not meditating, and I hadn't encountered this act of labeling in my past short research on meditation.

comment by Viliam_Bur · 2013-04-17T19:00:03.060Z · LW(p) · GW(p)

Sometimes, success is the first step towards a specific kind of failure.

I heard that the most difficult moment for a company is the moment it starts making decent money. Until then, the partners shared a common dream and worked together against the rest of the world. Suddenly, the profit is getting close to one million, and each partner becomes aware that he made the most important contributions, while the others did less critical things which technically could be done by employees, so having to share the whole million with them equally is completely stupid. At this moment the company often falls apart.

When a group of people becomes very successful, fighting against other people within the group can bring higher profit than cooperating against the environment. It is like playing a variant of a Prisoner's Dilemma where the game ends at the first defection and the rewards for defection grow each turn. It's only semi-iterated; if you cooperate, you can continue to cooperate in the next turn, but if you manage to defect successfully, there may be no revenge, because the other person will be out.
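A toy numerical sketch of that game (all payoff numbers are invented for illustration): each round of mutual cooperation pays a fixed amount, the one-off temptation to defect grows every round, and the first defection ends the game.

```python
# Find the first round where a game-ending defection with a growing payoff
# beats cooperating for all of the remaining rounds.
def first_profitable_defection(coop_per_round=1.0, defect_base=0.5,
                               growth=1.3, horizon=30):
    for t in range(1, horizon + 1):
        defect_value = defect_base * growth ** t        # growing temptation
        remaining_coop = coop_per_round * (horizon - t + 1)
        if defect_value > remaining_coop:
            return t
    return None  # cooperation dominates over the whole horizon

# With these made-up numbers the partnership is stable for 13 rounds; at
# round 14 the growing temptation overtakes what cooperation still has to
# offer, and defecting becomes the selfish best move.
print(first_profitable_defection())  # -> 14
```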

Will something like this happen to the rationalist community one day (assuming the Singularity will not happen soon)? At this moment, there are small islands of sanity in the vast oceans of irrationality. But what if some day LW-style rationality becomes popular? What are the risks of success analogical to a successful company falling apart?

I can imagine that many charismatic leaders will try to become known as the most rational individual on the planet. (If rationality becomes 1000× more popular than it is today, imagine the possible temptations: people sending you millions of dollars to support your mission, hundreds of willing attractive poly partners, millions of fans...) There will be honest competition, which is good, but there will also be backstabbing. Some groups will experiment with mixing 99% rationality and 1% applause lights (or maybe 90% rationality and 10% applause lights), where "applause lights" will be different for different groups; it could be religion, Marxism, feminism, libertarianism, racism, whatever. Or perhaps just removing the controversial parts, starting with the many-worlds interpretation. Groups which optimize for popularity could spread faster; the question is how quickly they would diverge from rationality.

Do you think an outcome like this is likely? Do you think it is good or bad? (Maybe it is better to have million people with 90% of rationality, than only a thousand with 99% of rationality.) When will it happen? How could we prevent it?

Replies from: OrphanWilde, NancyLebovitz
comment by OrphanWilde · 2013-04-17T19:25:04.412Z · LW(p) · GW(p)

People competing to be known as the most rational?

Er... what's the downside again?

Replies from: bramflakes
comment by bramflakes · 2013-04-17T21:08:10.233Z · LW(p) · GW(p)

It's much easier to signal rationality than to actually be rational.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-17T21:16:50.972Z · LW(p) · GW(p)

True. It's harder to fake rationality than it is to fake the things that matter today, however (say, piety). And given that the sanity waterline has increased enough that "rational" is one of the most desirable traits for somebody to have, fake signaling should be much harder to execute. (Somebody who views rationality as such a positive trait is likely to be trying to hone their own rationality skills, after all, and should be harder to fool than the same person without any such respect for rationality or desire to improve their own.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-18T07:22:19.643Z · LW(p) · GW(p)

Faking rationality would be rather easy: Criticize everything which is not generally accepted and always find biases in people you disagree with (and since they are humans, you always find some). When "rationality" becomes a popular word, you can get many followers by doing this.

Here I assume that the popularity of the word "rationality" will come before there are millions of x-rationalists to provide feedback against wannabe rationalists. It would be enough if some political movement decided to use this word as their applause light.

Replies from: OrphanWilde, private_messaging
comment by OrphanWilde · 2013-04-18T07:29:35.396Z · LW(p) · GW(p)

Do you see any popular people here you'd describe as faking rationality? Do we seem to have good detectors for such behavior?

We're a pretty good test case for whether this is viable or not, after all. (Less so for somebody co-opting words, granted...)

Replies from: Viliam_Bur, David_Gerard
comment by Viliam_Bur · 2013-04-18T12:07:59.494Z · LW(p) · GW(p)

The community here is heavily centered around Eliezer. I guess if someone started promoting some kind of fake rationality here, sooner or later they would get into conflict with Eliezer, and then most likely lose the support of the community.

For another wannabe rationalist guru it would be better to start their own website, not interact with people on LW, and start recruiting somewhere else, until they have a greater user base than LW. At the moment their users notice LW, all they have to do is: 1) publish a few articles about cults and mindkilling, to prime their readers, and 2) publish a critique of LW with hyperlinks to all currently existing critical sources. The proper framing would be that LW is a fringe group which uses "rationality" as applause lights but fails horribly (insert a lot of quotations and hyperlinks here), and that discussing them is really low-status.

It would help if the new rationalist website had a more professional design, and emphasised its compatibility with mainstream science, e.g. by linking to high-status scientific institutions, and sometimes writing completely uncontroversial articles about what those institutions do. In other words, the new website should be optimized to get 100% approval of the RationalWiki community. (For someone trying to do this, becoming a trusted member of RationalWiki community could be a good starting point.)

Replies from: David_Gerard, MugaSofer
comment by David_Gerard · 2013-04-27T18:47:22.286Z · LW(p) · GW(p)

I'm busy having pretty much every function of RW come my way, in a Ponder Stibbons-like manner, so if you can tell me where the money is in this I'll see what I can come up with. (So far I've started a blog with no ads. This may not be the way to fame and fortune.)

Replies from: gwern
comment by gwern · 2013-04-27T21:18:27.357Z · LW(p) · GW(p)

The money or lack thereof doesn't matter, since RW is obviously not an implementation of Viliam's proposed strategy: it fails on the ugliness with its stock MediaWiki appearance, has too broad a remit, and like El Reg it shoots itself in the foot with its oh-so-hilarious-not! sense of humor (I dislike reading it even on pages completely unrelated to LW). It may be successful in its niche, but its niche is essentially the same niche as /r/atheism or Richard Dawkins - mockery of the enemy leavened with some facts and references.

If - purely hypothetically speaking here, of course - one wished to discredit LW by making the respective RW article as negative as possible, I would expect it to do real damage, but not to be any sort of fatal takedown that sets a mainstream tone or gives a general population its marching orders, along the lines of Shermer's 'cryonics is a scam because frozen strawberries' or Gould's Mismeasure of Man's 'IQ is racist, involved researchers like Morton faked the data because they are racist, and it caused the Holocaust too'.

comment by MugaSofer · 2013-04-23T12:46:28.900Z · LW(p) · GW(p)

It would help if the new rationalist website had a more professional design, and emphasised its compatibility with mainstream science, e.g. by linking to high-status scientific institutions, and sometimes writing completely uncontroversial articles about what those institutions do. In other words, the new website should be optimized to get 100% approval of the RationalWiki community. (For someone trying to do this, becoming a trusted member of RationalWiki community could be a good starting point.)

So ... RationalWiki, then.

comment by David_Gerard · 2013-04-27T18:12:52.450Z · LW(p) · GW(p)

Do you see any popular people here you'd describe as faking rationality? Do we seem to have good detectors for such behavior?

Accomplishment is a start. Do the claims match the observable results?

comment by private_messaging · 2013-04-27T15:38:47.510Z · LW(p) · GW(p)

Yeah, because true rationality is going to be supporting something like cryonics that you personally believe in.

comment by NancyLebovitz · 2013-07-07T22:02:28.040Z · LW(p) · GW(p)

I can't see any good general solutions. People are limited to their own judgement about whether something which purports to be selling rationality actually makes sense.

You take your chances with whether martial arts and yoga classes are useful and safe.

LW et al. does have first-mover advantage and hopefully some prestige as a result, and I'm hoping that resources for the general public will be developed here. On the other hand, taking sufficient care to develop workshops which actually work takes time - and that's for workshops aimed at people whose intelligence level is similar to that of the people putting them on.

If we assume that rationalists should win, even over fake rationalists, then maybe we should leave the possibility open that rationalists who are actually in the situation of competing with fake rationalists should be in a better position to find solutions because they'll know more than we do now.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-08T08:52:16.143Z · LW(p) · GW(p)

I also don't have a solution, besides reminding the rationalists that we run on corrupted hardware, and that the strong feeling of "these people around me are idiots, I could do it a hundred times better" is an evolutionary adaptation for situations when there are many resources and no significant external enemy. (And by the way, this could explain a lot of the individualism our society has these days.) We had a few people here who got offended e.g. by Eliezer's certainty about quantum physics, and tried to split, and failed.

So perhaps the risk is actually small. Fake rationalists may be prone to self-sabotage. The proverbial valley of bad rationality surrounding the castle of rationality can make being a half-rationalist even worse than being a non-rationalist. So the rationalists may have a hard time fighting pure superstition, but the half-rationalists will just conveniently destroy themselves.

The first-mover advantage works best if all players are using the same strategy. But sometimes a new player can learn from older players' mistakes, and does not have to pay the costs. (Google wasn't the first search engine; Facebook wasn't the first social network; MS Windows wasn't the first operating system with a graphical interface.) The second player could learn from LW's bad PR. But it is likely that being completely irrational would be even more profitable for them, if profit were the main goal.

comment by OrphanWilde · 2013-04-17T14:49:04.508Z · LW(p) · GW(p)

Does anybody on here use at-home EEG monitors? (Something like http://www.emotiv.com/store/hardware/epoc-bci-eeg/developer-neuroheadset/ although that one looks rather expensive)

If you do, do you get any utility out of them?

Replies from: gwern, Emile
comment by gwern · 2013-04-17T17:45:05.631Z · LW(p) · GW(p)

SDr actually gave me his research-edition Emotiv EPOC, but... I haven't actually gotten around to using it because I've been busy with things like Coursera and statistics. So, eventually! Hopefully.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-17T19:27:40.280Z · LW(p) · GW(p)

Hm. Do you know of any resources on how to use EEG information to improve your thought processes?

(I'm highly tempted to put some of my tax return toward trying it out; partially for improvement purposes, partially because I'm curious how much is going on inside my mind that I'm unaware of.)

Replies from: gwern, Zaine
comment by gwern · 2013-04-17T20:28:29.348Z · LW(p) · GW(p)

Do you know of any resources on how to use EEG information to improve your thought processes?

Anything labeled 'neurofeedback' seems like a good place to start. I presently have few ideas about how to use it, aside from seeing if it's a good way to quantify meditation quality and hence have more direction in meditation than random books and 'well, it seems to be helping a little'.

comment by Zaine · 2013-04-20T20:51:16.053Z · LW(p) · GW(p)

EEG machines measure the summed electrical activity of neurons in the cortex. Roughly, the higher the dominant frequency of the output, the more asynchronous the underlying firing and thus the more active the brain. Learning how to read EEG output requires training, but there might be computer programs for that. To use the machine effectively, identify an activity for which you'd like to measure your brain waves, exempli gratia:

  • Measure degrees of neuronal firing asynchrony during work periods (pomodoros) - useful for calibrating an accurate feeling of focus.
  • Measure the success of meditation (gamma wave output), as gwern noted.
  • Measure which break activities actually induce a relaxed state.
  • And of course, check quality of sleep.
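To make that concrete: a minimal sketch (assuming numpy, with a synthetic stand-in for the raw samples) of the kind of band-power summary one would compute from an EEG channel to track, say, gamma during meditation or alpha during breaks.

```python
import numpy as np

def band_power(signal, rate, low, high):
    """Mean spectral power of `signal` between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

rate = 256  # samples per second, typical of consumer EEG headsets
t = np.arange(rate * 10) / rate
# Synthetic one-channel stand-in: 10 Hz alpha + 40 Hz gamma + noise.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.sin(2 * np.pi * 40 * t)
       + 0.5 * np.random.randn(len(t)))

for name, lo_hz, hi_hz in [("alpha", 8, 12), ("beta", 12, 30),
                           ("gamma", 30, 100)]:
    print(name, band_power(eeg, rate, lo_hz, hi_hz))
```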

comment by Emile · 2013-04-17T21:41:17.568Z · LW(p) · GW(p)

I'm pretty curious about those and have considered buying one, but didn't really think it worthwhile - I tried one and was not very impressed, though if I have a lot of time I might take a stab at it.

comment by MileyCyrus · 2013-04-16T04:57:19.270Z · LW(p) · GW(p)

Cal Newport and Scott H. Young are collaborating to start a deliberate practice course by email. Here's an excerpt from one of Cal's emails to inquiring people:

The goal of the course is simple: to teach you how to apply the principles of deliberate practice to become a stand out in your job.

Why is this important? The Career Capital Theory I teach in my latest book and on Study Hacks maintains that the skills that make you remarkable are also your leverage for taking control of your working life, and transforming it into a source of passion.

The goal for Scott and I in offering a limited pilot run of the course at this point, is to get feedback from real people in real jobs. Adapting deliberate practice to knowledge work is difficult. We think experiments of this type are the only way to keep advancing our understanding.

The course lasts four weeks and is e-mail based. During each week you will receive three e-mails concluding with a concrete action step to help you solidify what you learned and start applying it to your life immediately.

Here is the curriculum:
Week One: Mapping out How Success Actually Works in Your Field
Week Two: Hard Facts, Driving Your Career by Metrics
Week Three: Designing and Choosing Projects to Build Skills Faster
Week Four: Enabling Deep Work

Does this sound like it's worth $100?

Replies from: DaFranker
comment by DaFranker · 2013-04-16T14:32:00.370Z · LW(p) · GW(p)

Errh

On an uncharitable reading, this sounds like two wide-eyed broscientist prophets who found The One Right Way To Have A Successful Career (because by doing this their career got successful, of course), and are now preaching The Good Word by running an uncontrolled, unblinded experiment for which you pay $100 just to be one of the lucky test subjects.

Note that this is from someone who's never heard of "Cal Newport" or "Scott H. Young" before now, or perhaps just doesn't recognize the names. The facts that they've sold popular books with "get better" in the description and that they are socially recognized as scientists are rather impressive, but don't substantially raise my prior that this works.

So if you've already tried some of their advice in enough quantity that your updated belief that any given piece of advice from them will work is high and stable enough, this seems more than worth $100.

Just the possible monetary benefits probably outweigh the upfront costs if it works, and even without that, the VoI and RoI here might be quite high; depending on one's career situation, this might need only a 30% to 50% probability of being useful to be worth the time and money.

Replies from: MileyCyrus
comment by MileyCyrus · 2013-04-16T16:55:33.077Z · LW(p) · GW(p)

Note that this is from someone who's never heard of "Cal Newport" or "Scott H. Young" before now, or perhaps just doesn't recognize the names.

They seem to get more respect on LW than average career advice bloggers, so I was hoping someone who was familiar would comment. Nonetheless, I'm upvoting you because it's good to hear an outsider's opinion.

comment by Dorikka · 2013-04-15T18:33:06.393Z · LW(p) · GW(p)

I think that the open thread belongs in Discussion, not Main.

Replies from: David_Gerard
comment by David_Gerard · 2013-04-15T18:44:54.356Z · LW(p) · GW(p)

It usually goes there, yes - presumably it was put in Main in error.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-15T20:00:13.127Z · LW(p) · GW(p)

Before posting I checked two other open threads, including the last one, and when the link is open it shows "Main" shaded dark at the top of them.

http://lesswrong.com/lw/h3w/open_thread_april_115_2013/

For the time being I switched it to discussion.

Replies from: Vaniver
comment by Vaniver · 2013-04-15T20:05:30.351Z · LW(p) · GW(p)

Before posting I checked two other open threads, including the last one, and when the link is open it shows "Main" shaded dark at the top of them.

Unfortunately, this is not an indicator that the post is actually in Main.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-15T20:07:32.056Z · LW(p) · GW(p)

How bizarre. :)

Replies from: lsparrish
comment by lsparrish · 2013-04-15T22:23:18.601Z · LW(p) · GW(p)

Originally, they were generally in Main, since Discussion was just for posts that needed cleanup work. Eventually this was changed, though, and we usually keep open threads in Discussion these days.

comment by fubarobfusco · 2013-04-30T06:27:03.593Z · LW(p) · GW(p)

In a few places — possibly here! — I've recently seen people refer to governments as being agents, in an economic or optimizing sense. But when I reflect on the idea that humans are only kinda-sorta agents, it seems obvious to me that organizations generally are not. (And governments are a sort of organization.)

People often refer to governments, political parties, charities, or corporations as having goals ... and even as having specific goals which are written down here in this constitution, party platform, or mission statement. They express dismay and outrage when these organizations act in ways that contradict or ignore those stated goals.

Does this really make sense?

It seems to me that just as the art or science of acting like you have goals is "instrumental rationality", it may be that the art or science of causing organizations to act like they have goals is called "management".

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-30T06:44:09.269Z · LW(p) · GW(p)

What do you mean by "agent" here?

Replies from: fubarobfusco
comment by fubarobfusco · 2013-04-30T07:09:34.971Z · LW(p) · GW(p)

"Entity that acts like it has goals." If someone says, "The Democratic Party wants to protect the environment" or "The Republican Party wants to lower the national debt," they are attributing goals to an organization.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-30T07:43:28.477Z · LW(p) · GW(p)

Can you give an example of something that's not an agent?

comment by blashimov · 2013-04-29T06:24:36.637Z · LW(p) · GW(p)

[link] XKCD on saving time: http://xkcd.com/1205/ Though it will probably be mostly unseen, as the month is about to end.

comment by Tenoke · 2013-04-27T10:25:50.909Z · LW(p) · GW(p)

I encountered this cute summary of priming findings and thought you guys might like it too:

You are walking into a room. There is a man sitting behind a table. You sit down across from him. The man sits higher than you, which makes you feel relatively powerless. But he gives you a mug of hot coffee. The warm mug makes you like the man a little more. You warm to him so to speak. He asks you about your relationship with your significant other. You lean on the table. It is wobbly, so you say that your relationship is very stable. You take a sip from the coffee. It is bitter. Now you think the man is a jerk for having asked you about your personal life. Then the man hands you the test. It is attached to a heavy clipboard, which makes you think the test is important. You’re probably not going to do well, because the cover sheet is red. But wait—what a relief!—on the first page is a picture of Einstein! Now you are going to ace the test. If only there wasn’t that lingering smell of the cleaning fluid that was used to sanitize the room. It makes you want to clean the crumbs, which must have been left by a previous test-taker, from the tabletop. You need to focus. Fortunately, there is a ray of sunlight coming through the window. It leaves a bright spot on the floor. At last you can concentrate on the test. The final question of the test asks you to form a sentence that includes the words gray, Florida, bingo, and pension. You leave the room, walking slowly…

comment by A1987dM (army1987) · 2013-04-25T18:32:20.082Z · LW(p) · GW(p)

How do you people pronounce MIRI? To rhyme with Siri?

Replies from: lukeprog, MugaSofer
comment by lukeprog · 2013-04-25T20:26:55.577Z · LW(p) · GW(p)

yes

comment by MugaSofer · 2013-04-28T21:16:32.792Z · LW(p) · GW(p)

It's from a Star Trek episode, so you can probably find it spoken online somewhere.

comment by TimS · 2013-04-25T01:12:08.556Z · LW(p) · GW(p)

Amanda Knox and evolutionary psychology - two of LessWrong's favorite topics, together in one news article / opinion piece.

The author explains the anti-Knox reaction as essentially a spandrel of an ev. psych reaction. Money quote:

In our evolutionary past, small groups of hunter-gatherers needed enforcers, individuals who took it upon themselves to punish slackers and transgressors to maintain group cohesion. We evolved this way. As a result, some people are born to be punishers. They are hard-wired for it.

I'm skeptical of the ev. psych explanation because it seems to require a fairly strong form of group selection pressure. But I thought folks might find it interesting.

Replies from: komponisto
comment by komponisto · 2013-04-25T01:54:37.959Z · LW(p) · GW(p)

The phenomenon of altruistic punishment itself is apparently not just a matter of speculation. Another quote from Preston's piece:

Experiments show that when some people punish others, the reward part of their brain lights up like a Christmas tree. It turns out we humans avidly engage in something anthropologists call “altruistic punishment.”

He links to this PNAS paper, which uses a computer simulation to model the evolution of altruistic punishment. (I haven't looked at it in detail.)

Whatever the explanation for their behavior (and it really cries out for one), the anti-Knox people are truly disturbing, and their existence has taught me some very unpleasant but important lessons about Homo sapiens.

(EDIT: One of them, incidentally, is a mathematician who has written a book about the misuse of mathematics in trials -- one of whose chapters argues, in a highly misleading and even disingenuous manner, that the acquittal of Knox and Sollecito represents such an instance.)

Replies from: TimS
comment by TimS · 2013-04-25T02:20:48.423Z · LW(p) · GW(p)

Skimming the PNAS paper, the conclusion appears to be that evolved group cooperation is not mathematically stable without evolved altruistic punishment: populations with only evolved cooperation drift towards populations without any group-focused evolved traits, but altruistic punishment excludes enough defectors that evolved cooperation maintains its frequency in the population.

Which makes sense, but I'm nowhere close to qualified to judge the quality of the paper or its implications for evolutionary theory.
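For intuition, here's a toy payoff check of that conclusion (the game and all the numbers are my own illustrative assumptions, not the paper's agent-based model):

    # Public-goods game with cooperators (C), defectors (D) and
    # punishers (P), who cooperate and also fine defectors.
    b, c = 3.0, 1.0   # public-good benefit per contributor share, cost of contributing
    f, k = 3.0, 1.0   # expected fine per punisher met, cost per defector punished

    def payoffs(xc, xd, xp):
        """Expected payoffs (C, D, P) at population shares xc + xd + xp = 1."""
        good = b * (xc + xp)           # everyone receives the public good
        return (good - c,              # C pays the contribution cost
                good - f * xp,         # D free-rides but gets fined
                good - c - k * xd)     # P contributes and pays to punish

    # Without punishers, defection strictly dominates, so cooperation drifts away:
    print(payoffs(0.5, 0.5, 0.0))   # D earns 1.5 vs C's 0.5

    # With punishers common enough (xp > c/f), defection becomes the losing move:
    print(payoffs(0.3, 0.2, 0.5))   # D earns 0.9 vs C's 1.4

Note that even in the second case punishers earn less than plain cooperators (1.2 vs 1.4); explaining why punishment itself doesn't erode away in turn is exactly the work the paper's group-level simulation has to do.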

comment by skeptical_lurker · 2013-04-19T21:15:51.221Z · LW(p) · GW(p)

I am aware that there have been several discussions about the extent to which x-rationality translates to actual improved outcomes, at least outside of certain very hard problems like metaethics. It seems to me that one of the best ways to translate epistemic rationality directly into actual utility is through financial investment/speculation, and so this would be a good subject for discussion (I assume it probably has been discussed before, but I've read most of this website and cannot remember any in-depth thread about this, except for the mention of markets being at least partially anti-inductive).

Partially the reason for my writing this is that I have been reading about neuroeconomics and doing some academic research of my own (as in actually running experiments), and I am shocked by how near-universal the irrational behavior on display is (and therefore, how exploitable it is by more rational agents). Even professional traders' behavior is swayed by things like fluctuating testosterone levels. (Not that I know how to compensate for this!)

On a related note I've also been thinking about:

1) Applications for machine learning/narrow AI to finance.

2) Economic irrationality invalidating libertarian free-market ideas, and possibly libertarianism in general, seeing as personal decisions can often be conceptualized economically. (I should point out that libertarianism used to appeal to me, and I find this line of reasoning mildly disturbing.)

3) Gender relations: the possibility that men are on average better at maths than women has been discussed here, so discussion of the possibility that women are generally better at finance (see link above) could be beneficial, both in the context of pointing out opportunities to female rationalists, and to help dispel any appearance of misogyny this community may have.

Again, I can't remember these being discussed here, and (1) seems very relevant to this community, although (2) is probably mind-killing and not very productive, unless any of us actually have the power to influence politics.

Apologies if this all has been already discussed in-depth somewhere.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-04-20T09:43:45.197Z · LW(p) · GW(p)

Even professional traders behavior is swayed by things like fluctuating testosterone levels. (Not that I know how to compensate for this!)

Bring some women to the team. (Yeah, that just changes the problem to a harder one: Where to find enough women rationalists interested in finance?) Or have multiple men on the team, and let them decide through some kind of voting. This would work only if their testosterone level fluctuations are uncorrelated. You could do some things to prevent that, e.g. forbid them to meet in person, and make their coordination as impersonal as possible, to prevent them from making each other angry.

This sounds like a huge complication to compensate for a single source of bias, so it needs some measurement. If this could help the team make millions, perhaps it is worth doing.

Economic irrationality invalidating the libertarian free-market ideas

Maybe irrationality could be modelled as just another cost of participating in the market. There are many kinds of costs which one has to pay to participate in the market. You pay for advertising, for transferring goods from the place they are produced to the customer, etc. Your own body must be fed and clothed. Irrationality is a cost of using your brain.

If you would transfer your cargo by a ship, especially a few centuries ago, you would have to accept that some part of your ships will sink. And yet, you could make a profit, on average. Similarly, if you use human brain to plan your business, you have to accept that some part of your plans will fail. The profit can still be possible, on average.

Replies from: NancyLebovitz, wedrifid
comment by NancyLebovitz · 2013-04-20T09:58:20.094Z · LW(p) · GW(p)

This is just from memory, but I think testosterone levels aren't (just?) about anger. Again from memory, testosterone goes up from winning, so the problem is overconfidence from previous victories.

Replies from: gwern
comment by wedrifid · 2013-04-25T04:41:58.096Z · LW(p) · GW(p)

Bring some women to the team. (Yeah, that just changes the problem to a harder one: Where to find enough women rationalists interested in finance?)

I'm afraid that is the opposite of a solution to this particular problem. Even neglecting the fluctuation in women's testosterone levels, and considering only the stereotypical androgenic behaviour of the males, this can be expected to (if anything) increase the risk-taking behaviours of the kind warned against here. Adding females to an already aggressive male group gives them prospective mates to show off to. The linked-to article mentions observations of this.

(There may be other reasons to bring more women onto your professional trading team. Just not this one.)

comment by lsparrish · 2013-04-16T20:54:23.427Z · LW(p) · GW(p)

I wonder if many people are putting off buying a bitcoin to hang onto, due more to trivial inconvenience than calculation of expected value. There's a bit of work involved in buying bitcoins, either getting your funds into mtgox or finding someone willing to accept paypal/other convenient internet money sources.

Replies from: Qiaochu_Yuan, RomeoStevens, wedrifid, lsparrish
comment by Qiaochu_Yuan · 2013-04-17T07:01:52.305Z · LW(p) · GW(p)

What if we're putting off buying a bitcoin because we, uh, don't want to?

Replies from: lsparrish
comment by lsparrish · 2013-04-17T19:25:31.696Z · LW(p) · GW(p)

Ok... Well... If that's the case, and if you can tell me why you feel that way, I might have a response that would modify your preference. Then again, your reasoning might modify my own preference. Cryptic non-argument isn't particularly interesting, or helpful for coming to an Aumann Agreement.

Edit: Here is my response.

Replies from: Qiaochu_Yuan, Kaj_Sotala
comment by Qiaochu_Yuan · 2013-04-18T04:50:17.442Z · LW(p) · GW(p)

1) I am not at all convinced that investing in bitcoins is positive expected value, 2) they seem high-variance and I'm wary about increasing the variance of my money too much, 3) I am not a domain expert in finance and would strongly prefer to learn more about finance in general before making investment decisions of any kind, and 4) your initial comment rubbed me the wrong way because it took as a standing assumption that bitcoins are obviously a sensible investment and didn't take into account the possibility that this isn't a universally shared opinion. (Your initial follow-up comment read to me like "okay, then you're obviously an idiot," and that also rubbed me the wrong way.)

If the bitcoin situation is so clear to you, I would appreciate a Discussion post making the case for bitcoin investment in more detail.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-04-18T09:50:13.072Z · LW(p) · GW(p)

Regulatory uncertainty swamps any quantitative analysis, I think.

comment by Kaj_Sotala · 2013-04-18T05:01:45.433Z · LW(p) · GW(p)

The standard advice is that normal people should never try to beat the market by picking any single investment, but rather put their money in index funds. The best publicly available information is already considered to be reflected in the current prices: if you recommend buying a particular investment, that implies that you have knowledge that the best traders currently on the market do not have. As a friend commented:

The only rational reasons to hold a highly volatile, speculative investment are either if you have a huge risk preference (and with bitcoin we're talking about crack users) or if it's a really small share of your investments, of which the majority are really low-risk investments.

So if you think that people should be buying Bitcoins, it's up to you to explain why the standard wisdom on investment is wrong in this case.

(For what it's worth, personally I do own Bitcoins, but I view it as a form of geek gambling, not investment. It's fun watching your coins lose 60% in value and go up 40% from that, all within a matter of a few days.)

Replies from: RomeoStevens
comment by RomeoStevens · 2013-04-18T09:49:14.861Z · LW(p) · GW(p)

Bitcoins are more like investing in a startup. The plausible scenarios for bitcoins netting you a return commensurate with the risk involve it disrupting several 100 billion+ markets (PayPal, Western Union). I think investing in startups that have plausible paths towards such disruptions is worthy of a small portion of your portfolio.

comment by RomeoStevens · 2013-04-16T23:42:50.365Z · LW(p) · GW(p)

It should be significantly better on May 6th, presuming the CoinLab / Silicon Valley Bank / MtGox stuff goes live.

comment by wedrifid · 2013-04-16T23:06:04.964Z · LW(p) · GW(p)

I wonder if many people are putting off buying a bitcoin to hang onto, due more to trivial inconvenience than calculation of expected value. There's a bit of work involved in buying bitcoins, either getting your funds into mtgox or finding someone willing to accept paypal/other convenient internet money sources.

At the level of buying just one bitcoin, the inconvenience is more than trivial. Even just the financial burden of the bank transfers changes the expected value calculation quite a bit (although the cost seems to be coming down somewhat).

comment by lsparrish · 2013-04-16T21:50:03.832Z · LW(p) · GW(p)

In case anyone has difficulty with the convenience factor:

I have four bitcoins that I bought for about 100 USD. Currently MtGox is at under 80 USD. As long as it remains at or below that rate, I am willing to sell these at 100 USD via paypal. It's not a great price but it is much more convenient to pay by paypal than the other methods available to buy bitcoins.

  • Only lesswrongers with decent karma and posting history qualify.
  • The intended purpose is for you to hold them long-term against the off chance that it becomes a mainstream currency in the long term. Please hold them for at least a year.
  • They can be converted to paper wallet form, which I will do for you if you opt to trust me (because, as far as I know, this implies I would have access unless/until I delete the wallet) and send it to you by snail-mail.
  • I will most likely be using the proceeds to buy more via IRC.

PM if interested.

comment by kgalias · 2013-04-23T20:05:09.823Z · LW(p) · GW(p)

Request for a textbook (or similar) follow-up to The Selfish Gene and/or The Moral Animal. Preferably with some math, but it's not necessary.

Replies from: beoShaffer
comment by beoShaffer · 2013-04-23T22:31:53.206Z · LW(p) · GW(p)

Buss's Evolutionary Psychology is good if you are specifically looking for the evolutionary psychology element; I'm not so sure about general evolutionary biology books. Also, we have a dedicated textbook thread.

Replies from: kgalias
comment by kgalias · 2013-04-24T20:47:21.503Z · LW(p) · GW(p)

Thanks. I'm aware of that thread, but sadly there's not much there related to evolution (though I did rediscover that Khan Academy has some stuff). Is there any merit to this criticism? http://www.amazon.com/review/R3NG0J7T66E9N4/ref=cm_cr_pr_viewpnt#R3NG0J7T66E9N4

comment by Shmi (shminux) · 2013-04-23T14:49:28.845Z · LW(p) · GW(p)

I could swear Zach Weiner reads this forum.

Replies from: gwern
comment by gwern · 2013-04-23T16:37:23.801Z · LW(p) · GW(p)

He's been asked before and denied it, IIRC.

Replies from: shminux
comment by Shmi (shminux) · 2013-04-23T17:11:24.541Z · LW(p) · GW(p)

I guess he is just a natural genius. Nietzsche would have looked up to him.

Replies from: gwern
comment by gwern · 2013-04-23T17:19:57.561Z · LW(p) · GW(p)

Or he's just channeling regular skeptic/geek/transhumanist memes from fiction etc. Manipulative evil AIs? Well, that's like every other Hollywood movie with an AI in it...

Replies from: shminux
comment by Shmi (shminux) · 2013-04-23T17:31:56.444Z · LW(p) · GW(p)

I did not mean just this one strip, but what he draws/writes on philosophy, religion, metaethics and transhumanism in general.

comment by Richard_Kennaway · 2013-04-22T10:55:23.022Z · LW(p) · GW(p)

I have noticed an inconsistency between the number of comments actually present on a post and the number declared at the beginning of its comments section, the former often being one less than the latter.

For example, of the seven discussion posts starting at "Pascal's wager" and working back, the "Pascal's wager" post at the moment has 10 comments and says there are 10, but the previous six all show a count one more than the actual number of visible comments. Two of them say there is 1 comment, yet there are no comments and the text "There doesn't seem to be anything here" appears. These are meetup announcements that I would not expect anyone to be posting banworthy comments to.

There is no sign of comments having been deleted or banned, and even if something of the sort is what happened, I would expect the comment count displayed on a page to agree with the number of accessible comments.

On the Discussion page itself, the comment count displayed for each post agrees with the comment count displayed within the post.

Replies from: pragmatist
comment by pragmatist · 2013-04-22T11:01:06.392Z · LW(p) · GW(p)

A short while ago, spam comments in French were posted to a bunch of discussion threads. All of these were deleted. I'm guessing this discrepancy is a consequence of that.

comment by [deleted] · 2013-04-16T19:59:55.367Z · LW(p) · GW(p)

This has most likely been mentioned in various places, but is it possible to make new meetup posts (via the "Add new meetup" button) show up only under "Nearest Meetups", and not in Discussion? Also, renaming the link to "Upcoming Meetups" to match the title on that page, and listing more than two - perhaps a rolling schedule of the next 7 days.

comment by latanius · 2013-04-16T01:37:21.932Z · LW(p) · GW(p)

Is there a nice way of being notified about new comments on posts I found interesting / commented on / etc? I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.

... or a "number of green posts" indicator near the post titles when listing them? (I know it's a) takes someone to code it b) my gut feeling is that it would take a little more than usual resources, but maybe someone knows of an easier way of the same effect.)

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2013-04-16T11:22:35.669Z · LW(p) · GW(p)

I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.

I don't quite see what you mean here. Do you know that each post has its own comments RSS feed?

Replies from: latanius
comment by latanius · 2013-04-19T01:19:53.761Z · LW(p) · GW(p)

... this is the thing I've been looking for! (I think I had some strange cached thought from who knows where that posts do not have comments feeds, so I didn't even check... thanks for the update!)

comment by asparisi · 2013-04-15T23:46:33.931Z · LW(p) · GW(p)

Is there anyone going to the April CFAR Workshop that could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't I'll ask that separately.)

Replies from: kenzi
comment by kenzi · 2013-04-16T01:36:39.712Z · LW(p) · GW(p)

Hey; we (CFAR) are actually going to be running shuttles from SFO Thursday evening, since the public transit time / drive time ratio is so high for the April venue. So we'll be happy to come pick you up, assuming you're willing to hang out at the airport for up to ~45 min after you get in. Feel free to ping me over email if you want to confirm details.

comment by Omid · 2013-07-07T17:19:20.036Z · LW(p) · GW(p)

Who is the best pro-feminist blogger still active? In the past I enjoyed reading Ozy Frantz, Clarisse Thorn, Julia Wise and Yvain, but none of them post regularly anymore. Who's left?

Replies from: shminux, Alicorn
comment by Shmi (shminux) · 2013-07-07T17:29:59.616Z · LW(p) · GW(p)

Yvain still posts regularly (Google "Slate Star Codex"), but he is not pro-feminist; he is anti-bias.

Replies from: Omid
comment by Omid · 2013-07-07T17:56:36.538Z · LW(p) · GW(p)

He's slowing down and shifting focus, which makes him an unreliable source for rigorous defenses of feminism.

comment by Alicorn · 2013-07-07T20:36:02.686Z · LW(p) · GW(p)

If you liked Ozy, you might like Pervocracy too.

comment by Viliam_Bur · 2013-04-30T14:00:24.072Z · LW(p) · GW(p)

Is there a secret URL to display the oldest LW posts?

comment by lukeprog · 2013-04-25T20:31:17.173Z · LW(p) · GW(p)

I wrote something on Facebook recently that may interest people, so I'll cross-post it here.

Cem Sertoglu of Earlybird Venture Capital asked me: "Will traders be able to look at their algorithms, and adjust them to prevent what happened yesterday from recurring?"

My reply was:

I wouldn't be surprised if traders will be able to update their algorithms so that this particular problem doesn't re-occur, but traders have very little incentive to write their algorithms such that those algorithms would be significantly more robust in general. The approaches they use now are intrinsically not as transparent as (e.g.) logic-based approaches to software agent design, but they are more immediately profitable than logic-based approaches.

Wall Street has tried before to update its systems to be more robust, but their "band-aids" approach won't be sufficient. For example: in response to the flash crash of 2010, regulators installed a kind of "circuit breaker" that halts trading when there are extreme changes in a stock's price. Unfortunately, this did not prevent high-frequency trading programs from disrupting markets again on August 1st, 2012, in part because the circuit breaker wasn't also programmed to halt trading if there were extreme changes in the number of shares being traded (see: http://is.gd/vBqf53).

We can design multi-agent ecosystems using only logic-based agents that are (in some cases) subject to "formal verification" (mathematical proof of correct operation). See, for example, http://is.gd/XgRJYn. But these approaches haven't seen nearly as much development as the approaches currently in use on Wall Street, because they are not as immediately profitable.

Only regulators could have sufficient incentive to implement a more trustworthy ecosystem of high-frequency trading programs, but they succumbed to regulatory capture long ago, and therefore won't do anything so drastic.

I'm not too worried about the next 5 years, though. Mostly it will just be momentary scares, like the flash crash and the recent fake tweet disruption. I'm more worried about the far more powerful autonomous programs of the future, and those programs are the focus of our research at MIRI.
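As an aside on the circuit-breaker paragraph above, the band-aid problem is easy to see in code. A toy sketch (all thresholds and names are invented, not the actual regulatory rules):

    from dataclasses import dataclass

    @dataclass
    class Tick:
        price_move_pct: float  # % price change over the monitoring window
        volume_ratio: float    # shares traded relative to trailing average

    def price_only_breaker(tick: Tick, limit: float = 10.0) -> bool:
        """The 2010-style fix: halt only on extreme price moves."""
        return abs(tick.price_move_pct) > limit

    def price_and_volume_breaker(tick: Tick, price_limit: float = 10.0,
                                 volume_limit: float = 5.0) -> bool:
        """Also halt when volume explodes while prices still look calm."""
        return (abs(tick.price_move_pct) > price_limit
                or tick.volume_ratio > volume_limit)

    # An August-2012-flavored tick: modest price moves, runaway volume.
    tick = Tick(price_move_pct=2.0, volume_ratio=30.0)
    print(price_only_breaker(tick))        # False: this breaker never fires
    print(price_and_volume_breaker(tick))  # True: the extra condition catches it

Each patch covers only the failure mode someone has already thought of, which is Luke's point about band-aids versus verified designs.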

comment by CAE_Jones · 2013-04-25T11:47:43.930Z · LW(p) · GW(p)

Considering making my livejournal into something resembling the rationality diaries (I'd keep the horrible rambling/stupid posts for honesty/archival purposes). I can't tell if this is a good idea or not; the probability that it'd end like everything else I do (quietly stewing where only I bother going) seems absurdly high. On the other hand, trying to draw this kind of attention to it and adding structure would probably help spawn success spirals. Perhaps I should try posting on a schedule (Sunday/Tuesday/Thursday seems good, since weekends typically suck and probably will motivate me to post, but holding off on that until Monday could keep me in a negative mindset that could delay rebounding). I suppose I'll have an answer (to the question that no one asked) by Sunday, then, unless someone convinces me one way or the other before then.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-04-28T21:07:56.521Z · LW(p) · GW(p)

I think I'm going with something less structured, but will gear it more toward rationality techniques, past and present, etc., and will post more often (the three a week mentioned in the parent is what I'll be shooting for). (Previously, I mostly just used livejournal as a dumping ground for particularly unhappy days, hence all the stupid from before April 2013.) I was also encouraged by the idea of web serial novels, and may or may not try to make it wind up looking like such a thing, somehow.

comment by Document · 2013-04-24T23:07:35.116Z · LW(p) · GW(p)

I started browsing under Google Chrome for Android on a tablet recently. Since there's no tablet equivalent of mouse hovering, to see where a link points without opening it I have to press and hold on it. For off-site links in posts and comments, though, LW passes them through api.viglink.com, so I can't see the original URL through press-and-hold. Is there a way to turn that function off, or an Android-compatible browser plugin to reverse it?

(Edit: Posted and discussed here.)

comment by Pablo (Pablo_Stafforini) · 2013-04-23T21:49:37.044Z · LW(p) · GW(p)

Some folks here might want to know that the Center for Effective Altruism is recruiting for a Finance & Fundraising Manager:

Would you like to gain experience in non-profit operations by working for the Centre for Effective Altruism, a young and rapidly expanding charity based in Oxford? If so, we encourage you to apply to join our Graduate Volunteer Scheme as Finance and Fundraising Manager

comment by Shmi (shminux) · 2013-04-22T19:25:11.206Z · LW(p) · GW(p)

I've always felt that Atlas Shrugged was mostly an annoying ad nauseam attack on the same strawman over and over, but given the recent critique of Google, Amazon and others working to minimize their tax payments, I may have underestimated human idiocy:

the Public Accounts Committee, whose general verdict was that while companies weren't doing anything legally wrong when they shifted profits around the world to lower their total tax bill, the practice was "immoral".

On the other hand, these are people wearing their MP hats; they probably sing a different tune as board members. Or maybe Britain is overdue for another Thatcher.

To quote (apparently) Arthur Godfrey,

I'm proud to pay taxes in the United States; the only thing is, I could be just as proud for half the money.

comment by Paul Crowley (ciphergoth) · 2013-04-22T06:07:31.708Z · LW(p) · GW(p)

What happened to that article on cold fusion? Did the author delete it?

Replies from: ahbwramc
comment by ahbwramc · 2013-04-23T15:05:35.416Z · LW(p) · GW(p)

No, I didn't delete it. It went down to -3 karma, which apparently hides it on the discussion page. That's how I'm assuming it works anyway, given that it reappeared as soon as it went back up to -2. Incidentally, it now seems to be attracting random cold fusion "enthusiasts" from the greater internet, which was not my intention.

Replies from: TimS
comment by TimS · 2013-04-23T15:33:16.934Z · LW(p) · GW(p)

The hide / not hide threshold can be set individually by clicking Preferences next to one's name. I think you are seeing the result for the default settings - I changed mine a while ago and don't remember what the default is.

Replies from: ahbwramc
comment by ahbwramc · 2013-04-23T15:45:13.763Z · LW(p) · GW(p)

Thanks!

comment by diegocaleiro · 2013-04-22T00:03:29.975Z · LW(p) · GW(p)

Is there any way to see authors ranked by h-index? Google Scholar seems not to have that functionality, and online lists only exist for some topics...

Lewis, Dennett and Pinker, for instance, have nearly the same h-index.

Ed Witten's is much larger than Stephen Hawking's, etc.

If you know where to find listings of top h-indexes, please let me know!

Replies from: ahbwramc
comment by ahbwramc · 2013-04-23T16:03:36.088Z · LW(p) · GW(p)

Depends: do you work at a university or research institution, or have access to one? The scientific database Web of Science has an author search function, and it can give you a full citation report for any scientist in the database with a bunch of useful info, including h-index.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-24T18:19:08.193Z · LW(p) · GW(p)

That's not quite what I want: I want a list ordered by h-index. But thanks for letting me know.

What I want is, say, a Top 100 h-indexes of all time, or the top 100 within math, biology, analytic philosophy, etc.

comment by Jayson_Virissimo · 2013-04-19T23:17:53.928Z · LW(p) · GW(p)

Art Carden, guest blogger at EconLog, advocates Bayes' theorem as a strategy for maintaining serenity here.

comment by TeaTower · 2013-04-19T04:25:56.364Z · LW(p) · GW(p)

I remember seeing a post (or more than one?) where Yudkowsky exhorts smart people (e.g. hedge fund managers) to conquer mismanaged countries, but I can't find it by googling.

Does anyone have a link?

Replies from: ModusPonies
comment by ModusPonies · 2013-04-19T14:01:47.829Z · LW(p) · GW(p)

Then again, in the Muggle world, all of the extremely intelligent people Harry knew about from history had not become evil dictators or terrorists. The closest thing to that in the Muggle world was hedge-fund managers, and none of them had tried to take over so much as a third-world country, a point which put upper bounds on both their possible evil and possible goodness.

HPMoR Chapter 86

If you had something more specific in mind, I can't recall it offhand.

Replies from: TeaTower
comment by TeaTower · 2013-04-19T14:14:45.133Z · LW(p) · GW(p)

That's the one. I misremembered and thought it was an LW post.

comment by A113 · 2013-04-18T17:45:00.321Z · LW(p) · GW(p)

I heard a speaker claim that the frequency of names in the Gospels matches the list of most popular names in the time and place they are set, not the time and place they are accepted to have been written in. I hadn't heard this argument before and couldn't think of a refutation. Assuming his facts are accurate, is this a problem?

Replies from: gwern
comment by gwern · 2013-04-18T18:19:02.512Z · LW(p) · GW(p)

Assuming his facts are accurate, is this a problem?

A problem for what? It's not much evidence for a historical-realist-literalist viewpoint, because the usual mythicist or less-literal theories generally believe that the original stories would have gotten started around the time they are set in, and so could be expected to mimic the name distribution of the setting, and keep the mimicry (while warping and evolving in many other ways) until such time as they were compiled by a scribe and set down in textual form.

Few think the Gospels were made up out of whole cloth in 300 AD, so verisimilitude (names matching the 30s AD) is surprising evidence only against that whole-cloth theory. Generally, both believers and mythicists think some stories and myths and sayings and parables got started in the 30s+ AD and were passed down and eventually written down, possibly generations later, at various points like the 90s AD; what they disagree on is how much the oral transmission and disciples affected things and what the origin was.

comment by Metus · 2013-04-18T14:06:19.852Z · LW(p) · GW(p)

Toying around with the Kelly criterion, I get that the amount I should spend on insurance increases with my income, though my intuition says that the higher your income is, the less you should insure. Can someone less confused about the Kelly criterion provide some kind of calculation?

For anyone asking: I wondered, given income and savings rate, how much should be invested in bonds, stocks, etc., and how much put into insurance, e.g. health, fire, car, etc., from a purely monetary perspective.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-04-19T19:13:56.388Z · LW(p) · GW(p)

The Kelly criterion returns a fraction of your bankroll; it follows that for any (positive-expected-value) bet whatsoever, it will advise you to increase your bet linearly in your income. Could this be the problem, or have you already taken that into account?

That aside, I'm slightly confused about how you can use the Kelly criterion in this case. Insurance must necessarily have negative expected value for the buyer, or the insurer makes no profit. So Kelly should be advising you not to buy any. How are you setting up the problem?

Replies from: Metus
comment by Metus · 2013-04-20T10:28:20.093Z · LW(p) · GW(p)

The Kelly criterion returns a fraction of your bankroll; it follows that for any (positive-expected-value) bet whatsoever, it will advise you to increase your bet linearly in your income. Could this be the problem, or have you already taken that into account?

Well, that is exactly the point. It confuses me that the richer I am, the more insurance I should buy, though the richer I am, the better I am able to absorb the risk of not buying any insurance.

That aside, I'm slightly confused about how you can use the Kelly criterion in this case. Insurance must necessarily have negative expected value for the buyer, or the insurer makes no profit.

Yes and no. The insurer only makes a profit if the total cost of insurance exceeds the expected payout; what you pay the insurer for is that the insurer takes on a risk you yourself are not able to survive (financially), that is, catastrophically high costs of medical procedures, liabilities or similar. It is easily possible for the average Joe to foot the bill if he breaks a $5 mug, but it would be catastrophic for him if he runs into an oil tank and has to foot the $10,000,000 bill to clean up the environment. (This example is not made up but actually happened around here.)

It is here where my intuition says that the richer you are, the less insurance you need. I could also argue that if it were the other way around (if you should insure more the richer you are), insurance couldn't exist, seeing as the insurer would then be the one who should buy insurance from the poor!

You can use the Kelly criterion in any case, either negative or positive expected value. In the case of negative value it just tells you to take the other side of the bet or to pay to avoid the bet. The latter is exactly what insurance is.

So Kelly should be advising you not to buy any. How are you setting up the problem?

I model insurance from the point of view of the buyer. In any given time frame, I can avoid the insurance case with probability q, saving the cost of insurance b. Or I could lose and have to pay a, with probability p = 1 - q. This is the case of not buying insurance, though it is available. The Kelly fraction for this bet works out to f = q/a - p/b: if f is negative I should insure, if f is positive, I should take the risk. This follows my intuition insofar as catastrophic but improbable risk (very high a, very low p) should be insured, but not probable and cheap liabilities (high p, low a).

The trick is now that f is actually the fraction of my bankroll I have to invest. So the richer I am, the more I should insure in absolute terms, but my intuition says I should buy less insurance. I know I have ignored something fundamental in my model. Is it the cost of insurance? Is it some hidden assumption in the formulation of the Kelly criterion as applied to bets? Did I accidentally assume that someone knows something the other party doesn't? Did I ignore fixed costs? This eats me up.

Edit: Maybe the results have to be interpreted differently? Of course, if I don't pay for the insurance, Kelly still says to invest the money somehow, maybe by having a small amount always at hand as a form of personally organized insurance. Intuition again says that this pool should grow with my wealth, effectively increasing the amount of insurance I buy, though not from an insurer but in opportunity cost.
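A minimal sketch of Metus's calculation, using his own framing (skipping insurance is a bet that wins the premium b with probability q and loses a with probability p), with a and b converted to fractions of bankroll; all the numbers are invented:

    def kelly_skip_insurance(q, a, b, bankroll):
        """Kelly fraction for the skip-insurance bet: f = q/a_frac - p/b_frac,
        where a_frac and b_frac are a and b as fractions of bankroll.
        Negative f means take the other side of the bet, i.e. insure."""
        p = 1.0 - q
        return q / (a / bankroll) - p / (b / bankroll)

    # A 1% yearly chance of a $100,000 loss (actuarially fair premium: $1,000).
    # Premium below fair: f < 0, Kelly says insure.
    print(kelly_skip_insurance(q=0.99, a=100_000, b=800, bankroll=200_000))
    # Premium above fair: f > 0, Kelly says carry the risk yourself,
    # which is RolfAndreassen's point about negative-expected-value insurance.
    print(kelly_skip_insurance(q=0.99, a=100_000, b=1_500, bankroll=200_000))

Note that the sign of f depends only on whether the premium beats the actuarially fair price, not on the bankroll (only the magnitude scales with wealth), which is one hint that plain Kelly isn't capturing the catastrophic-risk intuition.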

Replies from: Richard_Kennaway, aleksiL
comment by Richard_Kennaway · 2013-04-20T15:37:15.678Z · LW(p) · GW(p)

I know I have ignored something fundamental in my model.

The Kelly formula assumes that you can bet any amount you like, but there are only so many things worth insuring against. Once those are covered, there is no opportunity to spend more, even if you're still below what the formula says.

In addition, what counts as a catastrophic loss, hence worth insuring against, varies with wealth. If the risks that you actually face scale linearly with your wealth, then so should your expenditure on insurance. But if, having ten times the wealth, your taste were only to live in twice as expensive a house, drive twice as expensive a car, etc., then this will not be the case. You will run out of insurance opportunities even faster than when you were poorer. At the Jobs or Gates level of wealth, there are essentially no insurable catastrophes. Anything big enough to wipe out your fortune would also wipe out the insurance company.

Replies from: Metus
comment by Metus · 2013-04-20T16:39:17.114Z · LW(p) · GW(p)

Your reply provides part of the missing piece. Given that I am above some kind of absolute measure of poverty, empirically, having twice as much disposable income won't translate into twice as much insurable assets. This limits the portion of bankroll that can be spent on insurance. Also, Kelly assumes an unlimited offer of bets, which is not that far from the truth. Theoretically I can ask the insurer to give twice the payout for twice the cost of insurance.

And still, your answer doesn't quite answer my original question. I asked: given (monthly) income, savings rate and maybe wealth, what is an optimal allocation between insurance and investments, e.g. bonds or equity? And even assuming that I keep my current assets but double my income and wealth, Kelly still says to buy insurance, though you admit that anything Gates would want to insure against would ruin the insurer, and my intuition still says that Gates does not insure anything that I would, like a car, house or health costs.

Replies from: RolfAndreassen, Richard_Kennaway
comment by RolfAndreassen · 2013-04-20T17:13:03.186Z · LW(p) · GW(p)

Perhaps the problem lies in the dichotomy "buy insurance" versus "do not buy". It seems to me that you have, in fact, got three, not two, options:

a) Buy insurance from someone else

b) Spend the money

c) Save the money, in effect buying insurance from yourself.

I think option c) is showing up in your analysis as "do not buy insurance", which should be reserved for b). You are no doubt correct that Gates does not buy car insurance (unless perhaps he is forced to by law), but that does not mean he is not insured. In effect he is acting as his own insurer, pocketing the profit.

It seems to me, then, that Kelly is telling you that the richer you are, the more you should set aside for emergencies, which seems to make sense; but it cannot distinguish between self-insurance and buying an insurance policy.

Replies from: Metus
comment by Metus · 2013-04-20T17:30:04.997Z · LW(p) · GW(p)

So you are saying that if Kelly says to allocate $300 to insurance while the policy costs $100, I should not buy the policy but set aside the $300 in case of emergency?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-04-22T19:11:22.858Z · LW(p) · GW(p)

Insurance whose payout is only three times the policy cost should rather be classified as a scam. More generally, I think the strategy would be thus: Kelly tells you to take some amount of money and spend it on insurance. If that amount is enough to cover the payout of the insurance policy, then you should not pay the premium; instead you should put the money in savings and enjoy the interest payments. Only if the amount Kelly assigns to insurance is too small to cover the payout should you consider paying the premium.
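That rule as a sketch (the function and numbers are just illustrative):

    def insurance_decision(kelly_allocation, payout):
        """RolfAndreassen's rule: if the amount Kelly allocates to insurance
        already covers the payout, self-insure by saving it; only otherwise
        consider paying the premium."""
        if kelly_allocation >= payout:
            return "self-insure: bank the allocation, skip the premium"
        return "consider paying the premium for cover you can't self-fund"

    print(insurance_decision(kelly_allocation=5_000, payout=3_000))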

Replies from: gwern
comment by gwern · 2013-04-22T19:57:53.381Z · LW(p) · GW(p)

Insurance whose payout is only three times the policy cost should rather be classified as a scam.

Depends on the cost of the risk, no? For a first generation XBox 360, paying half the price for a new replacement is not obviously a bad deal...

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-04-23T17:29:33.688Z · LW(p) · GW(p)

Ok, in that case it's rather the Xbox that is the scam, but I stand by the use of the word. If that sort of insurance is a good deal, you're being screwed over somewhere. :)

Replies from: gwern
comment by gwern · 2013-04-23T18:21:15.466Z · LW(p) · GW(p)

in that case it's rather the Xbox that is the scam

I wouldn't say that; as always, the question is whether the good is +EV and the best marginal use of your money. If the console costs $3 and insurance costs $1 and there's a >33% chance the console will break and you'll use the insurance, given how much fun you can have with an Xbox is that really a scam? I wouldn't say so.

If that sort of insurance is a good deal, you're being screwed over somewhere.

In practice the insurance that you can buy is too limited and the odds too bad to actually make it a good deal; I did some basic analysis of the issue at http://www.gwern.net/Console%20Insurance and you're better off self-insuring, at least with post-second-generation Xbox 360s (the numbers look really bad for the first-generation but hard sources are hard to come by).

comment by Richard_Kennaway · 2013-04-20T17:02:53.176Z · LW(p) · GW(p)

Theoretically I can ask the insurer to give twice the payout for twice the cost of insurance.

You can ask, but your insurer will decline. You can only insure your house for what it's worth.

Replies from: Metus
comment by Metus · 2013-04-20T17:28:48.960Z · LW(p) · GW(p)

Maybe in that case. In the case of life insurance I have practically unlimited options. Whether I insure for $1M or $1k in case of my death is up to me.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-04-20T17:42:38.673Z · LW(p) · GW(p)

Term life insurance, which pays out nothing if you live beyond the term, is like other insurance: it is only worth buying to protect against some specific risk (e.g. mortgage payments) consequent on an early death.

Life assurance is different, in that the event insured against is certain to happen. That is why it is called assurance: assuredly, you will die. As such, it is primarily an investment, together with an insurance component that guarantees a payout even if you die prematurely. As an investment, you can put as much as you like into it, but if your heirs will not be financially stricken if you die early, you -- or rather, they -- do not need the insurance part.

comment by aleksiL · 2013-04-20T16:51:54.687Z · LW(p) · GW(p)

You have it backwards. The bet you need to look at is the risk you're insuring against, not the insurance transaction.

Every day you're betting that your house won't burn down today. You're very likely to win but you're not making much of a profit when you do. What fraction of your bankroll is your house worth, how likely is it to survive the day and how much will you make when it does? That's what you need to apply the Kelly criterion to.

Replies from: Metus
comment by Metus · 2013-04-20T17:31:36.172Z · LW(p) · GW(p)

Have you read my reply to Richard_Kennaway? I explicitly look at the case you mention.

comment by sixes_and_sevens · 2013-04-17T23:26:20.705Z · LW(p) · GW(p)

Here's something I think should exist, but don't know if it does: a list of interesting mental / neurological disorders, referencing the subjects they have bearing on.

Does this exist already?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-18T04:57:56.804Z · LW(p) · GW(p)

What do you mean by "interesting" and "subjects"?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2013-04-18T09:16:36.035Z · LW(p) · GW(p)

By "interesting" I mean "of interest to the Less Wrong community", specifically because they provide insight into common failure modes of cognition, and by extension, cognition itself.

By "subjects", I mean the topics of inquiry they provide insight into.

Here is an example that should hopefully pin down what I am talking about. I personally have a mental catalogue of such disorders, and given their prevalence in discussion around here I suspect a number of other people do as well. It would be nice if we all had one big catalogue.

comment by RolfAndreassen · 2013-04-16T17:46:27.814Z · LW(p) · GW(p)

So, I have a primitive system for keeping track of my weight: I weigh myself daily and put the number in a log file. Every so often I make a plot. Here is the current one. I have been diligent about writing down the numbers, but I have not made the plot for at least a year, so while I was aware that I'm heavier now than during last summer, I had no idea of the visual impact of that weight loss and regain. My immediate thought: Now what the devil was I doing in May of 2012, and can I repeat it this year and avoid whatever happened in July-August?
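For anyone who wants to copy the setup: a minimal sketch of the log-and-plot loop, assuming one "YYYY-MM-DD weight" pair per line (the filename and format are my guesses, not necessarily Rolf's):

    import matplotlib.pyplot as plt
    import pandas as pd

    # One record per line, e.g. "2012-05-01 83.4".
    df = pd.read_csv("weight.log", sep=" ", names=["date", "kg"],
                     parse_dates=["date"])

    plt.plot(df["date"], df["kg"], ".", alpha=0.5, label="daily weigh-in")
    # A centered 30-day rolling mean smooths out day-to-day noise
    # and makes episodes like the May 2012 dip jump out.
    plt.plot(df["date"], df["kg"].rolling(30, center=True).mean(),
             label="30-day trend")
    plt.ylabel("weight (kg)")
    plt.legend()
    plt.savefig("weight.png")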

Hmm... come to think of it, I was taking allergy meds, and then I stopped taking them. If that's it, probably not replicable.

Further hmm, my daughter was born in early April last year. Stress? I did not consciously notice being stressed, but perhaps I wouldn't; at any rate my routine, obviously, was rather disrupted. Timing is not quite right, but there could be some lag.

Replies from: Qiaochu_Yuan, Zaine
comment by Qiaochu_Yuan · 2013-04-17T07:02:48.642Z · LW(p) · GW(p)

If you put the data into a Google Doc, you can get a plot that updates whenever you update the log. That's what I've been doing.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-04-17T16:28:05.591Z · LW(p) · GW(p)

Convenient, but:

a) I like having things in a text file that I can open with a flick of the keyboard on the same system I'm working on anyway.
b) Making my own plot, I have full control of the formatting, plus I can do things like fit trends over given periods, mark out particular dates, or otherwise customise.
c) I dread the day when some overeager Google popup tells me that "It looks like you're trying to control your weight! Would you like me to show you some weight-loss products?"

(At least one of these items not intended seriously).

Replies from: DaFranker
comment by DaFranker · 2013-04-18T16:45:02.377Z · LW(p) · GW(p)

(At least one of these items not intended seriously).

You mean (a), right? 'caus "flick of the keyboard" is kind of funny, but setting that up for a particular text file sounds awfully... unworkable.

(point (c) is not nearly as unrealistic as it might seem at first - they're pretty much already there to some extent)

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-04-18T17:00:55.761Z · LW(p) · GW(p)

Oh, I absolutely believe that Google will tell you about weight-loss products if they detect you tracking a number that looks reasonable for a human weight in pounds, and that they have an algorithm capable of doing that. It's the overeager popup with the near-quote of Clippy (the original Microsoft version, not our friendly local Clippy who, while he might want to turn you into your component atoms for reuse, is at least not unbearably upbeat about it) that's unrealistic.

"Flick of the fingers on the keyboard", then: From writing here it is Windows-Tab, Windows-1, C-x b, w-e-i-Tab, Enter. If the file wasn't already open in emacs, replace C-x b with C-x C-f.

Replies from: DaFranker
comment by DaFranker · 2013-04-18T17:12:51.134Z · LW(p) · GW(p)

Ah, yes, the mighty emacs.

I should get around to installing and using that someday. >.<

comment by Zaine · 2013-04-16T18:13:10.313Z · LW(p) · GW(p)

If you can, buy a machine that measures your body fat percentage as well (bioelectrical impedance) - it's a more meaningful statistic. If you're measuring once per month, under consistent hydration and bowel volume, it could be pretty convenient. The alternative, buying callipers with which you'd perform a skinfold test, requires you train yourself in their proper use (perhaps someone could teach you).

comment by James_Miller · 2013-04-15T18:58:22.088Z · LW(p) · GW(p)

North Korea is threatening to start a nuclear war. The rest of the world seems to be dismissing this threat, claiming it's being done for domestic political reasons. It's true that North Korea has in the past made what have turned out to be false threats, and the North Korean leadership would almost certainly be made much worse off if they started an all out war.

But imagine that North Korea does launch a first-strike nuclear attack, and later investigations reveal that the North Korean leadership truly believed that it was about to be attacked and so made the threats in an attempt to get the U.S. to take a less aggressive posture. Wouldn't future historians (perhaps suffering from hindsight bias) judge us to be idiots for ignoring clear and repeated threats from a nuclear-armed government that appeared crazy (map doesn't match territory) and obsessed with war?

Replies from: gwern, Qiaochu_Yuan, FiftyTwo, Estarlio
comment by gwern · 2013-04-15T22:33:43.686Z · LW(p) · GW(p)

Wouldn't future historians (perhaps suffering from hindsight bias) judge us to be idiots for ignoring clear and repeated threats from a nuclear-armed government that appeared crazy (map doesn't match territory) and obsessed with war?

Why do we care what they think, and can you name previous examples of this?

Replies from: James_Miller
comment by James_Miller · 2013-04-15T22:41:29.148Z · LW(p) · GW(p)

As someone who studies lots of history while often thinking, "How could they have been this stupid? Didn't they know what would happen?", I thought it useful to frame the question this way.

Hitler's professed intentions were not taken seriously by many.

Replies from: gwern
comment by gwern · 2013-04-15T23:14:29.345Z · LW(p) · GW(p)

Hitler's professed intentions were not taken seriously by many.

Taken seriously... when? Back when he was a crazy failed artist imprisoned after a beer hall putsch, sure; up to the mid-1930s people took him seriously but were more interested in accommodationism. After he took Austria, I imagine pretty much everyone started taking him seriously, with Chamberlain conceding Czechoslovakia but then deciding to go to war if Poland was invaded (hardly a decision to make if you didn't take the possibilities seriously). Which it then was. And after that...

If we were to analogize North Korea to Hitler's career, we're not at the conquest of France, or Poland, or Czechoslovakia; we're at maybe breaking treaties & remilitarizing the Rhineland in 1936 (Un claiming to abandon the cease-fire and closing down Kaesŏng).


One thing that hopefully the future historians will notice is that when North Korea attacks, it doesn't give warnings. There were no warnings or buildups of tension or propaganda crescendos before bombing & hijacking & kidnapping of Korean airliners, the DMZ ax murders, the commando assault on the Blue House, the sinking of the Cheonan, kidnapping Korean or Japanese citizens over the decades, bombing the SK president & cabinet in Burma, shelling Yeonpyeong, the attempted assassination of Park Sang-hak... you know, all the stuff North Korea has done before.

To the extent that history can be a guide, the propaganda war and threats ought to make us less worried about there being any attack. When NK beats the war drums, it wants talks and concessions; when it is silent, that is when it attacks. Hence, war drums are comforting and silence worrisome.

comment by Qiaochu_Yuan · 2013-04-15T19:18:39.140Z · LW(p) · GW(p)

Certainly the consequences of us being wrong are bad, but that isn't necessarily enough to outweigh the presumably low prior probability that we're wrong. (I'm not taking a stance on how low this probability is because I don't know enough about the situation.) Presumably people also feel like there are game-theoretic reasons not to respond to such threats.

comment by FiftyTwo · 2013-04-15T21:36:50.074Z · LW(p) · GW(p)

There is an issue of ability vs. intention: whether or not the North Korean leadership wants to destroy the US or South Korea, they don't have the ability to do any major harm. The real fear is that the regime collapses and we're left with a massive humanitarian crisis.

Replies from: drethelin
comment by drethelin · 2013-04-16T05:37:47.747Z · LW(p) · GW(p)

Pretty sure nuking Seoul is worse than the regime in NK collapsing. I think annexation by either China or SK would be way better than the current system of starvation in NK.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-16T13:25:33.627Z · LW(p) · GW(p)

Any thoughts about what a relatively soft landing for NK would look like?

Replies from: shminux, TimS, None, drethelin
comment by Shmi (shminux) · 2013-04-16T23:19:17.433Z · LW(p) · GW(p)

Maybe a slow and controlled introduction of free enterprise, Deng Xiaoping-style, while maintaining a tight grip on political freedoms, at least until the economy recovers somewhat, could soften it. Incidentally, this is apparently the direction Kim Jong-un is carefully steering toward. Admittedly, slacklining seems like child's play compared to the perils he'd have to get through to land softly.

comment by TimS · 2013-04-16T14:44:39.867Z · LW(p) · GW(p)

Some here. One of the most interesting parts of the essay was the aside claiming that NK saber-rattling is an intentional effort to distract S. Korean, US, and Chinese attention from thinking about the mechanics of unification.

Edit: I'll just quote the interesting paragraph:

The way I read the North Korean sabre-rattling (and use) is that it is designed to keep the South Koreans and their allies off balance, focussing on crisis management and preventing war, and not – for instance – planning coherently for the probable collapse of their régime. After all, if there was a good reunification plan, it would become more likely.

It’s only anecdotal evidence, but my son, teaching in a small town near the DMZ, warned me that the topic is too sensitive for casual conversation. So Pyongyang may have spooked the South Korean public into treating the whole subject as unthinkable, because of its one unthinkable component, a nuclear conflict.

Emphasis mine.

comment by [deleted] · 2013-04-16T23:03:50.640Z · LW(p) · GW(p)

I was talking about this with a friend of mine, and it does seem like there is no outcome that's not going to be hugely, hideously expensive. The costs of a war are obviously high - even if they don't or can't use nukes, they could knock Seoul right out of the global economy. But even if it's peaceful, you'd have a tidal wave of refugees into China and the South, and South Korea will be paying reunification costs for at least the next decade or so.

You can sort of see why SK and China are willing to pay to keep the status quo, and screw the starving millions.

Replies from: gwern
comment by gwern · 2013-04-17T00:39:54.264Z · LW(p) · GW(p)

South Korea will be paying reunification costs for at least the next decade or so.

Far longer than that. (West) Germany is apparently still effectively subsidizing (former) East Germany, more than 2 decades after unification - and I have read that West & East Germany were much closer in terms of development than North & South Korea are now. For the total costs of reunification, 'trillions' is probably the right order of magnitude to be looking at (even though it would eventually more than pay for itself, never mind the moral dimension).

Replies from: None
comment by [deleted] · 2013-04-17T17:09:13.564Z · LW(p) · GW(p)

I quite agree, on both parts. 25 million new consumers, catch-up growth, road networks from Seoul to Beijing, navigable waters, less political risk premium, etc.

It's a gloomy picture though. A coup seems unlikely (given the Kim religion) and it'll probably be 2050-70 before Jong-un dies. I've got two hopes: that the recent provocation is aimed at a domestic audience, and once he's proved himself he'll pull a Burma; or that the international community doesn't blink and resume aid, thereby forcing them into some sort of opening. Not very high hopes, though.

Replies from: gwern
comment by gwern · 2013-04-17T17:23:11.086Z · LW(p) · GW(p)

etc.

To expand: a massive burst of cheap labor; a peace dividend from winding down both militaries (on top of the reduction in risk premium), such as closing the military bases that occupy valuable Seoul-area real estate; and access to all of NK's mineral and natural resources.

comment by drethelin · 2013-04-16T14:30:14.464Z · LW(p) · GW(p)

I had a little dream scenario in my head when Jong-il died: that Jong-un would turn out to be secretly rebellious and reasonable and start implementing better policy bit by bit, but that clearly didn't happen. My hope is that whoever actually has their hands on the buttons in charge of the bombs and military is more reasonable than Jong-un, and that he gets taken out either by us or by someone close to him who has a more accurate view of reality. At that point, the international rhetoric would immediately start being toned down, and the de facto government could start making announcements about the world changing its mind or something, to smooth over increased cooperation and peace and foreign aid.

comment by Estarlio · 2013-04-15T19:32:37.206Z · LW(p) · GW(p)

I think part of the problem is that we don't know whether they only seem to be crazy or actually are.

comment by [deleted] · 2013-04-27T08:09:36.809Z · LW(p) · GW(p)

I want to change the stylesheets on a WordPress blog so the default font is Baskerville. I'm not too experienced with editing CSS files; is anyone here good at that? I know how to manually make each paragraph Baskerville.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-04-28T10:32:09.985Z · LW(p) · GW(p)

Try adding this line:

body { font-family:"Comic Sans MS", serif; }

Perhaps you'll need to adjust the name of the font though. :-)
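
More seriously, here's a minimal sketch of the rule you'd actually want (assuming your theme sets its fonts on body rather than overriding them on inner elements, and that your readers have some Baskerville variant installed):

    body {
        /* Prefer Baskerville; fall back to other serifs if it's unavailable */
        font-family: Baskerville, "Baskerville Old Face", "Libre Baskerville", Georgia, serif;
    }

Put it in a child theme's style.css or a custom-CSS plugin rather than editing the theme's stylesheet directly, since theme updates overwrite direct edits. If some elements keep their old font, the theme is probably setting font-family on p or on content-wrapper classes with higher specificity, and you'd need to target those selectors too.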

comment by khafra · 2013-04-26T19:01:43.596Z · LW(p) · GW(p)

Are you a guy who wants more social interaction? Do you wish you could get complimented on your appearance?

Grow a beard! For some reason, it seems to be socially acceptable to compliment guys on a full, >1", neatly trimmed beard. I've gotten compliments on mine from both men and women, although requests to touch it come mostly from the latter (but aren't always sexual--women with no sexual attraction to men also like it). Getting the compliments pretty much invariably improves my mood; so I highly recommend it if you have the follicular support.

Replies from: TimS
comment by TimS · 2013-04-26T20:02:25.149Z · LW(p) · GW(p)

Because of differences in local culture, please list what country you live in, and perhaps what region.

Replies from: khafra, CAE_Jones
comment by khafra · 2013-04-26T23:57:35.870Z · LW(p) · GW(p)

I thought of listing "Southeast USA". However, I live in a large metropolitan area in Florida which is a fairly cosmopolitan blend of Western culture: not super-hip like a world-class city, and not provincial like the places 50 miles in any direction.

And the compliments have come from diverse sources--women at clubs, women on college campuses, military officers, people one socioeconomic class up and down...

comment by CAE_Jones · 2013-04-26T20:39:30.576Z · LW(p) · GW(p)

I've heard people get complimented on their beards quite a bit in the past 8 years or so, in central/south Arkansas, though the locations were specifically college campuses. (I think there was some beard-complimenting going around at ASMSA, a residential high school intended to be more college-like, but I could be misremembering, since the person I'm thinking of was also active in other places I went.) It was recommended to me, I think more than once.

comment by FiftyTwo · 2013-11-25T23:41:20.276Z · LW(p) · GW(p)

How feasible is it for a private individual in a western developed country to conduct or commission their own brain scan?

comment by OrphanWilde · 2013-05-01T03:37:40.277Z · LW(p) · GW(p)

Howdy - this is a comment to some person I haven't identified but who will probably read this:

I appreciate the upvotes, but please only upvote my comments if you agree with them/like them/find them interesting/whatever. I'm trying to calibrate what the Less Wrong community wants/doesn't want, and straight-ticket upvoting messes with that calibration, which is already dealing with extremely noisy and conflicting data.

comment by [deleted] · 2013-04-30T00:34:46.066Z · LW(p) · GW(p)

Now that school's out for the summer, I have an additional 40 hours per week or so of free time.

How would you use that?

Replies from: Qiaochu_Yuan, Document
comment by Qiaochu_Yuan · 2013-04-30T06:44:52.675Z · LW(p) · GW(p)

Learn how to program?

comment by Document · 2013-04-30T00:45:17.247Z · LW(p) · GW(p)

How would I want to use it, or how would I actually use it?

Replies from: None
comment by [deleted] · 2013-04-30T00:49:43.822Z · LW(p) · GW(p)

The former.

Replies from: Document
comment by Document · 2013-04-30T00:55:58.911Z · LW(p) · GW(p)

Take one or more summer classes?

Replies from: None
comment by [deleted] · 2013-04-30T01:09:21.952Z · LW(p) · GW(p)

Hmmm. I guess there are a few things I've been meaning to study.

comment by ITakeBets · 2013-04-29T02:12:10.008Z · LW(p) · GW(p)

Anybody on here ever sold eggs (female human gametes)? Experiences? Advice on how best to do it?

comment by Suryc11 · 2013-04-29T01:08:10.752Z · LW(p) · GW(p)

I came across this post on Quora and it strikes me as very plausible. The summary is essentially this: "Become the type of person who can achieve the things you want to achieve." What's your (considered) opinion?

Also, this seems relevant to the post I linked, but I'm not sure exactly how.

comment by zslastman · 2013-04-28T12:53:16.155Z · LW(p) · GW(p)

It's an old point, probably made by Robin Hanson, that if you want to donate to charity you should actually boast about it as much as possible to get your friends to do the same, rather than doing the status-preserving, humble saint act.

I think it might be worth making an app on Facebook, say, that would allow people to boast anonymously. Let's say you're offered the chance to see whether your friends are donating. Hopefully people bite; curiosity makes them accept (there's no obligation to do anything, after all). But now they know that their friends are giving and they aren't. "Do as your friends have done!" we tell them. "Donate a little; you'll even encourage others to donate, and nobody can accuse you of boasting..."

Thoughts?

Replies from: drethelin, MugaSofer
comment by drethelin · 2013-04-28T18:05:50.883Z · LW(p) · GW(p)

Eh, I'm unselfish enough to donate but selfish enough not to boast about it.

Replies from: zslastman
comment by zslastman · 2013-04-28T19:18:59.649Z · LW(p) · GW(p)

Right, but if you were given the opportunity for all your friends to know that one of their friends had donated, you'd take it, right? No social cost to you.

comment by MugaSofer · 2013-04-28T20:56:04.708Z · LW(p) · GW(p)

Eh, I'm selfish enough not to donate but unselfish enough to fill out an anonymous form claiming I donate vast sums.

Replies from: zslastman
comment by zslastman · 2013-04-29T06:30:26.707Z · LW(p) · GW(p)

Could be made verifiable by communicating with charities, to stop people doing that. Good point though.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-29T20:28:11.128Z · LW(p) · GW(p)

But if you stop people lying, you'll reduce the peer pressure!

Hey, if it works for sex stuff ...

comment by CAE_Jones · 2013-04-28T11:36:48.385Z · LW(p) · GW(p)

I found this blog post on the commensurate wage fallacy, and am wondering if there are any glaring errors (you know, besides the lack of citations).

comment by sparkles · 2013-04-28T01:40:18.213Z · LW(p) · GW(p)

Help me get matrix multiplication? (Intuitively understand it, that is.) I've asked Google and read through http://math.stackexchange.com/questions/31725/what-does-matrix-multiplication-actually-mean and similar pages & articles, and I get what linear functions mean. I've had it explained in terms of transformation matrices; I get how those work, and I'm somewhat familiar with them from OpenGL. But it's always seemed like additional complexity that happens to work (and sometimes happens to work in a cute way) because it's this combination of multiplication and addition, and a lot of algorithms involve a bunch of multiplication and addition at some point.

I used to not get trigonometric functions either; I could compute them, I got certain things they did, and I could see that there wasn't really anything else that could do what they do, but a lot of them, like secant, seemed totally arbitrary. Then I asked a prof, and he was all "geometric interpretation!", and then it clicked. I think.
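
(The closest I've come to something satisfying is the composition-of-linear-maps story; as I understand it - and someone correct me if I'm mangling it - the rule is forced once you decide a matrix is the coordinate form of a linear map. If $g(x) = Bx$ and $f(y) = Ay$ are linear maps, then writing the composition out component-wise:

$$y_j = \sum_k B_{jk}\,x_k \qquad\Rightarrow\qquad \bigl(f(g(x))\bigr)_i = \sum_j A_{ij}\,y_j = \sum_j\sum_k A_{ij}B_{jk}\,x_k,$$

so the matrix representing $f \circ g$ must have entries

$$(AB)_{ik} = \sum_j A_{ij}B_{jk},$$

which is exactly the row-times-column rule. On that view, the mix of multiplication and addition isn't a coincidence that happens to work; it's the only rule that makes "matrix of a composition" equal "product of the matrices". But I still don't feel it.)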

comment by FiftyTwo · 2013-04-27T18:17:22.812Z · LW(p) · GW(p)

Are there any psychometric or aptitude tests that are worth taking as an adult?

comment by Nornagest · 2013-04-22T21:23:42.797Z · LW(p) · GW(p)

Just got bitten again by the silent -5 karma bug that happens when a post upthread from the one you're replying to gets downvoted below the threshold while you're writing your reply. If we can spare the developer resources, which I expect we can't, it would be nice if that didn't happen.

comment by A1987dM (army1987) · 2013-04-19T19:00:14.759Z · LW(p) · GW(p)

Overheard this on the bus: “If Christians are opposed to abortion because they think fetuses are people, how come they don't hold funerals for miscarriages?”

Replies from: RolfAndreassen, skeptical_lurker, Vaniver, Jayson_Virissimo, MugaSofer
comment by RolfAndreassen · 2013-04-19T19:08:50.894Z · LW(p) · GW(p)

I would suppose that some of them do. I would further suppose that it's not actually a bad idea, if the pregnancy was reasonably advanced. The grief is, I believe, rather similar to that experienced by someone losing a child that had been brought to term. To the extent that funerals are a grief-coping mechanism, people probably should hold them for miscarriages.

Replies from: gwern
comment by gwern · 2013-04-19T21:04:08.946Z · LW(p) · GW(p)

A Google search for 'miscarriage funeral' suggests that people do, yes, but it's sufficiently rare that one can write articles about it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-20T09:52:07.598Z · LW(p) · GW(p)

They seem to be somewhat more standard in Japan and Taiwan.

comment by skeptical_lurker · 2013-04-19T21:22:00.075Z · LW(p) · GW(p)

My favorite argument about abortion is to point out that if the soul enters the body at conception, then since identical twins split after conception, this logically implies that one twin has no soul, and thus is evil. The evil twin can usually be identified by its goatee.

Replies from: Desrtopa, shminux, Alicorn
comment by Desrtopa · 2013-04-20T04:15:47.362Z · LW(p) · GW(p)

That's based on the unstated but incorrect premise that souls are indivisible and only distributed in whole number amounts. Anyone who's spent time around identical twins can tell that they only have half a soul each.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-04-20T14:38:01.548Z · LW(p) · GW(p)

Of course - this explains identical twin telepathy!

comment by Shmi (shminux) · 2013-04-23T17:50:51.764Z · LW(p) · GW(p)

Your self-congratulatory argument is of the same low quality as the arguments against evolution. Maybe souls twin, too. Or maybe fetus twinning is caused by the need to accommodate a surplus soul. Or...

Replies from: skeptical_lurker
comment by skeptical_lurker · 2013-04-23T21:37:23.570Z · LW(p) · GW(p)

I would like to point out that my argument is against 'the soul enters the body at conception', not against 'there exists a soul'. If souls twin, then that provides an example where the soul enters the body after conception, proving my point.

There are plenty of beliefs in souls that do not require them entering the body at conception. Some Hindus would say that the body, like all material objects, is maya, or illusion, and only consciousness exists, and thus the question 'does the soul enter the body at conception?' is meaningless.

I wouldn't say I agree with this point of view, but it's a lot more reasonable.

comment by Alicorn · 2013-04-20T03:20:44.500Z · LW(p) · GW(p)

Or genetic chimeras, who are fused fraternal twin embryos - have they got two souls?

Replies from: skeptical_lurker, MugaSofer
comment by skeptical_lurker · 2013-04-20T14:34:12.807Z · LW(p) · GW(p)

Interesting - I had forgotten about that. If one actually assigned a non-trivial probability to the hypothesis that the soul enters the body at conception, one could do the stats to see if chimeras are more likely to exhibit multiple personality disorder!

comment by MugaSofer · 2013-04-23T12:36:52.615Z · LW(p) · GW(p)

Maybe one of them is dead? The one that didn't form the brain, I guess.

Although, if you can have a soul at conception, the brain must be unnecessary ... hmm, transplant patients ...

comment by Vaniver · 2013-04-21T15:49:20.897Z · LW(p) · GW(p)

This actually happens, sometimes. Perinatal hospice is also a thing.

comment by Jayson_Virissimo · 2013-04-19T19:07:31.809Z · LW(p) · GW(p)

Then again, there are plenty of jurisdictions that will charge someone with double-murder if they intentionally kill a woman they know to be pregnant (and I'm sure at least some of these jurisdictions allow abortion). Curious. Also, some do have funerals for their miscarried babies, but I have no idea whether Christians do so at higher rates.

comment by MugaSofer · 2013-04-23T12:33:34.188Z · LW(p) · GW(p)

Data point: I have, in fact, been to such a funeral. However, it wasn't official.

comment by [deleted] · 2013-04-17T18:03:22.907Z · LW(p) · GW(p)

This article is fascinating: http://io9.com/5963263/how-nasa-will-build-its-very-first-warp-drive

A NASA physicist called Harold White suggests that if he tweaks the design of an 'Alcubierre Drive', extremely fast space travel is possible. It bends spacetime around itself, apparently. I don't know enough about physics to be able to call 'shenanigans' - what do other people think?

Replies from: kpreid
comment by kpreid · 2013-04-17T21:26:58.073Z · LW(p) · GW(p)

IANAPhysicist, but what seem to me to be the main points:

  • The Wikipedia article seems to be a fairly good description of the problems. Briefly: the warp bubble is a possible state of the universe solely considering the equations of general relativity. We don't yet know whether it is compatible with the rest of physics.

  • The Alcubierre drive requires a region of space with negative energy density; we don't know any way to produce this, but if there is one, it would involve some currently-unknown form of matter (which is referred to as “exotic matter”, which is just a catch-all label, not something specific). The metric itself is written out after this list.

  • The work described in the article consists of two things:

    1. Refining the possible state to have less extreme requirements while still being FTL.
    2. Conducting experiments which study an, ah, extremely sub-FTL state which is similar in some sense to the warp bubble. This part seems to me to have a high chance of being just more confirmation of what we already know about general relativity.
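
For concreteness, the geometry being discussed - the standard form of the Alcubierre metric, written from memory, so check it against the actual paper - is:

$$ds^2 = -c^2\,dt^2 + \bigl(dx - v_s(t)\,f(r_s)\,dt\bigr)^2 + dy^2 + dz^2$$

where $x_s(t)$ is the trajectory of the bubble's center, $v_s = dx_s/dt$ is its coordinate speed, $r_s$ is the distance from that center, and $f$ is a smooth shape function equal to 1 at the center and falling to 0 outside the bubble wall. Spacetime inside the bubble is flat, so the ship feels no acceleration; space contracts ahead of the wall and expands behind it. Feeding this metric into the Einstein field equations gives a stress-energy tensor whose energy density is negative in the wall, which is where the exotic-matter requirement above comes from.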
Replies from: DaFranker
comment by DaFranker · 2013-04-18T15:42:45.262Z · LW(p) · GW(p)

The Alcubierre drive requires a region of space with negative energy density; we don't know any way to produce this, but if there is one, it would involve some currently-unknown form of matter (which is referred to as “exotic matter”, which is just a catch-all label, not something specific).

I've read and been told that this is not entirely accurate; apparently, tiny pockets with effectively negative energy density have been created in labs, by abusing things I don't understand.

However, it's apparently still an open question whether these can be aggregated and scaled up at all, or whether they are isolated events that can only be produced under specific one-off circumstances.

comment by diegocaleiro · 2013-04-15T20:04:11.483Z · LW(p) · GW(p)

All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Harry Potter and the Methods of Rationality.

Thus, we need 50 shades of Grey Matter.

As well as good marketing designs of things that attract women into rationality.

What are the bestselling books if you only consider women? What about the best movies for women?

Replies from: gwern, ModusPonies, shminux, mstevens, Document, ModusPonies
comment by gwern · 2013-04-24T23:09:50.758Z · LW(p) · GW(p)

All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Harry Potter and the Methods of Rationality.

I'm not sure that's true. When I looked in the 2012 survey, I didn't see any striking gender disparity based on MoR: http://lesswrong.com/lw/fp5/2012_survey_results/8bms - something like 31% of the women found LW via MoR vs 21% of the men, but there are just not that many women in the survey...

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-24T23:22:24.075Z · LW(p) · GW(p)

That does not factor in the main point: "that would not otherwise have become rationalist". There are loads of women out there on a certain road into rationalism. Those don't matter; by definition, they will become rationalists anyway.

There are large numbers who could become rationalists, and we don't know how large, or how else they could, except through HPMOR.

Replies from: TimS, gwern
comment by TimS · 2013-04-25T01:16:21.196Z · LW(p) · GW(p)

Leaving aside gwern's rudeness, he is right - if MoR doesn't entice more women towards rationality than the average intervention, and your goal is to change the current gender imbalance among LW-rationalists, then MoR is not a good investment for your attention or time.

comment by gwern · 2013-04-24T23:41:17.798Z · LW(p) · GW(p)

I'm sorry, I was just trying to interpret the claim in a sense that isn't stupidly unverifiable and unprovable.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-25T00:27:05.005Z · LW(p) · GW(p)

It is not a claim, it is an assumption that the reader ought to take for granted, not verify. If I thought there were reliable, large-N, double-blind data on the subject, I'd simply have linked the stats. As I know there are not, I said something based on personal experience (as one should) and asked for advice on how to improve the world, if the world turns out to correlate with my experience of it.

Your response reminds me of Russell's joke about those who believe that "all murderers have been caught, since all the murderers we know of have been caught"...

The point is to find attractors, not to reject the stats.

Replies from: gwern
comment by gwern · 2013-04-25T01:06:57.299Z · LW(p) · GW(p)

It is not a claim, it is an assumption that the reader ought to take for granted, not verify.

ಠ_ಠ All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Baby Eaters in "Three Worlds Collide".

Thus, we need 50 Shades of Cooked Babies.

As well as good marketing designs of things that attract women into rationality.

Does this strike you as dubious? Well, it is not a claim, it is an assumption that the reader ought to take for granted, not verify!

comment by ModusPonies · 2013-04-15T22:13:41.053Z · LW(p) · GW(p)

Fanfiction readers tend to be female. HPMoR has attracted mostly men. I'm skeptical that your strategy will influence the gender ratio.

Possible data point: are Luminosity fans predominantly female?

Replies from: falenas108, latanius
comment by falenas108 · 2013-04-16T12:31:15.502Z · LW(p) · GW(p)

Wait, the question isn't whether HPMoR attracted more women than men; it's whether its women-to-men ratio is higher than that of other things that attract people.

comment by latanius · 2013-04-16T01:32:09.909Z · LW(p) · GW(p)

P(Luminosity fan | reads this comment) is probably not a good estimate... (count me in with a "no" data point though :)) Also, what is the ratio of "Luminosity fan because of Twilight" and "read it even though... Twilight, and liked it" populations?

(with "read Twilight because of Luminosity" also a valid case.)

comment by Shmi (shminux) · 2013-04-15T20:54:47.077Z · LW(p) · GW(p)

Reminds me of this

Replies from: diegocaleiro
comment by diegocaleiro · 2013-04-15T23:35:50.219Z · LW(p) · GW(p)

We can't afford not to do both.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-07T15:55:06.024Z · LW(p) · GW(p)

GoldieBlox got funded at almost double its goal, has been produced, and has been received with enthusiasm by at least a fair number of little girls.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-07-07T16:29:10.408Z · LW(p) · GW(p)

Yes, the owner is making more than 300,000 per month in sales, or so Tim Ferriss claims. Awesome, isn't it?

comment by mstevens · 2013-04-16T10:39:31.993Z · LW(p) · GW(p)

I am hoping for someone to write Anita Blake, Rational Vampire Hunter.

Or the rationalist True Blood (it already has "True" in the title!)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-16T13:26:45.115Z · LW(p) · GW(p)

Is anyone working on rationalist stand-alone fiction?

Actually, what I meant was "Is anyone in this community working on rationalist stand-alone fiction?".

Replies from: mstevens, TimS
comment by mstevens · 2013-04-16T13:36:08.466Z · LW(p) · GW(p)

Not that I've seen. It'd be cool though. I think maybe you can see traces in people like Peter Watts, but if you take HPMOR as the defining example, I can't think of anything.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-04-16T17:11:44.011Z · LW(p) · GW(p)

Lee Child (the Jack Reacher series) presents a good bit of clear thinking.

comment by TimS · 2013-04-16T13:53:57.536Z · LW(p) · GW(p)

I've always found Stross (and to a lesser extent, Scalzi) to be fairly rationalist - in the sense that I don't see anyone holding the idiot ball all that frequently. People do stupid things, but they tend not to miss the obvious ways of implementing their preferences.

comment by Document · 2013-04-24T22:45:50.277Z · LW(p) · GW(p)

All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Harry Potter and the Methods of Rationality.

Isn't that a tautology?

Edit: missed this subthread already discussing that; sorry.

comment by ModusPonies · 2013-04-15T22:08:46.905Z · LW(p) · GW(p)

Fanfiction readers tend to be female. If this strategy were going to work, it would have worked already.

comment by MugaSofer · 2013-04-28T21:05:41.585Z · LW(p) · GW(p)

Your Strength As A Rationalist [LINK]

This site is filled with examples, but this one is particularly noteworthy because they're completely unsurprised and, indeed, claim it as confirming evidence for their beliefs.

comment by MugaSofer · 2013-04-23T12:56:58.867Z · LW(p) · GW(p)

Is anyone here skilled at avoiding strawmanning and pigeonholing people's views? We could do with some tricks for this, kind of like the opposite of "feminist bingo".

Replies from: shminux
comment by Shmi (shminux) · 2013-04-23T14:54:31.226Z · LW(p) · GW(p)

We could do with some tricks for this

But we won't point fingers at anyone in particular, no.

Anyway, steelmanning seems like the standard approach here, if rarely used.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-25T15:49:30.067Z · LW(p) · GW(p)

But we won't point fingers at anyone in particular, no.

Is this intended as a criticism? I can't tell.

Anyway, steelmanning seems like the standard approach here, if rarely used.

Steelmanning is great, but you can still end up "steelmanning" your stereotype of someone's arguments, which is more what I'm worried about.

comment by knb · 2013-04-18T09:24:26.303Z · LW(p) · GW(p)

This is a funny video for people familiar with the r/atheism community.