Open Thread, May 12 - 18, 2014

post by eggman · 2014-05-12T08:16:58.489Z · LW · GW · Legacy · 201 comments


Previous Open Thread


You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

201 comments

Comments sorted by top scores.

comment by [deleted] · 2014-05-12T13:37:02.250Z · LW(p) · GW(p)

After Heartbleed, I got really irritated at how much time it took to hunt down the "change password" links for all the services I used. So, in the name of fighting trivial inconveniences, I made a list of direct password-and-two-factor-updating links for various popular services: UpdateYourPasswords.

Replies from: Joshua_Blaine
comment by Joshua_Blaine · 2014-05-12T22:42:15.115Z · LW(p) · GW(p)

This is beautiful and seems really useful. I'm happy it exists, so thanks for making it.

Replies from: None
comment by [deleted] · 2014-05-13T12:25:28.988Z · LW(p) · GW(p)

You're welcome! It doubled as a pedagogical introduction to jQuery, so usefulness all around.

Sidenote: I want this to be as useful as possible to as many people as possible, but I'm not sure how to promote it without seeming spammy.

Replies from: lmm
comment by lmm · 2014-05-13T20:54:23.675Z · LW(p) · GW(p)

You could "Show HN" if you haven't already; such things are usually appreciated there.

Replies from: None
comment by [deleted] · 2014-05-14T00:38:28.744Z · LW(p) · GW(p)

I did this yesterday, but it went unseen: https://news.ycombinator.com/item?id=7732588

I get the sense that posting again is frowned upon.

comment by Manfred · 2014-05-12T15:23:22.623Z · LW(p) · GW(p)

FHI's "ask us anything" thread is on the front page of reddit. Congratulations!

comment by Viliam_Bur · 2014-05-12T13:44:00.951Z · LW(p) · GW(p)

I'm reading Ayn Rand's "The Virtue of Selfishness" and it seems to me that (a part of) what she tried to say was approximately this:

Some ethical systems posit a false dichotomy between "doing what one wants" and "helping other people". And then they derive an 'ethical' conclusion that "doing what one wants" is evil, and "helping other people" is good, by definition. Which is nonsense. Also, humans can't psychologically abstain completely from the "doing what they want" part (even after removing "helping other people" from it), but instead of realising the nonsense of such ethics, they feel guilty, which makes them easier to control.

I don't read philosophy, so I can't tell if someone has said it exactly like this, but it seems to me that this is not a strawman. At least it seems to me that I have heard such ideas floating around, although not expressed this clearly. (Maybe it's not exactly what the original philosopher said; maybe it's just a popular simplification.) There is the unspoken assumption that when people "do what they want", that does not include caring about others; that people must be forced into pro-social behavior... and the person who says this usually suggests that some group they identify with should be given power over the evil humans to force them into doing good.

And somehow people never realize the paradox of where the "wanting to do what seemingly no one wants to" comes from. I mean, if no one really cared about X, then no one would be concerned that no one cares about X, right? If nobody cares about sorting pebbles, then nobody feels that we should create some mechanisms to force people to sort pebbles because otherwise, oh the horrors, the pebbles wouldn't be sorted properly. So what; the pebbles won't be sorted, no one cares. But we care about people in need not getting help. So that desire obviously comes from us. Therefore acting on that desire is not contrary to "doing what we want", because it's a part of what we want.

So... now these are my thoughts, not Rand's... one possible interpretation is that the people who created these systems of ethics actually were psychopaths. They really didn't feel any desire to help other people. But they probably understood that other people would reward them for creating ideas about how to help others. Probably because they understood on an intellectual level that without some degree of cooperation, society would fall apart, which would be against their interest. So they approached it like a game theory problem: no one really cares about other people, but blah blah blah iterated prisoner's dilemma or something, therefore people should act contrary to their instincts and actually help each other. And because these psychopaths were charming people, others believed their theories expressed the highest wisdom and benevolence, and felt guilty for not seeing things so clearly. (Imagine a less intelligent Professor Quirrell suffering from the typical mind fallacy, designing rules for a society composed of his clones.)

Replies from: Benito, Nornagest, army1987, pragmatist, VAuroch, blacktrance
comment by Ben Pace (Benito) · 2014-05-12T15:20:18.893Z · LW(p) · GW(p)

I've been reading Pinker's "Better Angels of Our Nature" and it seems to me that people don't need to be psychopaths to have difficulty feeling empathy and concern for other people. If you've read HPMOR, the villagers who used to enjoy cat burning are a good example, one which Pinker also uses. He suggests that our feelings of empathy have increased over time, although he's not sure for what reason. So earlier, a couple of people in their better moments might have claimed caring about others was important, but generally people were more selfish, so the two did come out of sync.

I mean, even today when you say you care about other people, you don't suddenly donate all of the money that isn't keeping you alive to effective charities, because you don't actually feel empathy with every single other person on this earth. You don't have to be a psychopath for that to happen.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-12T21:38:39.418Z · LW(p) · GW(p)

This reminds me of this part from "The Failures of Eld Science":

"A woman of wisdom," Brennan said, "once told me that it is wisest to regard our past selves as fools beyond redemption—to see the people we once were as idiots entire. I do not necessarily say this myself; but it is what she said to me, and there is more than a grain of truth in it. As long as we are making excuses for the past, trying to make it look better, respecting it, we cannot make a clean break. It occurs to me that the rule may be no different for human civilizations. So I tried looking back and considering the Eld scientists as simple fools."

"Which they were not," Jeffreyssai said.

Maybe, analogously, it would be wise to regard the former civilizations as psychopaths, although they were not. This includes religions, moral philosophies, etc. The idea is that those people didn't know what we know now... and probably also didn't feel what we feel now.

EDIT: To be more precise, they were capable of having the same emotions; they just attached them to different things. They had the same chemical foundation for emotions, but connected them with different states of mind. For example, they experienced fun, but instead of computer games they connected it with burning cats; etc.

(Of course there are differences in knowledge and feelings among different people now and in the past, etc. But there are some general trends, so if we speak about sufficiently educated or moral people, they may have no counterparts in the past, or at least not many of them.)

comment by Nornagest · 2014-05-12T21:16:00.932Z · LW(p) · GW(p)

Some ethical systems posit a false dichotomy between "doing what one wants" and "helping other people". And then they derive an 'ethical' conclusion that "doing what one wants" is evil, and "helping other people" is good, by definition.

Funny, this is a decent summary of an idea I've had kicking around for a while, though framed differently. A more or less independent one, I think; I've read Rand, but not for about a decade and a half.

I'd also add that "helping people" in this pop-culture mentality is typically built in a virtue-ethical rather than a consequentialist way; one is recognized as a good person by pattern-matching to preconceived notions of how a good person should behave, not by the expected results of one's actions. Since those preconceptions are based on well-known responses to well-known problems, a pop-culture altruist can't be too innovative or solve problems at too abstract a level; everyone remembers the guy that gave half his cloak to the beggar over the guy that pioneered a new weaving technique or produced an unusually large flax crop. Nor can one target too unfashionable a cause.

Innovators might eventually be seen as heroes, but only weakly and in retrospect. In the moment, they're more likely to be seen neutrally or even as villains (for e.g. crowding out less efficient flax merchants, or simply for the sin of greed). Though this only seems to apply in certain domains; pure scientists for example are usually admired, even if their research isn't directly socially useful. Same for artists.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-13T08:55:24.695Z · LW(p) · GW(p)

one is recognized as a good person by pattern-matching to preconceived notions of how a good person should behave, not by the expected results of one's actions

Yes, even when the "generally seen as good" actions are predictably failing or even making things worse, you are supposed to do them. Because that's what good people do! And you should signal goodness, as opposed to... uhm, actually making things better, or something.

comment by A1987dM (army1987) · 2014-05-13T16:43:09.093Z · LW(p) · GW(p)

IOW “Typical Mind and Disbelief In Straight People” but s/straight/good/?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-13T20:22:37.890Z · LW(p) · GW(p)

Exactly.

This pattern of "taking something unbelievable other people said, and imagining what it would mean if, from their point of view, it literally made complete sense, even if that creates an impolite ad-hominem argument against them" probably has the potential to generate many surprising hypotheses.

It probably needs some nice short name, to remind people to use it more often.

comment by pragmatist · 2014-05-12T20:42:20.870Z · LW(p) · GW(p)

I don't read philosophy, so I can't tell if someone has said it exactly like this, but it seems to me that this is not a strawman. At least it seems to me that I have heard such ideas floating around, although not expressed this clearly. (Maybe it's not exactly what the original philosopher said; maybe it's just a popular simplification.) There is the unspoken assumption that when people "do what they want", that does not include caring about others; that people must be forced into pro-social behavior... and the person who says this usually suggests that some group they identify with should be given power over the evil humans to force them into doing good.

I do read philosophy, and this does seem like a strawman to me. I'm not aware of a single serious moral philosopher who believes there is a sharp dichotomy between "doing what you want" and "helping others".

The only philosopher who comes close, I think, is Kant, who thought that the reasons for performing an action are morally relevant, above and beyond the action and its consequences. So, according to Kant, it is morally superior to perform an act because it is the right thing to do rather than because it is an act I want to perform for some other reason. Given this view, the ideal test case for moral character is whether a person is willing to perform an act that goes against her non-moral interests simply because it is the right thing to do. But this still differs from the claim that altruistic behavior is opposed to self-interested behavior.

Replies from: blacktrance
comment by blacktrance · 2014-05-13T18:59:37.881Z · LW(p) · GW(p)

I also read some philosophy, and while the dichotomy between doing what you want and helping others isn't often stated explicitly, it's common to assume that someone who is doing what they want is not benevolent and is likely to screw people over. Mainly it's only the virtue ethicists who think that egoists would be benevolent.

comment by VAuroch · 2014-05-12T20:57:38.270Z · LW(p) · GW(p)

And somehow people never realize the paradox of where the "wanting to do what seemingly no one wants to" comes from. I mean, if no one really cared about X, then no one would be concerned that no one cares about X, right? If nobody cares about sorting pebbles, then nobody feels that we should create some mechanisms to force people to sort pebbles because otherwise, oh the horrors, the pebbles wouldn't be sorted properly.

Well, no. For example, I care very much about these pebbles right here (these represent my friends), and recognize that there are many other people who don't care about these pebbles and instead care about totally different pebbles I don't care either way about. And some other people I know care about some of my pebbles, but not the rest, and I care about some of theirs but not the rest.

It occurs to me that if there were a broad set of principles everyone agreed to which said that, ethically, all pebbles ought to be sorted, then everyone would care some about my pebbles, at the comparatively low cost for me of caring a little about other people's pebbles.

Of course, from there it's a short step to people who conclude that, ethically, it is best to disregard your own particular attachment to your personal pebbles and be an effective pebblist, taking whatever actions most effectively sort pebbles anywhere even if that means your own pebbles are less sorted than they could be if you devoted more time to them. And some people take that too far and provoke Rayn And Pebblist to promote focusing on your own pebbles to the exclusion of all else.

comment by blacktrance · 2014-05-12T17:03:14.656Z · LW(p) · GW(p)

And somehow people never realize the paradox of where the "wanting to do what seemingly no one wants to" comes from. I mean, if no one really cared about X, then no one would be concerned that no one cares about X, right?

The way out of this paradox is that no one wants to promote X themselves, but they want other people to do it.

comment by jaime2000 · 2014-05-12T21:42:56.014Z · LW(p) · GW(p)

I have recently discovered a technique called "ranger rolling" which has proven ridiculously useful in dealing with my clothing. It basically allows you to turn each item of clothing into an individual block, which you then use to play real life Tetris. This is a much better system than treating them as stacks of paper (which is what happens when you fold them) or as amorphous blobs (which is what happens when you shove them into drawers however you can).

Replies from: TylerJay, pragmatist
comment by TylerJay · 2014-05-12T23:30:42.104Z · LW(p) · GW(p)

I've never heard it called that, but I roll most of my clothes when traveling for work. They end up less wrinkled and you can fit a lot into a small volume. I highly recommend it.

comment by pragmatist · 2014-05-13T07:19:08.143Z · LW(p) · GW(p)

Looks interesting, but I'm assuming this doesn't work if I like to iron my clothes before storing them. Is that right, or does the rolling not majorly detract from the ironing?

Replies from: jaime2000
comment by jaime2000 · 2014-05-13T13:18:06.135Z · LW(p) · GW(p)

Looks interesting, but I'm assuming this doesn't work if I like to iron my clothes before storing them. Is that right, or does the rolling not majorly detract from the ironing?

I don't iron my clothes before storing them, so I couldn't tell you, but surely this is an opportunity to practice the virtue of empiricism? Iron a couple of shirts, carefully roll them, leave them for a day or two, and check how the wrinkling compares to your usual method of storage. Then share your results for goodwill and karma.

Replies from: DanielLC
comment by DanielLC · 2014-05-15T22:46:29.434Z · LW(p) · GW(p)

Asking is also a virtue.

comment by Viliam_Bur · 2014-05-15T08:49:57.677Z · LW(p) · GW(p)

An idea: prestige-based prediction market.

Prediction markets (a.k.a. putting your money where your mouth is) are popular among rationalists, but kinda unpopular with governments. It is too easy to classify them as gambling. But if we remove the money, people have less incentive to get things right.

But there is also a non-monetary thing people deeply care about: prestige. This can't be used on a website with anonymous users, but could be used with famous people who care about what others think about their opinions: journalists or analysts. So here is a plan:

A newspaper (existing or a new one) could make a "Predictions" section, where experts would be asked to assign probabilities to various outcomes. If they guessed correctly, they would gain points; if they guessed incorrectly, they would lose points. The points would influence their position on the page: Opinions of predictors with more points would be at the top of the page (with larger font); opinions of predictors with fewer points would be at the bottom (with smaller font). Everyone starts with some given number of points; if someone drops below zero, they are removed from this newspaper section, forever. And a new predictor, with the given starting number of points, is selected to replace them.

There is no money directly involved here, so the government can't object that it is gambling. But still, the predictors will care about their outcomes (because those who don't will gradually be removed from the system). And the readers will see the prediction market in action, although they will not be allowed to bet there. Or perhaps they could be allowed to cast an internet vote, too, and see how they compare with the official predictors. Actually, the new predictors could be picked from among the most successful readers.
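
A minimal sketch of how the point-keeping could work, assuming a Brier-style proper scoring rule; the names, the starting balance, and the stake per prediction are all hypothetical, since the proposal above doesn't specify them:

```python
from dataclasses import dataclass

START_POINTS = 100.0  # "everyone starts with some given number of points" (hypothetical value)
STAKE = 10.0          # how many points a single prediction can move (hypothetical value)

@dataclass
class Predictor:
    name: str
    points: float = START_POINTS

def payoff(prob: float, happened: bool) -> float:
    """Brier-style proper scoring rule, rescaled so that saying 50% pays 0,
    a confident correct call pays +1, and a confident wrong call pays -3.
    Expected payoff is maximised by reporting your true belief."""
    p = prob if happened else 1.0 - prob
    return 1.0 - 4.0 * (1.0 - p) ** 2

def settle(predictor: Predictor, prob: float, happened: bool) -> None:
    """Resolve one prediction and update the predictor's prestige points."""
    predictor.points += STAKE * payoff(prob, happened)

def page_order(predictors: list[Predictor]) -> list[Predictor]:
    """Drop anyone below zero (removed forever, per the proposal) and sort
    the rest so the best predictors appear at the top of the page."""
    return sorted((p for p in predictors if p.points >= 0),
                  key=lambda p: p.points, reverse=True)

# Example: two pundits call the same event, which then happens.
alice, bob = Predictor("Alice"), Predictor("Bob")
settle(alice, 0.9, True)   # confident and right -> gains points
settle(bob, 0.2, True)     # confident and wrong -> loses points
print([(p.name, round(p.points, 1)) for p in page_order([alice, bob])])
# [('Alice', 109.6), ('Bob', 84.4)]
```

The asymmetry (a confidently wrong call costs more than a confidently right one earns) is one way to make overclaiming expensive; a logarithmic rule would punish it even more harshly. Whether prestige should be global or per-topic is a separate knob.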

This whole thing is inspired by the fact that some people also care about virtual money; e.g. by gold in World of Warcraft. As long as you have the virtual currency, you can have a prediction market running on that virtual currency, and it's like having a casino in the World of Warcraft world: not quite real, therefore not regulated the same way.

Replies from: Lumifer, Douglas_Knight
comment by Lumifer · 2014-05-15T14:47:49.820Z · LW(p) · GW(p)

But still, the predictors will care about their outcomes (because those who don't will gradually be removed from the system).

So you are suggesting a system that relies on the opinions of people who got selected because they really want to see their names at the top of the page in big font?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-15T16:01:25.651Z · LW(p) · GW(p)

If the only way to see their names at the top of the page in big font is to provide correct predictions... why not?

The classical prediction market relies on opinions of people who got selected because they really wanted to make money on the prediction market. What's the big difference?

Okay... I can imagine that if someone's goal is to bring attention to themselves, they might make correct predictions to get to the top, and then intentionally make shocking (incorrect) predictions to bring even more attention to themselves. Kinda like people with too much karma sometimes start trolling, because, why not.

Replies from: Lumifer
comment by Lumifer · 2014-05-15T16:59:07.851Z · LW(p) · GW(p)

What's the big difference?

Money is a MUCH better motivator.

In particular, making predictions is not costless. To consistently produce good forecasts you need to commit resources to the task -- off-the-cuff opinions are probably not going to make it. Why should serious people commit resources, including their valuable time, if the only benefit they get is seeing their name in big letters on top of a long list?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-15T18:20:17.908Z · LW(p) · GW(p)

Well, this is something that can be tested experimentally. It could be statistically tested whether the results of the top predictors resemble random noise.
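
As a rough illustration of the kind of test this would be (a sketch only, assuming the simplest case of yes/no questions and a null hypothesis of coin-flipping; real probabilistic forecasts would call for a calibration or Brier-score comparison instead):

```python
from math import comb

def p_value_vs_coin_flipping(correct: int, total: int) -> float:
    """One-sided binomial test: the probability of getting at least
    `correct` answers right out of `total` yes/no questions if every
    answer were a fair coin flip."""
    return sum(comb(total, k) for k in range(correct, total + 1)) / 2 ** total

# Example (made-up numbers): a top predictor got 32 of 50 questions right.
print(p_value_vs_coin_flipping(32, 50))  # ~0.03, i.e. unlikely to be pure noise
```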

Some people spend incredible amounts of time on the internet, reading about stuff that interests them. I can imagine they could make good predictions in their area. (And there should be a "no vote" option for questions outside of their area.)

Wikipedia exists despite not paying its contributors, unlike other encyclopedias. And there is some good stuff there. Also bad stuff... but that's what the competition between predictors could fix.

There is probably a limit on how difficult a question can be and still be predictable. But it could be higher than we imagine. Especially if the predictions become popular, so that for many topics there would be predictors whose hobby it is.

There are some technical details to solve, e.g. whether the predictor's prestige will be global or topic-dependent. (To prevent people from systematically giving ten great predictions in topic X, and then one bad but very visible one in topic Y.) But that's like having multiple StackExchange accounts.

comment by Douglas_Knight · 2014-05-15T17:15:40.555Z · LW(p) · GW(p)

Pundits already make predictions all the time, but no one scores them. Even ones where the outcome is very clear, like finding a nuclear program in Iraq or Apple making a TV.

So I think it is important to identify what the problem is and make sure you are actually addressing it. Setting up special venues has some advantages, like making sure that the questions are precise and judgeable, and attracting the right kind of people. Prestige for pundits is basically venue. Moving up and down inside a venue is a pretty small scale of change. I suppose that it is possible that if your venue has lots of churn, then the stakes would be higher.

One venue for realish money celebrity predictions is Long Bets, but while the emphasis is on accountability, it sure isn't on building a track record.

Also: does any government other than the US government object to prediction markets? I suppose that ipredict's bankroll limitations indicate an objection by NZ, but I'm skeptical that this is a significant limiting factor.

Replies from: 9eB1
comment by 9eB1 · 2014-05-18T20:06:01.285Z · LW(p) · GW(p)

There is a site called pundittracker.com that tracks the predictions of pundits. Personally, I don't think it's all that interesting, in large part because the most concrete testable predictions pundits make are in areas not particularly interesting to me. But at least someone is trying.

comment by [deleted] · 2014-05-12T10:41:57.291Z · LW(p) · GW(p)

Does anyone have a good grasp of the literature on the relationship between drinking and intelligence?

Replies from: raisin, someonewrongonthenet
comment by raisin · 2014-05-12T11:01:56.184Z · LW(p) · GW(p)

Correlational or causal, i.e. how drinking affects intelligence, or how much intelligent people usually drink?

Replies from: None
comment by [deleted] · 2014-05-12T11:37:58.344Z · LW(p) · GW(p)

I'm interested in the causal aspects to help me decide how much I should be drinking.

Replies from: Douglas_Knight, Lumifer
comment by Douglas_Knight · 2014-05-12T16:51:30.037Z · LW(p) · GW(p)

Smart people are less likely to abstain from drinking (search for the word "floored"). I suspect that the quantitative trend is driven by the choice to drink or not, and thus the correlation, even if it were causal, is not relevant to the question of how much to drink.

comment by Lumifer · 2014-05-12T14:33:50.329Z · LW(p) · GW(p)

Are you just asking how much drinking will make you stupider, long-term?

Replies from: None
comment by [deleted] · 2014-05-12T15:41:07.715Z · LW(p) · GW(p)

Yes. I presume it does make you stupider?

Replies from: Lumifer
comment by Lumifer · 2014-05-12T15:48:53.128Z · LW(p) · GW(p)

Not that I know of, at least in reasonable amounts (where "reasonable" is defined as not causing clinical-grade medical symptoms, like a failing liver).

I haven't seen any evidence that moderate drinking lowers IQ. And if your drinking is immoderate, cognitive effects are probably not what you should be worried about.

Replies from: None
comment by [deleted] · 2014-05-12T15:56:55.503Z · LW(p) · GW(p)

I see. Google tells me that smarter people tend to drink more. Would I be right in assuming that this doesn't mean I should drink to get smarter?

Replies from: Lumifer
comment by Lumifer · 2014-05-12T16:05:14.748Z · LW(p) · GW(p)

Yes, you would be right. I don't think drinking helps with IQ -- it's mostly used as a stress reliever and a social lubricant, in which roles it functions well.

comment by someonewrongonthenet · 2014-05-19T08:15:09.379Z · LW(p) · GW(p)

High certainty assigned to binge drinking causing some brain damage (if you have a hangover, you definitely binge-drank) via a combination of toxicity and depletion of key resources.

Low certainty assigned to moderate drinking possibly having protective effects on the aging brain via its blood thinning properties preventing stroke.

Medium certainty assigned to absolutely no protective or beneficial effects for moderate drinking in youth (beyond fun and social benefits).

Medium certainty assigned to the notion that for youth, the major drawback of moderate alcohol consumption is the risk of physically injuring yourself while intoxicated, not the actual toxicity.

I really doubt there is significant damage associated with moderate drinking in youth. With the number of studies that have been done on this, if there were a huge noticeable difference in brain structure and function we would have found it by now. However, I do think it will damage you at least a little bit.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-05-19T23:17:14.241Z · LW(p) · GW(p)

Low certainty assigned to moderate drinking possibly having protective effects on the aging brain via its blood thinning properties preventing stroke.

There have been a couple of studies which say that, but I believe meta-analysis says the opposite: even moderate drinking is associated with increased rates of ischemic strokes (not just hemorrhagic). The only cause of death reduced among moderate drinkers is ischemic heart disease.

comment by PhilR · 2014-05-14T20:28:31.456Z · LW(p) · GW(p)

Question: what are the norms on showing up to meetups for the first time? I happen to be in Berkeley this week, and since there's a meetup this evening I thought I might check it out; should I just show up, or should I get in touch with the organizers and let them know I'm coming/around?

I predict that the answer will be something like "new attendees are welcome, announced or otherwise, but {insert local peculiarity here, e.g. 'Don't worry about the sign that says BEWARE OF THE LEOPARD, we're just getting ready for Towel Day'}". However, enough of my probability mass is elsewhere that I thought I'd check. Also, I couldn't find a definitive statement of the community norms within reach of Google, so I thought I'd change that by asking reasonably publicly.

Replies from: Manfred
comment by Manfred · 2014-05-15T00:46:55.674Z · LW(p) · GW(p)

As someone who has been on both sides, "just show up and introduce yourself" has been good every time so far.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-05-15T00:58:14.092Z · LW(p) · GW(p)

That'll work fine at my local meetup.

One tip: If there's a mailing list / Google Group / Meetup.com group / etc., get on it, so you can see topic announcements and contact info for the organizers.

comment by [deleted] · 2014-05-12T17:34:36.899Z · LW(p) · GW(p)

What happened to the plans of creating more thematic subforums? Is anyone who's in charge willing to implement them?

Replies from: eggman, Larks
comment by eggman · 2014-05-13T10:20:33.637Z · LW(p) · GW(p)

It doesn't seem like the webmasters, or administrators, of Less Wrong receive these requests as signals. Maybe try sending them a private message directly, unless the culture of Less Wrong already considers that inappropriate, or rude.

comment by Larks · 2014-05-16T23:31:44.286Z · LW(p) · GW(p)

As a first approximation, if you want the LW codebase changed, you need to do it yourself.

comment by Risto_Saarelma · 2014-05-14T08:57:30.412Z · LW(p) · GW(p)

The Person of Interest TV show is apparently getting pretty explicit about real-world AGI concerns.

With Finch trying to build a machine that can predict violent, aberrant human behavior, he finally realized that the only solution was to build something at least as smart as a human. And that’s the moment we’re in right now in history. Forget the show. We are currently engaged in an arms race — a very real one. But it’s being conducted not by governments, as in our show, but by private corporations to build an AGI — to build artificial intelligence roughly as intelligent as a human that can be industrialized and used toward specific applications.

...I’m pretty confident that we’re going to see the emergence of AGI in the next 10 years. We have friends and sources within Silicon Valley — there is currently a headlong rush and race between a couple of very rich people to try to solve this problem. Maybe it will even happen in a way that no one knows about; that’s the premise we take for our show. But we thought it would be a fun idea that the Manhattan Project of our era — which is preventing nuclear terrorism, that’s the quiet thing that people have been diligently working on for 10 years — that’s the subtext of the whole show.

They're still doing the privacy versus data mining narrative, not talking about what might happen if you could cut humans out of the general research and industry loop, but they seem to be very much in with the idea of an AGI being possible very soon, with a potential massive societal impact, and probably being inimical to humans by default.

Replies from: Douglas_Knight, TylerJay
comment by Douglas_Knight · 2014-05-15T18:00:03.417Z · LW(p) · GW(p)

One thing in the show that I see very rarely outside of LW is the AI taking over a person.

comment by TylerJay · 2014-05-15T05:10:25.409Z · LW(p) · GW(p)

So I watched the first episode a while back and it seemed like they have an AI that models the world so well that it knows what's going to happen and who is involved. Maybe I missed something, but if it can tell what's going to happen, why can't it tell the difference between the one responsible for the bad thing happening and the victim?

Replies from: MrMind
comment by MrMind · 2014-05-15T07:46:19.217Z · LW(p) · GW(p)

I feel there's someone really competent behind the show, because your concern is addressed.

Spoiler alert (not too much, but still): GUR TBBQ GUVAT VF GUNG VG PNA. UBJRIRE SVAPU AB YBATRE PBAGEBYF GUR ZNPUVAR, NAQ AB YBATRE PNA PBZZHAVPNGR JVGU VG. FB UR YRSG N IREL GVAL ONPXQBBE SBE UVZFRYS, V.R. GUR FFA UR ERPRVIRF ERTHYNEYL.

comment by [deleted] · 2014-05-12T16:49:32.188Z · LW(p) · GW(p)

Throwing a half-formed idea out there for feedback.

For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues. Perhaps society should take that into account, and weigh their opinions more heavily. Right now, this would mean that gay marriage, marijuana legalization, abortion, etc would all very quickly become legal (In the US at least).

Possible counterarguments:

  1. Younger people haven't been right, they merely won the demographic battle and had their way. Current norms are worse than traditional ones.

  2. Younger people haven't been consistently right. They support bad ideas as often as any other group but those are rejected and forgotten.

  3. Younger people have been consistently right, but this trend may not continue.

  4. Younger people have been consistently right, and this will continue to be so, but it's still a terrible idea to privilege their opinions.

Thoughts?

Replies from: knb, None, Risto_Saarelma, jaime2000, None, None, satt, blacktrance, philh, Lumifer, Viliam_Bur, pcm, Larks, Strangeattractor
comment by knb · 2014-05-13T00:03:33.758Z · LW(p) · GW(p)

I think there's some truth to your counterarguments 1 and 2. Young people are easier to sway into any change-oriented movement, so any push for sweeping change will have a lot of youth behind it, even if it's an older person pulling the strings and reaping the benefits.

It was the youthful Red Guards who were guilty of the worst Cultural Revolution atrocities, and Pol Pot's regime was even more reliant on adolescent murderers killing everyone who held "criminal" traditional values or had received a traditional education.

In contrast, Deng Xiaoping was over 70 years old when he instituted his post-Cultural Revolution reforms.

Aside from teaching basic mathematical skills and literacy, the major goal of the new educational system was to instill revolutionary values in the young. For a regime at war with most of Cambodia's traditional values, this meant that it was necessary to create a gap between the values of the young and the values of the nonrevolutionary old.

The regime recruited children to spy on adults. The pliancy of the younger generation made them, in the Angkar's words, the "dictatorial instrument of the party." In 1962 the communists had created a special secret organisation, the Democratic Youth League, that, in the early 1970s, changed its name to the Communist Youth League of Kampuchea. Pol Pot considered Youth League alumni as his most loyal and reliable supporters, and used them to gain control of the central and of the regional CPK apparatus. The powerful Khieu Thirith, minister of social action, was responsible for directing the youth movement.

Hardened young cadres, many little more than twelve years of age, were enthusiastic accomplices in some of the regime's worst atrocities. Sihanouk, who was kept under virtual house arrest in Phnom Penh between 1976 and 1978, wrote in War and Hope that his youthful guards, having been separated from their families and given a thorough indoctrination, were encouraged to play cruel games involving the torture of animals. Having lost parents, siblings, and friends in the war and lacking the Buddhist values of their elders, the Khmer Rouge youth also lacked the inhibitions that would have dampened their zeal for revolutionary terror.

The Nazis, Bolsheviks, and Italofascists were also Young Turk movements, although the generation gap was not quite so extreme. In the case of the Nazis, the weakened conservative Junker elite (epitomized by the elderly Paul von Hindenburg) first tried to rein them in, then tried to use them, and wound up losing everything to them.

comment by [deleted] · 2014-05-15T10:48:40.174Z · LW(p) · GW(p)

I'm Against Moral Progress. I don't think moral progress, the way we usually talk about it, is well founded. We observe moral change, then we decide that since past moral change made values ever more like our present values on average (which is nearly a tautology), the process itself must be good, despite us having no clear understanding of how it works.

Such confusion fogs many people on a similar process, evolution: having noticed that they like opposable thumbs and that over time past hominids have come to resemble present hominids ever more, they often imagine evolution to be an inherently good process. This is a horribly wrong perception.

Younger people haven't been right, they merely won the demographic battle and had their way.

Young people in general are good at picking winners, and at quickly adapting to what is popular. Younger people's status quo bias will also fixate on newer norms, compared to older people whose aliefs hold that the status quo is something else. Winners will also tend to try to influence them, especially in our society where voting power and public opinion grant legitimacy.

Younger people haven't been right, but despite being a young person who has over the past 3 years drifted strongly towards traditional values, I can't say they were wrong either. They simply had a different value set from the older generations before them.

Consider that currently monogamy, or not flaying people alive, is still valued both by the old and the young. We don't feel a need to explain why this is so. But should very convincing PeopleFlayers land in Central Park (LW2014 would think this bad), or should Polyamory continue to gain ground (LW2014 would think this good), in 20 years we would be wondering why young people are into PeopleFlaying and Polyamory and older people are not as much, and if this says something important about the nature of morality.

comment by Risto_Saarelma · 2014-05-14T08:43:50.247Z · LW(p) · GW(p)

For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues.

I guess you're not from a country that had Stalin Youth around in the 1970s. (We weren't an Eastern Bloc country either, they were just useful idiots.)

Replies from: None
comment by [deleted] · 2014-05-15T11:09:01.086Z · LW(p) · GW(p)

In the 1970s, intelligent young American students at Harvard favored the Khmer Rouge.

Since the U.S. incursion into Cambodia in the spring of 1970, and the subsequent saturation-bombings The Crimson has supported the Khmer Rouge in its efforts to form a revolutionary government in that country. …

In the days following the mass exodus from Phnom Penh, reports in the western press of brutality and coercion put these assumptions into doubt. But there were other reports on the exodus. William Goodfellow in the New York Times and Richard Boyle, the last American to leave Phnom Penn in the Colorado Daily reported that the exodus from major cities had been planned since February, that unless the people were moved out of the capital city they would have starved and that there was a strong possibility of a cholera epidemic. The exodus, according to these reports, was orderly; there were regroupment centers on all of the major roads leading out of Phnom Penh and people were reassigned to rural areas, where the food supplies were more plentiful.

There is no way to assess the merits of these conflicting reports—and if there were instances of brutality and coercion, we condemn them—but the goals of the exodus itself were good, and we support them. …

The new government of Cambodia may have to resort to strong measures against a few to gain democratic socialism for all Cambodians. And we support the United Front [i.e. the Khmer Rouge] in the pursuit of its presently stated goals.

Another article "The Will of The people" concludes:

Congress and the public have come to accept that the U.S. must stop interfering in Cambodia’s affairs, which will surely result in well-deserved victory of the revolutionary forces led by Prince Sihanouk and the Khmer Rouge.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-15T15:55:13.451Z · LW(p) · GW(p)

The idiocy of the former group seems greater to me, because there the horrors happened geographically closer (which should switch them more into "near" mode), and they had enough time to learn about what happened (enough evidence). EDIT: On the other hand, the former group had a realistic chance to become the new leaders, while the latter praised someone who would have killed every single one of them.

But otherwise, both are examples of: "yeah, seems that millions have died horribly, but our beliefs that our role models are the good guys remain unshaken."

comment by jaime2000 · 2014-05-12T21:57:31.705Z · LW(p) · GW(p)

For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues.

Taboo "right."

comment by [deleted] · 2014-05-13T18:48:19.776Z · LW(p) · GW(p)

Alternatively:

There is no "right and wrong", it's a subjective value judgement, and similar to #1, they ultimately are the one's who's subjective values are taken to be objectively "right".

comment by [deleted] · 2014-05-13T00:14:06.431Z · LW(p) · GW(p)

This Gallup report suggests that views on abortion are more complicated. Young people are most likely to favor no restrictions on abortion, but also most likely to favor a categorical ban (even more likely than the 65+ crowd).

Replies from: BloodyShrimp
comment by BloodyShrimp · 2014-05-14T19:04:57.729Z · LW(p) · GW(p)

i.e., young people are most likely to have the least complicated views on abortion!

comment by satt · 2014-05-13T00:10:31.757Z · LW(p) · GW(p)

Right now, this would mean that gay marriage, marijuana legalization, abortion, etc would all very quickly become legal (In the US at least).

In the US, under-30 adults have less liberal views on abortion than middle-aged adults, and the under-30s were getting less liberal about it more quickly than older adults until 2010 or so. (Also, abortion's been legal in the US for four decades.)

comment by blacktrance · 2014-05-12T17:23:47.084Z · LW(p) · GW(p)

What were the equivalents of marijuana legalization and same-sex marriage 20 years ago? 40 years ago? Etc. And what policies did young people support that weren't enacted?

Replies from: Nornagest, None, polymathwannabe
comment by Nornagest · 2014-05-12T18:39:00.658Z · LW(p) · GW(p)

Let's see. In terms of youth subcultures, 1994 would be a little after grunge had peaked; punk would have been on its way out as a mass movement rather than a vestigial scene, but it still had some presence. Rage Against The Machine was probably the most politicized band I remember from that era, although it wasn't linked to any particular movement so much as a generalized morass of contrarian sentiment.

Anti-globalization wouldn't peak for another five years, but it was picking up steam. Race relations were pretty tense in the aftermath of the 1992 Rodney King riots. Tibetan independence was a popular cause. I also remember environmentalism having a lot of presence -- lots of talk about deforestation, for example. I don't remember much in the way of specific policy prescriptions, though.

Bill Clinton had just been elected, and I think he introduced his health care reform plan about that time. That one failed, but I don't remember it showing the same generational divisions that marijuana legalization and same-sex marriage now do. 'Course, I could be wrong; I was pretty young at the time.

comment by [deleted] · 2014-05-12T17:30:18.123Z · LW(p) · GW(p)

A whole lot of things having to do with race, I believe.

And what policies did young people support that weren't enacted?

This is what I don't know, and would like to pick LW's brains for.

comment by polymathwannabe · 2014-05-12T18:00:15.218Z · LW(p) · GW(p)

Nuclear disarmament, Non-Aligned countries, decolonization of Africa, ending the Vietnam War, and stopping the red scare witch hunts.

Replies from: Douglas_Knight, bramflakes
comment by Douglas_Knight · 2014-05-12T22:13:54.915Z · LW(p) · GW(p)

Are you saying that these were distinctively supported by young people?

If so, I'm skeptical that "stopping the red scare witch hunts" falls in that category, at least if you mean McCarthyism. The others seem more reasonable to me.

comment by bramflakes · 2014-05-12T20:18:58.389Z · LW(p) · GW(p)

One of these things is not like the others ...

Replies from: satt, pragmatist
comment by satt · 2014-05-13T00:20:42.346Z · LW(p) · GW(p)

Ending the Vietnam War? Although young people in 1969 reported being more likely to sympathize with anti-war demonstrators' goals, they were generally less likely to call the war a "mistake", at least between 1965 & 1971.

comment by pragmatist · 2014-05-12T20:44:13.752Z · LW(p) · GW(p)

Which one? I mean, superficially, the second option on the list ("Non-aligned countries") is not actually a policy proposal, but I'm assuming the charitable reading is something like "Support for non-alignment". Is that what you meant, or something else?

Replies from: bramflakes
comment by bramflakes · 2014-05-13T12:24:21.143Z · LW(p) · GW(p)

Decolonization of Africa.

Replies from: pragmatist
comment by pragmatist · 2014-05-13T15:07:39.283Z · LW(p) · GW(p)

I don't get it. In what respect is that not like the others?

Replies from: bramflakes
comment by bramflakes · 2014-05-13T19:49:47.562Z · LW(p) · GW(p)

It wasn't a good outcome.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-20T02:54:59.329Z · LW(p) · GW(p)

Neither was ending the Vietnam war. For that matter, did the Non-Aligned movement accomplish much of anything besides providing cover for various dictators?

Replies from: bramflakes
comment by bramflakes · 2014-05-20T09:25:29.638Z · LW(p) · GW(p)

I considered those two options but figured that they're more muddled cases. African decolonization was clearly an order of magnitude worse than the worst-case interpretations of Vietnam+Nonaligned.

comment by philh · 2014-05-13T13:20:16.458Z · LW(p) · GW(p)

Perhaps society should take that into account, and weigh their opinions more heavily.

This kind of feels like suggesting "if you notice that your tribe is becoming extinct, you should help to speed up the process".

comment by Lumifer · 2014-05-12T16:57:29.721Z · LW(p) · GW(p)

younger people have consistently been right in their position on social issues.

[Citation needed]

Not to mention that you treat "younger people" as a homogeneous group which, quite clearly, it is not.

Replies from: None
comment by [deleted] · 2014-05-12T17:13:12.178Z · LW(p) · GW(p)

Of course. The observation is that different demographics show markedly different attitudes on social issues, and that one such demographic seems to have a tendency to get things right. There are many possible counterarguments, but I am not convinced that the basic idea is unworkable.

Replies from: Lumifer
comment by Lumifer · 2014-05-12T17:32:31.276Z · LW(p) · GW(p)

Let me offer you an alternate explanation.

One thing about which I feel pretty safe generalizing is that the youth has considerably higher risk tolerance than the elderly. A consequence of that is that the young will actually go out and try all the ideas which swirl around any given culture at any given time. Most will turn out to be meh, but some will turn out to be great and some -- horrible.

Fast-forward about half a century and you know what? The elderly very clearly remember how, when young, they supported all the right ideas and very thoroughly forget how they supported the ideas which now decorate the dustbin of history.

Rinse and repeat for each generation.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-05-12T17:47:43.142Z · LW(p) · GW(p)

Solvent offered this hypothesis as #2.

Replies from: Lumifer
comment by Lumifer · 2014-05-12T17:51:44.302Z · LW(p) · GW(p)

Yes, and I suggest a plausible mechanism for that.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-05-12T20:07:53.254Z · LW(p) · GW(p)

I'm just saying that your first hostile comment is inappropriate and calling it "alternative" is misleading.

Replies from: Lumifer
comment by Lumifer · 2014-05-12T20:27:15.259Z · LW(p) · GW(p)

your first hostile comment is inappropriate

I rarely engage in hostilities on LW, but your readiness to assume such things is interesting :-P

Do note that the OP started by saying "Throwing a half-formed idea out there for feedback."

comment by Viliam_Bur · 2014-05-12T21:30:09.412Z · LW(p) · GW(p)

Younger people haven't been consistently right. They support bad ideas as often as any other group but those are rejected and forgotten.

This can be tested. Organize a huge youth conference that will provide dozens of new ideas. Record the ideas, wait 20 years, and review them; wait 50 years and review them yet again. Also, compare with new good things that happened meanwhile, but were not suggested at the conference.

comment by pcm · 2014-05-12T17:28:15.301Z · LW(p) · GW(p)

I suggest trying to find evidence about issues that made a larger difference, such as support for Mao or for fighting major wars. Maybe there's a principled definition of "social issues" that excludes things about which the young are wrong, but I'll guess that it's hard to find consensus about such a definition.

comment by Larks · 2014-05-15T00:28:18.333Z · LW(p) · GW(p)

It only seems this way because of selection bias. Young people generally want to change stuff, and history/ethics is written by the victors. In cases where young people lost, the status quo was maintained, which means we don't pay as much attention.

comment by Strangeattractor · 2014-05-14T15:47:49.887Z · LW(p) · GW(p)

I think, in practice, young people would have to start voting more in order to have their opinions reflected in politics. Voter turnout among young adults is very low, so when politicians make decisions, they feel like they can safely ignore their concerns.

comment by hg00 · 2014-05-14T04:45:48.681Z · LW(p) · GW(p)

So recently I've been philosophizing about love, sex, and relationships. I'm a man, and I experience lust much more often than love. However, it seems like long-term relationships are better than short-term relationships for a variety of reasons: consistently having sex through short-term relationships as a man requires that you spend a lot of time chasing women, and I've read about many mental health benefits that come with being in a loving relationship which I assume don't come into play if you're only having short-term relationships.

I'm an outgoing, masculine, fairly neurotypical guy, and I can get dates with women by putting in time and effort. However, it's rare for me to feel substantial intimacy or connection with them. So my question for LW is, how can I hack myself to be more loving and get infatuated with women for their personalities and the subtle feminine things they do instead of their overtly sexy aspects? E.g. one idea (rot13'd to avoid contaminating you with my ideas): gnxvat rpfgnfl.

Replies from: MrMind, free_rip, Viliam_Bur
comment by MrMind · 2014-05-14T08:28:01.389Z · LW(p) · GW(p)

In my experience, the primary factor that generates love is the amount of time I think about positive qualities in a woman.
It's a fairly simple and surprising hack of my brain that could very well work for you too: if I devote enough time (say, a couple of minutes every hour) to thinking about what I like in a woman, my brain will automatically start to generate feelings of love. Conversely, when I feel a lot of attraction, if I consciously stop thinking about her, the intensity of the feelings diminishes drastically.
It's almost like the brain were seeking to maintain internal consistency: if you think a lot about someone you're attracted to, it must be because you're in love, and vice versa. The best part is that, since we don't have access to our internal workings, the feelings generated in this way feel very true (they are true, actually).

My advice then is: before trying more invasive hacks of your brain, just devote some time to regularly thinking about what you like in her. There's a good chance that soon you'll start idealizing her.

comment by free_rip · 2014-05-14T07:10:48.382Z · LW(p) · GW(p)

One of the ways of building intimacy or closeness, which is a key component of companionate love (the type you seem to be going for here; have a look at the research on passionate vs. companionate love if you're interested), is self-disclosure that is responded to by one's partner with warmth, understanding and supportiveness.

You can spend a lot of time doing things together without having this self-disclosure: to get it, you need to want to disclose/hear more about the other person, and preferably have dates etc. where you spend some time just talking about whatever, in private, about your pasts or your thoughts - things that might lead to self-disclosure.

So first step, set up these situations. Second step, talk about your past and your thoughts and try to be open - be trusting. Relate random conversations to things you hold close to you. Third step, if your partner opens up to you, make sure to respond supportively and engage with it, and not brush it off or turn the conversation to less close topics.

Which is not to say you should do this all the time, fun dates and silliness and dancing in a club way too loud to talk in are good too. But with any luck, adding a bit more of this in will help you feel that connection and intimacy.

comment by Viliam_Bur · 2014-05-14T10:09:21.573Z · LW(p) · GW(p)

There is a thin line between changing your desires and suppressing them. You may replace a goal X by a goal Y, or you may just convince yourself that you did. -- Think about all the religious homosexual people who try to hack themselves to heterosexuality, believe they have succeeded, and then later realize it didn't work.

Is there a way to get both X and Y? For example having an open long-term relationship with one partner, and a few short-term relationships when you need them. Or to save time and effort, to have one long-term emotional relationship, and a few long-term contacts where both sides only want sex occasionally.

comment by Omid · 2014-05-13T15:51:04.483Z · LW(p) · GW(p)

This writing style has become a cliche:

Imagine a hypothetical situation X. In situation X, you would surely agree that we ought to do Y. Surprise! We actually are living in hypothetical situation X. So we are obliged to do Y.

Are there any ways to make it less obnoxious?

Replies from: shminux, None
comment by shminux · 2014-05-13T16:51:49.950Z · LW(p) · GW(p)

Why do you feel that it comes across as obnoxious?

Replies from: gjm
comment by gjm · 2014-05-13T20:27:50.719Z · LW(p) · GW(p)

I am not Omid, but: It feels like an attempt to sneak something past the reader -- and indeed that's clearly what it is. The writer might defend it along the following lines: "Few readers would listen if simply told to do Y, because it's very different from what we're used to doing. But, in fact, principles we all accept mean that we should all be doing Y. So I need a way of explaining this that won't get blocked off at the outset by readers' immediate aversion to the idea of Y." Which is fair enough, but if you notice it being done to you you might feel patronized or manipulated. And while we might like to think of ourselves as making moral judgements on the basis of coherent principles, it's at least arguable that really those principles are often less fundamental than the judgements they purport to explain -- so the reader's actual conclusion might be "... so I need to revise my moral principles somehow" rather than "so I need to do Y", but the argument is phrased in a way that rather bypasses that conclusion.

Having said all which, I'll add that I don't think I myself find that sort of thing obnoxious; while I can't think of any case in which I've done the same myself, I can imagine doing and don't feel any guilt at the thought; and I think that even if our moral principles are derived from reflection on more specific moral judgements rather than vice versa, it's reasonable to give precedence to such a principle even when a specific judgement turns out to conflict with it. So I don't altogether agree with the argument I'm putting in Omid's mouth.

(Which is probably reason to doubt whether I've correctly divined his meaning. Omid, please correct me if my guess about what you find obnoxious is wrong.)

Replies from: shminux
comment by shminux · 2014-05-13T20:54:48.238Z · LW(p) · GW(p)

Ah, makes sense. I wonder if replacing "Surprise! We actually are living in hypothetical situation X." with "If we were living in X, how would we tell?" would be better.

comment by [deleted] · 2014-05-14T03:19:45.463Z · LW(p) · GW(p)

Say what you're about to do, before doing it.

Sometimes, providing a summary will activate an immune response. This is a red flag about the context in which you're communicating, but that doesn't mean you shouldn't participate in an effective way.

comment by MrMind · 2014-05-13T13:03:33.914Z · LW(p) · GW(p)

There's a woman who has recently started to treat me poorly, and I can't figure out why. I would like help in designing the most efficient social experiment to solve this riddle.
If it's not clear from the disclaimer above, this post is about a personal situation and contains details about the two people involved and their feelings.

Some possibly useful background info: we have been dancing together regularly for about a year. I like her a lot, and some months ago I told her so, trying to be as level-headed as possible. She replied that she is still in love with the guy from her last relationship. We also talked about the fact that I'm polyamorous while she is strictly monogamous, so this situation could never evolve into a relationship. Aside from the occasional reminder that I like her and find her attractive, there were no apparent repercussions of those talks: we kept on dancing together and being casual friends. In her own words, I've never made her feel uncomfortable.

However, for the past two weeks I have sensed that she is treating me with contempt, and I have no idea why. It seems to me that nothing in our relationship has changed, and this tells me that my model of the situation is way off-track.

There were two specific occasions that triggered my self-respect alarm: in the first one, while we were dancing she said something like "I tripped and almost fell on you. Oh but you would be happy if I accidentally fell on you, right?".
On the next occasion, while we were sorting through costumes provided by our dancing school, I said "I'm going to need an XL size, the L size just doesn't fit", to which she interjected "Oh no, you're going to need a triple X size."

These are some possible explanations:

  • She is not really treating me poorly. I'm being oversensitive, and in her mind these were irrelevant remarks, or this is just some bizarre way of flirting or attracting my attention.
  • She is treating me poorly, but there's no systemic reason behind it: something totally unrelated put her in a bad mood on two occasions, and I just happened to be around.
  • She is treating me poorly, there's a systemic reason behind it, but it's not related to me: the systemic version of the one above. She is in a bad mood for a recurring cause, whether it's the tension over the upcoming school performance or something else unrelated, and for some reason I'm her preferred punching bag.
  • She is treating me poorly, there's a systemic reason behind it, and it's related to me: I've read that a woman with low self-esteem may start to despise a guy who likes her. The reasoning goes along these lines: "I'm worthless and he likes me, so he likes worthless things, he must be worthless too."
  • Something else totally alien: an unknown variable or a constraint of the system I've never thought about, an "unknown unknown".

I've laid the conundrum at your feet; let's see if you can suggest a way to unravel it.
If you need more information ask here or in private and I'll do my best to answer you.

ETA: changed "girl" to "woman".

Replies from: palladias, JQuinton, Benito, MrMind, Daniel_Burfoot
comment by palladias · 2014-05-14T14:12:11.597Z · LW(p) · GW(p)

Here's an intervention, rather than a test: If she says something that hurts your feelings again, just say, "I know you're joking around, but that kind of hurts my feelings."

Instead of informing your model, inform hers.

Replies from: MrMind
comment by MrMind · 2014-05-15T07:28:17.359Z · LW(p) · GW(p)

That is a simple and worthwhile point of view. It made me change my mind, as per comment above, so upvoted!

comment by JQuinton · 2014-05-13T13:57:57.452Z · LW(p) · GW(p)

If it were me, I would just assume she was lightheartedly teasing. If that's the case, the course of action would be to tease back, but also in a lighthearted way. Either that, or reply with an extremely exaggerated form of self-deprecation; agree with her teasing but in a way that exaggerates the original intent. Even if that's not the case, and she's being vindictive, I think responding as though she were teasing would be ideal anyway.

Examples:

"I tripped and almost fell on you. Oh but you would be happy if I accidentally fell on you, right?" (tease back): "Clumsy people don't really do it for me" (exaggerate): "That's because I have never had a woman touch me before in my life"

"Oh no, you're going to need a triple X size." (tease back): "I think you just like saying 'triple X'. Get your mind out of the gutter, thanks" (exaggerate): "I'm going to cry myself to sleep over my size tonight "

If she laughs and/or plays along with these responses, she's probably just teasing. If she gets even more cruel in her response, then she's probably being intentionally vindictive.

Replies from: MrMind
comment by MrMind · 2014-05-13T18:01:52.668Z · LW(p) · GW(p)

I'll implement the 'tease back' strategy, plus I will also mention that I've noticed that she's treating me worse than usual lately.

This way I'll gather intel both from her emotional and logical reactions, and will try to make up a single model of the situation.

Replies from: gjm, JQuinton
comment by gjm · 2014-05-13T20:29:53.534Z · LW(p) · GW(p)

I am far from an expert in these matters, but would advise against both teasing back and saying explicitly that you interpret the teasing as "treating me worse than usual".

[EDITED to add: To be clear, I mean "don't do both of these together" rather than "both of these are individually bad ideas".]

Replies from: MrMind, Lumifer
comment by MrMind · 2014-05-14T08:13:03.586Z · LW(p) · GW(p)

Why not both? What could go especially wrong?

Replies from: palladias
comment by palladias · 2014-05-14T14:05:09.452Z · LW(p) · GW(p)

Because one is playful and the other feels hostile. Doing both at once won't give you a clear sense of what her response is to either. Do them in separate encounters.

comment by Lumifer · 2014-05-13T20:45:40.517Z · LW(p) · GW(p)

Why is teasing back a bad idea?

Replies from: gjm
comment by gjm · 2014-05-13T21:00:31.169Z · LW(p) · GW(p)

Apparently even with my edit I wasn't clear enough. Letting A be "tease back" and B be "mention that she seems to be treating you worse recently", I wasn't saying

  • "don't do A, and don't do B"

but was saying

  • "don't both-do-A-and-do-B".
comment by JQuinton · 2014-05-13T18:15:21.849Z · LW(p) · GW(p)

If you ask her a direct question, I would take into account that this would more than likely engage her press secretary and might not get the logical answer you are looking for.

Replies from: MrMind
comment by MrMind · 2014-05-14T08:12:25.925Z · LW(p) · GW(p)

Yeah, I explained myself poorly. By 'logical' I meant the 'rationalized' explanation.
It should at least tell me if she's aware of the behaviour or not.

Replies from: palladias
comment by palladias · 2014-05-14T14:10:00.347Z · LW(p) · GW(p)

Really? Because if someone told me I wasn't treating them well, I would apologize and make nice regardless of whether I'd been doing it intentionally. I think you are overestimating how well confronting her will work to inform you.

Think about (ahead of time) what response(s) you'd expect if it were all a misunderstanding and what response(s) you'd expect if it were deliberate. If there's a lot of plausible overlap between the two worlds, you won't learn very much, but you may make the whole thing more awkward by drawing attention to it.

Replies from: MrMind
comment by MrMind · 2014-05-15T07:26:44.130Z · LW(p) · GW(p)

I think you're right: telling her is not especially informative, plus it would surely modify her model of me and muddy the waters even more (I forgot to apply the principle that you disturb everything you measure).
I think I'll just tease her back and resort to telling her if and only if this escalates in a bad direction.

comment by Ben Pace (Benito) · 2014-05-13T16:46:33.839Z · LW(p) · GW(p)

Y'know egocentric bias? Where people think the world revolves around them more than it does? I find that I often see my friends' actions in terms of what they think of me, but I imagine that they're in fact focused on me a lot less, so I would advise trying to discount that idea strongly. If it bothers you more, then just look at your options e.g. Mention it to her, don't mention it to her, think of an experimental test for the hypothesis, etc. Then pick one. Otherwise... Worrying is of course useless if it isn't motivating a useful action, so attempt not to.

Replies from: MrMind
comment by MrMind · 2014-05-13T17:49:49.252Z · LW(p) · GW(p)

Mention it to her, don't mention it to her, think of an experimental test for the hypothesis, etc. Then pick one. Otherwise... Worrying is of course useless if it isn't motivating a useful action, so attempt not to.

Yes, an experimental test is just what I want to create. That should be the useful action motivating my question.

I understand that not discounting for egocentric bias is a form of reduced Pascal's wager: the small chance of a correlation of her behaviour with me specifically has a huge payoff, so I better devote careful effort to discern the probability of this being the case.

However, if the pattern continues, I think that the correlation becomes more and more probable.

Replies from: Benito
comment by Ben Pace (Benito) · 2014-05-13T22:33:12.900Z · LW(p) · GW(p)

I still think that people worry about what other people think of them more than is healthy, which is why I think the egocentric bias fix is important. If you can think of a test, try it if it worries you, but... Well, I don't know. Perhaps I'm Other-Optimising too much.

comment by MrMind · 2014-05-13T13:13:42.308Z · LW(p) · GW(p)

My probabilities for each scenario: 0.1 - 0.2 - 0.3 - 0.4 - 0.

Replies from: MrMind
comment by MrMind · 2014-05-13T17:56:36.072Z · LW(p) · GW(p)

After adjusting for egocentric bias, I'd say 0.2 - 0.2 - 0.3 - 0.3 - 0, even if this rings extremely wrong to my emotional brain.

comment by Daniel_Burfoot · 2014-05-16T03:37:01.948Z · LW(p) · GW(p)

I like her a lot, and some months ago I told her so, trying to be as level-headed as possible.

In my experience, explicit declarations never work. You need to convey attraction subtextually.

Replies from: MrMind
comment by MrMind · 2014-05-16T07:54:38.522Z · LW(p) · GW(p)

Because of plausible deniability, or some other factor? If I'm not mistaken, there were studies showing that we tend to like people more when they like us.

comment by [deleted] · 2014-05-12T23:15:13.998Z · LW(p) · GW(p)

This never occurred to me until today, but can you solve the 'three wishes from a mischievous but rule-abiding genie' problem just by spending your first wish on asking for a perspicuous explanation of what you should wish for? What could go wrong?

Replies from: TylerJay, shminux, None
comment by TylerJay · 2014-05-12T23:28:18.781Z · LW(p) · GW(p)

Asking what you "should wish for" still requires you to specify what you're trying to maximize. Specifying your goal in detail has all the same risks as specifying your wish in detail, so you have the same exposure to risk.

Edit: See my longer explanation below

Replies from: ShardPhoenix, None
comment by ShardPhoenix · 2014-05-13T11:13:08.974Z · LW(p) · GW(p)

You could possibly say "I wish for you to tell me you would wish for if you had your current intelligence and knowledge, but the same values and desires as me." That would still require just the right combination of intelligence, omniscience, and literal truthfulness on the genie's part though.

Replies from: Richard_Kennaway, None, Strilanc
comment by Richard_Kennaway · 2014-05-13T11:38:40.477Z · LW(p) · GW(p)

The genie replies, "What I would wish for in those circumstances would only be of value to an entity of my own intelligence and knowledge. You couldn't possibly use it. And besides, I'm sorry, but there's no such thing as 'your values and desires'. You're barely capable of carrying out a decision made yesterday to go to the gym today. You might as well ask me to make colourless green ideas sleep furiously. On a more constructive note, I suggest you start small, and wish for fortunate chances of a scale that could actually happen without me, that will take you a little in whatever direction you want to go. I'll count something that size as a milli-wish. Then tomorrow you can make another, and so on. I have to warn you, though, that still ends badly for some people. Giving you this advice counts as your first milli-wish. See you tomorrow."

comment by [deleted] · 2014-05-13T17:33:17.446Z · LW(p) · GW(p)

I'm nowhere near that confident in my values and desires.

comment by Strilanc · 2014-05-13T13:31:36.355Z · LW(p) · GW(p)

I wish for you to tell me [what] you would wish for [in my place] if you had your current intelligence and knowledge, but the same values and desires as me.

  • "You should wish for me to tell you what you would wish for in your place if I had your current intelligence and knowledge, but the same values and desires as you."

  • "I have replaced your values and desires with my own. You should wish to become a genie."

  • "Here is your list of all possible wishes."

  • "You should wish that genies never existed."

comment by [deleted] · 2014-05-13T00:49:11.933Z · LW(p) · GW(p)

Asking what you "should wish for" still requires you to specify what you're trying to maximize.

Can you explain why? Why couldn't that be exactly what I'm asking for?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-13T15:13:11.726Z · LW(p) · GW(p)

"Should" implies a goal according to some set of values. Since the genie is mischievous, it might e.g. tell you what you should ask for so as to make the consequences maximally amusing to the genie.

Replies from: None
comment by [deleted] · 2014-05-13T16:45:01.181Z · LW(p) · GW(p)

"Should" implies a goal according to some set of values.

Do you mean this as an empirical claim about the way we use the word? I think it's at least grammatical to say 'What should my ultimate, terminal goals be?' Why can't I ask the genie that?

Replies from: TylerJay, Kaj_Sotala
comment by TylerJay · 2014-05-13T19:22:38.952Z · LW(p) · GW(p)

Taboo the word "should" and try to ask that question again. I think you'll find that all "should"-like phrases have an implicit second part-- the "With respect to" or the "In order to" part.

If you ask "what should I do tomorrow?", the implicit second part (the value parameter) could be either "in order to enjoy myself the most" or "in order to make the most people happy" or "in order to make the most money"

You will definitely have to specify the value parameter to a mischievous genie or he will probably just pick one to make you regret your wish.

It seems that you're asking for a universal should which assumes that there is some universal value or objective morality.

Replies from: None
comment by [deleted] · 2014-05-13T19:58:00.813Z · LW(p) · GW(p)

I think you'll find that all "should"-like phrases have an implicit second part-- the "With respect to" or the "In order to" part.

Well, I'm interested in your answer to the question I put to Kaj: are you making a linguistic claim here, or a meta-ethical claim? I take it that given this:

It seems that you're asking for a universal should which assumes that there is some universal value or objective morality.

...that you're making the meta-ethical claim. So would you say that a question like this "What should my ultimate, terminal goals be?" is nonsense, or what?

Replies from: TylerJay
comment by TylerJay · 2014-05-13T22:15:18.039Z · LW(p) · GW(p)

So would you say that a question like this "What should my ultimate, terminal goals be?" is nonsense, or what?

Not complete nonsense; it's just an incomplete specification of the question. Let's rephrase the question so that we're asking which goals are right or correct, and think about it computationally.

So the process we're asking the genie to do is:

  1. Generate the list of all possible ultimate, terminal goals.
  2. Pick the "right" one

It could pick one at random, pick the first one it considers, pick the last one it considers, or pick one based on certain criteria. Maybe it picks the one that will make you the happiest, maybe it picks the one that maximizes the number of paperclips in the world, or maybe it tries to maximize for multiple factors that are weighted in some way while abiding by certain invariants. This last one sounds more like what we want.

Basically, it has to have some criteria by which it judges the different options, otherwise its choice is necessarily arbitrary.

So if we look at the process in a bit more detail, it looks like this:

  1. Generate the list of all possible ultimate, terminal goals
  2. Run each of them through the rightness function to give them each a score
  3. Pick the one with the highest score

So that "Rightness" function is the one that we're concerned with and I think that's the core of the problem you're proposing.

Either this function is a one-place function, meaning that it takes one parameter:

rightness_score(goals) => score

Or it's a two-place function, meaning that it takes two parameters

rightness_score(goals, criteria) => score

When I said earlier that all "should"-like statements have an implicit second part, I was claiming that you always have to take into account the criteria by which you're judging the different possible terminal goals that you can adopt.

Even if you claim that you're just asking the first one, rightness_score(goals), the body of the function still implicitly calculates the score in some way according to some criteria. It makes more sense to just be honest about the fact that it's a two-place function.

It sounds like you're asking the genie to pick the criteria, but that just recurses the problem. According to which criteria should the genie pick the criteria to pick the goals? That leads to infinite recursion which in this case is a bad thing since it never returns an answer.

Alternatively, you could claim that there is some objective rightness function in the universe by which to evaluate all possible terminal goals. I don't believe this and I think that most here on Less Wrong would also disagree with that claim.

That's why your question doesn't make sense. To pick a "best" anything, you have to have criteria with which to compare them. We could ask the genie to determine this morality function by examining the brains of all humans on earth, which is the meta-ethical problem that Eliezer is addressing with his Coherent Extrapolated Volition (CEV), but the genie won't just somehow have that knowledge and decide to use that method when you ask him what you "should" do.
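For intuition, here is a minimal sketch of that two-place-function point in Python. All the goal and criteria names below are made up purely for illustration; this is just the structure of the argument, not anyone's actual proposal.

    # Toy sketch: a goal only gets a "rightness" score relative to some criteria.
    def rightness_score(goal, criteria):
        """Two-place function: score a candidate terminal goal against criteria."""
        return sum(weight * feature(goal) for feature, weight in criteria)

    def best_goal(goals, criteria):
        """Steps 1-3 above: score every candidate goal and pick the highest."""
        return max(goals, key=lambda g: rightness_score(g, criteria))

    # Made-up example values:
    goals = ["maximize pleasure", "maximize paperclips", "complex human flourishing"]
    criteria = [(lambda g: "human" in g, 2.0), (lambda g: "paperclips" not in g, 1.0)]
    print(best_goal(goals, criteria))  # -> "complex human flourishing"

    # The regress: asking the genie to also pick the criteria just pushes the
    # question up a level: you now need meta-criteria to score candidate
    # criteria, then meta-meta-criteria, and so on without ever bottoming out.

Nothing inside the code supplies the criteria; they have to be passed in from outside, which is exactly the regress described above.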

Replies from: None
comment by [deleted] · 2014-05-13T22:57:53.115Z · LW(p) · GW(p)

Hmm, you present a convincing case, but the result seems to me to be a paradox.

On the one hand, we can't ask about ultimate values or ultimate criteria or whatever in an unconditioned 'one place' way; we always need to assume some set of criteria or values in order to productively frame the question.

On the other hand, if we end up saying that human beings can't ever sensibly ask questions about ultimate criteria or values, then we've gone off the rails.

I don't quite know what to say about that.

Replies from: TylerJay
comment by TylerJay · 2014-05-14T02:52:33.292Z · LW(p) · GW(p)

On the other hand, if we end up saying that human beings can't ever sensibly ask questions about ultimate criteria or values, then we've gone off the rails.

I'm not saying you can't ever ask questions about ultimate values, just that there isn't some objective moral code wired into the fabric of the universe that would apply to all possible mind-designs. Any moral code we come up with, we have to do so with our own brains, and that's okay. We're also going to judge it with our own brains, since that's where our moral intuitions live.

"The human value function" if there is such a thing is very complex, weighing tons of different parameters. Some things seem to vary between individuals a bit, but some are near universal within the human species, such as the proposition that killing is bad and we should avoid it when possible.

When wishing on a genie, you probably don't want to just ask for everyone to feel pleasure all the time because, even though people like feeling pleasure, they probably don't want to be in an eternal state of mindless bliss with no challenge or more complex value. That's because the "human value function" is very complex. We also don't know it. It's essentially a black box where we can compute a value on outcomes and compare them, but we don't really know all the factors involved. We can infer things about it from patterns in the outcomes though, which is how we can come up with generalities such as "killing is bad".

So after all this discussion, what question would you actually want to ask the genie? You probably don't want to change your values drastically, so maybe you just want to find out what they are?

It's an interesting course of thought. Thanks for starting the discussion.

Replies from: None
comment by [deleted] · 2014-05-14T03:44:09.635Z · LW(p) · GW(p)

I'm not saying you can't ever ask questions about ultimate values

Wait, why not? How would asking about ultimate values or criteria work, if we need to assume some value or criterion in order to productively deliberate?

comment by Kaj_Sotala · 2014-05-13T19:02:21.400Z · LW(p) · GW(p)

Do you mean this as an empirical claim about the way we use the word?

Not sure what you're asking. I guess it could be either an empirical or a logical claim, depending on which way you want to put it.

I think it's at least grammatical to say 'What should my ultimate, terminal goals be?' Why can't I ask the genie that?

Sure it's grammatical and sure you can, but if you don't specify what the "should" means, you might not like the answer. See my above comment.

Replies from: None
comment by [deleted] · 2014-05-13T19:11:38.443Z · LW(p) · GW(p)

"Should" implies a goal according to some set of values.

Let me rephrase my question: Suppose I presented a series of well-crafted studies which show that people often use the word 'should' without intending to make reference to an assumed set of terminal values. I mean that people often use the word 'should' to ask questions like 'What should my ultimate, terminal values be?'

Would your reaction to these studies be:

1) I guess I was wrong when I said that '"Should" implies a goal according to some set of values'. Apparently people use the word 'should' to talk about the values themselves and without necessarily implying a higher up set of values.

or

2) Many people appear to be confused about what 'should' means. Though it appears to be a well formed English sentence, the question 'what should my ultimate, terminal values be?' is in fact nonsense.

In other words, when you say that 'should' implies a goal according to some set of values, are you making a claim about language, such as might be found in a dictionary, or are you making a claim about meta-ethical facts, such as might be found in a philosophy paper?

Or do you mean something else entirely?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-14T07:11:16.819Z · LW(p) · GW(p)

I endorse the answers that TylerJay gave to this question, he's saying basically the same thing as I was trying to get at.

Replies from: None
comment by [deleted] · 2014-05-14T12:32:46.791Z · LW(p) · GW(p)

Do you have a response to the question I put to him? If it's true that asking after values or goals or criteria always involves presupposing some higher up goals, values or criteria, then does it follow from this that we can't ask after terminal goals, values, or ultimate criteria? If not, why not?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-15T07:22:01.425Z · LW(p) · GW(p)

Yes and no. You could just decide on some definition of terminal goals or ultimate criteria, and ask the genie what we should do to achieve the goals as you define them. But it's up to you (or someone else who you trust) to come up with that definition first, and the only "objective" criteria for what that definition should be like is something along the lines of "am I happy with this definition and its likely consequences".

Replies from: None
comment by [deleted] · 2014-05-15T14:23:39.096Z · LW(p) · GW(p)

and the only "objective" criteria for what that definition should be like is something along the lines of "am I happy with this definition and its likely consequences".

And you would say that the above doesn't involve any antecedent criteria upon which this judgement is based, say for determining the value of this or that consequence?

comment by shminux · 2014-05-13T06:31:26.910Z · LW(p) · GW(p)

How is it different from what Eliezer calls "I wish for you to do what I should wish for"?

With a safe genie, wishing is superfluous. Just run the genie.

Replies from: None
comment by [deleted] · 2014-05-13T16:58:39.414Z · LW(p) · GW(p)

Maybe it's only trivially different. But I'm imagining a genie that is sapient (so it's not like the time machine...though I don't know if the time machine pump thing is a coherent idea) and it's not safe. Suppose, say, that it's programmed to fulfill any wish asked of it so as to produce two reactions: first, to satisfy the wisher that the wish was fulfilled as stated, and second, to make the wisher regret having wished for that. That seems to me to capture the 'mischievous genie' of lore, and it's an idea EY doesn't talk about in that article, except maybe to deny its possibility.

Anyway, with such a genie, wishing for it to do whatever you ought to wish for is probably the same as asking it what to wish for. I'd take the second option, because I'm not the world's best person, and I'd want to think over hitting the 'go' button.

Replies from: shminux
comment by shminux · 2014-05-13T17:39:34.511Z · LW(p) · GW(p)

make the wisher regret having wished for that

I suspect that to be able to evoke this reaction reliably, the 100%-jackass genie would have to explicitly exclude the "do what I ought to have wished for" option, and so is at least as smart as a safe genie.

Anyway, with such a genie, wishing for it to do whatever you ought to wish for is probably the same as asking it what to wish for. I'd take the second option, because I'm not the world's best person, and I'd want to think over hitting the 'go' button.

I... do not follow at all, even after reading this paragraph a few times.

Replies from: None
comment by [deleted] · 2014-05-13T17:49:59.715Z · LW(p) · GW(p)

I suspect that to be able to evoke this reaction reliably, the 100%-jackass genie would have to explicitly exclude the "do what I ought to have wished for" option, and so is at least as smart as a safe genie.

I agree that it's at least as smart as the safe genie, and I suppose it's likely to be even more complicated. The jackass genie needs to be able both to figure out what you really want, and to figure out how to betray that desire within the confines of your stated wish. I realize I do this with my son sometimes when he makes up crazy rules for games: I try to come up with ways to exploit the rule, so as to show why it's not a good one. I guess that kind of makes me a jackass.

Anyway, I take it you agree that my jackass genie is one of the possibilities? Being smart doesn't make it safe. And, as is the law of geniedom, it's not allowed to refuse any of my wishes.

I... do not follow at all, even after reading this paragraph a few times.

Sorry to be unclear. You asked me how my suggestion was different from just telling the genie 'just do whatever's best'. I said that my suggestion is not very different. Only, maybe 'do whatever's best' isn't in my selfish interest. Maybe, for example, I ought to stop smoking crack or something. But even if it is best for me to stop smoking crack, I might just really like crack. So I want to know what's in fact best for me before deciding to get it.

comment by [deleted] · 2014-05-13T13:31:30.627Z · LW(p) · GW(p)

I think the problem is, 'mischievous but rule-abiding' doesn't sufficiently limit the genie's activities to sane ones.

For instance, the genie pulls out a pen made entirely out of antimatter to begin writing down a perspicuous explanation, and the antimatter pen promptly reacts with the matter in the air, killing you and anyone in the area.

When the next person comes into the wasteland after it has stopped exploding and says "That's not mischievous, that's clearly malicious!", the genie points out that he can just make them all come back if someone wishes for it, so clearly it is only a bit of mischief, much like how many would consider taking a cookie from a four-year-old and holding it above their head mischievous: any suffering is clearly reversible. Oh, also: the last person asked for a perspicuous explanation of what they should wish for, and it is written with an antimatter pen on antimatter paper in that opaque, massive, magnetically sealed box, which is just about to run out of power, and then THAT person also blows up when the box's power containment fails.

Replies from: None
comment by [deleted] · 2014-05-13T17:01:03.249Z · LW(p) · GW(p)

That's kind of a cool story, but that genie is I think simply malevolent. I have in mind the genie of lore, which I think is captured by these rules: first, to satisfy the wisher that the wish was fulfilled as stated; second, to make the wisher regret having wished for that; and third, the genie isn't allowed to do anything else. I don't think your scenario satisfies these rules.

Replies from: None
comment by [deleted] · 2014-05-15T13:59:49.892Z · LW(p) · GW(p)

Well, that's true, based on those rules. The first person dies before the wish is completed, so clearly he wasn't satisfied. Let me pick a comparably hazardous interpretation that does seem to follow those rules.

The Genie writes down the perspicuous instructions in highly Radioactive, Radioluminescent Paint, comparable to that which poisoned people in the 1900's but worse, in a massive, bold font. The instructions are 'Leave the area immediately and wish to be cured of Radiation poisoning.'

When the wisher realizes that they have in fact received a near-immediately-fatal dose of radiation, they leave the area, follow the wish, and seem to be cured and not die. When they call out the Genie for putting them in a deadly situation and forcing them to burn a wish to get out of it, the genie refers them to Jafar doing something similar to Abis Mal in Aladdin 2. The Genie DID give them perfectly valid instructions for a concise wish. Had the Genie made the instructions longer, they would have died of radiation poisoning before reading them and wishing for it, and instructions which take longer than your lifespan to use hardly seem to the Genie to be perspicuous.

Is this more in mind with what you were thinking of?

Replies from: None
comment by [deleted] · 2014-05-15T14:29:00.583Z · LW(p) · GW(p)

Is this more in mind with what you were thinking of?

That's certainly a lot closer. I guess my question is: does this satisfy rule number three? One might worry that exposing the wisher to a high dose of radiation is totally inessential to the presentation of an explanation of what to wish for. Are you satisfied that your story differs from this one?

Me: O Genie, my first wish is for you to tell me clearly what I should ask for!

[The Genie draws a firearm and shoots me in the stomach]

Genie: First, wish for immediate medical attention for a gunshot wound.

This story, it seems to me, would violate rule three.

Replies from: None
comment by [deleted] · 2014-05-15T18:57:51.741Z · LW(p) · GW(p)

I think I need to clarify how it works when things that are totally inessential are being disallowed, then.

Consider your wish for information again: What if the Genie says:

Genie A: "Well, I can't write down the information, because writing it is totally inessential to giving you the information, and my wishing powers do not allow me to do things that are totally inessential to giving you the information.... not since I hurt that fellow by writing something in radioactive luminescent paint"

Genie A: "And I can't speak the information, because speaking it is totally inessential to giving you the information, and my wishing powers do not allow me to do things that are totally inessential to giving you the information... not since I hurt that other fellow by answering at 170 decibels."

Genie A: "And I can't simply alter your mind so that the information is present, because directly altering your brain is totally inessential... you see where I'm going with this. So what you should wish for with your second wish is that I can do things that are totally inessential to the wish... so that I can actually grant your wishes."

All of that SOUNDS silly. But it also seems at least partially true from the genie's perspective: Writing isn't essential, he can speak, speaking isn't essential, because he can write, brain alteration isn't essential, etc, but having some way of conveying the information to you IS essential. So presumably, the genie gets to choose at least one method from a list of choices... except choosing among a set of methods is what allowed him to hurt people in the first place. (By choosing a method that was set for arbitrarily maximized mischief)

If instead the Genie doesn't get to select methods until you tell him (hence making those methods essential to the wish, resolving the problem), that could lead to an entirely different approach to mischief.

Genie B: "Okay: First you'll have to tell me whether you want me to write it down, speak it out loud, or something else."

Me: "Write it down"

Genie B: "Okay: Next, you'll have to tell me whether you want me to write it with a pen, a pencil, or something else."

Me: "A Pen."

Genie B: "Okay: Next, you'll have to tell me whether you want to write it down with a black pen, a blue pen, or something else."

Me: "Black."

Genie B: "Okay. Now you'll have to tell me whether you want to write it on lined paper, copy paper, or something else."

Me: "Are you going to actually get to writing down the perspicuous wish? How many of these questions do I have left?"

Genie B: "999,996, approximately."

Me: "Seriously?"

Neither Genie A nor Genie B is actually helping you in the way you had in mind, but their approaches to not helping you are quite different. Which (if either) fits better with your vision of a mischievous genie?

comment by raisin · 2014-05-12T10:59:28.871Z · LW(p) · GW(p)

Would an average year in the life of an em in Hanson's Malthusian explosion scenario really be >0 QALY? Hanson has kinda defended this scenario because the ems would want to be alive but I don't think that means anything. I remember reading about mice and painful wireheading (probably Yvain's post) and how you can make mice want that kind of wireheading even though it's painful. Similarly it's easy to imagine how people would want to live painful and miserable lives.

Replies from: Douglas_Knight, Manfred, pcm
comment by Douglas_Knight · 2014-05-12T17:53:28.979Z · LW(p) · GW(p)

Hanson has kinda defended this scenario because the ems would want to be alive

Has he? I think his more typical defense is Poor Folks Do Smile.

Replies from: raisin
comment by raisin · 2014-05-12T18:41:06.561Z · LW(p) · GW(p)

Yeah, I read that, reconsidered my impression, and it seems you are right. My memories of his opinion seem to have become muddled and simplified from several sources, like his Uploads essay where he says "Most uploads should quickly come to value life even when life is hard or short, and wages should fall dramatically." (which doesn't seem to be a value statement), that poor folks essay, this discussion here (in which he doesn't comment), and this video interview in which he constantly says that life will be okay even though we'll become more and more alienated from nature.

But I don't think my view of his opinion was 100% incorrect. The distinction between "valuing your life" and "wanting to live" is interesting. If you want to live, does that automatically mean that you value your life? I mean, I've had days when maybe 95% of the time I've felt miserable and 5% of the time I've felt okay, and in the end I've still considered those days okay. If I want to have more of those kinds of days, does that mean I value misery? How do you assess the quality of life in these kinds of cases, and in cases where the 'misery' is even more extreme?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-05-12T20:22:17.426Z · LW(p) · GW(p)

In your first paragraph, you agree with me that it isn't a value judgement, but then in your second paragraph, you go back to claiming that it is the foundation of his position. I think it is mainly a response to claims that uploads will be miserable. I think his position is that we should not care about whether the uploads value their lives, but whether we, today, value their lives; but he thinks that moral rhetoric does not well match the speaker's values. cf

comment by Manfred · 2014-05-12T15:44:58.520Z · LW(p) · GW(p)

I would guess yes - but that might change depending on details. At the very least, if we decided on some way to measure QALYs (our current methodology is real simple!), and then tried to maximize that measurement, we'd at best get something that looked like pared-down ems.

Ultimately, how you choose between futures is up to you. Even if something has an objective-sounding name like "quality-adjusted life years," this doesn't mean that it's the right thing to maximize.

comment by pcm · 2014-05-12T14:56:22.413Z · LW(p) · GW(p)

Yes, wanting to live isn't perfect evidence of a life worth living. But it sure looks like it provides some bayesian evidence.

Looking at whether the ems want more copies of themselves and want faster clock speeds should provide stronger evidence, and it seems unlikely that ems who don't want either of those will be common.

Ems should have some ability to alter themselves to enjoy life more. Wouldn't they use that?

Replies from: raisin
comment by raisin · 2014-05-12T15:11:01.714Z · LW(p) · GW(p)

If it provides bayesian evidence, shouldn't there be something that would in principle provide counterevidence? I can't figure out what that kind of counterevidence would be. Can you imagine an em population explosion where at some point no ems would want to make copies of themselves? I've got the impression that once an em population explosion gets started you can't really stop that because those ems that want copies get selected no matter how miserable the situation.

Ems should have some ability to alter themselves to enjoy life more. Wouldn't they use that?

Since in this scenario almost all ems work on a subsistence level and there's a huge number of ems, if enjoying life makes them even slightly less productive I don't think that kind of alteration would become very common due to selection effects.

Replies from: pcm
comment by pcm · 2014-05-12T17:37:33.919Z · LW(p) · GW(p)

Evidence that most ems are slaves whose copies are made at the choice of owners would seem relevant.

Making miserable workers a bit happier doesn't seem to make them less productive today. Why should there be no similar options in an em world?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-12T17:50:28.066Z · LW(p) · GW(p)

As I understand it, the premise behind ems is that it's possible to copy human minds into computers, but not to understand the spaghetti code. There won't be an obvious way to just make workers happier.

Replies from: pcm
comment by pcm · 2014-05-12T20:52:39.277Z · LW(p) · GW(p)

I expect faster and more reliable evaluations of Prozac-like interventions.

I also expect that emotions associated with having few cpu cycles are less strongly ingrained than those caused by lack of food.

comment by shminux · 2014-05-16T14:50:56.457Z · LW(p) · GW(p)

Find a problem, if any, in this reasoning:

(Dilbert)

Replies from: DanielLC, tut, witzvo
comment by DanielLC · 2014-05-19T03:14:47.489Z · LW(p) · GW(p)

The only reasoning in there was that wearable tech doesn't make you a cyborg because you're a simulation.

I'd say that even if the world is a simulation, there's no reason to go crazy with semantics. You call someone a cyborg when they'd qualify as a cyborg in a non-virtual world.

comment by tut · 2014-05-16T18:14:13.547Z · LW(p) · GW(p)

Wearable tech doesn't make you a cyborg because it isn't part of you? The fact that you scoff at things that sound like nonsense but which include the prediction that you will scoff is (at best) very weak evidence for the claim, since scoffing is what you would do anyway? Besides, the garbage man doesn't say anything about how he knows about the simulation; if he just made it up, then his believing it is also unrelated to whether or not it is true.

comment by witzvo · 2014-05-16T21:06:41.488Z · LW(p) · GW(p)

the "you're a simulation" argument could explain anything and hence explains nothing. He managed to predict scoffing, but that wasn't a consequence of his hypothesis, that was just to be expected.

comment by Unnamed · 2014-05-14T05:44:32.511Z · LW(p) · GW(p)

Meetup posts have started appearing on the RSS feed for lesswrong Main (http://lesswrong.com/new/.rss).

I could switch my RSS feed to only include promoted posts, but that would increase the problem of the hiddenness of non-promoted Main posts. Is there something else that I could do, or does this need to be fixed on Less Wrong's end?

Replies from: shminux
comment by shminux · 2014-05-14T06:37:49.424Z · LW(p) · GW(p)

Meetups should have their own thread, like the Open Thread, not be posted in Discussion. Of course, we do not live in a should universe...

Replies from: MathiasZaman
comment by MathiasZaman · 2014-05-14T13:59:53.314Z · LW(p) · GW(p)

This would lower the visibility of individual meetups, which in turn could lower attendance or the number of newcomers for meetups.

Replies from: shminux
comment by shminux · 2014-05-15T19:17:21.909Z · LW(p) · GW(p)

People interested in meetups would check the meetup thread or get notified when it is updated. Really important mega-meetups or inaugural meetups can still be posted in the usual place.

comment by iarwain1 · 2014-05-15T19:06:44.757Z · LW(p) · GW(p)

I recently posted on the rationality diary thread about a study deadline / accountability system I've been doing with zedzed. So far it's worked well for me, and I'd be happy to help others in the same way that zedzed is helping me. If anybody wants to use such a system for what they're studying, just ask. Unfortunately for most subjects I can't provide anything more than a deadline and some accountability, since I probably don't know the subject too well.

Also, if anybody else is willing to provide a similar service to the community (and perhaps can even provide some subject-specific guidance), please respond below so that people can contact you.

Replies from: zedzed
comment by zedzed · 2014-05-15T19:11:37.540Z · LW(p) · GW(p)

I'd be happy to provide deadlines or accountability to anyone else who wants it.

comment by NancyLebovitz · 2014-05-15T15:52:03.345Z · LW(p) · GW(p)

I'm having a problem with posting comments to Slate Star Codex-- they're rejected as spam, even though it's the email address I've been using and I haven't included any links. Anyone else having this problem?

Edited to add: whatever it was got fixed.

comment by David_Gerard · 2014-05-14T18:49:54.956Z · LW(p) · GW(p)

"Super Rationality Adventure Pals the Saturday morning cartoon! On 1080p from a BitTorrent near you." Please post plotlines and excerpts.

[an old comment I thought I'd revive.]

Replies from: Manfred, fubarobfusco, Nornagest
comment by Manfred · 2014-05-15T01:01:55.836Z · LW(p) · GW(p)

Conscientiousness: Alice keeps putting off a project, since she knows it'll only take an hour (say, fixing a roof - after all, you only need to start an hour before the rainstorm). Bob just does it. Alice gets rained on.

Coonscientiousness: Alice and Bob's town is slowly invaded by a herd of raccoons. Alice looks up how to deal with raccoons, asks for advice, and looks for successful people and copies them. Bob just does what he thinks of first, yelling at them to stay away from his garden, until he gets too tired and raccoons eat all his vegetables.

Replies from: None, Lumifer
comment by [deleted] · 2014-05-15T01:33:07.228Z · LW(p) · GW(p)

herd of raccoons

Please, either 'a gaze of raccoons' or 'a nursery of raccoons'.

Replies from: DanielLC
comment by DanielLC · 2014-05-15T23:01:31.537Z · LW(p) · GW(p)

The moral of the episode: a group of raccoons is called a "gaze" or "nursery".

The More You Know

comment by Lumifer · 2014-05-15T14:50:40.773Z · LW(p) · GW(p)

Alice and Bob's town is slowly invaded by a herd of raccoons.

Zombie raccoons!

comment by fubarobfusco · 2014-05-15T01:03:03.340Z · LW(p) · GW(p)

Steal the plotline of "Feeling Pinkie Keen" from My Little Pony and fix the ending so instead of being about taking things on faith, it's about updating on the cumulatively overwhelming evidence that the quirky character's predictive ability actually works.

Replies from: Transfuturist
comment by Transfuturist · 2014-05-18T02:04:15.764Z · LW(p) · GW(p)

It's not about taking things on faith, it's about accepting that you don't have to know the inner workings of a model to realize that it's a good predictor.

Or is that what you just said? I guess I need to watch the episode again, the moral must be different than what I remember.

comment by Nornagest · 2014-05-14T19:50:51.669Z · LW(p) · GW(p)

I'd settle for a well-executed cartoon adaptation of Pratchett's Tiffany Aching books. Almost as good in terms of rationality, and a lot more marketable.

comment by plex (ete) · 2014-05-14T03:16:46.883Z · LW(p) · GW(p)

I’m a moderately long term lurker (a couple of years), and in the last ~6 months have had much more free time due to dropping all my online projects to travel around Asia. As a result of this I’ve ended up reading a lot of LW and having a huge amount of time to think. I really like understanding things, and it feels like a lot of the parts of how the interesting bits of reality work, which were tricky for many years, are making much more sense. This is pretty awesome and I can’t wait to get back home and talk to people about it (maybe visit some LW meetups so there are people with less inferential distance to talk to).

I’ve got a few ideas which seem relevant and possibly interesting/useful/new to LWers, but am hesitant to post up without some feedback because it’s more than a little intimidating, especially since most of the posts I’d like to make seem like they should go in main not discussion. I’d like someone to bounce ideas off and look over my posts so I know I’m not just going over old ground, skipping ahead too fast without explaining each step properly, or making silly mistakes.

An outline of the posts I’d like to write: Fuzzy Pattern Theory of Identity - Like all concepts in conceptspace, “me”ness is non-binary and blurry, with the central example being me right now, close examples being past and future selves or mes in alternate branches, more distant examples including other humans, and really distant examples including a dog. Proximity to “me” in thingspace seems most usefully defined as “how similar to me, in the ways I attribute importance to, is this configuration of matter”, plus examples to test your intuitions about this (e.g. a version of you without a brain is physically very like you, but you probably consider them much harder to identify with than a sibling, or perhaps even your pet cat). Possibly some stuff about the evolutionary usefulness of identity, how proximity to “me now” can be used as a fairly good measure of how much to follow a being’s preferences, or that may come later.

The Layers of Evolution - Briefly going through the layers of evolution: single-strand RNA can replicate, but two strands which generate each other are much more efficient, and more complex webs of chemical reactions are more efficient still, but “selfish” molecules could hijack accessible energy from the web to multiply without contributing. Cells separate different webs of self-replicating chemical reactions, giving some protection from rogue molecules at the cost of maintaining a cell wall and slowing molecule-level evolution. Multicellular organisms can be more efficient reproducers in certain ways due to cell specialization and the ability to act on a larger scale, but suffer from individual cells taking more than their fair share and growing to harm the organism. They counteract this by making all cells share the same DNA and not sharing cells, so an organism which has cancerous growth will die but not spread. Tribes of humans work effectively because division of labour and being able to accumulate larger stores of food make us more efficient in groups than as individuals, but make us vulnerable to selfish individuals taking more than their share at a cost to the group. Some more specific parallels drawn between levels of evolution, and how each layer specifically acts to prevent the layer below it from evolving towards “selfish” behaviour, why this happens (co-operation is a great strategy when it works), and why this is difficult (evolution is innately selfish and will lead individuals to exploit the group if they can).

Morality and Maths - Most of the features of morality which occur reliably in many different cultures indicate that it’s a method of enforcing co-operation between members of a group, with parallels to the lower levels of evolution. Examples w/ explanation (avoid harming others, share fairly, help those in need, reproductive co-operation), and the limits of each. Other common trends (often don’t apply outside own tribe/family, don’t override self-preservation generally, punishing non-punishers, having unusual in-group specific traditions, how much of modern globalized society reacts). An argument that it is okay and kind of awesome that morality emerges from statistically advantageous strategies evolution ends up following, and how, since conflict at any specific level is inherently unstable while relatively stable co-operation is definitely possible at lower levels and widely agreed to be beneficial, general co-operation may be the eventual rest state for humanity (though likely not perfect co-operation; resources are needed to check that everyone is playing fair and to discourage those who are not).

Chairman Yang’s quote about “Extend the self of body outward to the self of group and the self of humanity.”, and how each level of evolution (including morality) can be seen as partially extending the “self” or the priorities of the individual evolving unit to include a larger group in order to gain the co-operation of that group.

Fuzzy Identity and Decision Theory - If it makes sense to talk about how “me” something is as a slightly blurry non-binary property, this has interesting implications for decision theory. For example, it can help explain hyperbolic discounting (far future-ete is less me than near future-ete, so has smaller weight), and how working with your future selves/being worked with by your past selves also has slight parallels with the expanding self for more co-operation thing. An analysis of how treating identity as non-binary affects each decision theory, how many of the disjunctions between how a decision theory tells us to act in a situation and what our intuition directs us towards arise from the DTs treating identity as binary, how TDT can be seen as a partial implementation of a Fuzzy Identity Decision Theory, and the effects of fully embracing fuzzy identity (I think it can solve counterfactual mugging while giving sane answers to every other problem I’ve considered so far, but I have not formalized it and I could be missing something).

The above posts seem like they could be a mini sequence, since they all depend fairly heavily on each other and share a theme. Not sure of the title though, my best current is “Identity, Morality, Decision Theory”, which seems like it could be improved.

The Strange Loop of Consciousness - Treating consciousness as a strange loop; the mind sensing its current state, including its sense of current state, has reduced my confusion on the issue dramatically, though not completely. Some speculation on reasons why evolution could have produced key features of consciousness, labelled as potential just-so stories. I wrote half a post up on this a few months ago, but abandoned it mostly because I was worried my point would not come across properly and it would get a bad reaction. Probably best start from scratch if I get here again.

A Guided Tour of LessWrong - A post talking through the main areas of LW content and important ideas with plenty of links to key posts/places to start exploring and very brief summaries of key points. There’s a lot of really interesting content which is buried pretty deeply in the archives or in long lists of links, it’d be nice to point users at a bunch of nodes from one place.

There’s a bunch of other rough ideas I have for posts, but the above ones (plus something about logical first movers I’m waiting for more feedback from the decision theory google group on before posting) are the things I think I could potentially write a decent post on soon. Rough future ideas include some ideas on raising curious and intelligent children (needs research+talking to people with experience), improving the LW wiki (I’ve founded and run a couple of wikis and know my way around MW, the LW wiki has significant room for improvement), post(s) explaining my opinions on common LW positions (AI takeoff, cryonics, etc).

So, is anyone interested in helping out? I’ve got a lot more detailed reasoning in my head than above, so if you’ve got specific questions about justifications for some part which I’m likely to respond to anyway maybe best to hold them until I’ve made a draft for that post. Pointing me at posts which cover things I’m talking about would be good though, I may have missed them and would prefer not to duplicate effort if something’s already been said. I’m thinking I’ll probably write on google docs and give read/add comments power to anyone who wants it.

Replies from: Viliam_Bur, MrMind
comment by Viliam_Bur · 2014-05-14T09:53:18.977Z · LW(p) · GW(p)

I guess you just have to try it.

Make one article. Make it a standalone article about one topic. (Not an introduction to a planned long series of articles; just write the first article of the series. Not just the first half of an article, to be continued later; instead choose a narrower topic for the first article. As a general rule: links to already written articles are okay, but links to articles that don't yet exist are bad; especially if those not-yet-existing articles are used as an excuse for why the existing articles don't have a conclusion.)

Put the article in Discussion; if it is successful and upvoted, someone will move it to Main. Later perhaps, when two of your articles were moved, put the third one directly in Main.

The topics seem interesting, but it's not just what topic you write about, it's also how you write it. For example "The Layers of Evolution": I can imagine it written both very well and very badly. For example, whether you will only speak generally, or give specific examples; whether those examples will be correct or incorrect. (As a historical warning, read "the tragedy of group selectionism" for an example of something that seemed like it would make sense, but in the end it failed. There is a difference between imagining a mechanism, and having a proof that it exists.)

If you have a lot of topics, perhaps you should start with the one where you feel most experienced.

Replies from: ete
comment by plex (ete) · 2014-05-15T01:52:29.808Z · LW(p) · GW(p)

The Fuzzy Pattern Theory of Identity could reasonably be created as a stand-alone post, and probably the Layers of Evolution too. Guided Tour and Strange Loop of Consciousness too, though I'd rather have a few easier ones done before I attempt those. The other posts rely on one or both of the previous ones.

Glad they seem interesting to you :). And yes, layers of evolution is the one I feel could go wrong the most easily (though morality and maths may be the hardest to explain my point clearly in). It's partly meant as a counterpoint to the Eliezer post you linked, actually, since even though altruistic group selection is clearly nonsense when you look at how evolution works, selfish group selection seems like it exists in some specific but realistic conditions (at minimum, single->multicellular requires cells to act for the good of other cells, and social insects have also evolved co-operation). When individuals can be forced to bear significant reproductive losses for harming the group selfishly, selfishly harming the group is no longer an advantage. The cost of punishing an individual for harming your group is much smaller than the cost of passing up chances to help yourself at the expense of the group, so it is more plausibly evolvable, but it still requires specific conditions. I do need to get specific examples to cite as well as general points and do some more research before I'll be ready to write that one.

I... would still feel a lot more comfortable about posting something which at least one other person had looked over and thought about, at least for my first post. I've started writing several LW posts before, and the main reason I've not posted them is worry about a negative reaction due to some silly mistake. Most of my ideas follow non-trivial chains of reasoning, and without much feedback I'm afraid I may have ended up in outer Mongolia. Posting to Discussion would help a bit, but does not make me entirely comfortable. How about if I write up something on google docs, post a link here, then if there's not much notice in a few days use Discussion for getting initial feedback?

Replies from: MrMind
comment by MrMind · 2014-05-15T07:38:49.002Z · LW(p) · GW(p)

How about if I write up something on google docs, post a link here, then if there's not much notice in a few days use Discussion for getting initial feedback?

I think that would remove a substantial portion of your potential readers. Just suck it up and post something rough in Discussion, even if it feels uncomfortable.

For example: the piece that starts with "even though altruistic group selection is clearly nonsense" and runs to the end of that paragraph could be expanded just a little and posted stand-alone in an open thread.
Gather reactions.
Create an expanded post that addresses those reactions.
Post it to Discussion.
Rinse and repeat.

comment by MrMind · 2014-05-14T08:33:16.793Z · LW(p) · GW(p)

I think the better course of action is to post your ideas first in the discussion section, let the feedback pour in, and then, based on what you receive, craft posts for the main section. After all, that's what the discussion section is for.
That way you'll get many perspectives at once, without waiting for the help of a single individual.

Replies from: ete
comment by plex (ete) · 2014-05-15T01:54:57.777Z · LW(p) · GW(p)

Hm, you're suggesting making one rough post in Discussion, then using the feedback to make a second post in Main? I can see how that's often useful advice, but I think I'd prefer to try to justify everything thoroughly from the start, so I'd find it hard to avoid making a Main-length post straight away. Revising the post based on feedback from Discussion before moving it seems like a good idea, though.

Replies from: MrMind
comment by MrMind · 2014-05-15T07:32:52.116Z · LW(p) · GW(p)

Well, you have three layers on LW: a post in an open thread, a discussion post, and a main post (which might get promoted). Within these you can pretty much refine any idea you want without losing too much karma (you will lose some, mind you; it's almost unavoidable), so you can either reject it quickly or polish it until it shines enough for the main section.

comment by NancyLebovitz · 2014-05-16T09:02:30.625Z · LW(p) · GW(p)

New family of materials discovered by accident

Does this suggest a problem with using Bayes to generate hypotheses? My impression is that Bayes involves generating hypotheses by looking in the most likely places. Are there productive ways of generating accidents, or is paying attention when something weird happens the best we can do?

Replies from: Lumifer, witzvo
comment by Lumifer · 2014-05-16T14:29:24.372Z · LW(p) · GW(p)

Does this suggest a problem with using Bayes to generate hypotheses?

I think that Bayes is completely silent on how one should generate hypotheses...

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-16T15:17:56.371Z · LW(p) · GW(p)

http://wiki.lesswrong.com/wiki/Locate_the_hypothesis

I'd have sworn that one of the first sequences I read was about improving science by using Bayes to make better choices among hypotheses. On the one hand, my memory is good but hardly perfect; on the other hand, choosing among hypotheses is related to, but not exactly the same as, generating hypotheses.

Replies from: Lumifer
comment by Lumifer · 2014-05-16T15:29:51.302Z · LW(p) · GW(p)

using Bayes to make better choices among hypotheses

That, sure. You can easily argue that Bayesianism provides a better framework for hypothesis testing. But that's quite different from generating hypotheses.
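
A tiny sketch of what that framework does provide, with the hypotheses and all numbers invented for illustration: Bayes tells you how to reweigh hypotheses you already have once data comes in, but nothing in the calculation proposes a new one.

```python
prior = {'H1': 0.5, 'H2': 0.5}        # the hypotheses you already thought of
likelihood = {'H1': 0.7, 'H2': 0.2}   # assumed P(observed data | hypothesis)

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # H1 ~0.78, H2 ~0.22; a better H3 never enters the picture
```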

comment by witzvo · 2014-05-16T21:13:08.650Z · LW(p) · GW(p)

A successful example of using Bayes to "generate hypotheses" is the mining/oil industry, which builds spatial models and computes the posterior expected reward of different drilling plans. For general-science hypotheses you'd ideally want to put a prior on a potentially very complicated space (e.g., in your example, the space of all programs that compute the set of interesting combinations of reagents), and that typically isn't attempted with modern algorithms. This isn't to say there isn't room to improve on the state of the art with more mundane approaches.
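
A minimal sketch of the drilling-plan version, with the sites, probabilities, and payoffs all invented for illustration: update a prior over where the ore is from a survey reading, then score each candidate plan by its posterior expected reward.

```python
# Prior over which site actually holds ore (invented numbers).
prior = {'site_A': 0.5, 'site_B': 0.3, 'site_C': 0.2}

# A survey at site_B came back positive: assume an 0.8 hit rate if the ore
# really is there and a 0.1 false-positive rate otherwise.
def likelihood(positive_at, ore_at):
    return 0.8 if positive_at == ore_at else 0.1

unnormalized = {h: prior[h] * likelihood('site_B', h) for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

# Each plan drills one site: assume a payoff of 100 if the ore is there,
# minus a drilling cost of 20 either way.
expected_reward = {site: 100 * posterior[site] - 20 for site in prior}

best_plan = max(expected_reward, key=expected_reward.get)
print(posterior)
print(best_plan, expected_reward)
```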

comment by Douglas_Knight · 2014-05-15T16:48:48.127Z · LW(p) · GW(p)

Do you think that the Last Psychiatrist is male or female?

[pollid:699]

Replies from: gwern, None, knb
comment by gwern · 2014-05-15T17:19:45.961Z · LW(p) · GW(p)

I'm not sure TLP is a single person.

Replies from: None, Douglas_Knight
comment by [deleted] · 2014-05-15T17:40:51.216Z · LW(p) · GW(p)

Ooh! Is it stylometrics time?

comment by Douglas_Knight · 2014-05-15T17:34:09.442Z · LW(p) · GW(p)

Good point. You should probably choose N/A.

comment by [deleted] · 2014-05-28T19:44:49.893Z · LW(p) · GW(p)

TLP's anonymity was actually broken a while ago. I won't post details here, but if you feel you need evidence feel free to PM me.

He's male.

However, this doesn't remove the possibility that some works published under the TLP brand were written by another author.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-05-30T02:47:26.040Z · LW(p) · GW(p)

There's no need to be coy about it, since obvious Google searches bring up a Quora question devoted to it and a Reddit thread that is probably the original source (though, going by Google's snippet, it isn't obviously about the doxxing). If you know of an older discussion, I'd be interested.

My running across his identity is much of my motivation for this poll, partly because the clock is ticking on my ability to run the poll, and partly because of knowing the correct answer, though I can't explain that. The other reason is that the answer female is a reactionary shibboleth, which came up again on Yvain's blog.

I don't find the possibility of an impostor at all plausible. The possibility that it has always been the same team is more plausible, though I still think it unlikely.

comment by knb · 2014-05-15T19:43:05.840Z · LW(p) · GW(p)

I believe I read someone claim that TLP is written by a married couple. Before I read that I assumed TLP was a man, however. The writing style seems pretty masculine.

comment by eggman · 2014-05-12T08:20:09.917Z · LW(p) · GW(p)

Does anyone understand how the mutant-cyborg monster image RationalWiki uses represents Less Wrong? I've never understood that.

Replies from: Oscar_Cunningham, None, Barry_Cotter
comment by [deleted] · 2014-05-12T13:26:24.169Z · LW(p) · GW(p)

Picture of a lava basilisk, as noted by Oscar_Cunningham, linking to the concept of Roko's Basilisk, which is a major point of interest for RW users.

comment by Barry_Cotter · 2014-05-12T09:20:39.648Z · LW(p) · GW(p)

The difference between a monster and a god is a matter of point of view, unless it's a sexless, lifeless husk like the Abrahamic religions' God, or some other Platonic ideal. The cyborg seems likely to be a reference to (or confusion with) the Singularity as the Rapture of the Nerds, where we become as gods; a big misunderstanding of what most here seem to hope for, namely an AI no more conscious than a chair that makes us what we would want to be if we were more what we want to be.

Why are you spending time on RationalWiki? I know LessWrong has gone downhill, but there are still interesting books, and blogs, and people.

comment by Viliam_Bur · 2014-05-15T08:30:28.036Z · LW(p) · GW(p)

Shit rationalists say: a fellow LessWronger describing a person from Facebook:

He is an intellectual. Well, not in the "he reads LessWrong" sense, but in the "he can express himself using a whole sentence" sense.

I kept laughing for a few minutes (sorry, it's probably less funny here out of context), and I promised I would post it on LW, keeping the author's identity secret. Ignoring the in-group applause lights, it's a nice way to express the difference between a person who merely offers interesting opinions and a person who also tries to confront their map with the territory; the latter is relatively rare among people who talk interestingly. And yeah, it's reading LessWrong that made me appreciate the difference, to the point where the former has become boring.