Open thread, Nov. 24 - Nov. 30, 2014

post by MrMind · 2014-11-24T08:56:19.716Z · LW · GW · Legacy · 325 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

325 comments

Comments sorted by top scores.

comment by NancyLebovitz · 2014-11-25T10:09:36.895Z · LW(p) · GW(p)

The header for this page says "You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet." That's inaccurate, because Discussion doesn't include posts that were started in Main.

comment by Artaxerxes · 2014-11-25T07:54:23.759Z · LW(p) · GW(p)

Stuart Russell contributes a response to the Edge.org article from earlier this month.

Of Myths And Moonshine

"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."

Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.

None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility. There have been many unconvincing arguments – especially those involving blunt applications of Moore's law or the spontaneous emergence of consciousness and evil intent. Many of the contributors to this conversation seem to be responding to those arguments and ignoring the more substantial arguments proposed by Omohundro, Bostrom, and others.

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius. AI research has been accelerating rapidly as pieces of the conceptual framework fall into place, the building blocks gain in size and strength, and commercial investment outstrips academic research activity. Senior AI researchers express noticeably more optimism about the field's prospects than was the case even a few years ago, and correspondingly greater concern about the potential risks.

No one in the field is calling for regulation of basic research; given the potential benefits of AI for humanity, that seems both infeasible and misdirected. The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values. For practical reasons, we will need to solve the value alignment problem even for relatively unintelligent AI systems that operate in the human environment. There is cause for optimism, if we understand that this issue is an intrinsic part of AI, much as containment is an intrinsic part of modern nuclear fusion research. The world need not be headed for grief.
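A toy sketch of the k<n point above (an illustration with assumed numbers, not taken from Russell's piece): when the utility function scores only one of two variables that compete for the same resource, a straightforward optimizer pushes the unscored one to an extreme.

```python
# Toy model (assumed numbers): 10 units of a shared resource are split between
# "task output", the only term in the utility function, and "everything else we
# care about", which the utility function omits entirely.
TOTAL = 10

def utility(task_output, everything_else):
    return task_output  # depends on only k=1 of the n=2 variables

best = max(
    ((task, TOTAL - task) for task in range(TOTAL + 1)),
    key=lambda allocation: utility(*allocation),
)
print(best)  # (10, 0): the omitted variable is driven to its extreme value
```

The omitted variable ends up at zero not because the optimizer "wants" that, but because it competes for the same resource and the utility function never mentions it.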

Replies from: artemium, Brillyant
comment by artemium · 2014-11-25T20:01:40.125Z · LW(p) · GW(p)

Finally some common sense. I was seriously disappointed in statements made by people I usually admire (Pinker, Shermer). It just shows how far we still have to go in communicating AI risk to the general public, when even the smartest intellectuals dismiss this idea before any rational analysis.

I'm really looking forward to Elon Musk's comment.

comment by Brillyant · 2014-11-26T17:17:16.910Z · LW(p) · GW(p)

Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

ELI5...

  • Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

  • Why is "spontaneous emergence of consciousness and evil intent" not a risk?

Replies from: Viliam_Bur, None
comment by Viliam_Bur · 2014-11-26T21:21:14.724Z · LW(p) · GW(p)

Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

If the AI is aware of the pauses, it can try to eliminate them (if the pauses are triggered by a circumstance X, it can find a clever way to technically avoid X), or to make itself receive the "instruction" it wants to receive (e.g. by threatening or hypnotising a human, or by doing something that technically counts as human input).

Replies from: Brillyant
comment by Brillyant · 2014-11-26T21:32:38.027Z · LW(p) · GW(p)

I see.

by threatening or hypnotising a human

This is the gist of the AI Box experiment, no?

Replies from: Viliam_Bur, wedrifid
comment by Viliam_Bur · 2014-11-27T09:20:51.348Z · LW(p) · GW(p)

The important aspect is that there are many different things the AI could try. (Maybe including those that can't be "ELI5". It is supposed to have superhuman intelligence.) Focusing on specific things is missing the point.

As a metaphor, imagine that a group of retarded people is trying to imprison MacGyver in a garden shed. Later MacGyver creates an explosive from his chewing gum, destroys a wall, and leaves. The moral of this story is not: "To imprison MacGyver reliably, you must take all the chewing gum from him." The moral is: "If you are retarded, and your enemy is MacGyver, you almost certainly cannot imprison him in the garden shed."

If you get this concept, then similar debates will feel like: "Let's suppose we make really really sure he has no chewing gum. We will even check his shoes, although, realistically, no one keeps chewing gum in their shoes. But we will be extra careful, and will check his shoes anyway. What could possibly go wrong?"

comment by wedrifid · 2014-11-26T21:51:48.840Z · LW(p) · GW(p)

This is the gist of the AI Box experiment, no?

No. Bribes and rational persuasion are fair game too.

comment by [deleted] · 2014-11-30T12:29:56.492Z · LW(p) · GW(p)

Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

Because instructions are words, and "ask for instructions" implies an ability to understand and a desire to follow. The desire to follow instructions according to their givers' intentions is more-or-less a restatement of the Hard Problem of FAI itself: how do we formally specify a utility function that converges to our own in the limit of increasing optimization power and autonomy?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-11-30T15:18:10.667Z · LW(p) · GW(p)

If you are worrying about the dangers of human level or greater AI, you are tacitly taking the problem of natural language interpretation to have been solved, so the above is an appeal to Mysterious Selective Stupidity.

Replies from: None
comment by [deleted] · 2014-11-30T22:08:58.111Z · LW(p) · GW(p)

you are tacitly taking the problem of natural language interpretation to have been solved

No, I am not. Just because an AGI can solve the natural-language interpretation problem does not mean the natural-language interpretation problem was solved separately from the AGI problem, in terms of narrow NLP models. In fact, more or less the entire point of AGI is to have a single piece of software to which we can feed any and all learning problems without having to figure out how to model them formally ourselves.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-12-15T10:43:45.537Z · LW(p) · GW(p)

In responding to Brillyant, you were tacitly assuming that the AI has been given instructions in some higher-level language that is subject to differing interpretations, and is not therefore just machine code, which is tacitly assuming it has already got NL abilities.

Yes, it would probably need a motivation to interpret such sentences correctly. But that is an easier problem to solve than coding in the whole of human value. An AI would need to understand human value in order to understand NL, but would not need to be preloaded with all human value, since discovering it would be a subsidiary goal of interpreting NL correctly.

And interpreting instructions correctly is a subgoal of getting things in general right. Building AIs that are epistemic rationalists could be a further simplification of the problem of AI safety. Epistemic rationality is difficult for humans because humans are evolutionary hacks whose goals are spreading their genes, achieving status, etc. It may be excessively anthropomorphic to assume human levels of deviousness in AIs.

Replies from: None
comment by [deleted] · 2014-12-15T13:35:22.941Z · LW(p) · GW(p)

In responding to Brillyant, you were tacitly assuming that the AI has been given instructions in some higher-level language that is subject to differing interpretations, and is not therefore just machine code, which is tacitly assuming it has already got NL abilities.

No, I'm insisting that no realistic AGI at all is a Magic Genie which can be instructed in high-level English. If it were, all I would have to say is, "Do what I mean!" and Bob's your uncle. But since that cannot happen without solving Natural Language Processing as a separate problem before constructing an AGI, the AGI agent has a utility function coded as program code in a programming language -- which makes desirable behavior quite improbable.

An AI would need to understand human value in order to understand NL, but would not need to be preloaded with all human value, since discovering it would be a subsidiary goal of interpreting NL correctly.

Again: knowing is quite different from caring. What we could do in this domain is solve natural-language learning and processing separately from AGI, and then couple that to a well-worked-out infrastructure of normative uncertainty, and then, after making absolutely sure that the AI's concept-learning via the hard-wired natural-language processing library matches the way human minds represent concepts computationally, use a large corpus of natural-language text to try to teach the AI what sort of things human beings want.

Unfortunately, this approach rarely works with actual humans, since our concept machinery is horrifically prone to non-natural hypotheses about value, to the point that most of the human race refuses as a matter of principle to consider ethical naturalism a coherent meta-ethical stance, let alone the correct one.

We have some idea of a safe goal function for the AGI (it's essentially a longer-winded version of "Do what I mean, but taking the interests of all into account equally, and considering what I really mean even under reflection as more knowledge and intelligence are added"), the question is how to actually program that.

Which is actually an instance of the more general problem: how do we program goals for intelligent agents in terms of any real-world concepts about which there might be incomplete or unformalized knowledge? Without solving that we can basically only build reinforcement learners.

The whole cognitive-scientific lens towards problems is to treat them as learning and inference problems, but that doesn't really help when we need to encode something we're fuzzy about rather than being able to specify it formally.

Building AIs that are epistemic rationalists could be a further simplification of the problem of AI safety. Epistemic rationality is difficult for humans because humans are evolutionary hacks whose goals are spreading their genes, achieving status, etc. It may be excessively anthropomorphic to assume human levels of deviousness in AIs.

If being devious to humans is instrumentally rational, an instrumentally rational AI agent will do it.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-12-15T14:23:53.479Z · LW(p) · GW(p)

No, I'm insisting that no realistic AGI at all is a Magic Genie which can be instructed in high-level English. If it were, all I would have to say is, "Do what I mean!" and Bob's your uncle. But since that cannot happen without solving Natural Language Processing as a separate problem before constructing an AGI, the AGI

I was actually agreeing with you that NLP needs to be solved separately if you want to instruct it in English. The rhetoric about magic isn't helpful.

agent has a utility function coded as program code in a programming language -- which makes desirable behavior quite improbable.

I don't see why that would follow, and in fact I argued against it.

knowing is quite different from caring.

I know.

What we could do in this domain is solve natural-language learning and processing separately from AGI, and then couple that to a well-worked-out infrastructure of normative uncertainty, and then, after making absolutely sure that the AI's concept-learning via the hard-wired natural-language processing library matches the way human minds represent concepts computationally, use a large corpus of natural-language text to try to teach the AI what sort of things human beings want.

That's not what I was saying. I was saying an AI with a motivation to understand NL correctly would research whatever human value was relevant.

We have some idea of a safe goal function for the AGI (it's essentially a longer-winded version of "Do what I mean, but taking the interests of all into account equally, and considering what I really mean even under reflection as more knowledge and intelligence are added"), the question is how to actually program that

That's kind of what I was saying.

If being devious to humans is instrumentally rational, an instrumentally rational AI agent will do it.

Non sequitur. In general, what is an instrumental goal will vary with final goals, and epistemic rationality is a matter of final goals. Omohundran drives are unusual in not having the property of varying with final goals.

comment by NikiT · 2014-11-28T13:20:15.414Z · LW(p) · GW(p)

I've been trying to decide whether or not to pursue an opportunity to spread rationalist memes to an audience that wouldn't ordinarily be exposed to them. I happen to be friends with the CEO and editor of an online magazine/community blog that caters to queer women, and I'm reasonably confident that with the right pitch I could convince them to let me do a column dedicated to rationality as it relates to the specific interests of queer women. I think there might be value in tailoring rationality material for specific demographics.

The issue is that, in order to make it relevant to the website and the demographic, I would need to talk about politics while trying to teach rationality, which seems highly risky. As one might imagine from the demographic, the website and associated community is heavily influenced by social justice memes, many of which I wholeheartedly endorse and many others of which I'm highly critical. The strategy I've been formulating to avoid getting everybody mindkilled is to talk about the ways biases contribute to sexism and homophobia, and then also talk about how those same biases can manifest in feminist/social justice ideas, while emphasising to death how important it is to avoid Fully General Counterarguments, but it still seems risky.

The other issue is that it might not be such a good idea to try to teach rationality when I'm still learning myself, and haven't really participated in the rationalist community. OTOH when will I ever be done learning, and should I let this opportunity pass by?

The potential Pros are: improving the quality of discourse within my community, providing a space for the more rationalist members of that community, and spreading rationalist memes. Also, if it works out, it would probably raise my relative status within the community, which may be clouding my judgement of how good an idea it is.

The potential Cons are: that I might mess up and mindkill everyone, that I might say something too critical that gets me socially ostracized, and that I might accidentally write something foolish on the internet that I later regret.

Thoughts?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-28T16:07:35.063Z · LW(p) · GW(p)

There's a good strategy against publishing something stupid: peer review before publication.

Something that's missing from a lot of social justice talk is quoting cognitive science papers. Talking about actual experiments and what the audience can learn from them could make people care more about empiricism.

Replies from: NikiT
comment by NikiT · 2014-11-29T03:43:43.027Z · LW(p) · GW(p)

I was planning to have one of my friends from the community around that website test read the articles for me, though I might also benefit from having a rationalist test read them, if anybody wants to volunteer.

Discussing cognitive science experiments is part of the plan. I actually performed a version of the 2-4-6 experiment on a group of people associated with the website (while dressed as a court jester! (it was during a renaissance fair)) and, as predicted, only 20% of them got it right. I think knowing that members of their own ingroup are just as susceptible to bias as faceless experimental subjects will help get the point across.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-29T20:04:19.028Z · LW(p) · GW(p)

I volunteer to give you feedback on a few articles.

comment by Richard_Kennaway · 2014-11-24T12:14:16.864Z · LW(p) · GW(p)

Suddenly, I know the relative sizes of the planets!

HT Andrew Gelman.

ETA: Pluto isn't in the picture, but it would be a coriander seed, half the diameter of Mercury. For the Sun, imagine a spherical elephant.

Replies from: philh, Brillyant
comment by philh · 2014-11-25T00:07:38.116Z · LW(p) · GW(p)

The radius of the Sun is only about ten times the radius of Jupiter. I feel like a spherical elephant has considerably more than ten times the radius of a watermelon.

...is what I was about to say until I did some research, and apparently it's pretty accurate. A watermelon can exceed 60 cm in diameter, and Wolfram Alpha gives an elephant's length as between 5.4 and 7.5 metres.

comment by Brillyant · 2014-11-25T00:04:05.301Z · LW(p) · GW(p)

That's either one huge grapefruit...or one tiny watermelon.

comment by Torgo · 2014-11-24T11:19:18.468Z · LW(p) · GW(p)

I've long been convinced that donating all the income I can is the morally right thing to do. However, so far this has only taken the form of reduced consumption to save for donations down the road. Now that I have a level of savings I feel comfortable with and expect to start making more money next year, I no longer feel I have any excuse; I aim to start donating by the end of this year.

I’m increasingly convinced that existential risk reduction carries the largest expected value; however, I don’t feel like I have a good sense of where my donations would have the greatest impact. From what I have read, I am leaning towards movement building as the best instrumental goal, but I am far from sure. I’ll also mention that at this point I’m a bit skeptical that human ethics can be solved and then programmed into an FAI, but I also may be misunderstanding MIRI’s approach. I would hope that by increasing the focus on the existential risks of AI in elite/academic circles, more researchers could eventually begin pursuing a variety of possibilities for reducing AI risk.

At this point, I am primarily considering donating to FHI, CSER, MIRI or FLI, since they are ER focused. However, I am open to alternatives. What are others’ thoughts? Thanks a lot for the advice.

Replies from: Gurkenglas, jkaufman
comment by Gurkenglas · 2014-11-25T20:53:55.731Z · LW(p) · GW(p)

An upper bound on the loss incurred by waiting another year before you donate your savings to an organization is the interest they would have to pay on a loan of that size over that time. If you estimate the chance that you will regret your choice of donation target in a year highly enough, that means waiting may be prudent. Just a thought.

(The cost might be increased by their reduced capacity for planning with the budget provided by you in mind; but with enough people acting like you, the impact of this factor should disappear in the law of large numbers.)
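A rough numeric sketch of that upper bound (all figures are assumptions, purely to make the comparison concrete):

```python
# Assumed figures: a $10,000 donation held back for one year, and an 8% annual
# rate the organization might pay to borrow that amount in the meantime.
savings = 10_000
loan_rate = 0.08
upper_bound_on_loss = savings * loan_rate
print(upper_bound_on_loss)  # 800.0 -- weigh this against the value of a better-informed choice
```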

Replies from: Torgo
comment by Torgo · 2014-11-26T01:49:25.999Z · LW(p) · GW(p)

Certainly that is an important point to consider. I could always place funds in a donor advised fund for now. However, if an organization that I donated to thought the funds would be best spent later, they could invest the funds. Considering this, my current thinking is that I should donate to an organization if they share the goal of reducing existential risk and I think they would be better at deciding on the best course of action than I would. Considering I am not currently an expert in areas which would prove useful to reducing existential risk, I'm leaning towards donating. Does this seem like a sensible course of action?

Replies from: jkaufman, Gurkenglas
comment by jefftk (jkaufman) · 2014-12-01T12:14:47.511Z · LW(p) · GW(p)

In practice, charities don't really invest excess money or take out loans to spend money sooner. I'm not sure why. Possible explanations:

  • No one will lend much to charities, because they don't have much collateral and their income expectations are so uncertain. Or this leads to very high interest rates.
  • Investing money instead of spending it looks bad and is visible externally through things like the US Form 990.
  • You're required to spend at least X% of the money that comes in each year.
  • If you take a loan, having already spent the money makes it harder to fundraise. People want to pay for things to happen.
  • Investing extra money signals that you don't have room for more funding and so should get less money in the future.

Regardless, if you're thinking that your decision doesn't matter because the recipient can just do X or Y, and it turns out X and Y aren't really options for them, then your decision does still matter.

comment by Gurkenglas · 2014-11-26T02:12:27.964Z · LW(p) · GW(p)

So I pressed the icon that looked like "Delete" and it just struck the text through. Great.

comment by jefftk (jkaufman) · 2014-12-01T12:45:45.144Z · LW(p) · GW(p)

If you think general EA movement building is what makes the most sense currently, then funding the Centre for Effective Altruism (the people who run GWWC and 80k) is probably best.

If you think X-risk specific movement building is better, then CSER and FLI seem like they make the most sense to me: they're both very new, and spreading the ideas into new communities is very valuable.

(And congratulations on getting to where you're ready to start donating!)

Replies from: Torgo
comment by Torgo · 2014-12-01T14:55:21.491Z · LW(p) · GW(p)

Thanks.

At this point, I'm leaning towards CSER. Do you happen to know how it compares to other X-risk organizations in terms of room for more funding?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-12-01T16:33:52.312Z · LW(p) · GW(p)

I don't know, sorry! Without someone like GiveWell looking into these groups, individuals need to do a lot of research on their own. Write to them and ask? And then share back what you learn?

(Lack of vetting and the general difficulty of evaluating X-risk charities is part of why I'm currently not giving to any.)

comment by DataPacRat · 2014-11-24T10:31:57.254Z · LW(p) · GW(p)

This week's writing lesson: If your motivation for writing is almost entirely internal, then you should write what you enjoy writing, not what you think you should write.

(I lost a few days' worth of productivity getting that one knocked into my skull, though hopefully I'm back up to snuff.)

comment by NancyLebovitz · 2014-11-24T17:07:20.830Z · LW(p) · GW(p)

A song about self-awareness:

Yielding to Temptation by Mark Mandel, to the tune of Bin There, Dun That by Cat Faber

Something called me from the bookcase
and I answered quick and dumb
And I guess I'd still be reading there
if rescue hadn't come.
Well, I must have jumped six inches
and I answered "Coming, dear!"
Now the sf's in the basement
and it doesn't call so clear.

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the hours* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

* changes with each chorus

I was filling up the ice cube tray
last night at half past ten
When I heard a voice entreating
"Won't you dance with me again?"
It's the caramel fudge ripple,
sweet as love and thick as sin.
I'm not dumb, I'm not expAndable,
and I'm not digging in!

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the calories* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

As I stroll around the dealers' room
I'm only there to look.
No, I don't need that CD,
no, I do not need that book.
I can live without a T-shirt
showing Asterix the Gaul...
But I'm wearing ten new buttons
I don't recognize at all!

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the dollars* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

And when it comes to filking,
I perpetually find
One particular composer
reappearing in my mind,
Like some goddam chimes are ringing
in my little fuzzy brain,
And they set my head on fire
and I'm filking him again.

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the lyrics go like nothing
and had something weird to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the Hayes* behind my eyes.

We interrupt the writing
of this silly little song
'Cause my lady is reminding me
to not stay up too long.
She's reclining in the bedroom
with a warm and sultry smile,
And I'll write this down tomorrow
'cause the song can wait awhile!

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the hours go like nothing
and had something* good to show!
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

comment by Artaxerxes · 2014-11-24T10:20:37.616Z · LW(p) · GW(p)

Has anyone been prompted to study or read anything thanks to MIRI's new research guide?

comment by NancyLebovitz · 2014-11-24T21:49:22.470Z · LW(p) · GW(p)

Development aid is really hard.

A project that works well in one place or for a little while may not scale. Focus on administrative costs may make charities less competent.

Nonetheless, some useful help does happen, it's just important to not chase after the Big Ideas.

Replies from: None
comment by [deleted] · 2014-11-25T03:12:38.057Z · LW(p) · GW(p)

One of the charities mentioned in the article, Deworm the World, is actually a GiveWell top charity, due to "the strong evidence for deworming having lasting impact on childhood development". The article, on the other hand, claims that the evidence is weak, citing three studies in the British Medical Journal, which GiveWell doesn't appear to mention in their review of the effectiveness of deworming.

GiveWell's review of deworming

Might be worth looking into more.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-25T10:05:15.713Z · LW(p) · GW(p)

Something that should have occurred to me -- the deworming experiment was done in the late 90s, which means that the effect on lifetime income is an estimate.

comment by DataPacRat · 2014-11-24T10:39:06.849Z · LW(p) · GW(p)

What does your inner Quirrellmort tell you?

Has your internal model of the most competent person you can imagine ever given you an insight you wouldn't have thought of with more traditional methods?

Do you have more than one such useful sub-personality?

Does your main mode of thinking bring anything to the table that your useful mental models of others don't? If so, what?

Replies from: MathiasZaman, Sjcs, RowanE, maxikov
comment by MathiasZaman · 2014-11-24T11:00:11.534Z · LW(p) · GW(p)

He mostly tells me to kill annoying people.

Do you have more than one such useful sub-personality?

No, but I'm working on them. I've found my inner Hufflepuff to be particularly helpful in actually getting things done.

Incidentally, is there a name for the "sub-personality technique?"

Replies from: DataPacRat
comment by DataPacRat · 2014-11-24T11:03:41.495Z · LW(p) · GW(p)

Incidentally, is there a name for the "sub-personality technique?"

'Deliberately induced dissociative identity disorder'?

'Cultivation of tulpas'?

'Acting'?

Replies from: somnicule, None, Richard_Kennaway, Vulture
comment by somnicule · 2014-11-24T11:12:06.760Z · LW(p) · GW(p)

Internal Family Systems is the analogous therapy technique, I think.

comment by [deleted] · 2014-11-24T21:48:46.191Z · LW(p) · GW(p)

What would Jesus do?

comment by Vulture · 2014-11-25T03:34:37.950Z · LW(p) · GW(p)

Cultivation of tulpas

This already refers to a similar, but much dicier, technique.

comment by Sjcs · 2014-11-25T10:51:22.714Z · LW(p) · GW(p)

I unfortunately haven't developed a quirrellmort yet (the concept is on my to-do list though, along with a number of other personifications). I do have two loose internal models though, for very specific tasks.

The first is called "The Alien" or just "Alien". I created it in my mid-teens after reading the last samurai (not the movie), although my use of The Alien is not the same as the book's. The Alien is the voice in my head that says the pointlessly stupid or cruel things (generally about people) for no reason other than being able to. They aren't things I actually believe or feel, so I just tell The Alien to shut up. By doing this, I can create a divide between myself and these thoughts, not feel guilty about them occuring, and more quickly put them out of my mind.

The second I created very recently based off this thread. It is for the prevention of ego depletion when it comes to either starting big tasks or taking care of long lists of little tasks. Rather than think "Ok time to (make myself) do this" I defer the choice to an internal, slightly more rational model of myself that doesn't suffer from decision fatigue. The outcome is very predictable ("Do the goddarn task already"), but does seem to work very well for me. It's still quite new, and I probably don't use it as much as I should.

I have plans to make a number of other internal models to create an internal 'parliament' that can discuss and debate major decisions, or act on their own for specific required benefits. Other models might include a cynic/pessimist (to help me be more pessimistic in my planning), an altruist (to consider if my actions are actually beneficial), a highly motivated being (to help renew my resolve), and some kind of quirrellmort. These are probably very liable to change as I try to implement them.

comment by RowanE · 2014-11-24T14:01:22.514Z · LW(p) · GW(p)

I've often considered producing such a personality, after observing a previous LW discussion about tulpas, but never even got past the stage of choosing which character to use - I don't know who the "most competent person I can imagine" would be.

comment by maxikov · 2014-11-26T06:20:21.685Z · LW(p) · GW(p)

You know that spreading rationality is a strong net positive, right? How many lives could we save if people just stopped for a while and thought about stuff in a relatively unbiased way? Even then, a population of purely selfish but rational agents could do better than we do - and people usually aren't purely selfish. If we could only spread rationality better. But you know as well as I do: it's exactly the biases that make demagogy almost always sound more convincing than the truth. It is so hard, so frustrating to explain the bitter truth, while competing against comforting lies, pushing all the buttons that - you've learned it - are almost guaranteed to make one agree.

But what if you could do a little bit of... you know... marketing? Oh, spreading rationality through irrationality sounds so hypocritical!.. deontologically. But you're a utilitarian; you know how to make trade-offs. And you know better than to make trade-offs against some general principles that may be reasonable rules of thumb, but don't even start to encompass the actual people and their happiness. How did you put that - shut up and multiply? Well, go on, multiply: billions of lives saved against millions slightly offended. And here is the thing - before learning about biases they won't be able to recognize your little tricks, and the job would already be done. Many will probably agree that it would have been net positive. Oh, your reputation could be damaged? Well, I thought you were an altruist.

Can it even get any worse than it is now? I'm not even talking about the marketing of commodities - adding a little bit of your marketing isn't gonna change anything at all, even if you still believe in those deontological ideas. I'm talking about the market of ideas. You compete against people who learned some of the tricks, but use them with malicious intent, not for the benefit of the consumer. But you know better. They vaguely learned some buttons from classical novels and books by liberal arts majors. You learned how the whole machine works, with mathematical modeling. You know what buttons to push to make your point sweeter and stickier. You can crush all that irrationality all at once.

After all, there are no arguments without some flavor to them. It's just that you either let randomness and the subconscious choose the flavor, and call it "fair", or purposefully select the flavor, and call it "trickery" and "marketing". But since when do rationalists consider obliviousness better than knowledge?

Why do you choose to not use your force for good? What stops you? What's your choice?

Replies from: Lumifer, Richard_Kennaway
comment by Lumifer · 2014-11-26T16:18:10.944Z · LW(p) · GW(p)

Why do you choose to not use your force for good?

"I beseech you, in the bowels of Christ, think it possible that you may be mistaken" -- Oliver Cromwell

comment by Richard_Kennaway · 2014-11-26T09:25:32.803Z · LW(p) · GW(p)

I think that entire comment deserves the Cognitive Trope Therapy response. Is this "being spoken by the kindly old witch who has approached the fanatic knight with concern in her eyes and implored him to realize that he will only hurt others more by what he is doing", or is it being spoken by "figures wearing black robes, and speaking in a dry, whispering voice, and they are actually withered beings who touched the Stone of Evil"?

Definitely the latter.

Replies from: maxikov
comment by maxikov · 2014-11-26T19:55:57.410Z · LW(p) · GW(p)

being spoken by "figures wearing black robes, and speaking in a dry, whispering voice, and they are actually withered beings who touched the Stone of Evil"

Isn't that what my inner Quirrellmort is supposed to be?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-11-26T20:57:24.458Z · LW(p) · GW(p)

No, that's your inner Nazgul.

comment by fubarobfusco · 2014-11-29T06:24:35.420Z · LW(p) · GW(p)

I have been playing the card game Hanabi one hell of a lot recently, and I strongly recommend it to the LW community.

Hanabi is an abstract, cooperative game with limited information. And it's practically a tutorial in rational thinking in a group. Extrapolating unstated facts from other players' belief states is essential: "X did something that doesn't make sense given what I know; what is it that X knows but I don't, under which that action makes sense?" So, for that matter, is a consequentialist view of communication: "If I tell X the fact P, what will they do? Not what will they believe or know, but what actions should I expect they will take?"

Two people I've played with have told me that the game has positively affected their understanding of communication.

Replies from: MrMind, drethelin
comment by MrMind · 2014-12-01T08:33:56.127Z · LW(p) · GW(p)

Seconding too.
I've played in very small groups (~3), and the game usually stabilizes into predictable strategies (player 1 discards, player 2 gives information, player 3 plays a card, and after a while players 2 and 3 switch roles). Larger groups are probably messier and more fun, but nonetheless very instructive.

comment by drethelin · 2014-12-01T01:11:12.540Z · LW(p) · GW(p)

Seconding this recommendation.

comment by Shmi (shminux) · 2014-11-28T22:32:10.275Z · LW(p) · GW(p)

From a comment on SSC:

Attempts to get the LW community to borrow some of the risk analysis tools that are used to make split second judgments in such communities effectively has been met with a crushing wall of failure and arrogance. Suggestion that LW-ers should take a simple training course at their local volunteer fire department so they can understand low probability high cost risk on an emotional level has been met with outright derision.

Does anyone close to CFAR know the specifics?

Replies from: gwillen, bogus, Nornagest
comment by gwillen · 2014-11-30T08:52:58.490Z · LW(p) · GW(p)

As someone who has taken the NIMS/ICS 100 course (online through FEMA), and gone to my local fire station and taken their equivalent of NIMS/ICS 100/200/70 -- I was not very impressed.

I can clearly see that there are valuable things in NIMS/ICS, and I can even believe that the movement which gave rise to the whole thing had valuable, interesting, and novel insights. But you're not going to get much of that by taking the course. It's got about one important concept -- which basically boils down to "it's good for different agencies to cooperate effectively, and here's one structure under which that empirically seems to happen well, therefore let's all use it" -- and the rest is a lot of details and terminology which are critically important to people actually working in said agencies, and mostly irrelevant otherwise.

EDIT: Boromir's big thing seems to be that HRO is about risk analysis, updating based on evidence, and dealing with low probabilities as mentioned in the excerpt. I can tell you that the basic ICS course covers exactly none of that. So I wonder what 'training course at the local volunteer fire department' he thinks we should all take. (I admit I have not taken the FEMA-official ICS 200 and 70 classes, which are online. But given the style of the 100 class, I cannot imagine them being dense with the kind of knowledge he thinks we should be gaining from them.)

comment by bogus · 2014-11-29T05:32:04.574Z · LW(p) · GW(p)

Interesting, though apparently this person made his suggestions to Salamon and Yudkowsky in person, not to the LW community itself - thus, his reference to "outright derision" is somewhat misleading. CFAR has indeed adopted some ideas that originally came from LW itself - the whole "goal factoring" theme of recent CFAR workshops seems to be a significant example.

comment by Nornagest · 2014-11-30T07:25:13.270Z · LW(p) · GW(p)

I'm not particularly close to the CFAR wing of that crowd, but: on the one hand, that sounds at least potentially valuable, and I'd look into it if I had anything more specific to go on than "a simple training course". (Poking around my local fire department's webpage turned up only something called "Community Emergency Response Training", which seems to consist of first aid, disaster prep, and basic firefighting -- too narrow and skill-based to be what Boromir's comment is talking about.)

On the other hand, though, I don't think we're getting the full story here. The fact that Boromir devotes most of his comment to flogging the organization he's (judging from his username's link) either a member or a fanboy of, in particular, is a very bad sign.

comment by philh · 2014-11-24T11:38:59.572Z · LW(p) · GW(p)

An idea I've been toying with in my head, and discussed slightly at LW London yesterday: a sort of Snopes for "has person X professed opinion Y?"

Has Scott Alexander endorsed GamerGate? Did Eric Raymond say that hackers tend to be libertarian (or neoconservative, depending who you ask)? Did Eliezer say the singularity was too close to bother getting a degree?

I'll put further thoughts in replies to this comment.

Replies from: Baughn, philh, None, philh, ChristianKl, philh, Artaxerxes, Gunnar_Zarncke
comment by Baughn · 2014-11-24T13:48:34.878Z · LW(p) · GW(p)

I'd be wary of making a thing like that. Even ignoring the EU's bizarre "Right to be forgotten" law, people should be allowed to change their opinion, and such a website would incentivise consistency only. Not truth; consistency.

Are you sure that's what you want?

Replies from: philh, DanielLC, NancyLebovitz
comment by philh · 2014-11-24T15:17:16.663Z · LW(p) · GW(p)

Mm, good point.

One of the things which inspired this idea was this thread: "okay, yes, it seems that Eliezer might well have said something like that, back in 2001". Eliezer already doesn't get to be forgotten. But if people are attacking him for things he said back in 2001, it seems like an improvement if we make it obvious that he said them back in 2001.

But for other people, I can see how this could be a bad thing to have. I'd like to be able to write "they said this in 2001, but in 2010 they said the opposite" and have people accept "okay, they changed their mind", but that doesn't seem entirely realistic.

I've updated from "probably good idea, unsure how valuable" to "possibly good idea, high variance".

comment by DanielLC · 2014-11-24T22:30:42.415Z · LW(p) · GW(p)

Ideally it would have "he said it", "he did not say it", and "he has since retracted it". As is, you could find where someone originally said something, and have no way of knowing if it has ever been retracted.

comment by NancyLebovitz · 2014-11-24T14:37:06.448Z · LW(p) · GW(p)

My ideal version of the wiki would include a history of the person's ideas.

There still might be problems with people (I'm thinking of Moldbug) whose ideas are hard to parse.

Replies from: Baughn
comment by Baughn · 2014-11-24T14:55:50.492Z · LW(p) · GW(p)

That wouldn't prevent selective quoting, and all the other typical human behaviour which would, still, incentivise consistency.

comment by philh · 2014-11-24T11:45:31.676Z · LW(p) · GW(p)

The answers to questions like this aren't necessarily "yes" or "no". But it could still be valuable to say things like "the source for this seems to be this article from 2004, in which he is quoted as saying ...." Or, "he was quoted as saying this in this article. He encouraged people to read the article, but years later, he said that that line was a misquote."

Replies from: bogus, ChristianKl
comment by bogus · 2014-11-24T17:08:01.139Z · LW(p) · GW(p)

That's pretty much how TakeOnIt works already.

Replies from: philh
comment by philh · 2014-11-25T00:11:50.629Z · LW(p) · GW(p)

That seems pretty similar to what I'm envisioning, but transposed. They want to look at positions, and ask "whose opinions on this position are notable?" where notability is based on whether they're likely to have a clue. I'm going for looking at people, and asking "which of this person's positions are notable?" where notability is based on (something like) whether people are talking about it being their position.

Replies from: bogus
comment by bogus · 2014-11-25T00:51:13.987Z · LW(p) · GW(p)

They want to look at positions, and ask "whose opinions on this position are notable?"

That's just the default view. You can click on the name of any "expert" and bring up a nice report where all of their positions are listed and compared with other experts'.

And "notability" is viewed quite generally anyway. As long as the person has something genuinely worthwhile to say, you can add their opinion on all sorts of stuff.

comment by ChristianKl · 2014-11-24T14:43:35.852Z · LW(p) · GW(p)

Or, "he was quoted as saying this in this article. He encouraged people to read the article, but years later, he said that that line was a misquote."

The fact that I recommend that people read an article in which I'm cited doesn't imply that I believe the article is 100% factually correct.

In general, journalists do simplify the positions of the people they quote. Depending on the context I might be okay with a slight alteration of my position in the article as long as the main points I want to make appear in the article. If the quote then gets lifted into another context, I might have a problem.

comment by [deleted] · 2014-11-24T19:04:56.895Z · LW(p) · GW(p)

I assume you're talking about internet figures in the greater LW-memeplex. If so, I think this is a bad idea.

Tidy reasons this may have low-to-moderate value:

  • It's already easy to find the public positions of an internet figure.
  • Reasons are more important than conclusions. Unless you think you can present the arguments better than the original source, you'll just end up simply linking to the original source, which is, again, easy to find.

Messy reasons this might have negative value:

  • As a rule, no online community has ever suffered from a lack of introspection. I'm so very sick of hearing groups talk about themselves. In particular, talking about prominent group figures is extremely off-putting to newcomers.
  • It will become a source of emotional stress for those quoted. "Popular-online-writer" is a world apart from being a real public figure. Empirically, the latter handle third-party discussion of themselves poorly.
  • Realistically, this will not guard against drama involving the unfair attributions of positions. If somebody wants to pattern match so-and-so to a particular archetype, there's nothing you can do to stop them.
  • I love my favorite blogs, but gaining an audience is a quality-quantity game, with an emphasis on quantity. Why give particular attention to the conclusions of a figure who has been selected in this way?

Replies from: philh
comment by philh · 2014-11-25T00:55:18.268Z · LW(p) · GW(p)

I'm not intending it to be LW-focused at all (except perhaps by accident of userbase). Other public figures I recall seeing misrepresented include Eric S Raymond, Orson Scott Card and Larry Summers.

It's already easy to find the public positions of an internet figure.

I've read enough ESR that when RationalWiki says

ESR wrote a blog post suggesting that the Haitian people really did summon up the Voudon god Ogun to kill off all the white Frenchmen.

I know that the blog post in question suggests that they really did perform a ritual for that purpose, and that the ritual had a significant effect on the mental state of the participants, but ESR does not believe that the ritual was effective in summoning any kind of god. The blog post doesn't make that last part explicit, but if pressed I could find a slashdot comment where he does say so explicitly.

I don't think it's easy to do this.

(The RW line could be considered not-completely-false, because one can summon a god without the god answering. And it might even be honest, if the writer didn't understand where ESR was coming from. But to the extent that people read it and think that ESR believes that Ogun was successfully summoned, that line isn't true.)

I'm also not interested in arguing over whether or not that ritual ever took place. I don't think anyone's particularly interested in that. I think some people are interested in making fun of ESR, and I'm interested in making it as easy as possible to debunk those people when they say things that aren't true. So I don't need to present ESR's arguments, I just want to say "no, you're misrepresenting his conclusions".

Replies from: Lumifer
comment by Lumifer · 2014-11-25T02:19:13.764Z · LW(p) · GW(p)

Other public figures I recall seeing misrepresented

The list of misrepresented public figures is the list of public figures.

comment by philh · 2014-11-24T12:04:29.532Z · LW(p) · GW(p)

There are a lot of true claims of the form "person X said thing Y". It would be a mistake to only include false claims, because then a claim which isn't listed may be considered true by default. But including every claim would make it impossible to find the one someone is interested in. I'm not sure what notability guidelines would look like.

comment by ChristianKl · 2014-11-24T14:39:21.747Z · LW(p) · GW(p)

As far as famous/notable people go, skeptics.stackexchange works perfectly well for those questions.

In general, however, focusing on "he said, she said" is bad. I might argue a wide range of positions depending on the context. Sometimes I play devil's advocate to make points.

Focusing on actual content, instead of on what someone said in a single instance, is often better.

comment by philh · 2014-11-24T11:59:42.734Z · LW(p) · GW(p)

I'm envisioning this as a mediawiki, where a given person will have a page, and that page lists claims about things they have said. Edit wars can hopefully be fixed by having a number of editors who know how to be impartial, and being trigger-happy on locking pages so that only they can edit. The talk page can be used for discussion, and for the person themselves to weigh in.

comment by Artaxerxes · 2014-11-24T14:51:43.334Z · LW(p) · GW(p)

I like this idea a lot. I honestly think it would be a useful resource, should it be well researched and accurate.

comment by Gunnar_Zarncke · 2014-11-24T14:25:26.926Z · LW(p) · GW(p)

What is your intention? If you hope to espouse truth then I doubt it helps. People have lots of opinions - many of them uninformed or guesswork. And such a site has the risk of additionally weighing the prominent voices too much.

But assuming there is a sensible purpose then I think care must be taken to balance against prominence. User pages are prone to become hubs and mouthpieces of prominent people. Same for popular topics.

I think Wikipedia's approach of mentioning popular backers for claims is a good balance. Maybe this could be realized as an add-on to existing sites like Wikipedia: "What did X say about Wikipedia page Y?"

Replies from: philh
comment by philh · 2014-11-24T16:57:32.335Z · LW(p) · GW(p)

I'm not hoping to espouse truth in general - I don't think this is a good way to give people correct opinions about, say, neoreaction. I'm hoping to espouse truth about what people actually think, and I'm hoping that this will help to quell bullshit rumours.

So if someone starts a rumour that Eliezer is neoreactionary, someone else could add a section "Eliezer on neoreaction" saying things like: this rumour might be triggered by Eliezer's associations with Mike Anissimov and LW; Eliezer has never publicly endorsed neoreaction; in fact he has publicly disclaimed it in a comment on this article, and hasn't said much else on the subject.

(A lot of this has the implied qualification "as far as the editor knows". I'm not sure how explicit this should be.)

And then anyone who sees the rumour will have an easy way to find out whether or not it's true, instead of googling for "Eliezer Yudkowsky neoreaction" which by then could be a self-citing tumblr-storm, and will not show up anything by Eliezer on neoreaction because he hasn't actually said all that much about it.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-11-24T17:25:04.523Z · LW(p) · GW(p)

There's an unavoidable disconnect between "what people actually think" and "what people report about what they think".

As a matter of good faith, I think people should be taken at their word and deed for what they say they think. Others disagree, and will ascribe all manner of beliefs to a person, regardless of that person's protestations. Eliezer might not say he's neoreactionary, but they can read between the lines. They can probably put together a plausible post-hoc justification for it as well.

If someone's motivated enough to believe Eliezer is a neoreactionary, I don't think your site stops that. I don't think Eliezer getting a "Seriously, Fuck NRx" tattoo stops that. It just gives them a new venue to try and make their case.

Replies from: philh
comment by philh · 2014-11-25T00:30:55.518Z · LW(p) · GW(p)

There are also people who would believe that Eliezer is a neoreactionary if they were told it, but would also believe that Eliezer is not a neoreactionary if they were told that.

I guess I'm hoping that if this question comes up on a public forum, most people won't really know or care about Eliezer. The narrative in my head is along the lines of: someone says Eliezer is NRx, and someone else looks it up and says, no, Eliezer is not NRx, it says so right here. Then if the first person wants to convince anyone, their arguments become complicated and boring and nobody reads them.

comment by Capla · 2014-11-26T20:36:28.782Z · LW(p) · GW(p)

This may be a naive question, which has a simple answer, but I haven't seen it. Please enlighten me.

I'm not clear on why an AI should have a utility function at all.

The computer I'm typing this on doesn't. It simply has input-output behavior. When I hit certain keys it reacts in certain, very complex ways, but it doesn't decide. It optimizes, but only when I specifically tell it to do so, and only on the parameters that I give it.

We tend to think of world-shaping GAI as an agent with its own goals, which it seeks to implement. Why can't it be more like a computing machine in a box? We could feed it questions, like "given this data, will it rain tomorrow?", or "solve this protein folding problem", or "which policy will best reduce gun violence?", or even "given these specific parameters and definitions, how do we optimize for human happiness?" For complex answers like the last of those, we could then ask the AI to model the state of the world that results from following this policy. If we see that it leads to tiling the universe with smiley faces, we know that we made a mistake somewhere (that wasn't what we were trying to optimize for), and adjust the parameters. We might even train the AI over time, so that it learns how to interpret what we mean from what we say. When the AI models a state of the world that actually reflects our desires, then we implement its suggestions ourselves, or perhaps only then hit the implement button, by which the AI takes the steps to carry out its plan. We might even use such a system to check the safety of future generations of the AI. This would slow recursive self-improvement, but it seems it would be much safer.

Replies from: JStewart, Wes_W, gedymin, ChristianKl
comment by JStewart · 2014-11-26T20:58:55.753Z · LW(p) · GW(p)

This has been proposed before, and on LW is usually referred to as "Oracle AI". There's an entry for it on the LessWrong wiki, including some interesting links to various discussions of the idea. Eliezer has addressed it as well.

See also Tool AI, from the discussions between Holden Karnofsky and LW.

Replies from: Capla
comment by Capla · 2014-11-26T22:28:16.015Z · LW(p) · GW(p)

I was just reading through the Eliezer article. I'm not sure I understand. Is he saying that my computer actually does have goals?

Isn't there a difference between simple cause and effect and an optimization process that aims at some specific state?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-27T10:21:41.410Z · LW(p) · GW(p)

Maybe it would help to "taboo" the word "goal".

A process can progress towards some end state even without having any representation of that state. Imagine a program that takes a positive number at the beginning, and at each step replaces the current number "x" with the value "x/2 + 1/x". Regardless of the original number, the values will gradually move towards a constant. Can we say that this process has a "goal" of achieving that number? It feels wrong to use the word here, because the constant is nowhere in the process; it just happens.
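(A minimal sketch of that iteration in Python; the starting values are arbitrary, and the end state -- the square root of 2 -- is written nowhere in the update rule:)

```python
# The iteration x -> x/2 + 1/x drifts towards sqrt(2) from any positive start,
# even though "sqrt(2)" appears nowhere in the code.
def iterate(x, steps=20):
    for _ in range(steps):
        x = x / 2 + 1 / x
    return x

print(iterate(1.0))    # ~1.41421356...
print(iterate(100.0))  # ~1.41421356...
```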

Typically, when we speak about having a "goal" X, we mean that somewhere (e.g. in human brain, or in the company's mission statement) there is a representation of X, and then the reality is compared with X, various paths from here to X are evaluated, and then one of those paths is followed.

I am saying this to make more obvious that there is a difference between "having a representation of X" and "progressing towards X". Humans typically create representations of their desired end states, and then try to find a way to achieve them. Your computer doesn't have this, and neither does "Tool AI" at the beginning. Whether it can create representations later depends on technical details of how specifically such "Tool AI" is programmed.

Maybe there is a way to allow superhuman thinking even without creating representations corresponding to things normally perceived in our world. (For example, AIXI.) But even in such a case, there is a risk of having a pseudo-goal of the "x/2 + 1/x" kind, where the process progresses towards an outcome even without having a representation of it. An AI can "escape from the box" even without having a representation of "box" and "escape", if there exists a way to escape from it.

Replies from: torekp
comment by torekp · 2014-11-29T19:58:32.270Z · LW(p) · GW(p)

I don't get this explanation. Sure, a process can tend toward a certain result, without having an explicit representation of that result. But such tendencies often seem to be fragile. For example, a car engine homeostatically tends toward a certain idle speed. But take out one or all spark plugs, and the previously stable performance evaporates. Goals-as-we-know-them, by contrast, tend to be very robust. When a human being loses a leg, they will obtain a synthetic one, or use a wheelchair. That kind of robustness is part of what makes a very powerful agent scary, because it is intimately related to the agent's seeing many things as potential resources to use toward its ends.

comment by Wes_W · 2014-11-27T01:44:51.748Z · LW(p) · GW(p)

First, there's the political problem: if you can build agent AI and just choose not to, this doesn't help very much when someone else builds their UFAI (which they want to do, because agent AI is very powerful and therefore very useful). So you have to get everyone on board with the plan first. Also, having your superintelligent oracle makes it much easier for someone else to build an agent: just ask the oracle how. If you don't solve Friendliness, you have to solve the incentives instead, and "solve politics" doesn't look much easier than "solve metaethics."

Second, the distinction between agents and oracles gets fuzzy when the AI is much smarter than you. Suppose you ask the AI how to reduce gun violence: it spits out a bunch of complex policy changes, which are hard for you to predict the effects of. But you implement them, and it turns out that they result in drastically reduced willingness to have children. The population plummets, and gun violence deaths do too. "Okay, how do I reduce per capita gun violence?", you ask. More complex policy changes; this time they result in increased pollution which disproportionately depopulates the demographics most likely to commit gun violence. "How do I reduce per capita gun violence without altering the size or demographic ratios of the population?" Its recommendations cause a worldwide collapse of the firearms manufacturing industry, and gun violence plummets, along with most metrics of human welfare.

If you have to blindly implement policies you can't understand, you're not really much better off than letting the AI implement them directly. There are some things you can do to mitigate this, but ultimately the AI is smarter than you. If you could fully understand all its ideas, you wouldn't have needed to ask it.

Does this sound familiar? It's the untrustworthy genie problem again. We need a trustworthy genie, one that will answer the questions we mean to ask, not just the questions we actually ask. So we need an oracle that understands and implements human values, which puts us right back at the original problem of Friendliness!

Non-agent AI might be a useful component of realistic safe AI development, just as "boxing" might be. Seatbelts are a good idea too, but they only matter if something has already gone wrong. Similarly, oracle AI might help, but it's not a replacement for solving the actual problem.

comment by gedymin · 2014-11-27T12:20:18.756Z · LW(p) · GW(p)

This is actually one of the standard counterarguments against the need for friendly AI, or at least against the notion that it should be an agent / be capable of acting as an agent.

I'll try to quickly summarize the counter-counterarguments Nick Bostrom gives in Superintelligence. (In the book, AI that is not an agent at all is called tool AI. AI that is an agent but cannot act as one (has no executive power in the real world) is called oracle AI.)

Some arguments have already been mentioned:

  • Tool AI or friendly AI without executive power cannot stop the world from building UFAI. Its abilities to prevent this and other existential risks are greatly diminished. It especially cannot guard us against the "unknown unknowns" (an oracle is not going to give answers to questions we are not asking).
  • The decisions of an oracle or tool AI might look good, but actually be bad for us in ways we cannot recognize.

There is also the possibility of what Bostrom calls mind crime. If a tool or oracle AI is not inherently friendly, it might simulate sentient minds in order to answer the questions that we ask, and then kill or possibly even torture those minds. The probability that these simulations have moral rights is low, but there can be trillions of them, so even a low probability cannot be ignored.

Finally, it might be that the best strategy for a tool AI to give answers is to internally develop an agent-type AI that is capable of self-improvement. If the default outcome of creating a self-improving AI is doom, then the tool AI scenario might in fact be less safe.

comment by ChristianKl · 2014-11-27T07:05:16.349Z · LW(p) · GW(p)

If you use a spell-checking engine while you are typing, it likely has a utility function buried in its code.

comment by SodaPopinski · 2014-11-26T15:14:45.070Z · LW(p) · GW(p)

This is a disturbing talk from Schmidhuber (who worked with Hutter and with one of the founders of DeepMind at the Swiss AI lab).
I say disturbing because of the last minute, where he basically says we should be thankful for being the stepping stone to the next step in an evolution towards a world run by AIs.
This is the nonsense we see repeated almost everywhere (outside LessWrong), that we should be happy to have humanity supplanted by the more intelligent AI, and here it is coming from a pretty well-known AI researcher... https://www.youtube.com/watch?v=KQ35zNlyG-o

comment by [deleted] · 2014-11-26T03:52:37.714Z · LW(p) · GW(p)

Today I read a post by Bryan Caplan aimed toward effective altruists:

Question: How hard would it be to set up a cost-effective charity to help sponsor the global poor for immigration to Argentina? Responses from GiveWell, the broader Effective Altruism community, and Argentina experts are especially welcome.

For context, Argentina essentially allows immigration by anybody who can get an employer to sponsor them.

Replies from: bramflakes
comment by bramflakes · 2014-11-26T13:29:33.889Z · LW(p) · GW(p)

What could a faltering, medium-trust country like Argentina need more than millions of poor, low-trust immigrants?

Replies from: Salemicus
comment by Salemicus · 2014-11-26T14:58:15.547Z · LW(p) · GW(p)

It's a common framing, and so I don't intend to pick on you, but I think the key issue isn't levels of trust, but levels of trustworthiness. Yes, there can be feedback effects in both directions between trust and trustworthiness, but fundamentally, it is possible for people and institutions with high trustworthiness to thrive in an otherwise low-trust/trustworthiness society. Indeed, lacking competitors, they may find it particularly easy to do so, and through gradual growth and expansion, lead to a high-trust/trustworthiness society over time. It is not possible for people and institutions with high trust to thrive in an otherwise low-trust/trustworthiness society, as they will be taken advantage of.

You can't bootstrap a society to a high-trust equilibrium by encouraging people to trust more. You need to encourage them to keep their promises.

Replies from: None
comment by [deleted] · 2014-11-26T17:32:54.457Z · LW(p) · GW(p)

I think this line of thinking is productive. Other thoughts:

For cooperative agents to thrive among non-cooperators, they must be able to identify other cooperators. Of course you can wait for the non-cooperators to identify themselves (via an act of non-cooperation in tit-for-tat, or a costly signal), but other agents are inevitably going to rely on other heuristics and information to predict the hidden strategies of others, and, when the agents are human, they will do this in a risk-averse way.

Accordingly, a low-trust society (one in which no single entity is able or willing to enforce cooperative behavior over all individuals) is seldom homogeneously low-trust (or low-trustworthiness), but rather an amalgamation of subgroups, each of which is relatively more trusting and trustworthy, but only within the subgroup. Because of the need to guess at the hidden strategies of others, these subgroups don't necessarily split the society into "levels of trustworthiness".

The task of moving to a high-trust/trustworthiness society becomes the task of getting cooperative subgroups to identify other potentially cooperative subgroups, and for those two subgroups to figure out a way to share the duty of enforcing cooperative behavior, or of allowing more true information about the cooperative behavior of individuals to flow between groups.

Since evolution produces a special cooperation in close-kinship relations, the simplest artificial grounds for merging two previously uncooperative subgroups is to stretch the kinship relation as far as possible (as in clans, or any society where third- and fourth-cousin relationships are considered relevant).

Some other examples related to this process:

  • The spread of shared religious identity (when this involves submitting to a punitive religious law).
  • Trade unions, cartels and guilds.
  • Language boundaries (which impede information about trustworthiness from flowing across groups).
  • Race (as an amalgam of language, religion, class, etc., packaged with a convenient visual ID)
  • The cultivation of national and class identities.
  • The oft-maligned internal division of political parties, which smash together otherwise separate subgroups.
  • The forcible crushing of the old markers of old subgroups (old religions, old kinship practices, old languages)

It's a bit of a theory of everything, but I think this is a helpful framing.

comment by JoshuaFox · 2014-11-26T20:23:34.264Z · LW(p) · GW(p)

Anyone want to comment on a pilot episode of a podcast "Rationalists in Tech"? Please PM or email me. I'll ask for your feedback and suggestions for improvement on a 30-minute audio interview with a leading technologist from the LW community. This will allow me to plan an even better series of further interviews with senior professionals, consultants, founders, and executives in technology, mostly in software.

  • Discussion topics will include the relevance of CfAR-style techniques to the career and daily work of a tech professional; career tips aimed at LWer technologists; and the rationality-related products and services of some interviewees.

  • The goal is to show LessWrongers in the tech sector that they have a community of like-minded people. Often engineers, particularly those just starting out, have heard of the value of networking, but don't know where they can find people who they can and should connect to. Similarly, LWers who are managers or owners are always on the lookout for talent. This will highlight some examples of other LWers in the sector as an inspiration for networking.

comment by DataPacRat · 2014-11-24T11:01:39.378Z · LW(p) · GW(p)

Many Interacting Worlds: Boffo or Bunk?

From my blogfeed: http://theness.com/neurologicablog/index.php/the-many-interacting-worlds-hypothesis/ , which links to http://www.nature.com/news/a-quantum-world-arising-from-many-ordinary-ones-1.16213 , which links to http://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.041013 .

Does anyone with a better understanding of Schrodinger's Equation(s) than I know if any of the above is worth paying attention to?

Replies from: MrMind, Slider, JoshuaZ, Manfred
comment by MrMind · 2014-11-25T08:11:18.099Z · LW(p) · GW(p)

It's interesting, but I wouldn't be much concerned with models that "reproduce some generic quantum phenomena".
Thanks to categorical quantum mechanics, we already know that many finite toy models do that: heck, you can have quantum phenomena in databases.

comment by Slider · 2014-11-27T03:18:46.879Z · LW(p) · GW(p)

I had a similar prompt for knowledge-seeking, wanting to figure out how the math supports or doesn't support "converging worlds" or "mangled worlds". The notion of a converging world is probably also a noteworthy intuitive reference point in thought-space. You could have a system in a quantum-indeterministic state where each component state has a different interaction, so that the futures of the states are identical. At that point you can drop the distinguishing of the worlds and just say that two worlds have become one. Now there is a possibility that a state left alone first splits and then converges, or that it does both at the same time. There would be a middle part that couldn't be "classified", which in these theories would be represented by two worlds in different configurations (and by waves in more traditional models).

Sometimes I have stumbled upon an argument about whether, if many-worlds creates extra worlds, that forms a kind of growing-block ontology (such as the flat splitters in the sequence post). Well, if the worlds also converge, that could keep the amount of "ontology stuff" constant, or let it vary in both directions.

I stumbled upon the fact that |psi(x)|^2 is how you calculate probabilities from a quantum state, which is like taking a second power and then essentially taking a square root by only caring about the magnitude and not the phase of the complex value. For a double slit with L being the left path and R the right path, it resulted in something like P(L+R) = <x|L>^2 + C<x|L><x|R> + <x|R>^2 (where C was either 1, 2 or sqrt(2); I don't remember and didn't understand which). The squared terms in the sum were, I found, claimed to be the classical equivalent of the two options. The interference fringes would be greatest and appear where the middle term was strong. I also found that you could read <x|y> as something like "obtain x if the situation was/is y". Getting L when the particle went L is thus very ordinary. You can also note that the squared terms have the same form as the evolution of a pure state. However, I didn't find anything on whether the middle term is interpretable or not. If you try to put it into words it looks a lot like "probability of getting L when the situation was R", and it seems very surprising that this could be anything other than zero. But then again I don't know what imaginary proto-probabilities are. Because it's a multiplication of two "chains of events", it's clear you can't single out the "responsible party"; it can be a contribution from both. I somehow suspect this is related to the fact that if your "basis" is |L>, then the {|R>, |L>} basis doesn't apply, i.e. you can't know the path taken and still get interference. I get that many-worlds posits the R world and the L world, but it seems there is a bizarre combination world also involved. One way I, in my brute naivety, think this might work is that the particle started in the L world but then "crossed over" to the R world. If worlds in contact can exchange particles, it might seem as if particles "mysteriously jumped", while the jumping would be loosely related to where the particle was. They would have continuous trajectories when tracked within the multiverse, but they would get confused for each other in the single worlds.
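(For reference, the standard textbook form of that double-slit sum, if I'm reconstructing it right and ignoring normalization, is:)

```latex
P(x) = \left|\langle x|L\rangle + \langle x|R\rangle\right|^{2}
     = \left|\langle x|L\rangle\right|^{2} + \left|\langle x|R\rangle\right|^{2}
     + 2\,\mathrm{Re}\!\left(\langle x|L\rangle^{*}\,\langle x|R\rangle\right)
```

which would make the coefficient 2, attached to the real part of the cross term; the fringes appear where that cross term is large.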

However, I was unable to grasp the intuition of how bras and kets work or what they mean. I pushed the strangeness onto the wavefunctions, but was unable to reach my goal.

It still seems mysterious to me how the single photon state turns into two distinct states L and R. I could imagine the starting state "doing a full loop", being a kind of spiral where the direction the photon is travelling is a superposition of it travelling in each particular direction, with each direction differing from its neighbour by the phase of the proto-probability, and with their magnitudes summing to 1. That way, if the photon has probability 1 at L, it can't have probability 1 at R, since the real part of the proto-probability at R can't be 1, as it is known to differ in phase. I know these intuitions are not well founded, and I know their construction is known to be unsafe. However, intuitive pictures are easier for me to work with, even if it means needing to reimagine them rather than just having them in the right configuration (if somebody knows a more representative way to think about it, please tip me off).

I am also using a kind of guess that you can take a proto-probability, strip it of imaginary parts, and get a "single world view", and I am using a view with 2 time dimensions: a second additional clock makes the phases of the complex values sweep forward (or sweep equal surface areas) even if the "ordinary clock time" stays still. The indeterminacy under this time would be that a being unable to measure the meta-time would be ignorant of what part of the cycle the world is in. Thus you would be ignorant of the phases, but the phases would "resonate". I am assuming one could turn this into an equivalent view where the imaginary component would just select a spatial world in a 1-time multiverse (in otherwise totally real-part-only worlds).

I don't have a known better understanding, but I have a bunch of different understandings of unknown fitness.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-27T09:51:30.014Z · LW(p) · GW(p)

I don't quite understand this topic, but maybe this could be useful:

The problem with "converging / mangled worlds" is statistical. To make two worlds interact (and become the same world, or erase each other, depending on the mutual orientation of their amplitudes), those worlds must have all their particles in the same positions. In usual circumstances, this seems unlikely. Imagine the experiment with the cat, where in one world the cat is dead, and in the other world the cat happily walks away. How likely is it that at some moment in the future, both universes will have all particles in the same positions?

So, in usual circumstances two worlds interact only if a moment ago they were the same world, and the only difference was one particle going two different paths. (Yes, there are also all the other particles in the universe, also splitting all the time. But this happens the same way in both branches, so it cancels out.)

It still seems mysterious to me how the single photon state turns into two distinct L and R.

My intuition is that this "single state" was never literally one point, but always a small interval (wave? hump?). An interval can break into two parts, and those can travel in different directions. There is no such thing as a single point in quantum physics.

(Disclaimer: I don't really understand quantum physics; I am just interpreting the impression I got from looking at Eliezer's drawings. If you have better knowledge, feel free to ignore this.)

Replies from: Slider
comment by Slider · 2014-11-27T18:01:29.518Z · LW(p) · GW(p)

What forces the worlds to be the same in order to interact? You could also have merely adjacent worlds where the "collision angle" could compensate for small differences. It is just a little harder to imagine how worlds of unrelated state would interact. Maybe dark energy is the sum total of gravity from other worlds?

It's also the case that two worlds won't stay singular for long, but branch all the time into subworlds. The probability of some of the pairwise worlds being close enough is then higher.

edit: Also, there are settings where splitting doesn't mean lack of structure. For example, in the mirror experiments the two paths will systematically intersect, and this is a pretty stable result of the mirror positionings.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-28T10:12:29.387Z · LW(p) · GW(p)

It's also the case that two worlds won't stay singular for long, but branch all the time into subworlds. The probability of some of the pairwise worlds being close enough is then higher.

If something branches in a limited space, soon the branches will touch each other. The question is, how soon is "soon". If we imagine a real 3D tree in a 3D world, the branches will touch before a dozen splits. But if the tree were extremely large (a few kilometers) and the branches extremely tiny (a few millimeters), there could be more splits.

If we imagine the history of the whole universe as a branching tree in a many-dimensional world, we have to realize there are many dimensions (I guess approximately six dimensions for each particle: position and momentum), and compared with the size of each dimension, the branches are really tiny (take two random particles in the whole universe: what is the probability of them hitting each other?). So there is a lot of time for the tree to grow.

Eventually, the branches will run out of space and start hitting each other all the time. But I think this will happen at the "heat death" of the universe. Then the branches will hit each other so much that the whole concept of time, or even reality, may become meaningless. But I think this is not happening yet. There is still a lot of space for the universe to grow without intersecting with other branches.

What forces the worlds to be the same in order to interact? ... Maybe dark energy is the sum total of gravity from other worlds?

This seems to me like a new hypothesis, outside of quantum physics as we know it and not yet supported by experimental results. Maybe it is so; maybe it isn't. Without good evidence for it, the prior probability seems small (there are many possible new hypotheses we could make to explain dark energy; this is just one of them, so why should it be preferred to the alternatives?).

comment by JoshuaZ · 2014-11-26T04:19:08.063Z · LW(p) · GW(p)

This doesn't seem to give a straightforward explanation for whether it could reproduce the expected Bell-type experiments, especially a CHSH experiment, and from a glance I don't see how they'll get that correct without forcing some sort of completely ad-hoc rule for how the universes interact.

comment by Manfred · 2014-11-24T16:57:13.670Z · LW(p) · GW(p)

Sure, it's doable. It may even be trivial - one can recast partial time derivatives of a wave function as total time derivatives of a distribution of particles with velocities.

Unfortunately this seems doable in an infinite number of ways, and in general probably isn't useful.

comment by tog · 2014-11-24T09:17:17.321Z · LW(p) · GW(p)

It's an appealing and easy enough hack that I'll plug my recent LessWrong discussion post Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission. Especially now that Black Friday week has started on Amazon.

Replies from: tog, Drayin
comment by tog · 2014-11-24T09:18:22.692Z · LW(p) · GW(p)

On the same topic, Gunnar_Zarncke recently started a LessWrong Financial Effectiveness Repository

comment by Drayin · 2014-11-24T09:19:25.974Z · LW(p) · GW(p)

That is a neat hack - who said there's no such thing as a free lunch?

Replies from: Sysice
comment by Sysice · 2014-11-24T11:13:05.227Z · LW(p) · GW(p)

This isn't necessarily free: if you have to think about using that link as charity while shopping, it could decrease your likelihood of doing other charitable things (which is why you should set up a redirect so you don't have to think about it, and always use it every time!)

Replies from: faul_sname
comment by faul_sname · 2014-11-24T19:16:15.504Z · LW(p) · GW(p)

Amazon already does that for you -- if you go to buy something without using that link, it'll ask you if you want to.

comment by Artaxerxes · 2014-11-27T01:34:04.823Z · LW(p) · GW(p)

Calico, the aging research company founded by Google, is hiring.

comment by Torello · 2014-11-25T02:21:16.472Z · LW(p) · GW(p)

TLDR: Requesting articles/papers/books that feature detailed/explicit "how-to" sections for bio-feedback/visualization/mental training for improving performance (mostly mental, but perhaps cognitive as well)

Years ago I saw an interview with Michael Phelps' (Olympic swimmer) coach in which he claimed that most Olympic-finalist caliber swimmers have nearly indistinguishable physical capabilities; Phelps' ability to focus and visualize success is what set him apart.

I also saw a program about free divers (staying underwater for minutes) who slow their heart-rates through meditation.

I also read that elite military units visualize to remain calm and carry out complex tasks despite incredible stress (for instance, bomb squad members with heart rates lower in the presence of a bomb than on an average afternoon at the base). Unfortunately I didn't record the sources of these various pieces, so I can't link to them.

Has anyone read any specific how-to books on the topic, i.e., here are step-by-step instructions for visualizations, lowering heart rate, mental clarity, etc?

Replies from: Sjcs, Brillyant, ChristianKl
comment by Sjcs · 2014-11-25T11:26:49.727Z · LW(p) · GW(p)

The book On Combat by Dave Grossman discusses some of these things. I haven't read it yet, but have read reviews and listened to a podcast by two people I consider highly evidence-based and reputable (here). In particular, the book discusses a method of physiologically lowering your heart rate that he calls "Combat Breathing". This entails 4 phases, each for the duration of a count of 4 (no unit specified; I do approximately 4 seconds):

  1. Breathe in

  2. Hold in

  3. Breathe out

  4. Hold out

It sounds very simple, but I have heard multiple recommendations of it from both the armed-forces and medical worlds. I can also add a data point confirming it works well for me (mostly only for reducing heart rate to below 100, not all the way down to resting rate).
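(If you'd rather not count in your head, a throwaway timer along these lines works; the function name and the 4-second default are just my own approximation from above, not anything from the book:)

```python
# Minimal pacing timer for the 4-phase breathing cycle described above.
import time

PHASES = ["Breathe in", "Hold in", "Breathe out", "Hold out"]

def combat_breathing(cycles=5, seconds_per_phase=4):
    for _ in range(cycles):
        for phase in PHASES:
            print(phase)
            time.sleep(seconds_per_phase)

if __name__ == "__main__":
    combat_breathing()
```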

comment by Brillyant · 2014-11-26T17:30:29.176Z · LW(p) · GW(p)

Years ago I saw an interview with Michael Phelps' (Olympic swimmer) coach in which he claimed that most Olympic-finalist caliber swimmers have nearly indistinguishable physical capabilities; Phelps' ability to focus and visualize success is what set him apart.

I'm skeptical of this.

No doubt it is relatively true that professional/elite athletes have similar physical capabilities, but even very small differences in athletic ability can be very consequential over the course of XXX meters in a swimming race or, say, an entire season of football. We are talking about very small margins of victory in many (or most) cases.

Replies from: Torello
comment by Torello · 2014-12-01T03:02:30.693Z · LW(p) · GW(p)

I agree that small physical differences can be very consequential--wouldn't small mental differences be similarly consequential?

http://www.radiolab.org/story/91618-lying-to-ourselves/

This Radiolab episode discusses how swimmers who engage in more self-deception win more frequently, controlling for other factors (i.e., self-deceivers on division 3, 2, and 1 teams are more likely to beat their opponents, so at different levels of physical skill their mentality is predictive).

We are talking about very small margins of victory in many (or most) cases.

I'm not sure what you're getting at here--that the victory of a particular person is attributable to noise because the margin of error is small?

Replies from: Brillyant
comment by Brillyant · 2014-12-02T00:25:52.566Z · LW(p) · GW(p)

Great points.

In Phelps' case, I think he is physically superior—though perhaps only slightly—compared to the competition. Same with Usain Bolt.

I'd agree confidence, even to the extent it is self-deception, can make a significant difference when it comes to sports performance. However, when an athlete—like Phelps or Bolt—routinely wins over the course of several races spanning years, I think physical capability differences are the main reason.

In team sports, or really any sport that requires more than just straight line speed, I think psychological difference are very important. But swimming and sprinting are largely physical contests. Unless you have problems with false starts, I'm not seeing where the mental edge figures in.

(Obviously longer races that require endurance and pacing considerations are more prone to psychological influence.)

comment by ChristianKl · 2014-11-25T15:27:58.772Z · LW(p) · GW(p)

The first step of a biofeedback how-to is getting a biofeedback device.

Direct heart rate is not a good target. Doing biofeedback on heart rate variability is better.

I also read that elite military units visualize to remain calm and carry out complex tasks despite incredible stress (for instance, bomb squad members with heart rates lower in the presence of a bomb than on an average afternoon at the base).

I'm not sure whether you want a bomb squad to have a heart rate that's lower than normal.

Has anyone read any specific how-to books on the topic, i.e., here are step-by-step instructions for visualizations, lowering heart rate, mental clarity, etc?

Step-by-step instructions are not how you achieve the kind of results of Phelps or the bomb squad. Both are achieved through the guidance of coaches.

To the extent that the main way I meditate has steps, it has three:

  1. Listen to the silence
  2. Be still
  3. Close your eyes.

Among those, (3) is obvious in meaning. (1) takes getting used to and is probably not accessible by mere reading. Understanding the meaning of (2) takes months.

Replies from: Torello
comment by Torello · 2014-11-25T22:39:27.659Z · LW(p) · GW(p)

Thanks for your reply.

Can you point me to any articles/sites about biofeedback devices? Have you done biofeedback yourself?

Perhaps you're right about the bomb squad heart rate; maybe a moderately raised rate would be a proxy for optimal/peak arousal levels. However, I'd guess that a little too much calm is better than overwhelming panic, which would probably be a more typical reaction to approaching a bomb that's about to explode.

I agree that a coach would be better, but a book is a more practical option at the moment.

(This may sound snarky, but isn't.) Did you learn meditation from a teacher, or from a step-by-step book? The steps you give seem simple (not easy), and a good starting point. I think a meditation coach would help you flesh these out, but those kinds of precise instructions are what I'm looking for.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-26T10:50:41.252Z · LW(p) · GW(p)

The steps you give seem simple (not easy),

Yes, and people at LW are in general very bad at simple. People here have the skills for dealing with complex intellectual subjects.

The problem with "be still" is that it leaves you with question like: "4 minutes in the meditation I feel the desire to adjust my position, what do I do?" It doesn't give you a easy criteria to decide when moving to change your position violates "be still" and when it doesn't.

Can you point me to any articles/sites about biofeedback devices? Have you done biofeedback yourself?

Doing biofeedback is still on my todo list.

My device knowledge might be 1-2 years out of date. Before that point the situation was that emWave2 and WildDivine were the good non-EEG-based solutions. Good EEG-based solutions are more expensive. See also a QS-forum article on neurofeedback. Even though the QS forum is very low in terms of posts, posting a question there on topics like this is still a good idea (bias disclosure: I'm a mod at the QS forum).

Among those two, emWave2 basically only covers heart rate variability (HRV), and WildDivine also measures skin conductance level (SCL), which is a proxy for the amount that you sweat. WildDivine also has a patent for doing biofeedback with HRV + SCL. At $149, emWave2 is at the moment AFAIK the cheapest choice for a good device that comes with a good explanation of how to do the training and that you can just use as is.

(this may sound snarky, but isn't) Did you learn meditation from a teacher, or from a step-by-step book?

I started learning meditation from a book by Aikido master Koichi Tohei ten years ago. I have roughly three years of in-person training. I have also had NLP/hypnosis training since that time. If I were to switch out an emotional response of the bomb squad, then hypnosis is probably the tool of choice. With biofeedback I would see no reason to expect overcompensation; switching out an emotional response via hypnosis, on the other hand, can lead to such effects. Hearing the alarm of an ambulance might then also lower my heart rate ;)

There are also safety issues. I don't like the idea of people messing themselves up and being faced with experiences that they can't handle because they don't have proper supervision.

comment by JoshuaFox · 2014-11-24T09:08:03.499Z · LW(p) · GW(p)

We're considering Meetup.com for the Tel Aviv LW group. (Also, the question was asked here.) It costs money, but we'd pay if it's worthwhile. I note that there are only 5 LessWrong groups at Meetup of which 2-3 are active. I'll appreciate feedback on the usefulness of Meetup.

comment by artemium · 2014-11-27T17:49:39.471Z · LW(p) · GW(p)

Nice blog post about AI and existential risks by my friend and occasional LW poster. He was inspired by the disappointingly bad debate on Edge.org. Feel free to share if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

http://nthlook.wordpress.com/2014/11/26/why-fear-ai/

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-11-28T09:55:07.509Z · LW(p) · GW(p)

Seems very good, but this is coming from a person familiar with the topic. I wonder how good it would seem to someone who hasn't heard about the topic yet.

comment by Error · 2014-11-26T03:56:32.172Z · LW(p) · GW(p)

I'm looking for an old post. Something about an extinct species of primate that may once have been nearly as smart as humans, but evolved over time to be much dumber, apparently because the energy costs of intelligence were maladaptive in its environment.

Can anyone point me in the right direction?

Replies from: Unknowns
comment by [deleted] · 2014-11-30T22:33:39.175Z · LW(p) · GW(p)

This site drains my energy. Too many topics seem interesting on the surface but are really just depressing and not actionable, with the big example being a bad singularity.

I have also found in my life that general, useful advice is rare. Most advice here seems either too vague or too specific to the poster. I did find at least one helpful book (by Scott Adams) and a couple of good posts, but I think other sources could help at less cost. There are many smart people here, but if you look you can find something much more useful: smart people who have already achieved the particular goals you seek.

Bye.

comment by [deleted] · 2014-11-24T20:22:47.719Z · LW(p) · GW(p)

The year is 1800. You want to reduce existential-risk. What do you do?

Replies from: Alicorn, lmm, imuli, polymathwannabe
comment by Alicorn · 2014-11-24T20:24:17.497Z · LW(p) · GW(p)

Are you a time-traveler or a native?

Replies from: None
comment by [deleted] · 2014-11-24T20:51:59.734Z · LW(p) · GW(p)

A native (but optionally a very insightful and visionary native).

EDIT: I said native, but all that I really want to avoid is an answer like "I would use all my detailed 21st-century scientific knowledge to do something that a native couldn't possibly do".

Replies from: Lumifer, Lumifer
comment by Lumifer · 2014-11-24T21:12:17.461Z · LW(p) · GW(p)

all that I really want to avoid is an answer like "I would use all my detailed 21st-century scientific knowledge to do something that a native couldn't possibly do".

How about "I would use all my detailed 21-st century scientific knowledge to be concerned about something that a native couldn't possibly be concerned about"?

Replies from: None
comment by [deleted] · 2014-11-24T21:19:41.604Z · LW(p) · GW(p)

Sure, if it leads to an interesting point.

For example, if you were trying to avoid suffering: "I would kill 12 year old Hitler" isn't very interesting, but "I would do BLAH to improve European relations" or "There's nothing I could do" are interesting.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-24T22:28:59.439Z · LW(p) · GW(p)

"I would kill 12 year old Hitler"

Did you mean 1800 or 1900?

Replies from: None
comment by [deleted] · 2014-11-24T23:01:06.346Z · LW(p) · GW(p)

I didn't mean that example to refer to original question; I just wanted to demonstrate a vague but somewhat intuitive difference between "fair" and "unfair" use of future knowledge.

comment by Lumifer · 2014-11-24T20:58:51.095Z · LW(p) · GW(p)

Well, being concerned about existential risk in 1800 probably means you were very much impressed by Thomas Malthus' An Essay on the Principle of Population (published in 1798) and were focused on population issues.

Of course, if you were a proper Christian you wouldn't worry too much about X-risk anyway -- first, it's God's will, and second, God already promised an end to this whole life: the Judgement Day.

Replies from: Brillyant
comment by Brillyant · 2014-11-25T00:16:31.298Z · LW(p) · GW(p)

Of course, if you were a proper Christian you wouldn't worry too much about X-risk anyway -- first, it's God's will, and second, God already promised an end to this whole life: the Judgement Day.

Still true today.

Replies from: Lumifer
comment by Lumifer · 2014-11-25T02:04:10.785Z · LW(p) · GW(p)

Sure, but the percentage of fully believing Christians was much higher in 1800.

comment by lmm · 2014-11-25T23:32:25.303Z · LW(p) · GW(p)

I give Napoleon a hand, on the basis that he was one of the more scientifically-minded world leaders, and the theory that a strong France makes our future more multipolar. For the same reason I try to spread the notion of the limited-liability corporation in the Islamic world (no idea how to do that, though). I might try to convince nations of the (AIUI genuine) non-profitability of colonialism.

Replies from: TimS
comment by TimS · 2014-11-26T16:59:32.581Z · LW(p) · GW(p)

If you want multi-polar, Napoleon is the last person you should help. He was clearly acting to reduce the number of Great Powers to 1. He even succeeded for a bit re: Prussia & Austria.

Alternatively, if he wins, how do you prevent France v. USA instead of Russia v. USA?

Replies from: lmm, Lumifer
comment by lmm · 2014-11-27T18:46:44.576Z · LW(p) · GW(p)

Alternatively, if he wins, how do you prevent France v. USA instead of Russia v. USA?

If it ends up more even and more positive-sum, I call that a win.

Replies from: TimS
comment by TimS · 2014-12-03T12:01:06.307Z · LW(p) · GW(p)

Why would you expect any different outcome at all? Two-power dynamics are often unstable, absent an external stabilizer like MAD.

comment by Lumifer · 2014-11-26T17:03:20.785Z · LW(p) · GW(p)

if he wins, how do you prevent France v. USA instead of Russia v. USA?

You just have to keep the Canadian-Mexican border quiet :-)

comment by imuli · 2014-11-25T19:03:37.710Z · LW(p) · GW(p)

Start an insurance company with a focus on risk mitigation.

(Amass resources, collect information, you get the idea.)

comment by polymathwannabe · 2014-11-24T21:44:18.580Z · LW(p) · GW(p)

Vaccination for everyone! Aqueduct (AND toilets) for everyone!

Make good publicity for Mr. Volta's new chemical battery, and convince everyone of how ugly the world is when tainted by coal smoke. This has a dual purpose: ease the way for the early development of electric cars, thus fighting global warming, and delay Western meddling in the Middle East for oil-extraction purposes, which contributed largely to the mess the region is in now.

Find Mr. Heinrich Marx at his law practice in Trier and quietly castrate him.

Popularize DIY production of blue cheese and thus increase the chances that someone playing with Penicillium fungi will get creative.

Recruit would-be Temperance Leagues and redirect their strength to strangle the tobacco industry in its crib.

Edited to add: only massive distribution of aqueducts and toilets would be obvious to a true native of 1800.

Replies from: ChristianKl, fubarobfusco
comment by ChristianKl · 2014-11-25T08:52:08.933Z · LW(p) · GW(p)

Batteries still mean that you need electricity, and that means burning coal.

comment by fubarobfusco · 2014-11-25T02:40:28.947Z · LW(p) · GW(p)

Uranium was discovered in 1789 in Saxony. What's the minimal technological path from there to reasonably-safe reactors? I would imagine it involves not only the obvious physics, but photography (to detect radiation) and significant advances in metallurgy (to refine ores) ....

comment by MarkusRamikin · 2014-11-24T10:37:42.529Z · LW(p) · GW(p)

Markus Ramikin's Semimonthly Dumb Question time. Since we seem to have both experts on physics and experts on editing Wikipedia:

What do you think of the quality of the current Wikipedia article on heat death? Is it a fair treatment?

I keep seeing intelligent people talk about this concept like it's obviously useful and relevant, and to my layman mind it is, but the article sounds a little like it's basically bunk now, with the opening summary ending this way:

it has been recognized by a respected authority on thermodynamics, Max Planck, that the phrase 'entropy of the universe' has no meaning because it admits of no accurate definition.[1][2] Kelvin's speculation falls with this recognition.

The style, and the way these words are repeated verbatim down the page, makes me suspect the work of a single editor with strong opinions, and so I wonder. Just because of definition problems?

(I'll admit my proximate reason for asking is kinda trivial: the claim sometimes comes up in Madoka fandom that appreciating Kyubey's agenda requires trusting his civilisation's greater understanding of physics, and I wanna say that no, the show isn't making it up, that life ultimately running out of fuel is an idea that we humans have been considering seriously. But if I should mention "heat death" to someone who doesn't know what it is, and they look it up and see that, the first thing they'll say is "well this is disproven and there's nothing to worry about").

Replies from: IlyaShpitser, ChristianKl
comment by IlyaShpitser · 2014-11-24T13:01:12.577Z · LW(p) · GW(p)

There is no reason, other than happy cultural accident, for any given Wikipedia article on a technical topic to be good. Technical subjects I know something about are generally treated very poorly. Wikipedia has no incentives in place for experts to correct things, and for non-experts to shut up.

Replies from: Vulture, MarkusRamikin
comment by Vulture · 2014-11-25T03:43:46.028Z · LW(p) · GW(p)

When did you get this impression? I'm only asking because I'm given to believe that the situation on Wikipedia with regard to experts and specialized subjects has improved substantially starting in about 2008 or so(?), at least in the humanities but possibly in other fields.

Replies from: IlyaShpitser, satt, satt
comment by IlyaShpitser · 2014-11-25T15:06:58.863Z · LW(p) · GW(p)

This was in fact prior to 2008 (my advisor asked me to change something in the Bayesian network article, and I got into a slight edit war with the resident bridge troll who knew a lot less than me, but had more time and whose first reflex was to just blindly undo any edits. These sorts of issues with Wikipedia are very well documented).


The horrible article on confounders is another good example. I brought it up before here, and got the "that's like, your opinion" kind of reply. At least they cite Tyler's paper with me now! Of course, this particular case might be more widespread than just Wikipedia, and might be a general confusion in statistics as a field. I went to a talk last week where someone just got this wrong in their talk (and presumably in their research).


I don't doubt that there are isolated communities within Wikipedia that generate good content. For example, I know there are Wikipedia articles for some areas of mathematics of shockingly high quality. My point is, when this happens it is a sort of happy cultural accident that is happening in spite of, not because of, the Wikipedia editing model.


There has been quite a bit of experimentation online to incentivize experts to talk and non-experts to shut up, recently. I think that's great!

comment by satt · 2014-11-27T00:49:30.746Z · LW(p) · GW(p)

[deleted duplicate comment]

comment by satt · 2014-11-27T00:37:45.083Z · LW(p) · GW(p)

Wikipedia is more comprehensive now than in 2008, but I speculate that its average article quality might be lower, because of (1) competent editors being spread more thinly, and (2) the gradual entrenchment of a hierarchy of Wikipedia bureaucrats who compensate for a lack of expertise with pedantry and rules lawyering.

(I may be being unfair here? I'm going by faint memories of articles I've read, and my mental stereotype of Wikipedia, which I haven't edited regularly in years.)

Replies from: Vulture
comment by Vulture · 2014-11-27T04:42:32.030Z · LW(p) · GW(p)

Average article quality is almost certainly going down, but the main driving force is probably the mass-creation of stub articles about villages in Eastern Europe, plant genera, etc. Of course, editors are probably spread more thinly even among important topics as well. A lot of people seem to place the blame for any and all of Wikipedia's problems on bureaucracy, but as a regular editor such criticisms often seem foreign, like they're talking about a totally different website. True, there are a lot of formalities, but they're mostly invisible, and a reasonably intelligent person can probably pick up the important customs quite quickly. In the past 6 months of relatively regular editing, I can't say I remember ever interacting involuntarily with any kind of bureaucratic process or individual (I occasionally putter around the deletion nominations for fun, but that's just to satisfy my need for conflict). Writing an article (for example), especially if it's any good, is virtually never going to get you ensnared in some kind of Kafkaesque editorial process. Such things seem to operate mainly for the benefit of people who enjoy inflicting them on each other (e.g., descending hierarchies of committees for dealing with mod drama).

It's late, so hopefully the above makes some modicum of sense.

comment by MarkusRamikin · 2014-11-26T09:57:17.171Z · LW(p) · GW(p)

Is that a "no"?

comment by ChristianKl · 2014-11-24T10:55:39.524Z · LW(p) · GW(p)

it has been recognized by a respected authority on thermodynamics, Max Planck, that the phrase 'entropy of the universe' has no meaning because it admits of no accurate definition.[1][2] Kelvin's speculation falls with this recognition.

The fact that Max Planck is a respected authority can't be easily disproved and he's cited.

On the other hand he did write that more than 100 years ago.

The introductory section doesn't contain any modern physics, only 19th-century views. If you gather more modern sources, you might use them to update the article.

comment by Capla · 2014-11-25T19:35:12.153Z · LW(p) · GW(p)

I think there may be people here who can benefit from this.

http://www.nerdfitness.com/

Replies from: RowanE
comment by RowanE · 2014-11-26T10:24:31.987Z · LW(p) · GW(p)

We shouldn't select our fitness gurus for whether they're of our tribe; we should select our fitness gurus for the effectiveness and truth of what they teach.

On that basis, do you have any reasons beyond "it's nerdy!" for recommending this website over any number of other ones, many of which are very good? If it's the gimmicky motivational approaches, I think LessWrong has that down pat - loads of us play HabitRPG and I'm pretty sure Beeminder's founders were some of our own.

Edit: For some reason my links ate themselves and the text between them so I took them out.

Replies from: Capla, Wes_W
comment by Capla · 2014-11-26T20:17:51.373Z · LW(p) · GW(p)

You are right, but much of the fitness game is motivation, and we are tribal organisms. Being part of a community to which one relates, and which pushes you to be better, is a huge benefit.

Maybe this is a solved problem, but I think there might be at least one person here with whom it resonates, and to whom it could provide substantial value.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-26T20:54:04.130Z · LW(p) · GW(p)

In general what this community is about is having good arguments for doing what you do. As such it usually makes sense if a person who advocates some practices makes the case for the practice instead of simply posting a link.

In this case, did you follow that program? What results did you get?

comment by Wes_W · 2014-12-01T19:38:55.799Z · LW(p) · GW(p)

I'm not especially impressed with Steve Kamb as a fitness guru. He has a writing style I find accessible, and doesn't seem to mind covering introductory material, which are pluses, but not outstanding in the fitness world. The gimmicky motivational approaches probably work for some people, but I find them silly.

I've found the forums to be a very valuable resource, though. Lots of knowledgeable people whose brains you can pick, and a structure for social support/accountability, which can be scarce in meatspace.

comment by CAE_Jones · 2014-11-25T00:48:25.296Z · LW(p) · GW(p)

It seems that, in order to accomplish anything, one needs some combination of conscientiousness, charisma, and/or money*. It seems that each of the three can strengthen the others:

  • Conscientiousness correlates with earning potential
  • A conscientious person can exert extraordinary effort to learn, practice, and internalize behaviors that increase charisma.
  • A charismatic person can make connections and get deals and convince people to give them money.
  • Money can buy charisma/conscientiousness training or devices, or can pay people to be charismatic/conscientious in pursuit of one's goals.

If someone lacks all of these resources severely enough, is there any way to correct that? It rather seems like the answer is "no, but most people can't imagine someone with that much of a deficit in all three at the same time".

* Yes, I could have gone for alliteration with "cash", "credit", or "capital". Money seems different enough that the dissonance seemed like a better idea at the time.

Replies from: Torello, Lumifer, gjm
comment by Torello · 2014-11-25T02:26:42.712Z · LW(p) · GW(p)

This is not exactly a reply to your question, but I think your question fits this dynamic:

Miller's Iron Law of Iniquity

In principle, there is an evolutionary trade-off between any two positive traits. But in practice, every good trait correlates positively with every other good trait.

http://edge.org/response-detail/11314

comment by Lumifer · 2014-11-25T16:43:04.508Z · LW(p) · GW(p)

Don't start with the resources you lack. Start with the resources you have and then look at how you can utilize them to achieve your aims.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-11-25T23:27:01.394Z · LW(p) · GW(p)

... bearing in mind that "ability to discover new resources" is itself a resource, too.

comment by gjm · 2014-11-25T12:17:31.436Z · LW(p) · GW(p)

All of those things can be mitigated by other traits. Connections can be useful even without very much charisma. Cleverness can lead to pretty good earning potential even with relatively little conscientiousness, and may help one think of ways to improve charisma and conscientiousness. At any given level of earning potential, being cheap ("frugal" would be a better word but begins with the wrong letter) eases the transition from gradually sliding into debt to gradually accumulating savings. Other aspects of character besides conscientiousness make a difference -- e.g., a reputation for honesty may be helpful.

Given a bad enough deficit in everything that matters, it's certainly possible to be so screwed that recovery is unlikely. It's also possible to overestimate those deficits and the resulting screwage, e.g. on account of depression. There's probably a nasty positive feedback loop where doing so makes getting unscrewed harder.

comment by A1987dM (army1987) · 2014-11-24T11:46:40.995Z · LW(p) · GW(p)

I am considering deleting all of my comments on Less Wrong (or, for comments I can't delete because they've been replied to, editing them to replace their text with a full stop and retracting them) and then deleting my account. Is there an easier way of doing that than by hand?

(In case you're wondering, that's because thanks to Randall Munroe the probability that any given person I know in meatspace will read my comments on Less Wrong just jumped up by orders of magnitude.)

Replies from: Artaxerxes, army1987, Sjcs, ChristianKl, IlyaShpitser, Lumifer, lfghjkl, Capla, Emile
comment by Artaxerxes · 2014-11-24T13:22:54.011Z · LW(p) · GW(p)

Is there an easier way of doing that than by hand?

I account-hop a lot, and would also like to know if anyone knows of one.

Will you be making a new account that will be even less tied to you, or will you stop posting on LW?

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-24T17:11:57.900Z · LW(p) · GW(p)

Will you be making a new account that will be even less tied to you,

I probably will. I might also create an account under my full name which I will only use for things I'm (100 - epsilon)% sure I wouldn't mind anyone reading.

Replies from: army1987, army1987
comment by A1987dM (army1987) · 2014-11-26T08:49:44.642Z · LW(p) · GW(p)

I have been convinced that deleting my comments would be overkill, so I'm going to just delete my account, which will anonymize my comments, and hope that the permalink page title bug will be fixed.

I might come back here with a different username later.

Thanks to Baughn for their offered help.

Have a nice day.

comment by Sjcs · 2014-11-25T11:33:30.226Z · LW(p) · GW(p)

You could try changing your username. I am not sure whether it would change the username that appears on all your past comments, but I suspect it would. You could email and ask.

comment by ChristianKl · 2014-11-24T13:38:56.730Z · LW(p) · GW(p)

Do you really think that who you are in meatspace can be identified from reading a few LW posts?

If you are worried, I would simply remove references to your location.

I would also think that it's likely that you overrate the cost of people knowing you participate on LW.

Replies from: army1987, NancyLebovitz
comment by A1987dM (army1987) · 2014-11-24T17:21:34.402Z · LW(p) · GW(p)

Do you really think that who you are in meatspace can be identified from reading a few LW posts?

My username is formed by a shortening (though not one I often go by) of my real first name and my real birth year, and I've used it elsewhere, including in my main non-work e-mail address; so anyone who knows my e-mail would at least suspect that this LW account is mine.

(I first picked this username when I was 14 and kept using it everywhere out of habit.)

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2014-11-24T19:58:15.943Z · LW(p) · GW(p)

afaic, 99% of the people you meet in meatspace don't read very much, let alone go through archives of anonymous forums. Internet trolls, on the other hand...

Replies from: Lumifer, army1987
comment by Lumifer · 2014-11-24T20:43:56.592Z · LW(p) · GW(p)

99% of the people you meet in meatspace don't read very much, let alone go through archives of anonymous forums

The percentage of people in meatspace who would throw an email handle into Google is rather large.

A Google search for his username has his LW account as the third hit (after the two Wikipedia hits).

Replies from: Larks, DanielFilan
comment by Larks · 2014-11-27T04:15:49.232Z · LW(p) · GW(p)

You might perhaps like to edit out the username from this comment now.

Replies from: Lumifer
comment by Lumifer · 2014-11-29T04:28:10.814Z · LW(p) · GW(p)

Aha, thanks.

comment by DanielFilan · 2014-11-24T22:20:33.400Z · LW(p) · GW(p)

Google searches aren't ideal for this sort of thing, because your google results are tailored to you personally. Using DuckDuckGo, which shows the same search results to everyone, is probably a bit more reliable for these purposes (although in this case it gets the same results).

Replies from: Lumifer
comment by Lumifer · 2014-11-25T00:10:43.407Z · LW(p) · GW(p)

your google results are tailored to you personally

Not in my case. I take countermeasures to Google tracking.

comment by A1987dM (army1987) · 2014-11-24T21:16:20.658Z · LW(p) · GW(p)

I only agree for certain values of “meet”.

comment by NancyLebovitz · 2014-11-24T14:41:17.299Z · LW(p) · GW(p)

Suppose that identification through writing habits gets a lot cheaper and easier.

The cost might be fairly low among people who are even vaguely reasonable. The risk of attracting a mob is low, but the cost is non-trivial.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-24T15:00:49.974Z · LW(p) · GW(p)

The risk of attracting a mob is low, but the cost is non-trivial.

The cost very much depends on whether you are employed in an antifragile way or a fragile way.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-24T15:58:37.457Z · LW(p) · GW(p)

There's more to life than one's employment-- some mobs also go after their target's relatives.

Also, a fairly high proportion of people get highly distracted and upset by violent threats even if the likelihood of physical attacks has been low so far.

Replies from: army1987, ChristianKl
comment by A1987dM (army1987) · 2014-11-25T10:36:59.940Z · LW(p) · GW(p)

How many of said threats are not bluffs? I mean, I know that some of them aren't, but I can't get myself to alieve it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-25T11:21:26.018Z · LW(p) · GW(p)

So far as I know, these threats are quite common, but I haven't heard of any physical action being taken on them.

If you haven't been on the receiving end of such threats, you may be underestimating the way you'd react to them.

One thing people report is that they get frightened because there are people putting in a notable amount of effort to make them feel bad.

comment by ChristianKl · 2014-11-24T22:46:40.333Z · LW(p) · GW(p)

There's more to life than one's employment-- some mobs also go after their target's relatives.

I'm not really aware of that happening as a result of internet disputes.

Also, a fairly high proportion of people get highly distracted and upset by violent threats even if the likelihood of physical attacks has been low so far.

A high proportion of people also don't draw mobs.

I know one person who did and he has no issue dealing with it.

Given that you are a woman, I can understand that it's a more realistic risk for you. Unfortunately, women online get attacked more easily and more nastily than most men.

Still, you have chosen to be quite open.

Replies from: NancyLebovitz, Azathoth123
comment by NancyLebovitz · 2014-11-25T00:35:28.759Z · LW(p) · GW(p)

I've chosen to be open because it feels like the right thing for me to do. I have no idea whether I'm taking an excessive risk.

comment by Azathoth123 · 2014-11-27T04:08:28.740Z · LW(p) · GW(p)

Unfortunately, women online get attacked more easily and more nastily than most men.

What I've heard is that men are more likely to get attacked (makes sense given where they hang out); it's just that women are more likely to make a big deal of it.

comment by IlyaShpitser · 2014-11-24T11:55:20.219Z · LW(p) · GW(p)

the probability that any given person I know in meatspace will read my comments on Less Wrong just jumped up by orders of magnitude.

Why not use your real name and own what you write?

Replies from: MathiasZaman, army1987
comment by MathiasZaman · 2014-11-24T12:44:56.174Z · LW(p) · GW(p)

This certainly isn't a safe option for everyone.

comment by A1987dM (army1987) · 2014-11-24T12:37:42.914Z · LW(p) · GW(p)

I would own much but not all of what I've written on LW, and selectively deleting only the things I wouldn't own would take infeasibly long.

Replies from: Baughn
comment by Baughn · 2014-11-24T13:39:54.447Z · LW(p) · GW(p)

How badly do you want to delete everything? There might be easier options, but if there aren't I can certainly cook up a mass-deletion script. Just, I don't want to test it on my own account so you'd need to let me access yours.

(Yes, I could make a test account for the purpose. That would be more work.)

EDIT: I got a little way into implementing this before [deleted] bade me stop, thus the spate of retracted comments. Hopefully ve'll change ver mind, as some of those comments were quite interesting; however, this has gotten me thinking. The site has a comment deletion option, but not if you're deleting your account entirely; should it have that? If we don't want people to use it, but still leave the scripting option open, am I expected not to use that option?
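
For the curious, a mass-deletion script along those lines might look roughly like the sketch below. This is a sketch under assumptions: the URL paths, form fields, and CSS class names are placeholders invented for illustration, not the site's actual endpoints, so anyone actually running something like this would first need to read those off the page source or the browser's network inspector.

```python
# Minimal sketch of a mass-deletion script. All endpoints, parameters, and
# CSS selectors below are assumptions for illustration only.
import time
import requests
from bs4 import BeautifulSoup

BASE = "http://lesswrong.com"   # assumed base URL
USER = "some_username"          # account whose comments should be removed

session = requests.Session()    # would need to be authenticated first (login cookie)

def fetch_comment_ids(user, pages=10):
    """Walk the user's comment-history pages and collect comment ids."""
    ids = []
    for page in range(pages):
        # Hypothetical overview URL and pagination scheme.
        resp = session.get(f"{BASE}/user/{user}/comments",
                           params={"count": page * 25})
        soup = BeautifulSoup(resp.text, "html.parser")
        for node in soup.select(".comment"):        # assumed CSS class
            if node.get("data-fullname"):           # assumed id attribute
                ids.append(node["data-fullname"])
        time.sleep(2)   # be polite to the server
    return ids

def delete_comment(comment_id):
    """Submit whatever form the 'delete' button submits; endpoint is assumed."""
    return session.post(f"{BASE}/api/del", data={"id": comment_id})

if __name__ == "__main__":
    for cid in fetch_comment_ids(USER):
        delete_comment(cid)
        time.sleep(2)
```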

Replies from: Larks, army1987, army1987
comment by Larks · 2014-11-27T04:15:39.160Z · LW(p) · GW(p)

You might perhaps like to edit out the username from this comment now.

Replies from: Baughn
comment by Baughn · 2014-11-28T14:46:12.055Z · LW(p) · GW(p)

Right, thanks.

comment by A1987dM (army1987) · 2014-11-24T21:31:12.501Z · LW(p) · GW(p)

As of now, after both you and lfghjkl suggested that I not delete valuable comments, I'm leaning towards just deleting my account. (I've already removed my location from it as per ChristianKl's suggestion.) If I were in the Hyperbolic Time Chamber I'd delete all comments except those with positive karma which I wouldn't mind anybody I know reading, but...

(BTW FWIW I'm a “he”.)

Replies from: Baughn, Sarunas
comment by Baughn · 2014-11-25T03:32:17.505Z · LW(p) · GW(p)

(BTW FWIW I'm a “he”.)

I'm not going to remember that. My memory for people isn't small, but it's mostly taken up by fictional ones.

comment by Sarunas · 2014-11-25T20:06:58.333Z · LW(p) · GW(p)

Use this to find your comments that have negative karma (you do not have that many of those, so it will not be that time-consuming to delete them manually) and/or contain certain keywords. Then you can delete them without having to delete everything.
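
As a rough illustration of that kind of filtering, here is a sketch assuming you already have a local dump of your own comments; the JSON field names and keyword list are invented for the example, not the format of any particular export tool.

```python
# Flag comments worth reviewing by hand: negative karma or a keyword match.
# The file format ({"karma": int, "body": str, "url": str} per comment) is an
# assumption made for this example.
import json

SENSITIVE_KEYWORDS = ["example_keyword_1", "example_keyword_2"]  # placeholders

def comments_to_review(path):
    with open(path) as f:
        comments = json.load(f)
    flagged = []
    for c in comments:
        negative = c["karma"] < 0
        keyword_hit = any(k.lower() in c["body"].lower() for k in SENSITIVE_KEYWORDS)
        if negative or keyword_hit:
            flagged.append(c["url"])
    return flagged

if __name__ == "__main__":
    for url in comments_to_review("my_comments.json"):
        print(url)
```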

comment by A1987dM (army1987) · 2014-11-24T18:11:53.230Z · LW(p) · GW(p)

I've changed my password to a temporary one and sent you a PM with it.

comment by Lumifer · 2014-11-24T17:45:03.054Z · LW(p) · GW(p)

Keep in mind that you can delete posts from LW, but you can't delete things from internet archives.

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-24T18:28:31.973Z · LW(p) · GW(p)

I'm mostly worried about people stumbling upon LW e.g. from the title text of that comic, browsing the site, reading my comments, and recognizing my username from elsewhere. Granted, someone motivated enough to overcome trivial inconveniences in order to doxx me could still do so, but I don't think that's likely enough to happen for me to worry about it.

Replies from: Nornagest
comment by Nornagest · 2014-11-24T23:52:57.328Z · LW(p) · GW(p)

At a guess, I'd say that the chances of there being:

  • someone you know
  • who read that particular XKCD
  • and was led to LW for the first time as a consequence of it
  • and continued to read enough of the site to stumble on your username
  • and was motivated to dox you for whatever reason

...is too low to motivate precipitous action. XKCD's pretty popular, but it's not so popular that I'd expect this to lead to a very big spike in long-term readership; at most you might want to remove your location tag (which I see you've already done) and maybe lurk for a while.
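
To make the shape of that estimate concrete, here is a toy multiplication; every number below is a made-up placeholder rather than a figure anyone in the thread gave, and the only point is how fast a conjunction of several small factors shrinks.

```python
# Placeholder conditional probabilities for the chain of conditions above.
p_read_that_xkcd      = 0.3    # P(read that particular XKCD | you know them)
p_followed_link_to_lw = 0.05   # P(came to LW for the first time | read it)
p_found_username      = 0.05   # P(read enough to hit your comments | came to LW)
p_motivated_to_dox    = 0.01   # P(bothers to dox you | found the username)

p_total = (p_read_that_xkcd * p_followed_link_to_lw
           * p_found_username * p_motivated_to_dox)
# With these placeholders the product is about 7.5e-06 per acquaintance,
# which is the shape of the "too low to motivate precipitous action" argument.
print(f"P(all conditions hold for a given acquaintance) = {p_total:.2e}")
```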

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-25T10:57:43.590Z · LW(p) · GW(p)
  • You might be underestimating P(X read that particular XKCD | I know X), as I am a physicist, and know a fair number of engineers and computer scientists and a few mathematicians;
  • you might be underestimating P(X continued to read enough of the site to stumble on my username | X was led to LW for the first time) -- I've commented a lot, including on many of the posts linked to on the about page and the welcome threads;
  • it's not motivated doxxing (which I know is very very unlikely) that I'm worried about -- comments which I would mind someone I know in meatspace reading comprise a sizeable minority of all my comments (not just for the consequences to myself -- I'd dislike, as a terminal value, certain people to hear certain things I've said about certain topics, especially other people).
comment by lfghjkl · 2014-11-24T18:17:35.421Z · LW(p) · GW(p)

The easiest solution is to just delete your current account and start a new one. None of your meatspace friends could then know which posts from [deleted] were yours, or even that any of them came from you in the first place (unless they are an LW admin, but then I don't think you should be worried about them knowing you post here).

This solution also has the benefit of not removing valuable comments in old threads (of which, looking at your karma, I assume there are many).

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-24T18:40:41.578Z · LW(p) · GW(p)

You can still tell who wrote such comments by following the permalink and looking at the title of the page.

Replies from: lfghjkl
comment by lfghjkl · 2014-11-24T20:06:26.443Z · LW(p) · GW(p)

Wow, you're right. Someone should probably fix that.

At least deleting your account will make it very hard to track down any of your old posts unless they already know which comments to look for, so if they aren't already aware of LW you'd probably be safe.

comment by Capla · 2014-11-24T16:45:16.858Z · LW(p) · GW(p)

the probability that any given person I know in meatspace will read my comments on Less Wrong just jumped up by orders of magnitude.

Why are you concerned about this?

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-24T17:36:48.744Z · LW(p) · GW(p)

I've written things about other people without their consent, figuring there would be a negligible chance anybody could guess who they were. But now I think that chance, while still not huge, is no longer that negligible.

(I've also written certain politically incorrect things, but as someone working in a non-humanities field over 4000 miles away from Harvard, and who isn't going to apply for a job in the US any time soon, and likely not anywhere else in the Anglosphere either, I'm not terribly worried about that.)

Replies from: Unknowns, Capla
comment by Unknowns · 2014-11-24T17:49:39.872Z · LW(p) · GW(p)

Searching Google for your username leads to a Wikipedia account with fairly detailed information which should be easily identifiable to people who know you personally, so if someone suspected your identity they could probably easily verify it.

comment by Capla · 2014-11-24T17:47:34.200Z · LW(p) · GW(p)

I've written things about other people without their consent,

Could you just delete those things?

Replies from: army1987
comment by A1987dM (army1987) · 2014-11-24T18:42:00.135Z · LW(p) · GW(p)

It'd be a hell of a lot of work to find all of them.

comment by Emile · 2014-11-24T13:12:54.425Z · LW(p) · GW(p)

?! But your name seems even less traceable to yourself than mine is, and I don't worry about that!

(also, you should take into account the probability that they will actually link those comments to you, and that they will then think badly of you because of it, no?)

comment by Unknowns · 2014-11-30T15:50:54.161Z · LW(p) · GW(p)

If there is a future Great Filter, it seems likely it would be one of two things:

1) a science experiment that destroys the world even though there was no reason to think that it would.

2) something analogous to nuclear weapons except easily constructable by an individual using easily obtainable materials, so that as soon as people have the knowledge, any random person can inflict immense destruction.

Are there any strategies that would guard against these possibilities?

Replies from: Izeinwinter
comment by Izeinwinter · 2014-11-30T19:33:43.923Z · LW(p) · GW(p)

1: No. Well, in theory, a presence on the moons of Neptune that could survive indefinitely without contact would do it, but that's not going to happen any time soon.

2: Arguably, we already live in this world. There are very destructive things in the canon of human knowledge, only people don't conceptualize them as weapons at all, but merely as dangers to be avoided. So... good news, this does not work as a filter, and the actually odd thing is that we do* think of runaway supercriticality as a weapon. Conditioning by lots of wars to think of explosions as ways to kill people?

*I'm not going to name examples in this context, because that might theoretically "help" someone to think of said example as a weapon. Which would be bad.

comment by ilzolende · 2014-11-29T23:53:57.911Z · LW(p) · GW(p)

I will donate N dollars to an x-risk organization within the next month. I tried to check what the effective altruism site recommended, but it required an email address. What organization should I donate to?

(N is predefined, and donating to the organization must not take longer than a standard online purchase.)

comment by artemium · 2014-11-26T07:00:23.669Z · LW(p) · GW(p)

This is really worrying. Hubris and irrational geopolitical competition may create existential risks sooner than expected. http://motherboard.vice.com/read/how-the-pentagons-skynet-would-automate-war

comment by blogospheroid · 2014-11-25T16:28:30.494Z · LW(p) · GW(p)

Weird fictional theoretical scenario. Comments solicited.

In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).

We meet an alien race, who are way more powerful than us and they show their values and see ours. We seek to cooperate on the prisoner's dilemma, but they defect. In our dying gasps, one of us asks them "We thought you were rational. WHY?..."

They reply " We follow a version of your meta-golden rule. Treat your inferiors as you would like to be treated by your superiors. In your treatment of super intelligences that were alive amongst you, the ones you call Azathoth and Mammon, we see that you really crushed them. I mean, you smashed them to the ground and then ran a road roller, twice. I am pretty certain you cooperated with us only because you were afraid. We do to you what you did to them"

What do we do if we could anticipate this scenario? Is it too absurd? Is the idea of extending our "empathy" to the impersonal forces that govern our life too much? What if the aliens simply don't see it that way?

Replies from: polymathwannabe, Wes_W, Eliezer_Yudkowsky, Lumifer, Document
comment by polymathwannabe · 2014-11-25T17:24:52.448Z · LW(p) · GW(p)

The whole scenario depends on a reification fallacy. You don't negotiate with, or engage in prediction theory games with, impersonal forces (and calling capitalism a force of nature seems a stretch to me).

comment by Wes_W · 2014-11-25T17:09:16.813Z · LW(p) · GW(p)

Evolution is powerful, but that doesn't make it an intelligence, certainly not a superintelligence. We're not defecting against evolution, evolution just doesn't/can't play PD in the first place. But I'm also not sure how important the PD game is to this scenario, as opposed to the aliens just crushing us directly.

And as long as we're personifying evolution, an argument could be made that the triumph of human civilization would still be a win for evolution's "values", like survival and unlimited reproduction.

We follow a version of your meta-golden rule. Treat your inferiors as you would like to be treated by your superiors.

I don't understand how this rule leads to the described behavior. As written, it suggests that the aliens would like to be crushed by their superiors...?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-11-25T18:08:07.452Z · LW(p) · GW(p)

That's not how TDT works.

Replies from: MrMind
comment by MrMind · 2014-11-26T11:02:36.171Z · LW(p) · GW(p)

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Replies from: wedrifid, IlyaShpitser
comment by wedrifid · 2014-11-26T12:34:07.343Z · LW(p) · GW(p)

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma if you put them in the same room and let them talk to each other. They aren't going to be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
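
A minimal sketch of that point, using the standard textbook prisoner's dilemma payoffs (numbers chosen purely for illustration): a causal decision theorist treats the other player's move as fixed, finds that Defect is the best response to either move, and so defects no matter what was said beforehand.

```python
# One-shot prisoner's dilemma payoffs: (my_move, their_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_choice():
    """Pick the action that maximizes payoff for each fixed opponent action."""
    best_response = {}
    for theirs in ("C", "D"):
        best_response[theirs] = max(("C", "D"),
                                    key=lambda mine: PAYOFF[(mine, theirs)])
    # Defect is dominant: it is the best response whatever the other player does,
    # so prior conversation gives a CDT agent no reason to deviate from it.
    assert set(best_response.values()) == {"D"}
    return "D"

# Two CDT agents, however much they have "communicated", both defect:
print(cdt_choice(), cdt_choice())   # -> D D, payoff (1, 1) instead of (3, 3)
```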

comment by IlyaShpitser · 2014-11-26T17:18:15.567Z · LW(p) · GW(p)

I view TDT as a bit unnatural; UDT is more natural to me (after people explained TDT and UDT to me).

I think of UDT as a decision theory of 'counterfactually equitable rational precommitment' (?controversial phrasing?).

So you (or all counterfactual "you"s) precommit in advance to do the [optimal thing], and this [optimal thing] is defined in such a way as to not give preferential treatment to any specific counterfactual version of you. This is vague. Unfortunately the project to make this less vague is of paper length.

:)


Folks working on UDT, feel free to chime in to correct me if any of above is false.

Replies from: MrMind
comment by MrMind · 2014-11-27T08:11:59.568Z · LW(p) · GW(p)

But doesn't UDT rely on perfect information about the problem at hand?

If so, could it be seen as the limit of TDT with complete information?

comment by Lumifer · 2014-11-25T16:58:34.795Z · LW(p) · GW(p)

Is the idea of extending our "empathy" to the impersonal forces that govern our life too much?

Deification of natural forces is a standard human culture trait. A large proportion of early gods just personified natural phenomena.

Shinto is a contemporary religion that still does that a lot.

comment by Document · 2014-11-26T02:13:04.463Z · LW(p) · GW(p)

Similar "problem"(?): Acausal trade with Azathoth

comment by Punoxysm · 2014-11-25T15:27:50.816Z · LW(p) · GW(p)

In business, almost all executive decisions (headcount and budget allocation, which unproven products to push ahead with aggressively, translating forecasts for macroeconomic risks into business-specific policies, who to promote to other executive level positions, etc.) are made with substantial uncertainty. Or to put it another way, any executive-level decision-maker would be paralyzed without strong priors. This is especially true in fast-changing or competitive markets, where the only way to collect more evidence without direct risk is to let your competitors jump in the water first.

In other words, the kind of certainty we hold out for (often vainly) in science is almost unknown in many aspects of business, and the most critical decisions are often the most uncertain.

It's very "Black Swan" (in the sense of Taleb's whole book, not just tail risk).

Thoughts?

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2014-11-25T16:37:39.804Z · LW(p) · GW(p)

any executive-level decision-maker would be paralyzed without strong prior

I don't think that's necessarily true; just having a high risk tolerance works as well. I also think you underestimate the amount of evidence present -- e.g. in most organizations the next-year budget is a variation on the previous year's budget.

the kind of certainty we hold out for (often vainly) in science is almost unknown in many aspects of business

Yes, of course. That's why, for example, risk management is an important part of doing business but is not normally a big part of doing science...

Replies from: Punoxysm
comment by Punoxysm · 2014-11-25T21:39:15.955Z · LW(p) · GW(p)

Risk tolerance is a good, possibly more correct, way of looking at it. Actually most executives probably have a mixture of risk tolerance and strong priors.

Some businesses can get away with only relatively low-risk, safe decisions and focus on efficient operations. However, I think the majority of businesses, especially newer and growing ones, can't get away with this consistently or for a long time. And most businesses simply don't have that long a life, period.

Setting a budget based off last year's when your revenue is growing 50%+ YoY won't work well.

What I was thinking of more specifically is that something like setting a budget can be defined as a rigorous optimization problem, but with highly uncertain parameters (marginal return on investment from various units of the business). Any decision made implies a combination of prior over those values and risk tolerance.
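
A toy numerical version of that framing, with invented unit names, priors, and a risk-aversion weight (none of the numbers come from a real budget): sample marginal ROIs from the decision-maker's priors and score candidate allocations by expected return minus a risk penalty, so that changing either the priors or the risk tolerance changes the "optimal" allocation.

```python
# Budget allocation under uncertain marginal ROI -- illustrative numbers only.
import numpy as np

rng = np.random.default_rng(0)
BUDGET = 100.0
ROI_PRIORS = {"core product": (0.10, 0.02),   # (mean, std) of believed ROI
              "new product":  (0.30, 0.25),
              "marketing":    (0.15, 0.10)}
RISK_AVERSION = 1.0   # 0 = risk-neutral; larger = more conservative

def score(allocation, n_samples=2_000):
    """Mean return minus a risk penalty, under the ROI priors."""
    units = list(ROI_PRIORS)
    rois = np.column_stack([rng.normal(*ROI_PRIORS[u], size=n_samples) for u in units])
    returns = rois @ np.array([allocation[u] for u in units])
    return returns.mean() - RISK_AVERSION * returns.std()

def random_allocations(n=500):
    """Candidate budget splits drawn at random (a stand-in for a real optimizer)."""
    units = list(ROI_PRIORS)
    for _ in range(n):
        weights = rng.dirichlet(np.ones(len(units)))
        yield dict(zip(units, BUDGET * weights))

best = max(random_allocations(), key=score)
print({u: float(round(v, 1)) for u, v in best.items()})
```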

Replies from: Lumifer
comment by Lumifer · 2014-11-25T21:56:36.014Z · LW(p) · GW(p)

Any decision made implies a combination of prior over those values and risk tolerance.

If you treat budgeting as an optimization problem, you need forecasts, not priors.

I would also suspect that real-life business budgets will be hard to set as "rigorous optimization problems" because in reality you have discontinuities, nonlinear responses, and all kinds of funky dependencies between different parts of the budget.

comment by ChristianKl · 2014-11-25T15:56:57.727Z · LW(p) · GW(p)

It's very "Black Swan".

I don't think you understand what the term means. It's unknown unknowns and not known unknowns. Whether or not an unproven product will succeed is a question about a known unknown.

This is especially true in fast-changing or competitive markets, where the only way to collect more evidence without direct risk is to let your competitors jump in the water first.

I don't think that's true. There are various forms of doing market research that simply involve money but not additional risk.

Replies from: Punoxysm
comment by Punoxysm · 2014-11-25T16:25:18.915Z · LW(p) · GW(p)

I use "Black Swan" in the context of the whole book. That is, we build narratives after-the-fact to explain correct priors as skill and judgment. Also, the greater impact of more uncertain decisions, in a way that ties uncertainty to the impact, is exactly the nature of unknown-unknown black swans (which I'd say the launching of a substantially new product category fits into, in a mild form. The iPod/iTunes was not a black swan for Apple, though they took considerable risks with it. It was a black swan for the music industry.).

Market research is better than nothing, but still has many problems. Most of it wouldn't pass peer review, and we know peer review makes plenty of mistakes. So when taking it into account, decision-makers must apply strong priors.

And on the occasions that market research really is that good, it's a no-brainer; your competitors will do it too.

Replies from: TimS
comment by TimS · 2014-11-26T17:03:30.395Z · LW(p) · GW(p)

I use "Black Swan" in the context of the whole book

Please don't take terminology with fairly precise meaning and use it idiosyncratically. At best, you unnecessarily increase your inferential distance. At worst, you dilute the term so that it increases everyone's inferential distance.

Replies from: Punoxysm
comment by Punoxysm · 2014-11-27T02:50:41.067Z · LW(p) · GW(p)

Edited for clarity. Though terms get diluted all the time.

Maybe "Talebian" would be more appropriate.

comment by advancedatheist · 2014-11-24T15:53:49.039Z · LW(p) · GW(p)

I thought this article about coaching in pickup techniques kind of misses the point:

I Took A Class on How to Pick Up Women—But I Learned More About Male Anxiety

http://www.alternet.org/culture/i-took-class-how-pick-women-i-learned-more-about-male-anxiety

I posted in response:

For some reason we have this notion that the young man's "sexual debut," as the scientific literature about human sexuality calls it, happens as an organic developmental stage in the late teens, with a median age of around 17. If a 17-year-old boy picked at random can probably figure out how to close the deal with a girl for the first time, this accomplishment certainly can't depend on coaching or life experience, because what the hell does a 17-year-old boy know? But apparently a nontrivial number of boys in every generation miss this developmental window, and then they wind up in their 20s without an adult man's skill set for dealing with women, like the adult virgins who pay to receive instruction from alleged PUAs. If you have a teenage son, and you can see that girls don't find this boy sexually attractive, that has to affect how you view your son, and in a bad way. Perhaps we should consider earlier and more radical interventions into these boys' lives to help them develop the adult man's skill set for relationships with women, instead of leaving this to haphazard chance because of the romantic nonsense that "the right girl will come along some day."

BTW, in case someone brings up the P-word, I'd like to know how seeing a prostitute will help a young man develop the skills he needs to get into sexual relationships through dating - because I just don't see the connection.

Replies from: Viliam_Bur, chaosmage, MrMind, advancedatheist, bogus, advancedatheist
comment by Viliam_Bur · 2014-11-25T10:33:47.234Z · LW(p) · GW(p)

I'd like to know how seeing a prostitute will help a young man develop the skills he needs to get into sexual relationships through dating

Seeing sex as less "magical" could help reduce tension with trying to get sex.

(By the way, the whole article seems to me like: "Look, some people have less social skills -- let's make fun of them! Oh, they are trying to overcome their weakness -- wow, that's even funnier!" The elephant in the room is that in our culture it is taboo to express empathy towards men and boys.)

Replies from: chaosmage
comment by chaosmage · 2014-11-25T20:12:59.578Z · LW(p) · GW(p)

in our culture it is taboo to express empathy towards men and boys.

Really? I do that all the time and literally nobody has ever tried to stop me or punish me for it. Do your actual personal experiences differ?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-11-25T22:17:20.768Z · LW(p) · GW(p)

FWIW, there are contexts in which I've seen this criticized.

Usually, the context is that someone has started a discussion about some situation in which men or boys have caused suffering or otherwise behaved badly, and someone else has responded by expressing empathy towards the men or boys in question, and the person who started the discussion has criticized the attempt to switch the conversation focus from empathy towards the objects of the behavior, to empathy for the agents of it. (The jargon term for this is "derailing" in many contexts.)

Of course, this is only a subset of the general category of expressing empathy towards men and boys, but it's one that gets a lot of attention.

Replies from: fubarobfusco, bogus
comment by fubarobfusco · 2014-11-25T23:44:15.433Z · LW(p) · GW(p)

This is hardly unique to situations involving gender.

For instance, sometimes this sort of thing happens —

  • Person A makes a decision or takes an action that hurts Person B — perhaps accidentally; perhaps out of negligence or bias.
  • Person B makes a demand — such as restitution for the harm done; or that the situation be corrected so that people like A won't hurt people any more.
  • A or A's supporters ignore or deflect B's demand, saying things such as that A's decision-making role is difficult; that A's guilt over hurting B is unpleasant to A; or that continuing to discuss A's mistake (and not "moving on") is a sign of malice, unfairness, or mental imbalance on B's part.

That's derailing: Person A changing the subject from "A hurt B, and B wants it fixed" to "A's life is so hard and people are being so harsh to A" in order to avoid talking about fixing the situation for B, the injured party.

Replies from: TheOtherDave, bogus
comment by TheOtherDave · 2014-11-26T01:11:35.856Z · LW(p) · GW(p)

Yes, I agree that it's not unique to situations involving gender.

comment by bogus · 2014-11-26T00:15:07.446Z · LW(p) · GW(p)

That's derailing: Person A changing the subject from "A hurt B, and B wants it fixed" to "A's life is so hard and people are being so harsh to A" in order to avoid talking about fixing the situation for B, the injured party.

Let's pick an example to make things more concrete. Person B owns a field, and Person A runs trains on a nearby railroad that throw dangerous sparks onto the field. Person B demands that Person A either stop the trains from passing near his property, or else fit them with a mechanism that will prevent sparks. Now Person A complains that the trains are used by low-income commuters who will be forced to pay unreasonably high prices in order to cover these additional costs. Is Person A "derailing the conversation", or is this a valid point? Extra credit: What might influence your answer to this question?

comment by bogus · 2014-11-25T22:26:22.403Z · LW(p) · GW(p)

"Derailing" is a very broad term actually, as is the synonymous term "manscaping". It just means "I-DIDN'T-HEAR-THAT" and you can use it in all sorts of contexts.

Replies from: Nornagest, TheOtherDave
comment by Nornagest · 2014-11-25T22:31:01.732Z · LW(p) · GW(p)

I think you mean "mansplaining". "Manscaping" means, er, something different.

Replies from: bogus
comment by bogus · 2014-11-25T22:39:54.927Z · LW(p) · GW(p)

Whoops, you're right. This stuff gets quite confusing!

comment by TheOtherDave · 2014-11-25T22:53:23.039Z · LW(p) · GW(p)

Yes, like many terms it has a lot of meanings in different contexts. But I'm operating here within the context that Viliam_Bur and chaosmage established.

comment by chaosmage · 2014-11-25T10:50:14.098Z · LW(p) · GW(p)

I'd like to know how seeing a prostitute will help a young man develop the skills he needs to get into sexual relationships through dating - because I just don't see the connection.

Dating and sex are related skills. I assume we agree a prostitute could give a good intro to sex. So why shouldn't she be a good dating coach too? The young man won't need to fear rejection from her, nor fear being talked about later, so they can role-play in emotional safety. She can still tell him what's going to cause rejection when he's not a customer, and what's going to work better. Best of all, she can lead all the way, past exchanging numbers and kissing all the way to sex etiquette.

Of course there's the drawback of possible shame over having visited a prostitute - but virginity can be a source of shame too. So I figure that for the median male adult virgin, seeing a prostitute would be a net plus, especially if he manages to specifically ask for dating and first-time sex roleplay.

Replies from: Username
comment by Username · 2014-11-25T15:28:13.613Z · LW(p) · GW(p)

(Posted using the anonymous community account; username and password are Username and password)

Dating and sex are related skills. I assume we agree a prostitute could give a good intro to sex. So why shouldn't she be a good dating coach too? The young man won't need to fear rejection from her, nor fear being talked about later, so they can role-play in emotional safety. She can still tell him what's going to cause rejection when he's not a customer, and what's going to work better. Best of all, she can lead all the way, past exchanging numbers and kissing all the way to sex etiquette.

I hear that prostitutes who do that charge a lot -- more than typical 17-year-olds can easily afford, and low-end prostitutes basically just let you masturbate with their bodies.

Replies from: chaosmage
comment by chaosmage · 2014-11-25T19:05:30.530Z · LW(p) · GW(p)

Prostitutes don't need a statutory rape charge any more than anybody else does, so obviously I'm not talking about 17-year-olds. I mean guys of legal age.

Concerning economics, it's hard to compare. Here in Germany, prostitution is legal, the market is efficient, and there are lots of sex workers competent and professional enough to pull off what I described, available for 100-200 euros per hour. I imagine that in places where prostitution is illegal, the situation would be very different - especially if due to the threat of prosecution, potential customers can't simply email their needs and budget to a couple of providers to get a good offer...

Replies from: Username
comment by Username · 2014-11-25T22:09:55.951Z · LW(p) · GW(p)

(posted by another user using this account)

I'm not sure whether this is really a neutral coaching situation. For really independent sex workers, maybe. But I hear that many still work for a pimp, are highly motivated to extract high amounts from the youngster, and wouldn't necessarily provide a neutral, emotionally safe environment. This is from a source with significant (but possibly somewhat out-dated) work experience in this field.

comment by MrMind · 2014-11-25T08:35:30.186Z · LW(p) · GW(p)

I wouldn't be too concerned. The article is a lot less dismissive of PUA than what is usually put forward, even on this site. Plus, it's not as if La Ruina is anything other than another little Mystery clone.

If a 17 year old boy picked at random can probably figure out how to close the deal with a girl for the first time

Based on what I know of my culture (the US or other European countries might differ), not even 17-year-old boys who do get girls know better. They usually get them because of a combination of somewhat better looks, a wider social circle, and inferior opinions on women.
Those who apply for a PUA seminar are the ones who are trying to optimize their understanding of females, leaving aside the fact that you cannot will yourself into being non-anxious. My opinion is that if they could be at ease around the opposite sex, they would wind up with a better sexual life than their "natural" peers.

comment by advancedatheist · 2014-11-24T19:43:59.014Z · LW(p) · GW(p)

Another post I made to this AlterNet piece:

I can see why progressives want to discredit PUA coaches and belittle the men who seek their help, setting aside the question of these coaches' competence at doing what they advertise about themselves.

One, the PUA subculture promotes a politically incorrect view of women which sounds like the world view of traditional, conservative patriarchy, only read in reverse, so to speak: PUA coaches endorse the patriarchal view of women's weaknesses and vulnerabilities, and they teach men how to exploit these for sex by adopting the strategies of old-school cads. And I feel some sympathy for this view of women because to me women seem to have defective agency relative to men. If PUA coaches and writers can make a living with this message, perhaps their advice to men based on this traditional understanding of women has some validity after all.

And two, these men seek to improve themselves in an era of "You didn't build that" and the denigration of the self-made man. They've sought help in civil society and in the market instead of turning to the collectivist institutions created, maintained and thought-policed by progressives. They've rejected the progressive ethic of helplessness, dependency and victimization, in other words, in favor of the conservative ethic of self-reliance.

Replies from: bogus, ChristianKl
comment by bogus · 2014-11-24T20:37:08.211Z · LW(p) · GW(p)

PUA coaches endorse the patriarchal view of women's weaknesses and vulnerabilities, and they teach men how to exploit these for sex by adopting the strategies of old-school cads.

I think most pickup coaches would object to this point of view, and it might make some of them quite unhappy. PUAs teach strategies that they believe will increase your attractiveness to the opposite sex. But it's silly to see attraction as a "weakness" or "vulnerability". Many people (women included, of course) want to feel attracted in the first place, especially to someone with other good qualities - they just don't get to make that choice most of the time! That's the one sense in which 'reduced agency' could be said to be relevant - but it doesn't negate the fact that agency really is quite heavily involved in any kind of pickup.

comment by ChristianKl · 2014-11-25T08:55:55.445Z · LW(p) · GW(p)

If PUA coaches and writers can make a living with this message, perhaps their advice to men based on this traditional understanding of women has some validity after all.

There are a lot of quick-success schemes sold with the same marketing that PUA products are sold with. The fact that people are willing to pay money for a dream of quick success doesn't mean that the sellers can deliver on the promise.

PUA is quite a complex topic.

Male anxiety is an issue, and I don't think that an expensive 3-to-4-day bootcamp normally fixes it. Neither does watching a 24-DVD set sold for $499.

If I could either send an 18-year-old to a tantra seminar or to a PUA seminar, I'm not sure that the PUA seminar is the one that gives the higher return as far as improving his success with the opposite sex.

And I feel some sympathy for this view of women because to me women seem to have defective agency relative to men.

The fact that you believe that might be the problem, and it illustrates a lack of ability to deal with women.

Replies from: Viliam_Bur, Lumifer
comment by Viliam_Bur · 2014-11-25T10:41:34.974Z · LW(p) · GW(p)

Male anxiety is an issue, and I don't think that an expensive 3-to-4-day bootcamp normally fixes it. Neither does watching a 24-DVD set sold for $499.

Irrationality is an issue, and I don't think that reading the Sequences normally fixes it. Neither does a 3-day rationality seminar for $3900.

Still, for some people it's a good option.

If I could either send an 18-year-old to a tantra seminar or to a PUA seminar, I'm not sure that the PUA seminar is the one that gives the higher return as far as improving his success with the opposite sex.

I would expect different things working for different people.

The interesting thing is that the tantra seminar would not motivate people to write similar articles, even though there is likewise no guarantee that it is anything more than someone's strategy to make money quickly.

comment by Lumifer · 2014-11-25T16:51:48.875Z · LW(p) · GW(p)

If I could either send an 18-year-old to a tantra seminar

Tantra isn't really new-age exotic sex practices.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-25T17:26:38.210Z · LW(p) · GW(p)

Wikipedia has little influence on what's practiced in a seminar with "tantra" in the title. At the same time, of course, it's not simply about the stereotype it has, either.

One element of tantra is, for example, strong eye contact. You can go to a PUA seminar and hear a lecture by a guy about holding eye contact. That often leads to guys going out and being uncalibrated. If, on the other hand, you learn eye contact in a tantra seminar, the resulting behavior is likely much better calibrated.

Replies from: Lumifer
comment by Lumifer · 2014-11-25T17:39:22.719Z · LW(p) · GW(p)

I feel we are using the word "tantra" in entirely different meanings.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-25T18:09:39.825Z · LW(p) · GW(p)

I speak about the kind of event that's titled a tantra seminar and take my knowledge of what happens there from people I meet in meatspace who took part in such events.

Replies from: None
comment by [deleted] · 2014-11-25T23:55:18.673Z · LW(p) · GW(p)

Well, what happens there?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-26T10:51:26.044Z · LW(p) · GW(p)

That's a fair demand, but I don't want to go into too much detail on that point. There's a lot of inferential distance in talking about New Age practices on LW, and Tantra isn't a subject I've studied deeply enough to be confident that I fully understand its theory base.

comment by bogus · 2014-11-24T16:44:03.131Z · LW(p) · GW(p)

Yeah, that article has a weirdly dismissive tone. It reads like pickup is all about helping these 'painfully shy', inexperienced guys boost their self-confidence, and there's nothing more to it than that. But ISTM that folks who sign up for a random intro bootcamp are quite likely to be a lot shier and more introverted than average. There's quite a bit of innovative stuff in pickup, but people probably come across it on internet forums, or perhaps through proprietary guides/videos or in the most 'elite', costly workshops/bootcamps.

Replies from: advancedatheist
comment by advancedatheist · 2014-11-24T16:59:27.610Z · LW(p) · GW(p)

I've noticed a similar lack of understanding in other men who had their sexual debuts at developmentally appropriate ages. It becomes a kind of cognitive barrier separating sexually experienced men from the inexperienced ones.

I also notice a lack of curiosity about this phenomenon in professional sex researchers. I have three different college textbooks of the Human Sexuality 101 sort, and none of them has a section on adult virgins, much less adult male virgins.

Replies from: MrMind
comment by MrMind · 2014-11-26T11:31:13.199Z · LW(p) · GW(p)

I also notice a lack of curiosity about this phenomenon in professional sex researchers.

That's the thing that bugs me the most. Why can't we just have quality research on the subject?

comment by advancedatheist · 2014-11-24T16:06:24.471Z · LW(p) · GW(p)

More along these lines by Dr. Helen Smith, the wife of blogger Glenn Reynolds, the Instapundit:

Geeks on Strike?

http://pjmedia.com/drhelen/2014/11/20/geeks-on-strike/

She references Vox Day's observations about how many young men these days find themselves alienated from young women, hence their willingness not to pull their punches when female social justice warriors start to mess with their gaming activities. What can these young women really do to these guys to punish them - withhold sex? They've already done that. Rejections have consequences.

Replies from: Viliam_Bur, bogus
comment by Viliam_Bur · 2014-11-25T11:47:49.255Z · LW(p) · GW(p)

I believe that it is a factor, though it is far from being the only one, and probably not even the most important. But it points in an interesting direction.

Okay, some political stuff here, because the topic is inherently political, and I even want to go one step more meta, which is deeper in politics:

Feminists have been complaining for a long time about traditional power structures in our society. That is a legitimate complaint in my opinion, but I disagree with their choice of the word "patriarchy", because it has the unfortunate connotation that the traditional power structures are merely something that (all? most? some?) men do to women, and so it makes us blind to things that some women do to men to maintain the traditional power structures. Suggesting that women as a group even have some kind of social power is probably already a heresy.

The list of the techniques women are traditionally allowed to use against men is here. They are mostly ad-hominem arguments that a woman (for more powerful impact: a group of young women; but also their male defenders) can use against a man who tries to step out of line.

"You are bitter!" "You hate women!" Because everyone is already primed to see men as dangerous and hateful. "You are afraid!" "Man up!" When convenient, the stereotypes of masculinity become a useful tool to shame men. "You are immature!" "Grow up!" Again a reminder of failing the traditional role. "Stop whining!" "Your fragile male ego!" People have less empathy towards men, so remind them to not expect any. "You just can't get laid!" "You probably have a small penis!" Even this kind of argument is relatively accepted against men. It doesn't prove anything, it just suggests that the man is somehow defective, therefore low-status, therefore his opinions don't matter.

Each of these critiques makes more or less sense separately, but when we take them together, it becomes apparent that as a set they can be used in any situation. A man can be shamed for following his traditional gender role and for deviating from it. Maybe even both at the same time. Neither power nor weakness is acceptable. Perhaps, as a rule of thumb, a man should follow all his traditional obligations (get a job, make a lot of money, move all the heavy objects) but should not expect any traditional advantages (because that would be sexist). Even having a hobby is suspicious, unless the man can explain how the hobby will help him make more money in the future. In our culture, men have instrumental value; only women have terminal value. (Unless the man is really high-status, in which case different rules apply.)

So, in a way, if feminists complain about the traditional gender roles, they should celebrate gamers as allies, because those break the male stereotypes, and they do it on their own, no education or propaganda or change of laws necessary. But of course there is a difference between being a feminist in a sense "trying to change the traditional power structures (patriarchy)" and in a sense "cheering for the 'team women'". It's situations like this when the difference becomes visible; when weakening "patriarchy" also removes some systemic power from the "team women".

Equality comes at a price. The price is that you don't have servants anymore. If you complain about it, you probably didn't want equality in the near mode, only as a far-mode slogan.

From a proper point of view, gamers' resistance towards patriarchal shaming techniques is an important victory of feminism. However, I would not be surprised if most self-identified feminists don't get it.

What can these young women really do to these guys to punish them - withhold sex?

And what about women in gaming? Or gays, or asexuals? (Of course the official party line is that they don't exist.) All these people are now considered equal and respected members of society... which includes the right to not give a fuck about what some young ladies are telling them to do.

Again, the true equality works both ways.

Replies from: NancyLebovitz, IlyaShpitser, NancyLebovitz
comment by NancyLebovitz · 2014-11-25T16:21:58.700Z · LW(p) · GW(p)

People underestimate the effect of the worst behaved people on their own side.

This being said, unless I've missed something (quite possible), feminists don't have a comparable history of doxing and violent threats.

Replies from: Salemicus, Viliam_Bur
comment by Salemicus · 2014-11-25T16:45:22.151Z · LW(p) · GW(p)

Feminists do have a long history of doxing. My impression is that they don't make the same level of violent threats, but they certainly aren't rare. For example, Chloe Madeley.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-25T17:27:32.676Z · LW(p) · GW(p)

Details about the history of doxxing?

comment by Viliam_Bur · 2014-11-25T22:30:59.468Z · LW(p) · GW(p)

feminists don't have a comparable history of doxing and violent threats

You mean feminists in general, or just recent events?

EDIT: By the way, in the second link, the victim is a feminist, too.

Replies from: NancyLebovitz, TimS
comment by NancyLebovitz · 2014-11-26T01:18:33.151Z · LW(p) · GW(p)

Yeah, and you could throw in Erin Pizzey having been threatened for saying that a bit more than half the women in her domestic violence shelter were violent themselves.

Still, the list so far isn't comparable to the number of women who've been threatened just over GamerGate.

Replies from: Viliam_Bur, VoiceOfRa
comment by Viliam_Bur · 2014-11-26T11:25:36.402Z · LW(p) · GW(p)

I'm at a huge risk of motivated thinking here, but I want to make a few points:

1) Not all forms of "threatening" are equal. For example, killing someone's dog is much worse than sending someone a tweet saying "i hope you die". If we put these things in the same category, by such a metric the latest tumblr debate may seem more violent than WW2. Also, the threats of blacklisting in an industry seem to me less serious, but also more credible, than the threats of physical violence.

2) We have selective reporting here, often without verification. Journalists have a natural advantage at presenting their points of view in journals. Also, one side makes harassment their central topic (and sometimes a source of income), while for the other side complaining about being harassed is tangential to their goals. I haven't examined the evidence, but it seems to me there are almost no cases, on either side, where the threat is (a) documented, and (b) credibly linked to the opposing side, as opposed to a random troll, or some other unrelated conflict.

3) Let's not forget the parallel NotYourShield campaign: threats against gamers and game developers are technically also threats against women, and there are quite possibly more women in GamerGate than in gaming journalism. Women are women even when they are not marching under the banner of feminism.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-26T14:48:16.448Z · LW(p) · GW(p)

Yeah, I'd say motivated thinking.

Not all forms of threatening are equal, but "I'm having extremely violent fantasies about you and I know where you (and your children) live" isn't a tiny thing, and it goes rather beyond "I hope you die". (Is there a name for the rhetorical trick of choosing, not just a non-central example, but a minimized non-central example?)

Part of the point is that women are sometimes the target of harassment campaigns online. Some of the attackers may have an interest in the ostensible issue, some may be pure trolls. It seems as though a lot of the attackers are male.

I doubt that there are a number of women who left their homes because of nothing in particular.

When I mentioned above that people underestimate the effect of the worst people on their own side, I meant that just as I tend to underestimate the way feminism can add up, I think you're underestimating the number and forcefulness of the vicious people on your side.

I'm still incredibly angry at the way Kathy Sierra was driven out of public life.

Replies from: NancyLebovitz, Viliam_Bur
comment by NancyLebovitz · 2014-11-27T16:00:22.890Z · LW(p) · GW(p)

I'm curious about why this comment got so many downvotes, if anyone would care to try explaining. I'm saying "try explaining" because any one person can only know the reason for at most one downvote.

Replies from: lfghjkl
comment by lfghjkl · 2014-11-27T21:33:21.227Z · LW(p) · GW(p)

Yeah, I'd say motivated thinking.

Comments like these are not helpful. Especially not on a highly politicized topic such as the one the two of you are discussing.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-27T22:44:53.938Z · LW(p) · GW(p)

I don't know if it's enough to matter, but I only mentioned motivated thinking because Viliam brought up the possibility.

Replies from: lfghjkl
comment by lfghjkl · 2014-11-27T22:54:14.599Z · LW(p) · GW(p)

The problem is that no matter your intentions the phrase reads as a complete dismissal of Viliam_Bur's argument. That is how these discussions turn ugly.

comment by Viliam_Bur · 2014-11-26T17:13:59.595Z · LW(p) · GW(p)

Would this qualify as a sufficiently scary threat? Both men and women receive various kinds of abuse online. I would guess that most of the aggressors are men, but victims are of both genders. Being a victim of online harassment is not a uniquely female experience, although some specific forms of harassment may be, mostly of a sexual kind. I would also guess that victims of "swatting" are typically men, but I have no data about it.

Now I feel it would be good to split the debate into two completely separate topics: feminism and GamerGate. Debating them as if they are the same thing would make this all extremely confusing. Framing GamerGate as "angry white men against feminists" is merely propaganda of one side; in reality, both sides include angry white men, and both sides include feminists.

1) I believe I have read a few stories about violent behavior of feminists, but I usually don't keep records of things I read online. If my memory is reliable, the complaints about abuse from feminists usually came from LGBT people, although officially the feminists are supposed to be on their side. Googling for "violent feminists" mostly brings false positives, but also this.

I admit I am confused about the phenomenon of online SJWs. Are they supposed to be a part of feminism, or is that a separate thing? Because their opinions seem similar to some extreme feminist opinions. It seems to me these people do a lot of online harassment, although on the internet it is difficult to prove something isn't merely trolling. And generally, even if someone is a feminist, that doesn't mean that everything they do is done in the name of feminism.

2) Here is a collection of abuse towards pro-Gamergate people. Again, it's difficult to prove who did that. We would have to debate each piece of evidence individually, but I'd rather avoid that.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-26T18:20:52.337Z · LW(p) · GW(p)

That first link strikes me as not extremely scary, and it seems to be a rant rather than a threat which was sent to someone in particular. Furthermore, it doesn't have specific details about injuries and degradation. It isn't a photoshopped image of the person being threatened, either.

Gamergate is hopelessly weird-- as you may know, the initial post was basically a man talking about having been emotionally abused by a woman, with only a minor mention of games and journalism, and it morphed into something completely different.

As far as I can tell, SJWs consider themselves to be part of feminism and/or the one true feminism. I haven't seen a claim anywhere that they aren't feminists, and I have seen at least one suggestion that there's no point in saying that they aren't feminists, even if they're wrong-headed.

It wouldn't surprise me if a lot of moderate feminists (like most people) aren't engaging with SJs because that looks like a lot of work and no fun.

Replies from: hawkice
comment by hawkice · 2014-11-29T18:47:42.844Z · LW(p) · GW(p)

Is it just me or is this a proxy bravery debate? Are we collectively committed to getting to the bottom of who / which tribe is the true victim of those mean people on the internet? I'm not entirely sure why this has been promoted to the level of "have two extremely smart LW posters discuss". You both are quite keen thinkers, and I imagine the topics this funges against for your attention will delight yourselves and the wider LW community even more.

comment by VoiceOfRa · 2015-10-28T17:52:46.017Z · LW(p) · GW(p)

Still, the list so far isn't comparable to the number of women who've been threatened just over GamerGate.

Well, the number of women who appear to have been threatened over GamerGate (as opposed to the number of women who claim to have been threatened, where the evidence vanishes whenever the allegations are investigated) appears to be 0. Furthermore, given your recently demonstrated lack of ability to determine whether something is a threat (hint: someone saying something that might imply he believes something you find threatening is not a threat), you probably shouldn't be making judgements on these issues.

comment by TimS · 2014-11-26T16:25:28.913Z · LW(p) · GW(p)

I could be wrong, but I thought the consensus was that your recent event example was not a dox of A by B (or was only a link to a public dox by a third party).

That said, it's very clear that A and B don't like each other and spin the facts unfavorably about each other.

comment by IlyaShpitser · 2014-11-25T17:07:34.260Z · LW(p) · GW(p)

Here is a problem with an interest group:

http://thinkprogress.org/world/2014/03/05/3362801/nra-ivory-elephants-guns/

It's easy to hate the NRA if you come from certain parts. But the NRA is not very unusual in this respect. Interest groups, by their nature, are unable to have the overview to know when to throw their cause under the bus for the "greater good." This is a general problem for all interest groups, regardless of whether their cause is noble or not.


The real question is how we fight Moloch by a method other than competing interest groups (which will follow the usual "behavior physics" of interest groups, from which feminism is not exempt, regardless of how noble its goal is).

Replies from: Salemicus, Lumifer, None
comment by Salemicus · 2014-11-25T17:37:03.523Z · LW(p) · GW(p)

Like Lumifer, I think the NRA is doing the right thing here - even strictly from a conservationist perspective. If we all stopped eating eggs, would there be more chickens? Of course not. When I mentioned similar logic here, at least the vegetarians were honest that they wanted to drastically reduce the chicken population. But if using fewer chicken products leads to fewer chickens, how will using fewer elephant products lead to more elephants? And note that these two contradictory answers are frequently pushed by the very same people.

If you really wanted to preserve elephant populations, you'd make it easier for people to farm them for their ivory, which would go, in part, into making gun handles. But because the NRA are culturally alien to you, you'd like to throw their cause under the bus "for the greater good," for the very slightest reason.

So yeah, we all want causes we don't care about to shut up and get out of our way. It's a good thing that we can't make them. After all, NRA members aren't just gun enthusiasts, they are also citizens in every other way. If NRA policy interferes too much with (say) economic wellbeing in the eyes of its members, then the NRA will lose force as an interest group.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-11-25T17:50:12.111Z · LW(p) · GW(p)

I think the NRA is doing the right thing here - even strictly from a conservationist perspective

I think maybe you do not realize how poor the institutions are here. There isn't some actor with a long-term overview maximizing ivory profits (and incidentally ensuring elephants continue as a species). Commercial overexploitation of resources in the biosphere is extremely common, and requires coordination to solve properly (see, for example, the collapse of cod stocks in the Atlantic, one historically important case for Europe). Collapse (the book) gave some examples where the coordination problem of long-term exploitation of the environment was solved properly, and examples where it wasn't.

But my point isn't about the NRA or environmentalists specifically; I just used them as an example. My point is about a general problem with interest-group ecosystems. If an interest group advocates a bridge to nowhere, it is not going to lose force; it is doing precisely what it is meant to do.


But because the NRA are culturally alien to you

I would like to add here that I have been very very careful not to discuss my actual politics. Most of your assumptions about my culture or my politics are false. (So I guess I passed the ideological Turing test?)

Back when I had long hair, I was once accosted by a dude trolling for Obama votes who said: "you have long hair, you must be an Obama supporter!" What you are doing is basically this. Filling a hole with a pigeon is going to be very frustrating for you in this case.

Replies from: Lumifer
comment by Lumifer · 2014-11-25T18:09:45.727Z · LW(p) · GW(p)

requires coordination to solve properly

Not necessarily. An effective solution to the tragedy of the commons is property rights. While at the moment there may not be an actor with a long-term commercial interest in elephants, this kind of legislation is making sure that there never will be one.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-11-25T22:24:19.086Z · LW(p) · GW(p)

Property rights do not magically enforce themselves; you need a government to enforce them for you. Everyone agreeing to a government's monopoly on force is yet another coordination problem. This is not so easy in places where elephant poaching happens. That aside, Collapse had examples where property rights were not sufficient in themselves. You should read it, I enjoyed it a lot!

Replies from: Lumifer
comment by Lumifer · 2014-11-26T01:17:01.813Z · LW(p) · GW(p)

Property rights do not magically enforce themselves, you need a government to enforce it for you.

Again, not necessarily. A private security force works fine -- especially in places where the government isn't... particularly effective. Such governments aren't all that good at coordination, either, by the way.

But the argument boiled down to its core is just incentives. It's much better to have incentives for private people to have herds of elephants roam on their ranches than depend on government bureaucrats who, frankly, don't care that much.

An international ban on ivory trading by itself won't save the elephants -- the locals will just hunt them down for meat and because they destroy crops.

I think you just chose a bad example. Your underlying point that special-interest groups have tunnel vision and are constitutionally incapable of deviating from their charter is certainly valid.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-11-26T11:51:13.901Z · LW(p) · GW(p)

I don't understand what this is about anymore (I think you just like to argue?)

(a) There aren't "private security forces" replacing governments and making Africa a kind of modern-day Snow Crash universe. Governments are mostly weak and corrupt, and there are warlords running around killing folks and each other, and taking their loot.

(b) The way the NRA makes its decisions has nothing to do with the political situation in Africa, the state of elephant herds in Africa, the long-term fate of the African elephant species, or anything like that. They consult relevant gun makers, and decide based on that. This is contrary to the original claim that the NRA was making the correct decision even from a conservationist point of view. They aren't in this case, but if we did the math and found out they did, it would certainly be by accident, because they surely didn't do the math.

(c) Do you actually know how many elephants are killed in Africa for non-ivory reasons?

Replies from: Salemicus, Lumifer
comment by Salemicus · 2014-11-26T12:31:25.054Z · LW(p) · GW(p)

The way the NRA makes its decisions has nothing to do with the political situation in Africa, the state of elephant herds in Africa, the long term fate of the African elephant species, or anything like that... This is contrary to the original claim the NRA was making the correct decision even from a conservational point of view. They aren't in this case, but if we did the math and found out they did, it would certainly be by accident, because they surely didn't do the math.

I didn't claim that they made the correct decision for the right reasons. Of course it's (in a sense) a felicitous coincidence that the NRA is in the right here from a conservationist point of view. But if environmental groups are helping the environment, I'd view that as even more of a felicitous coincidence, given their methods of making decisions.

It's remarkable, but not hugely so, that the policies of a group who care about the property rights of American gun owners should align with strong property rights worldwide, and hence a flourishing environment. It would be far more remarkable if the policies of a group who care about purity rituals should lead to a flourishing environment.

comment by Lumifer · 2014-11-26T15:41:05.426Z · LW(p) · GW(p)

I think you just like to argue?

Only as long as interesting things are being said :-)

There aren't "private security forces" replacing governments

And nobody said that. But hiring guards for your farm/ranch/pasture is quite common and does happen to be private enforcement of property rights.

They consult relevant gun makers

I can't imagine why contemporary gun makers would care about decades-old ivory. If anything, they'd prefer more constraints on sales of old guns as that enlarges the market for new guns.

And I don't think anyone made a claim that NRA's decision was correct from a conservationist point of view. The claim is that the law fails the cost-benefit analysis for certain (implied widespread) sets of values. I am sure ardent environmentalists are happy with it, but not everyone is an ardent environmentalist.

Do you actually know how many elephants are killed in Africa for non-ivory reasons?

Ah, good question. My pre-Google answer would be "some" and if pressed for numbers I'd say 10-20% at the moment, but with not much conviction. Accio Google!

Hmm... Lots of data, but all of it is on "illegally killed" elephants, which isn't particularly useful in this context, as killing elephants is mostly illegal everywhere and so the meaning is just "human-killed". My impression is that in areas with LOTS of poaching the great majority of elephants are killed for the ivory, but in areas with few "illegal kills" the situation may differ. No data to support this impression, though. It also seems that there is a lot of variability in the numbers killed year-to-year.

comment by Lumifer · 2014-11-25T17:18:26.903Z · LW(p) · GW(p)

Here is a problem with an interest group

I don't see a problem. Or, rather, I see a problem with the blanket prohibition on the sale of <100-year-old ivory as it looks unreasonable to me.

Replies from: chaosmage
comment by chaosmage · 2014-11-25T20:05:51.568Z · LW(p) · GW(p)

Do you see a problem with the dwindling elephant population too? If so, are you able to judge which is the greater problem? If so, what is your judgement?

Replies from: Lumifer
comment by Lumifer · 2014-11-25T20:16:18.961Z · LW(p) · GW(p)

Do you see a problem with the dwindling elephant population too?

Yes, of course.

If so, are you able to judge which is the greater problem?

You are engaging in a classic false dilemma fallacy.

Do tell, how does the prohibition on selling 50-year-old ivory help the dwindling elephant population?

Replies from: chaosmage
comment by chaosmage · 2014-11-25T20:47:04.861Z · LW(p) · GW(p)

Lots of existing ivory becomes illegal, leading to a local drop in value, leading to lots of US ivory being traded to countries where it isn't illegal. Right?

So first of all, that sets up excellent opportunities for police sting operations. But it also drives down prices (at least for a few years), making elephant poaching less lucrative.

In parallel to that, the US is setting an example. A lot of countries copy US criminal laws rather than thinking them up from scratch (the War on Drugs being the textbook example), and since almost everyone loves elephants and the ivory trade is a huge and growing threat to them, there'll be a particularly low threshold to copying this one.

Replies from: Lumifer
comment by Lumifer · 2014-11-25T21:28:33.476Z · LW(p) · GW(p)

Lots of existing ivory becomes illegal, leading to a local drop in value, leading to lots of US ivory being traded to countries where it isn't illegal. Right?

Sigh. Wrong. Why don't you at least look at the original link to the article about the ban? Notably, it says (emphasis mine):

Last month, the White House announced a ban on the commercial trade of elephant ivory, placing a total embargo on the new import of items containing elephant ivory, prohibiting its export except in the case of bona fide antiques, and clarified that “antiques” only refers to items more than 100 years old when it comes to ivory.

Replies from: chaosmage
comment by chaosmage · 2014-11-25T22:09:52.271Z · LW(p) · GW(p)

I neither said nor meant it was going to be exported legally. It'll be black market trade, but it'll still respond to market forces, just like drug trafficking does.

Replies from: Salemicus, Lumifer
comment by Salemicus · 2014-11-26T11:22:57.288Z · LW(p) · GW(p)

Hold on. No new ivory products can (legally) be imported into or exported from the US, but ivory products already in the US can still be bought and sold, albeit subject to restrictions. Provided demand for ivory remains roughly constant, and the US continues not to be an ivory producer, we would expect that to lead to a rise in ivory prices in the US market, and almost no ivory being exported (but some being imported on the black market).
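
To make that price reasoning concrete, here's a minimal toy sketch with purely hypothetical numbers and linear demand and supply curves; it only illustrates the direction of the effect and is not based on any real ivory-market data.

```python
# Toy linear supply/demand model -- all numbers are hypothetical.
# Demand falls with price, supply rises with price; the market clears where they meet.

def clearing_price(demand_intercept, demand_slope, supply_intercept, supply_slope):
    # Solve: demand_intercept - demand_slope * p = supply_intercept + supply_slope * p
    return (demand_intercept - supply_intercept) / (demand_slope + supply_slope)

# Before the import ban: domestic stock plus imports are both on the US market.
p_before = clearing_price(100, 2, 40, 1)

# After the ban: the imported share of supply (hypothetically 30 units at any price)
# disappears, while demand stays the same.
p_after = clearing_price(100, 2, 10, 1)

print(p_before, p_after)  # 20.0 -> 30.0: the domestic price rises
```

With the domestic price higher than before, exporting the stuff (legally or otherwise) becomes less attractive, which is why I expect almost no ivory to leave the US.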

comment by Lumifer · 2014-11-25T22:24:19.800Z · LW(p) · GW(p)

So how much ivory do you expect to be illegally exported out of the US as a result of that law?

And if you don't care about legality, why would you export ivory, anyway? The prohibition destroys legal markets, but tends to raise prices in the black markets.

Replies from: chaosmage
comment by chaosmage · 2014-11-25T23:41:01.620Z · LW(p) · GW(p)

The prohibition destroys legal markets, but tends to raise prices in the black markets.

False. Scarcity raises prices, and black market goods are often scarce, but where illegal goods are not scarce (say, street-quality heroin), the profit margins are fairly low, because illegality makes it hard to compete on brand, so everyone competes on price.

So how much ivory do you expect to be illegally exported out of the US as a result of that law?

I don't see how my estimate would matter in the slightest.

Replies from: Lumifer
comment by Lumifer · 2014-11-26T01:11:37.267Z · LW(p) · GW(p)

Scarcity raises prices

And you don't think ivory is scarce in the US..?

I don't see how my estimate would matter in the slightest.

It would because your argument is that US exports will depress prices in the rest of the world. If the US exports amount to half a tusk, it's not going to depress world prices much :-/

In any case, this seems to be descending into bickering. Agree to disagree?

Replies from: chaosmage
comment by chaosmage · 2014-11-26T11:36:40.525Z · LW(p) · GW(p)

you don't think ivory is scarce in the US..?

No, I'm saying this law makes it less scarce, because it makes buyers leave the market.

I can't make an informed prediction of how much ivory is going to leave the US, because I know nothing about future rates of prosecution or the effectiveness of the ivory trade. I imagine that a few people will "help" ivory owners avoid law enforcement by buying their illegal ivory at a sharp discount, then trading it for drugs and letting the drug traffickers get the stuff out of the country. Other, still-legal ivory is going to be traded off too, since it is obvious the legal trend is going only one way. The economic incentives are pretty obvious; it'd be really weird if this didn't happen at least a little. But I can't know how much. If I had to take a wild guess, I'd say 15% of the ivory inside US borders will leave in the next ten years.

Agree to disagree?

No. On what do we still disagree? Much of my argument on the likely effect on the ivory market is prediction descending into outright speculation - but this is all a sub-point answering your refusal to judge whether this or the survival of the elephant species is more important. You disputed neither of my other points on why these are causally linked (the ease of sting operations and the prediction that other countries would copy this law). So this does not appear to be a false dilemma. Which is why I'd like to return to my main point: isn't helping the elephant species worth this law?

Replies from: Lumifer
comment by Lumifer · 2014-11-26T15:18:44.461Z · LW(p) · GW(p)

No.

Suit yourself.

comment by [deleted] · 2014-11-26T18:57:11.775Z · LW(p) · GW(p)

Even ignoring the common good: Why do interest groups so often impede the long-term progress of their own goals?

Why, when X is simple, strong, and sufficient to advance the group purpose, will a group instead focus on advancing some complicated and contentious Y?

Many groups (including some I support) appear genuinely unable to do any long-term strategic thinking at all, or powerless to control their internal social forces.

comment by NancyLebovitz · 2014-11-27T16:14:53.836Z · LW(p) · GW(p)

At least some of the attacks you describe are used against women as well-- in particular the "grow up" or "be tougher because our project is more important than your emotions" range. I'm not sure it's all as gendered as you think.

This being said, there are gendered insults (notably small penis, neckbeard, and sausage fest) that are common among feminists. I've seen some feminists argue against the first two, but not the third.

I'm wondering whether it makes sense to try to keep your opponents' identity small, and not to model a large number of people as one big person with a unified agenda.

comment by bogus · 2014-11-24T17:59:16.268Z · LW(p) · GW(p)

Gamers aren't "pulling their punches" online because SJW don't pull their punches either. It's all random Internet fun anyway until people actually get doxxed (or 'swatted', or worse).

comment by Slider · 2014-11-27T01:27:56.685Z · LW(p) · GW(p)

Studying computers, I have run into Turing's name occasionally. When I actually looked up the papers he wrote that seeded the concepts that carry his name, it was a very refreshing read. To me they stand the test of time well. I knew that Turing committed suicide and that it had to do with his being a homosexual. Now I have learned of suggestions that official institutions might have had a helping hand in that, and that there will be no official apology.

Turing was quite young and what he produced was pretty good stuff. I would have been really excited to read what he would have written if he had been in the field for five times as long. That his lifespan was cut short over something as silly as homosexuality filled me with a great deal of anger.

You can add "not tolerant enough" to your list of reasons why we don't have the singularity yet.

Replies from: polymathwannabe, MrMind, artemium, Slider
comment by MrMind · 2014-11-27T08:05:38.129Z · LW(p) · GW(p)

Yeah, I was thinking about similar themes some days ago. My reference was Galois, a very young genius of the field. After single-handedly inventing group theory, he died. At 20. In a duel. Over a girl (allegedly).

Or Ramanujan. Died because he refused to eat healthily.

There are many examples of geniuses who died early, usually over silly things, and did not have the time to contribute much more to humanity.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-11-27T16:06:15.134Z · LW(p) · GW(p)

Ramanujan died as the result of compulsive behavior from two cultures. He was (so far as I know) doing alright until WWI happened.

comment by artemium · 2014-11-27T18:00:03.097Z · LW(p) · GW(p)

I think your post was interesting, so why the downvote? I'm new here, and I'm just trying to understand the karma system. Any particular reason?

Replies from: ChristianKl, TheOtherDave, RowanE
comment by ChristianKl · 2014-11-28T11:13:26.550Z · LW(p) · GW(p)

The post argues that a single instance proves that lack of tolerance holds back the singularity. That's a stupid argument. It's the kind of argument people make if they operate in the mental domain of politics and suddenly throw out their standards for rational reasoning.

It's also quite naive in that it assumes that having the singularity now would be a good thing. Given that we don't know how to build FAI at the moment, having the singularity now might mean the obliteration of the human race.

comment by TheOtherDave · 2014-11-27T20:09:09.862Z · LW(p) · GW(p)

I don't know, but a pattern I've noticed lately is that posts that can be understood as "soldiers for the progressive side" will often get two or three downvotes pretty quickly, and then get upvoted back to zero over the next few days. (If they are otherwise interesting they typically get lots more upvotes.)

I suspect that pattern is relevant here.

Replies from: bogus
comment by bogus · 2014-11-27T22:40:27.827Z · LW(p) · GW(p)

I've noticed similar things. Probably some knee-jerk votes coming from NRX's, or from folks who just hate seeing political comments here. Or both.

comment by RowanE · 2014-11-28T08:30:46.054Z · LW(p) · GW(p)

It was already downvoted when I saw it, so I didn't give it the most charitable reading; I thought it amounted to little more than a political cheer and not something that belongs here.

comment by Slider · 2014-11-27T02:13:24.093Z · LW(p) · GW(p)

I failed to do basic googling. They are sorry for his fate but won't reverse any official decision.

comment by polymathwannabe · 2014-11-26T13:03:58.443Z · LW(p) · GW(p)

The Wikipedia article on the Ferguson crisis says,

"the population is only one-third white and about two-thirds black"

and then says,

"Ferguson police were twice as likely to arrest African Americans during traffic stops as they were whites"

which only appears anomalous if you ignore the base rate of finding a black driver vs. a white one. (Edited to add: other factors, like how many people in each group own/drive cars, may be relevant.)

There are many valid reasons to worry about racial tensions in that town (e.g. 48 of 53 police officers are white), but the arrest rate is not one of them.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-26T16:51:42.701Z · LW(p) · GW(p)

Statistics don't work the way you think they do. The number is already controlled.

If you come to that conclusion, the thing you should do as a rationalist is "notice confusion". Then you would check the source and would see:

While black residents accounted for 67 percent of Ferguson’s population, black drivers accounted for more than 86 percent of the traffic stops made last year by the Ferguson Police Department, according to a report produced by the office of Missouri Attorney General Chris Koster.

If you want to learn the relevant statistical literacy skills to understand what the sentence "Ferguson police were twice as likely to arrest African Americans during traffic stops as they were whites" usually means, the relevant subject is regression analysis.
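
To make the distinction concrete, here is a minimal sketch with entirely made-up numbers (they are not the Missouri Attorney General's figures): the quoted claim is a rate conditional on being stopped, so the population share never enters that particular calculation.

```python
# Hypothetical counts, invented purely for illustration -- NOT the Ferguson data.
stops = {"black": 860, "white": 140}   # traffic stops per group
arrests = {"black": 86, "white": 7}    # arrests arising from those stops

for group in stops:
    rate = arrests[group] / stops[group]
    print(f"{group}: arrested in {rate:.0%} of stops")

# black: arrested in 10% of stops; white: arrested in 5% of stops.
# That is "twice as likely per stop" no matter what share of the town's
# residents (or of its drivers) each group makes up; the base rate would
# only matter for a claim about stops or arrests per resident.
```

A regression analysis goes further and controls for covariates such as the stated reason for the stop.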

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-26T17:09:56.109Z · LW(p) · GW(p)

Thank you.

comment by [deleted] · 2014-11-30T12:33:20.104Z · LW(p) · GW(p)

Anyone want to have a Winter Solstice Anti-Celebration, in which we express our generalized bitterness with the dark and cold, and then leave all the nice celebratory bits until Midsummer?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-30T13:07:51.417Z · LW(p) · GW(p)

Bitterness at things you can't change isn't a useful mental state. There's no reason to have it.

Replies from: None
comment by [deleted] · 2014-11-30T14:32:49.378Z · LW(p) · GW(p)

Then why is there a Solstice holiday in winter?

Replies from: ChristianKl, Lumifer, bramflakes
comment by ChristianKl · 2014-11-30T14:51:45.308Z · LW(p) · GW(p)

In case you are referring to Christmas, the solstice is on the 21st, while Christmas is, depending on the culture, on the 24th, 25th, and/or 26th.

In any case I don't see how Christmas is about expressing bitterness.

Replies from: None
comment by [deleted] · 2014-11-30T22:10:31.016Z · LW(p) · GW(p)

In any case I don't see how Christmas is about expressing bitterness.

It's basically about pretending it's not so damn dark and cold out. Ironically, only some cultures that put ever so much effort into their Christmas celebrations even bother to have anything at all for Midsummer, which is the actual nice part of the year.

Replies from: bramflakes, Gondolinian
comment by bramflakes · 2014-11-30T22:39:57.686Z · LW(p) · GW(p)

sounds like you're just projecting, honestly

look, people need some good-natured drinking, feasting and gift-exchanging during the depths of winter in order to not go mad or depressed from the bleakness of it all. you don't need that kind of stimulus during the summer months precisely because it's so nice anyway - it'd be like taking medicine when you're healthy.

either you're being deliberately obtuse about the whole thing for some inexplicable reason, or else you've got a serious deficiency in your understanding of social customs

Replies from: None
comment by [deleted] · 2014-12-01T06:49:51.374Z · LW(p) · GW(p)

look, people need some good-natured drinking, feasting and gift-exchanging during the depths of winter in order to not go mad or depressed from the bleakness of it all.

Ah, I think I've figured out my deficiency: no amount of good-natured drinking, feasting, and gift-exchanging stops me from going slightly mad-depressed from winter bleakness.

Replies from: gjm, bramflakes
comment by gjm · 2014-12-01T12:00:58.825Z · LW(p) · GW(p)

Does it reduce the extent to which you do? Could still be worth it for you if so.

comment by bramflakes · 2014-12-01T15:02:57.958Z · LW(p) · GW(p)

This is a medically recognized thing that you should get treatment for.

Replies from: None
comment by [deleted] · 2014-12-01T16:30:54.709Z · LW(p) · GW(p)

Yes, I quite realize! I used to have a light box until I up and moved somewhere warmer and sunnier.

comment by Gondolinian · 2014-11-30T22:30:18.865Z · LW(p) · GW(p)

Ironically, only some cultures that put ever so much effort into their Christmas celebrations even bother to have anything at all for Midsummer, which is the actual nice part of the year.

In the US, which seems pretty big on Christmas, we've got Independence Day in early July.

comment by Lumifer · 2014-11-30T22:14:07.467Z · LW(p) · GW(p)

Because it's a turning point -- the days stop getting shorter and begin to get longer.

comment by bramflakes · 2014-11-30T21:35:18.446Z · LW(p) · GW(p)

are you serious?