Open Thread March 21 - March 27, 2016

post by Gunnar_Zarncke · 2016-03-20T19:54:49.073Z · LW · GW · Legacy · 164 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

164 comments

Comments sorted by top scores.

comment by gjm · 2016-03-23T15:47:53.343Z · LW(p) · GW(p)

In the last few days, a few people have newly joined LW, posting only in a welcome thread and in articles about Gleb_Tsipursky's "Intentional Insights" work. Their comments have been very enthusiastic about II.

Now here's the thing. Intentional Insights is based in the US. Everyone on its leadership team (I think) and advisory board is in the US. Most of its activity, so far as I can see, is focused on promoting ... whatever exactly II promotes ... in US-based media like the Huffington Post. But it just happens that two of these people are a brother and sister (I think) from Nigeria, and a chap from the Philippines. The last person I recall turning up on LW and gushing about how great II is and how wonderful GT's articles are was also from the Philippines. Isn't that odd?

(For the avoidance of doubt, of course there's nothing in any way wrong about being from Nigeria or the Philippines. I'm just asking: isn't this a rather improbable sequence of events?)

Now, Gleb has an answer, sort of:

In fact, many of the people who engage with Intentional Insights content are from developing countries, as we collaborate with international skeptic and reason-oriented organizations such as Atheist Alliance International.

It's not clear what range of organizations Gleb is referring to; the specific one he names (AAI) is indeed an international organization, but I don't see any sign that it's more active in (say) Nigeria than in the US. And none of these II fans who have turned up on Less Wrong has said anything about hearing of either LW or II through any other organization.

I think there is another obvious explanation, which is that these people are being paid to publicize II, and that the reason why the II-fans we see on LW come disproportionately from developing countries is that it's much cheaper to buy publicity from people in developing countries than from people in, say, the US or Western Europe.

Am I too paranoid?

... Oh, look. Twitter feed of LW user Sarginlove. The description on the Twitter account says "works for Intentional Insight". Take a look at Sarginlove's comments and tell me this isn't a deliberate attempt to look like someone not affiliated with II who's just seen their material and been impressed by it.

I don't know quite what Gleb is actually trying to do with II, but I think this goes beyond "weird and creepy" (the usual complaint on LW hitherto, I think) to "actively deceptive".

Replies from: Lumifer, OrphanWilde, MrMind, Gleb_Tsipursky
comment by Lumifer · 2016-03-23T16:00:44.707Z · LW(p) · GW(p)

Sarginlove, that is, Sargin Rukevwe, works as a "virtual assistant". Basically you hire him to do whatever and in this case he seems to have been hired to promote InIn.

It's interesting that his page says he graduated from the Polytechnic in 2013, but his introductory post here says he is a student at that school.

Let me repeat the observation I've made before -- Gleb_Tsipursky is a very clear case of cargo-cult behaviour. He has no clue about marketing, but he's been told which motions to make so that the planes will come and he's making them very earnestly. One of these motions is "native" (or covert) promotion which is designed to look like spontaneous endorsement -- and so he hires a lad from Lagos to post cringeworthy stuff here and everywhere...

P.S. Hey, look, Sarginlove has a Google+ account and his entire post history consists of -- drumroll, please! -- InIn reposts.

I guess he was hired on Dec 3, 2015, amiright?

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2016-03-23T17:18:32.113Z · LW(p) · GW(p)

He's a witch. Burn him already, on balance of probabilities.

Except, do we want to censor commercial speech per se? If people are being paid to say interesting things, why not? If people are talking rubbish and spamming, shouldn't we have a mechanism for silencing them irrespective of whether they're getting money?

I'm ranting insanely about the thyroid-bee in my empiricist-bonnet, totally for free! Why not ban me?

Replies from: gjm, Lumifer
comment by gjm · 2016-03-23T17:41:03.979Z · LW(p) · GW(p)

If someone turns up saying "I've just discovered X and I love it", the information I gain from that is quite different in the cases (1) where they really have just discovered X and love it and (2) where they're saying it because someone paid them to.

Indeed, the fact that these people are presumably being paid isn't the point. The fact that they are promoting something dishonestly is the point. The fact that they're being paid is relevant only as evidence that their promotion is dishonest.

Why not ban me?

Because your ranting is not in fact particularly insane, and because your participation in the LW community is not confined to ranting about hypothyroidism.

If you talked about literally nothing else, and if it transpired that you're only promoting your theory because someone paid you to drum up sales for thyroid hormone supplements, then you'd probably be contributing nothing of value. (Whether banning you would be a good response is a different question.) I mean, it might turn out that actually what you're saying about thyroid hormones is right (or at least enlightening) even though you were saying it on account of being paid, but the odds wouldn't be good.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2016-03-23T17:47:03.546Z · LW(p) · GW(p)

What if I was so convinced I was right that I started a 'Rational Thyroid Treatment Corporation'? (Just teasing now, sorry)


And actually there wouldn't be any point, since the bloody stuff is cheap as chips. I think that might be the problem. There's never been anyone to fight its corner.

Which is verging on conspiracy theory. Except that there's no conspiracy, just perverse incentives.

Which is what we say when we want to say 'conspiracy theory'.


I used to know some Socialist Workers. And one of them used to refer to people as 'lumpen'. One day I asked her if that was what Socialist Workers said when they meant 'common', and she went red and said 'yes' in a very small voice.

Which increased my respect for her a lot. Unfortunately she ruined it all about a month later when at the end of an argument about the correct method of determining wage levels for firemen she completely lost it with the immortal words 'Under Socialism there WOULDN'T BE FIRES'.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2016-03-23T18:49:44.595Z · LW(p) · GW(p)

What if I was so convinced I was right that I started a 'Rational Thyroid Treatment Corporation'? (Just teasing now, sorry)

If you then hired Nigerians to promote it on LW, we would have a problem.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2016-03-25T17:38:12.078Z · LW(p) · GW(p)

Nigerians! How could RTTC afford Nigerians? I paid Tammy Lowe £50 for what appears to be a three year supply of magic thyroid panacea, including several hours of her time and mine. And if I did start making my own and then spend the money to promote it properly, I'd just get undercut. There is no honour in a perfectly competitive market.

comment by Lumifer · 2016-03-23T19:05:52.541Z · LW(p) · GW(p)

I used to know some Socialist Workers. And one of them used to refer to people as 'lumpen'. One day I asked her if that was what Socialist Workers said when they meant 'common', and she went red and said 'yes' in a very small voice.

ROFL...

To quote Karl Marx on who constitutes lumpenproletariat:

Alongside decayed roués with dubious means of subsistence and of dubious origin, alongside ruined and adventurous offshoots of the bourgeoisie, were vagabonds, discharged soldiers, discharged jailbirds, escaped galley slaves, swindlers, mountebanks, lazzaroni, pickpockets, tricksters, gamblers, maquereaux [pimps], brothel keepers, porters, literati, organ grinders, ragpickers, knife grinders, tinkers, beggars—in short, the whole indefinite, disintegrated mass, thrown hither and thither, which the French call la bohème.

comment by Lumifer · 2016-03-23T18:55:28.927Z · LW(p) · GW(p)

The issue is not with commercial speech. The issue is with misrepresentation and deception.

comment by OrphanWilde · 2016-03-24T18:33:18.364Z · LW(p) · GW(p)

As for his motivations, he's already stated them: Gleb is attempting to prove he belongs here. His angle is social acceptance, but he's... critically undersocialized.

Dealing with him is going to be a matter of setting boundaries and making sure he understands them. I think he's probably too useful to get rid of, and besides, he seems likely to go crazy-stalkery if it were attempted.

Replies from: Lumifer
comment by Lumifer · 2016-03-24T18:43:55.281Z · LW(p) · GW(p)

His angle is social acceptance, but he's... critically undersocialized.

Being critically undersocialized is not necessarily a problem at LW :-/

I think Gleb's ambitions are broader. He wants to be the head of a large and successful charity. That would bring him a cornucopia of benefits, from social status to income.

And he is building a tower out of sticks and a runway out of mud so that the metal birds will come and bring treasure.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-03-24T19:39:57.616Z · LW(p) · GW(p)

You do realize that I am a professor and have income, right? In fact, my wife and I are the largest donors to Intentional Insights, contributing about 88% of the 42K operating budget of the organization.

My ambition always has been to spread rationality to a broad audience. Intentional Insights is just an instrumental way to get to that goal. If I see a better way of doing it, I'll abandon InIn and jump on that other opportunity :-)

Replies from: Lumifer
comment by Lumifer · 2016-03-24T19:57:07.038Z · LW(p) · GW(p)

You do realize that I am a professor and have income, right?

Yes, I do. But I don't think a state school pays a lot of money to assistant professors in humanities.

My ambition always has been...

You know what you've spent with all that weaseling around and what you're completely out of? Credibility.

comment by MrMind · 2016-03-24T08:20:26.456Z · LW(p) · GW(p)

Maybe I'm a cynic, but it's pretty commonplace for businesses to hire fake social supporters. Considering that we don't know for certain that these accounts are of that kind, and that it's plausible that LW attracts people from all over the world, what is your suggested course of action?
What would you suggest that people do?

Replies from: gjm, Lumifer, Gleb_Tsipursky
comment by gjm · 2016-03-24T10:50:02.099Z · LW(p) · GW(p)

what is your suggested course of action?

I wasn't suggesting any particular course of action, unless you interpret "action" broadly enough to include this: I suggest that LW participants who encounter newcomers raving about how great Intentional Insights is or how wonderful Gleb's articles are should be aware that they may be raving only because they've been paid to do so, in which case their ravings give pretty much exactly zero evidence of anything either effective or appealing in II's material or Gleb's articles here.

Replies from: Lumifer
comment by Lumifer · 2016-03-24T18:37:51.782Z · LW(p) · GW(p)

in which case their ravings give pretty much exactly zero evidence

Au contraire, they do give evidence.

To quote Maggie, "it's like being a lady... if you have to tell people you are, you aren't." And if you have to hire people to shout at street corners that you're a lady... X-)

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2016-03-25T17:48:18.288Z · LW(p) · GW(p)

Hmmm.... Only the true messiah denies his divinity?

I've got a visceral contempt for advertising, but I also think that's me being irrational. Plenty of good stuff needs paid promotion to get noticed. There are good ideas that spread on their own, but I don't think that spreadiness <=> good.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2016-03-27T20:53:04.924Z · LW(p) · GW(p)

Good marketing isn't about saying: "Hey look at me I'm the greatest."

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2016-03-30T23:16:14.168Z · LW(p) · GW(p)

What about 'Coke is it!', or Muhammad Ali?

I'm sure there are more. I know nothing about marketing, but these seem to have worked.

Replies from: ChristianKl, gjm
comment by ChristianKl · 2016-04-04T22:40:00.126Z · LW(p) · GW(p)

'Coke is it

That statement doesn't contain any direct value judgement about Coke. It's about making Coke a default.

Simon Anholt recounts in one of his talks how Nike's "Just do it" brand is a tool for Nike to spend less time in meetings discussing purchasing decisions for office furniture. It allows any manager to just buy the "Just do it"-desk, so they don't have to hold a meeting about whether to buy a more classy or a more hip desk.

Muhammad Ali is a special case. When he says "I'm the greatest" people might think that he's an arrogant asshole, but he's an arrogant asshole who can beat up everyone. That's a persona that's interesting for the media to talk about. He was antifragile against journalists considering him to be an arrogant asshole.

In the case of Intentional Insights there's no reason to polarize people the way Muhammad Ali polarized by claiming he's the greatest and generally doing his own press interviews instead of letting his managers do them.

comment by gjm · 2016-03-31T10:52:25.448Z · LW(p) · GW(p)

I have never drunk Coke or watched a boxing match, but my impression is that Coke's and Ali's slogans were only able to be effective because (1) lots of people already really liked drinking Coke and (2) Muhammad Ali was in fact a really good boxer.

I think the "real thing" / "Coke is it" slogans were adopted exactly because other companies were making their own competing products that were intended to be like Coca-Cola. So they were aimed at people who already liked Coca-Cola, or who at least knew that Coca-Cola was a drink lots of people liked, saying "That thing you admire? It's our product, not any of those inferior imitations".

So perhaps we can amend CK's comment to something like this: Good marketing isn't about saying "look at me, I'm the greatest" except in some special cases where people are already looking at you and at least considering the possibility that you might be the greatest.

I still don't know whether it's right, though. I would be entirely unsurprised to hear of a product that had a lot of success by going in with a we're-the-best marketing campaign very early in its life.

[EDITED to remove superfluous parentheses.]

comment by Lumifer · 2016-03-28T00:54:28.951Z · LW(p) · GW(p)

Plenty of good stuff needs paid promotion to get noticed.

The critical difference here is between good promotion and bad promotion. It is quite possible to promote the idea that you're a lady, it's just that it does not involve hiring people to shout at street corners.

comment by Lumifer · 2016-03-24T14:30:19.270Z · LW(p) · GW(p)

it's pretty commonplace for business to hire fake social supporters

So, is InIn a business that hires fake social supporters? And is LessWrong one of those "social media channels" that they "manage"? Inquiring minds want to know.

comment by Gleb_Tsipursky · 2016-03-24T15:32:02.559Z · LW(p) · GW(p)

Just to clarify, I have no interest in marketing InIn content to Less Wrong. That would be stupid; everyone on LW but the newbies would benefit much more from more complex writings than InIn content. InIn is an outward-facing branch of the rationality movement, not a (mostly) inward-facing one like CFAR. I'm trying to get InIn participants involved in LW to help them grow more rational after they have already become familiar with InIn content and can go beyond it, to venues such as ClearerThinking, CFAR, and LW itself.

It's not surprising that folks who come to LW from InIn would appreciate both InIn content and stuff that looks like InIn content, namely beginner-oriented materials. However, as I mentioned above, due to Eliot's suggestion, I will wait to get more InIn audience members involved in LW until it has a newbie thread.

comment by Gleb_Tsipursky · 2016-03-23T23:50:42.339Z · LW(p) · GW(p)

Upvoted, I appreciate the concern, and thanks for expressing it! Some other folks might have noticed this and been concerned without expressing it openly, so it's good to get this out into the open.

Intentional Insights has an international reach and aim. While we are based in the US, less than a third of our traffic comes from there, and the next three largest sources are India, the Philippines, and Pakistan. We write regularly for internationally-oriented venues. We have plenty of volunteers who are from those places as well, and I encourage them regularly to join Less Wrong after they have engaged sufficiently with InIn content.

Most are currently lurkers, but as I have seen positive changes coming with the LW 2.0 transformation, I encouraged a number to be active contributors to the site. So some have responded, and naturally talked about how they found LW. I'm sad, but unsurprised, to see this met with some suspicion.

Sargin in particular volunteers at Intentional Insights for about 25 hours a week, and gets paid as a virtual assistant to help manage our social media for about 15 hours a week. He decided to volunteer so much of his time because of his desire to improve his thinking and grow more rational. He's been improving through InIn content, and so I am encouraging him to engage with LW. Don't discourage him please, he's a newbie here.

However, he made a mistake by not explicitly acknowledging that he works at InIn as well as volunteers at it. It's important to be explicit about stuff like this - his praise for InIn content should be taken with a grain of salt, just like praise from CFAR staff for CFAR content should be taken with a grain of salt. Otherwise, there is an appearance of impropriety. I added a comment to his welcome thread to make that clear.

Thanks for raising this issue, gjm, appreciate it!

EDIT: Edited with a comment I made on Sargin's welcome thread.

Replies from: gjm, MrMind
comment by gjm · 2016-03-24T00:53:42.722Z · LW(p) · GW(p)

Would you like to comment on Beatrice Sargin (his sister, I think) and Alex Wenceslao? Does either of those people receive any compensation from Intentional Insights?

(I'm curious. What does a person do for 25 hours a week when "volunteering at Intentional Insights"?)

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-03-24T01:11:07.427Z · LW(p) · GW(p)

Upvoted, thanks for letting me know they didn't indicate it as well. I should have realized that if Sargin didn't say that, others might not either. Both of them volunteer most of their time, and get paid part of their time.

I'll make sure any future people who both volunteer and get paid at InIn make that clear. It's important to be transparent about these things and ensure no appearance of impropriety.

Separately, I talked to Eliot, and he suggested it would be good to hold off on getting newbies engaged with Less Wrong until the LW 2.0 newbie sub is set up, so I'll hold off on doing that except for people who already signed up with accounts.

Now, on to your question. They work on a variety of tasks, such as website management, image creation, and managing social media channels such as Delicious, StumbleUpon, Twitter, Facebook, Google+, etc. Here's an image of the organizational Trello showing some of the things that they do (Trello is a platform for organizing teams). We also have a couple more people who do other stuff, such as YouTube editing, Pinterest, etc.

EDIT: Edited to add image.

Replies from: gjm
comment by gjm · 2016-03-24T03:46:55.931Z · LW(p) · GW(p)

Looks like you forgot to do it with JohnC2015, who has just appeared and is singing from the same hymnsheet as all the others: hi, I'm a newbie from the Philippines who has just happened to come across all this stuff, and wow, Gleb Tsipursky is awesome!

Any bets on whether JohnC2015 is also paid by Intentional Insights to promote them? I'm sure we wouldn't want any appearance of impropriety.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-03-24T15:29:01.336Z · LW(p) · GW(p)

I didn't forget, he just had not introduced himself at the time I was replying to Alex and Beatrice (you can check the timestamps). He must not have seen the e-mail I sent after talking to Eliot by the time he posted. I did comment on his welcome thread now.

BTW, just to clarify, I have no interest in marketing InIn content to Less Wrong. That would be stupid; everyone on LW but the newbies would benefit much more from more complex writings than InIn content. InIn is an outward-facing branch of the rationality movement, not a (mostly) inward-facing one like CFAR. I'm trying to get InIn participants involved in LW to help them grow more rational after they have already become familiar with InIn content and can go beyond it, to venues such as ClearerThinking, CFAR, and LW itself. However, as I mentioned above, due to Eliot's suggestion, I will wait to get more InIn audience members involved in LW until it has a newbie thread.

Replies from: Lumifer
comment by Lumifer · 2016-03-24T15:59:05.575Z · LW(p) · GW(p)

I'm trying to get InIn participants involved in LW

The word you are looking for is "employees". Out of the four people from InIn who recently popped up on LW, 100% are being paid by you.

to help them grow more rational

Funny how they start growing more rational by loudly proclaiming the awesomeness of InIn...

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-03-24T16:53:10.113Z · LW(p) · GW(p)

The word you want is "employees"

I didn't use that word because I'm not only trying to get people who are paid by InIn to engage with LW :-)

For example, see Lisper's participation here and here, or RevPitkin. Neither is being paid by InIn.

Hope that clarifies things!

Replies from: gjm
comment by gjm · 2016-03-24T17:33:38.754Z · LW(p) · GW(p)

The differences are rather stark.

The ones you're not paying turn up because they have some specific thing to say that they think will be interesting (e.g., about the relationship between religion and rationality). They behave more or less like typical Less Wrong participants.

The ones you're paying turn up to gush in the comments to your articles about how wonderful the articles are, and how great Intentional Insights is, and how excited they are to be growing in rationality (without any specifics about what they've actually learned and how it's helping them). They behave more or less like typical blog comment spammers.

Replies from: OrphanWilde, Gleb_Tsipursky
comment by OrphanWilde · 2016-03-24T17:53:46.718Z · LW(p) · GW(p)

Additionally, the individuals who suspect Gleb of manipulating upvotes to try to be taken more seriously all just updated.

I defended a post as not representing Less Wrong because it was massively downvoted, demonstrating the community attitude towards the post. The reciprocal - that upvotes equate to community support - clearly doesn't hold, but it does waggle its eyebrows suggestively and point.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-03-24T19:07:18.285Z · LW(p) · GW(p)

If I were interested in manipulating upvotes, it would be much easier for me to create sock puppets, or have volunteers/virtual assistants create sock puppets, than to have people take their time to introduce themselves and engage with Less Wrong.

Replies from: OrphanWilde
comment by OrphanWilde · 2016-03-24T20:07:34.907Z · LW(p) · GW(p)

"That's not how I would do it" generally is not the best way to respond to something you take as an accusation. Especially when you include a novel element in your protest, namely, the "have volunteers/virtual assistants create sock puppets" piece.

Firstly, because you don't want to share those kinds of ideas. Secondly, because you've added details to a story you're trying to repudiate, making it seem more likely.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-03-24T22:01:44.739Z · LW(p) · GW(p)

Good points, upvoted. I fall into this trap too often. Thanks for the helpful suggestions!

comment by Gleb_Tsipursky · 2016-03-24T19:12:04.356Z · LW(p) · GW(p)

Like I said, it's not surprising that they expressed enthusiasm about the idea and content of Intentional Insights. They found out about rationality from InIn, and are volunteering about 2/3 of their time on average, because they're enthusiastic about this topic.

Blog comment spammers usually promote something with a link attached. These people involved with InIn take the time to introduce themselves and describe their perspectives, and do not post links.

More broadly, like I said earlier, InIn is an outward-facing arm of the rationality movement, not an inward-facing one. There's no need or interest for InIn to promote its content to LWers.

comment by MrMind · 2016-03-24T08:25:49.041Z · LW(p) · GW(p)

I'm sad, but unsurprised

Ah, planning fallacy. If you're not surprised by the negative turn of events, you could possibly have anticipated it and corrected for it.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-03-24T15:09:42.444Z · LW(p) · GW(p)

Fair enough! I told all InIn participants to indicate their association with Intentional Insights, but I should have been more specific in telling those who are paid by Intentional Insights for the work they do to acknowledge this in their welcome threads.

comment by Viliam_Bur · 2016-03-20T20:49:44.471Z · LW(p) · GW(p)

moderator action: Torchlight_Crimson is banned

Another account of Eugine_Nier / Azathoth123 / Voiceofra / The_Lion / The_Lion2 / Old_Gold is banned, effective now. This is an enforcement of the already existing ban, therefore only this message in Open Thread.

EDIT: Also Crownless_Prince.

Replies from: SanguineEmpiricist, gjm, username2
comment by SanguineEmpiricist · 2016-03-21T01:41:14.465Z · LW(p) · GW(p)

What's going on in his thought process? Is he still downvoting people? What is he doing that's this bad? I mean, I'm sure there's a good reason, but it's sort of strange that he keeps coming back and not changing his behavior instead of moving on to one of our tangent communities.

I've not dealt with him, so can someone explain to me what he is doing?

Replies from: Viliam_Bur, gjm, Elo
comment by Viliam_Bur · 2016-03-21T19:08:15.812Z · LW(p) · GW(p)

The way I see it, deleting Eugine's new accounts is a continuous enforcement of the permanent ban from 2014 (explained here). Whether he continues in his previous behavior should in theory be irrelevant; I would delete his new accounts anyway, because that's what "permanent" means. But in practice, he continues with his old behavior, which makes him easier to detect, and motivates me to overcome my laziness.

comment by gjm · 2016-03-21T13:05:47.653Z · LW(p) · GW(p)

Yes, still downvoting. As Elo says, that's one way to find him, though there are others (e.g., some of his aliases have been identified very quickly, probably on style alone).

As for what he's doing, I think he's fighting a culture war. He doesn't really care if he keeps getting banned; he is still able to keep posting what he does, and upvoting it with his sockpuppets[1], and mass-downvoting people who conspicuously disagree with him in the hope of reducing their credibility (and maybe making them go away).

[1] That's a plausible conjecture rather than (so far as I'm aware) something known to be true. Conditional on its being true, I would guess that his socks probably also upvote other people's expressions of sociopolitical views compatible with his own.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2016-03-21T18:52:10.127Z · LW(p) · GW(p)

=/ We should tell him the opportunity cost of this stuff is too large; don't run down the clock on your life. Eugine_Nier, go find a more productive way to channel this frustration.

Replies from: Lumifer, gjm
comment by Lumifer · 2016-03-21T19:16:48.660Z · LW(p) · GW(p)

Telling people to get more productive is dangerous -- they actually might :-/

comment by gjm · 2016-03-21T20:28:27.608Z · LW(p) · GW(p)

I wouldn't expect that to go well.

comment by Elo · 2016-03-21T07:13:02.315Z · LW(p) · GW(p)

He was originally banned for downvoting. That's how we keep finding him.

He also holds contentious views and feels as though he is being silenced for them. As we know, though, Less Wrong is usually very happy to entertain contentious views so long as they are presented carefully and handled like the potential mindkillers that they are.

He would like his views to appear stronger to his opponents than they are.

Replies from: None
comment by [deleted] · 2016-03-21T15:26:40.367Z · LW(p) · GW(p)

Contentious views are potential mind killers?

Replies from: Elo
comment by Elo · 2016-03-21T20:11:45.720Z · LW(p) · GW(p)

In the way that people treat them as identity issues, political-party issues, and blue/green (you're either with us or you're against us), they can be. I would say I see a slight correlation between identity and being mindkilled.

information around these ideas: http://lesswrong.com/lw/idj/use_your_identity_carefully/

comment by gjm · 2016-03-22T11:50:17.926Z · LW(p) · GW(p)

His new account is (p=0.9) Crownless_Prince.

Replies from: gjm, gjm
comment by gjm · 2016-03-23T14:07:21.723Z · LW(p) · GW(p)

And (p=0.8) The_Bird is another. (Another pair of words that appears in Lepanto, though there's nothing super-distinctive about that.)

Replies from: Lumifer, None
comment by Lumifer · 2016-03-23T14:46:23.104Z · LW(p) · GW(p)

Good catch on Lepanto :-)

Replies from: gjm
comment by gjm · 2016-03-23T15:08:46.890Z · LW(p) · GW(p)

It was GoodBurningPlastic who caught it, some time ago.

comment by [deleted] · 2016-03-23T23:42:46.689Z · LW(p) · GW(p)

If he's looking for a new account name, I suggest Timeless_Houri.

Replies from: gjm
comment by gjm · 2016-03-24T00:50:54.792Z · LW(p) · GW(p)

I had that down as a less plausible name...

comment by gjm · 2016-03-23T10:45:01.732Z · LW(p) · GW(p)

And curiously (1) has, if my karma is anything to go by, been downvoting vigorously already even though (2) he's currently sitting with karma=4, and I think you need >=10 to vote. But I think yesterday he had more. (I also thought I saw more comments yesterday in his overview than I do now, but perhaps I imagined that.)

comment by username2 · 2016-03-24T12:31:30.937Z · LW(p) · GW(p)

What a surprise, somebody has been downvoting every comment in this subthread at least once.

comment by 2ZctE · 2016-03-22T06:54:11.478Z · LW(p) · GW(p)

tl;dr: how do you cope with death?

My dog has cancer in his liver and spleen, and learning this has strongly exacerbated some kind of predisposition towards being vulnerable to depression. He's an old dog so it probably wouldn't have changed his life expectancy THAT much, but it's still really sad. If you're not a pet person this might be counterintuitive, but to me it's losing a friend, and the things people say to me are mostly unhelpful. Which is why I'm posting it here specifically: the typical coping memes about doggy heaven or death as some profoundly important part of Nature are ruined for me. So I wanted to ask how people here deal with this sort of thing. Especially on the cognitive end of things, what types of frames and self talk you used. I do already know the basics, like exercise and diet and meditation, but I sure wouldn't mind a new insight on getting myself to actually do that stuff when I'm this down.

I've thought about cryopreserving him, but even if that were a good way to use the money I just don't think I can afford it. All I'll have is an increasingly vague and emotionally distant memory, I guess, and it sucks. I've been regretting not valuing him more during his peak health, as well, although maybe I'd always feel guilty for anything short of having been perfect.

I've been thinking a lot about chapter 12 of HPMOR, and trying to play with, video, and pamper him while I can. I don't want to say "fuck, it's too late" about anything else. It's the best thing I can think of right now.

This whole business with seeking Slytherin's secrets... seemed an awful lot like the sort of thing where, years later, you would look back and say, 'And that was where it all started going wrong.'

And he would wish desperately for the ability to fall back through time and make a different choice...

Wish granted. Now what?

Replies from: johnlawrenceaspden, polymathwannabe, pseudobison, RainbowSpacedancer, ScottL, PipFoweraker, MrMind
comment by johnlawrenceaspden · 2016-03-22T20:32:05.471Z · LW(p) · GW(p)

Hide everything that reminds you of your dog. Keep it all, in a drawer somewhere, so that you can take it out and have a good cry sometimes, when you want to. But don't put pictures or other triggers where they'll keep making you sad.

You're good at grieving. Nature did not design us to be crippled by the loss of friends. If you hide all the triggers you'll forget to be sad quickly.

Your dog is unlikely to want you to be miserable after he is gone. Don't do that for him if you don't have to and he wouldn't want you to. Imagine if the position was reversed. What would you want?

comment by polymathwannabe · 2016-03-22T14:05:42.217Z · LW(p) · GW(p)

My roommate died from cancer 3 years ago. It never stops being a sad memory, except that the hard pang of the initial shock is gone after some time. I don't feel guilty for no longer feeling that pang, because I know I still wish it hadn't happened and it still marked my life in several ways, so I haven't stopped doing what I privately call "honoring my pain." The usual feel-good advice of forgetting it all and moving on sounds to me dangerously close to no longer honoring my pain, by which I mean acknowledging that the sad event occurred, and giving it its deserved place in my emotional landscape, but without letting it define my life.

Several of my pets died when I was a kid, and at some point I just sort of integrated the implicit assumption that every new pet would eventually die as well. If I began with that assumption, the actual event would no longer be such a strong shock. I no longer have pets, though.

For some years I had problems with the concept of acceptance. It felt like agreeing to everything that happened, and I just didn't want to give my consent to a series of adverse occurrences that it's not relevant to mention here. Some time afterwards I found somewhere a different definition of acceptance: it's not about agreeing with what happened, but simply no longer pretending that the world is otherwise, which to me sounded like a much healthier attitude. With that in mind, I'm more capable of enjoying the time with my friends while knowing that all living things die.

I don't know whether any of my strategies will work in your situation, but this might: doctors specialized in the treatment of pain distinguish between the physical perception of pain and the emotional experience of suffering. Your dog has no awareness of his impending death; he only knows the physical pain. As strong as the pain may be on a purely physical level, he is spared the existential anguish that worries you. Perhaps making a conscious effort not to project your own emotional experience onto him may make the burden lighter for you.

I hope I haven't said anything insensitive, and preemptively apologize if it sounded that way.

comment by gabrielrecc (pseudobison) · 2016-03-22T09:01:41.822Z · LW(p) · GW(p)

I'm very sorry to hear about your dog. It's a very difficult thing to go through even without any predisposition towards depression.

This is probably an idiosyncratic thing that only helps me, but I find remembering that time is a dimension just like space helps a little bit. In the little slice of time I inhabit, a pet or person who has passed on is gone. From a higher-dimensional perspective, they haven't gone anywhere. If someone were to be capable of observing from a higher dimension, they could see the deceased just as I remember them in life. So in the same way that someone whose children are living far from home can remind themselves that their children are in another place, likewise your dog is living happily in another time. English doesn't quite have a tense that conveys the sentiment I want to convey, but I think you get the idea. Don't know if that line of thought does anything for you - I find it a small but useful comfort.

Re actually doing exercise/positive self-talk when you're down, setting up little conditionals that I make into automatic habits by following them robotically has sometimes worked for me. "IF notice self getting anxious - THEN take five minute walk outside". Obviously setting up those in the first place and following through on them the first n times only works when in an OK mood, but once they become habits they're easier to follow through on in more difficult states of mind. I've also found the Negative Self-Talk/Positive Thinking table at the bottom of the page here to be useful.

But hard things are hard no matter what. Sounds like you're doing the right thing now by making the most of the time you have together. Best of luck to you.

comment by RainbowSpacedancer · 2016-03-23T08:13:01.361Z · LW(p) · GW(p)

It's unlikely that someone is going to say something that will take away your pain. Death sucks. Losing someone you love sucks, and sadness is a normal reaction to that. There are emotionally healthy ways to deal with grief. Give yourself more self-care than you think you need throughout this process to counter the planning fallacy and better to err on the side of too much than too little.

If you do find yourself depressed, seeking professional help is not a sign of weakness and I would encourage you to seek it out. Summoning motivation can be an impossible effort when you are depressed and sometimes someone outside your un-motivated brain is the best thing to stop you from falling down an emotional spiral. If money or something else prevents you from doing that, there are other things you can try here and some more here.

comment by ScottL · 2016-03-23T00:47:39.917Z · LW(p) · GW(p)

I tend to view depression as an evolved adaptation, a state which it is natural for humans to move into in certain situations. I think that it can be helpful to recognize that dysphoria, sadness and grief are all natural reactions. It is ok to be sad. Although, as with all such conditions, if it becomes chronic or persists for an overly long time then you should probably get some help to deal with it. See here for more information.

For general advice for dealing with grief, see this article and apply whatever you think is applicable or would be helpful. Excerpt:

  • Establish a simple routine
    • Regular meal and bed times
  • Increase pleasant events
  • Promote self-care activities
    • Regular medical check-ups
    • Daily exercise
    • Limited alcohol intake
  • Provide information about grief and what to expect
    • Grief is unique and follows a wave-like pattern
    • Grief is not an illness with a prescribed cure
    • Children benefit from being included and learning that grief is a normal response to loss
  • Compartmentalise worries
    • List the things that are worrying
    • Create a ‘to-do’ list, prioritise and tick off items as they are completed
    • Use different coloured folders for the paperwork that needs to be finalised
  • Prepare to face new or difficult situations
    • Graded exposure to situations that are difficult or avoided
    • Plan for the ‘firsts’ such as the first anniversary of the death – How do you want it to be acknowledged? Who do you want to share it with?
    • Adopt a ‘trial and error’ approach; be prepared to try things more than once
  • Challenge unhelpful thinking
    • Encourage identification of thoughts leading to feelings of guilt and anger
    • Gently ask the following questions – What would your loved one tell you to do if they were here now? What are the alternatives to what you thought? Where is the evidence for what you thought?
  • Provide a structured decision-making framework to deal with difficult decisions e.g., When to sort through belongings? Whether to take off the wedding ring? Whether to move or not?

    • Base decisions on evidence, not emotions
    • Avoid making major, irreversible decisions for 12 months to prevent decisions being based on emotion
    • Identify the problem and possible solutions
    • List the positives and negatives for each potential solution
    • Determine the consequences for each solution – can they be lived with?

I guess, and it sucks. I've been regretting not valuing him more during his peak health, as well, although maybe I'd always feel guilty for anything short of having been perfect.

I would try to stop doing this. It will gnaw at you and we can always find something that we could have done better in the past. The better thing to do is learn from the past, appreciate it and experience, to the utmost, what is happening now.

comment by PipFoweraker · 2016-03-27T00:18:34.652Z · LW(p) · GW(p)

You may want to spend some time thinking about how you can give your dog the best end of life experience that you can.

Losing a dog is painful. However, and I'm only speaking from personal experience here, you will probably have the opportunity to control to a great extent how your dog dies, its relative level of pain / discomfort, and in what situation and setting the death takes place.

Knowing that my dog - who my parents found abandoned a few weeks before I was born, who I grew up with, and who died in my early adulthood - died at home, surrounded by her family, having spent her last days lovingly attended and not in great physical pain, makes remembering her whole and relatively joyful life more pleasant for me now. It may help you too.

comment by MrMind · 2016-03-22T09:16:46.772Z · LW(p) · GW(p)

So I wanted to ask how people here deal with this sort of thing.

Well, I actually try to emotionally distance myself every day a little bit.

comment by gjm · 2016-03-21T13:22:18.645Z · LW(p) · GW(p)

Finding comments on LW is more painful than it should be because sometimes this happens:

  • You remember that X replied to Y saying something with words Z in.
  • You put something like <X Y Z> into Google (directly or via the "Google custom search" in the right sidebar).
  • You get back a whole lot of pages, but
    • they all contain X and Y because of the top-contributors or recent-comments sections of the right sidebar;
    • they all contain Z because of the recent-comments section of the right sidebar.
  • None of those pages now contains either the comment in question or a link to it.
  • Using the "cached" link from the search results doesn't help, because the right sidebar is generated dynamically and is simply absent from the cached pages.
    • So how come they're found by the search? Beats me.

Here's a typical example; it happens to use only Z (I picked one of my comments from a couple of weeks ago) but including X and Y seldom helps.

I just tried the equivalent search in Bing and the results were more satisfactory, but only because the comment in question happened to appear fairly near the top of the overview page for the user I was replying to. I would guess that Bing isn't actually systematically better for these searches, but I haven't tested.

Does anyone know a good workaround for this problem?

Is there a way to make the dynamically-generated sidebar stuff on LW pages invisible to Google's crawler? It looks like there is. Should I file an issue on GitHub?

Replies from: Vaniver, TheAltar
comment by Vaniver · 2016-03-21T15:40:00.015Z · LW(p) · GW(p)

Is there a way to make the dynamically-generated sidebar stuff on LW pages invisible to Google's crawler? It looks like there is. Should I file an issue on GitHub?

Yes, you should do this.

Replies from: gjm, Viliam
comment by gjm · 2016-03-21T18:44:23.748Z · LW(p) · GW(p)

Done.

comment by Viliam · 2016-03-22T08:38:24.423Z · LW(p) · GW(p)

Unfortunately, there is no standard way to make parts of a page disappear from search engines' indexes. Which is super annoying, because almost every page contains some navigational parts which do not contribute to the content.

HTML 5 contains a semantic tag, <nav>, which defines navigational links in the document. I think a smart search engine should exclude these parts, but I have no idea if any engine actually does that. Maybe changing LW pages to HTML 5 and adding this tag would help.

Some search engines use specific syntax to exclude parts of the page, but it depends on the engine, and sometimes it even violates the HTML standards. For example, Google uses the HTML comments <!--googleoff: index--> ... <!--googleon: index-->, Yahoo uses the HTML attribute class="robots-nocontent", and Yandex introduces a new tag, <noindex>. (I like the Yahoo way most.)

The most standards-following way seems to be putting the offending parts of the page into separate HTML pages which are included by <iframe>, and using the standard robots.txt mechanism to block those HTML pages. I think the disadvantage is that the included frames will have fixed dimensions, instead of changing dynamically with their content. Another solution would be to insert those texts by JavaScript, which means that users with JavaScript disabled would not see them.
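
To make the engine-specific options above concrete, here is a minimal sketch of how those markers might be combined around the sidebar markup; the wrapper element and its placeholder contents are invented for illustration, and only the <nav> tag, the googleoff/googleon comments, the robots-nocontent class, and the <noindex> tag are the actual mechanisms described above:

    <!-- hypothetical sidebar block; the placeholder content is invented for illustration -->
    <nav class="robots-nocontent">   <!-- HTML 5 semantic tag plus Yahoo's exclusion class -->
      <!--googleoff: index-->        <!-- Google's "stop indexing here" comment mentioned above -->
      <noindex>                      <!-- Yandex-specific, non-standard tag -->
        ... recent comments, top contributors ...
      </noindex>
      <!--googleon: index-->         <!-- Google resumes indexing here -->
    </nav>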

Replies from: Vaniver, philh
comment by Vaniver · 2016-03-22T13:18:26.071Z · LW(p) · GW(p)

Since our local search is powered by Google, I'm content with a solution that only works for Google.

comment by philh · 2016-03-22T15:48:17.091Z · LW(p) · GW(p)

Another solution would be to insert those texts by JavaScript, which means that users with JavaScript disabled would not see them.

They're already inserted by javascript. E.g. the 'recent comments' one works by fetching http://lesswrong.com/api/side_comments and inserting its contents directly in the page.

Editing robots.txt might exclude those parts from the Google index, but idk.
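
As a concrete sketch (assuming the sidebar data really is only served from that endpoint, and that this is the right path), the rule would be something like:

    # hypothetical addition to lesswrong.com/robots.txt
    User-agent: *
    Disallow: /api/side_comments

Blocking the endpoint stops crawlers from fetching the JSON itself; whether that also keeps the JavaScript-inserted text out of the indexed pages is the "idk" part.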

Replies from: Douglas_Knight
comment by Douglas_Knight · 2016-03-22T17:33:47.268Z · LW(p) · GW(p)

I think robots.txt would work.

comment by TheAltar · 2016-03-21T14:35:12.264Z · LW(p) · GW(p)

I've run into this problem several times before. It would be very helpful if the search feature ignored the text in the sidebar.

comment by Lumifer · 2016-03-23T15:43:43.779Z · LW(p) · GW(p)

Andrew Gelman mentioned "the Kahneman-Gigerenzer catfight, or more generally the endless debate between those who emphasize irrationality in human decision making and those who emphasize the adaptive and functional qualities of our shortcuts." This looked worth checking, so I followed the link to the following statement by Gigerenzer:

The “half-empty” versus “half-full” explanation of the differences between Kahneman and us misses the essential point: the difference is about the nature of the glass of rationality, not the level of the water. For Kahneman, rationality is logical rationality, defined as some content-free law of logic or probability; for us, it is ecological rationality, loosely speaking, the match between a heuristic and its environment. For ecological rationality, taking into account contextual cues (the environment) is the very essence of rationality, for Kahneman it is a deviation from a logical norm and thus, a deviation from rationality. In Kahneman’s philosophy, simple heuristics could never predict better than rational models; in our research we have shown systematic less-is-more effects.

LW's dog in this catfight is probably on Kahneman's side, but the debate is interesting.

Replies from: MrMind, johnlawrenceaspden, fubarobfusco, username2, SanguineEmpiricist
comment by MrMind · 2016-03-24T08:30:52.333Z · LW(p) · GW(p)

LW's dog in this catfight is probably on Kahneman's side

Well, probability is about reasoning with logic under imperfect information, and when you factor in the cost of elaboration you see that the "ecological" model could be better, but evolution and thermodynamics. I think that simply distinguishing "correct" and "useful" dissolves the debate.

Replies from: Lumifer
comment by Lumifer · 2016-03-24T14:34:28.134Z · LW(p) · GW(p)

I think that simply distinguishing "correct" and "useful" dissolves the debate.

No, I think it's more complicated than that.

For example, imagine a complex decision, say what college to go to. Can you write out a Bayesian model that will tell you what to do? Well, kinda. You can, but it's going to be woefully incomplete and involve a lot of guesses without much support from data. A set of heuristics will do much better in this situation. Are you going to say that this Bayesian model is "correct" regardless? I don't think it's a useful application of the word.

comment by johnlawrenceaspden · 2016-04-14T17:16:51.557Z · LW(p) · GW(p)

Not necessarily. "You can't do inference without making assumptions".

Is it even a fight? What is it that they disagree about? Neither side is saying "Decision heuristics that once worked well still work well in our changed world".

Replies from: Lumifer
comment by Lumifer · 2016-04-14T17:44:48.737Z · LW(p) · GW(p)

Is it even a fight?

It's a fight like a croquet mallet is a billy club :-)

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2016-04-14T21:20:54.955Z · LW(p) · GW(p)

I mean, is there some prediction that they disagree about, rather than 'falling tree sound' issues.

comment by fubarobfusco · 2016-03-25T04:09:44.794Z · LW(p) · GW(p)

Eh. This is sounding more and more like a dispute over definitions, and hence tedious; and I would be unsurprised to find that it arose from either self-promotion or ideology; q.v. the Gould and Eldredge v. Maynard Smith et al. kerfuffle.

comment by username2 · 2016-03-23T17:47:16.481Z · LW(p) · GW(p)

I admit that I don't get the explanation. Wouldn't both approaches lead to the same thing?

Replies from: Lumifer
comment by Lumifer · 2016-03-23T19:01:39.679Z · LW(p) · GW(p)

The two approaches might but not necessarily will lead you to the same thing. I suspect that part of the tension is between "theoretically correct" and "works better in practice" which in theory should match but in practice do not often enough.

Here is what looks to be the major Gigerenzer paper.

comment by SanguineEmpiricist · 2016-03-24T21:07:05.935Z · LW(p) · GW(p)

Something tells me Gigerenzer is misquoting Kahneman; Kahneman is just saying that any deviation from that counts as irrational and using it as his baseline. I'm more than sure he would be happy to use ecological rationality as a baseline as well.

comment by Arshuni · 2016-03-21T05:25:40.826Z · LW(p) · GW(p)

How many of you guys keep a journal? How many of you would like to? What do you specifically write down?

I feel like it should help, but I have trouble coming up with a structure with which it could: a journal with separate sections for work done (and TODOs for the future, and how the two diverged), exercise, and other areas seems more useful than one with a massed 'Dear Diary' format.

Replies from: Pfft, pseudobison, Elo, Gunnar_Zarncke
comment by Pfft · 2016-03-21T14:30:18.728Z · LW(p) · GW(p)

I write down one line (about 80 characters) about what things I did each day. Originally I intended to write down "accomplishments" in order to incentivise myself into being more accomplished, but it has since morphed into also being a record of notable things that happened, and a lot of free-form whining over how bad certain days were. It's kind of nice to be able to go back and figure out when exactly something in the past happened, or generally reminisce about what was going on some years ago.

comment by gabrielrecc (pseudobison) · 2016-03-22T09:58:55.837Z · LW(p) · GW(p)

I keep a daily journal. Beginning of day: Two things that I'm grateful for. End of day: Two things that went well that day, two things that could have gone better. Each "thing" is usually only a sentence or few long. I find that going back through the end-of-day sentences every so often is useful for doing 80-20 analyses to find out what seems to be bringing me the most happiness / dissatisfaction (at least as judged by my end-of-day assessments).

comment by Elo · 2016-03-21T12:39:34.551Z · LW(p) · GW(p)

I wrote this on the topic - might help with the habit of keeping a book -

http://lesswrong.com/r/discussion/lw/mpz/making_notes_an_instrumental_rationality_process/

comment by Gunnar_Zarncke · 2016-03-21T11:39:25.514Z · LW(p) · GW(p)

Poll for it!

[pollid:1132]

Please take "journal" to mean anything that contains personal information, insights or the like. A slip box of articles you read and commented on might already count.

Replies from: Elo
comment by Elo · 2016-03-21T12:37:46.910Z · LW(p) · GW(p)

didn't vote because I keep a sporadic journal of when I have good conversations (usually lw meetups). I pull out a book and make notes on topics we cover and ideas I come up with.

comment by Lumifer · 2016-03-24T16:35:57.966Z · LW(p) · GW(p)

From Bruce Schneier (who knows Alice and Bob's shared secret), a very relevant observation:

Cryptography is harder than it looks, primarily because it looks like math. Both algorithms and protocols can be precisely defined and analyzed. This isn't easy, and there's a lot of insecure crypto out there, but we cryptographers have gotten pretty good at getting this part right. However, math has no agency; it can't actually secure anything. For cryptography to work, it needs to be written in software, embedded in a larger software system, managed by an operating system, run on hardware, connected to a network, and configured and operated by users. Each of these steps brings with it difficulties and vulnerabilities.

comment by Lumifer · 2016-03-24T14:43:44.119Z · LW(p) · GW(p)

Out of curiosity, is LW doing some sort of A/B testing? visualwebsiteoptimizer.com wants to run an awful lot of scripts on the page...

comment by Gunnar_Zarncke · 2016-03-20T19:57:29.541Z · LW(p) · GW(p)

[Link] Scientists Say Smart People Are Better Off With Fewer Friends (from slashdot)

Replies from: James_Miller, Viliam
comment by James_Miller · 2016-03-21T15:45:16.912Z · LW(p) · GW(p)

Not according to Razib Khan, who writes in part:

"A new paper which has some results on life satisfaction, intelligence and the number of social interactions one has has generated some mainstream buzz....The figure above shows the interaction effect between intelligence, life satisfaction, and number of times you meet up with friends over the week. What you see is that among the less intelligent more interactions means more life satisfaction and among the more intelligent you see the reverse...But take a look at the y-axis...The effect here is very small....These are not actionable results for anyone."

comment by Viliam · 2016-03-21T08:35:13.979Z · LW(p) · GW(p)

I wonder if the reason could be that for smart people it is difficult to find many good friends. So the actual choice for most of them is between having only a few great friends (which is better), or having many friends that suck (which is worse). But maybe given a chance, having many great friends could be even better.

By difficulty in finding many good friends I mean that for people with very high intelligence the set of their peers is already small, and then within this set they need to find people with similar values, hobbies, personality, etc. Even admitting this problem is a huge taboo (essentially you are telling 99% of your social environment "I don't consider you good friend material"), so many people probably don't have good strategies for solving it.

Replies from: gjm, username2
comment by gjm · 2016-03-21T12:55:23.045Z · LW(p) · GW(p)

It would be interesting to know whether the alleged finding (assuming it holds up, which is always uncertain for this sort of thing) looks different in places where very smart people are easier to find, or for populations with more effective ways of finding very intelligent friends.

(For instance, I live near a city with a world-class university and a pretty vigorous tech industry in the area that encourages smart people to stay around. There's a pretty good supply of highly intelligent potential friends around here.)

comment by username2 · 2016-03-21T11:46:58.472Z · LW(p) · GW(p)

Abstract

More intelligent individuals experience lower life satisfaction with more frequent socialization with friends.

The paper is paywalled but here it is claimed that life satisfaction is negatively correlated with the frequency of socialization and not the number of friends. Granted, those two are likely to be positively correlated.

comment by Bound_up · 2016-03-21T21:47:37.579Z · LW(p) · GW(p)

I'm making a list of common arguments that can all be resolved in the same way the "tree falls in a forest, does it make a sound" argument can be resolved.

Namely, by tabooing a key word and substituting a non-ambiguous, comprehensive description, and then finding out you were never disagreeing about anything in the first place.

Examples so far:

Is Islam a "religion of peace?"

Is there a "wall of separation between Church and State" in the US?

Is America a "Christian nation?"

Are Catholics/Mormons/Jehovah's Witnesses/etc "Christian?"

"Should" women take preventative measures against sexual assault?

Is atheism a "religion?"

Any others you can think of?

Replies from: Gunnar_Zarncke, ChristianKl, Jiro, MrMind
comment by Gunnar_Zarncke · 2016-03-21T22:24:49.361Z · LW(p) · GW(p)

I'd like to see an example of how you resolve these.

Replies from: Bound_up
comment by Bound_up · 2016-03-21T22:48:49.724Z · LW(p) · GW(p)

Is there a wall of separation between Church and State?

Well, what's a wall of separation?

We all know there could be MORE religious stuff going on in government, like it could be establishing a state religion.

And we all know there could be LESS religious stuff going on in government, like all governmental officers could be forbidden from praying.

So we have a range of minimum to maximum religious stuff going on in government, and we're somewhere in the middle, by any measure.

Identify where we are on that range, and check whether or not it's above or below "wall of separation between Church and State" level.

Except, once you've identified where we are on the range, you've already fully described the reality of the situation. The word you use as a referent for that reality is a comparatively trivial matter. People might (and do) argue about which words we "should" use to describe reality, but right now, most of them argue about the words and think they're arguing about reality.

Just like the sound vs no sound people on the fallen tree question.

Replies from: Lumifer
comment by Lumifer · 2016-03-21T23:52:34.644Z · LW(p) · GW(p)

Well, what's a wall of separation?

We all know there could be MORE religious stuff going on in government, like it could be establishing a state religion.

And we all know there could be LESS religious stuff going on in government, like all governmental officers could be forbidden from praying.

Sigh. The separation of church and state has a quite well-specified meaning in constitutional law. If you want to define it, look up the appropriate legal authority. Hint: it doesn't have much to do with forbidding government officials to pray.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-22T07:39:44.447Z · LW(p) · GW(p)

"no religious test shall ever be required as a qualification to any office or public trust under the United States"

-- https://en.wikipedia.org/wiki/Separation_of_church_and_state_in_the_United_States

Replies from: Bound_up, None
comment by Bound_up · 2016-03-22T15:35:51.747Z · LW(p) · GW(p)

Ah, yes, but is that really a "wall of separation of church and state?"

comment by [deleted] · 2016-03-23T15:38:31.474Z · LW(p) · GW(p)

Here's a religious test: does the official NOT pray on the job?

Separation of church and state is a requirement for official agnosticism, not official atheism.

comment by ChristianKl · 2016-03-21T22:03:34.787Z · LW(p) · GW(p)

I think a lot of those debates do have differences in opinion that go beyond definitions.

Replies from: Bound_up
comment by Bound_up · 2016-03-21T22:18:47.338Z · LW(p) · GW(p)

You're right.

Examples of arguments which are significantly improved by applying the taboo principle are acceptable, too.

comment by Jiro · 2016-03-22T15:06:29.851Z · LW(p) · GW(p)

That only moves the disagreement into the decision about what description would be appropriate. I wouldn't call that a resolution.

Replies from: gjm, Bound_up
comment by gjm · 2016-03-22T16:37:45.220Z · LW(p) · GW(p)

I agree that "resolved" may be too optimistic, but at any rate argument about these questions can (in principle) be markedly improved by moving from ill-defined questions to better-defined ones. Different people might prefer different descriptions but there's at least some chance that when the question is framed as "which description do you prefer?" they will recognize that a large part of their disagreement is about individual preferences rather than external facts.

Suppose Alice says that Islam is a religion of peace and Bob says it isn't. If they have this conversation:

A. Islam is a religion of peace.

B. No it's not.

C. Alice, what do you mean by "religion of peace" and why do you say Islam is one?

A. I mean a religion whose teachings say that peace is valuable and tell its followers to seek it. Islam does those things. (Perhaps at this point Alice will adduce some quotations from the Qur'an in support of her claims. I haven't any to hand myself.)

C. And Bob, what do you mean and why do you say Islam isn't one?

B. I mean a religion whose followers actually behave peacefully. Muslims make up a disproportionate fraction of the world's terrorists and even those who are not terrorists do a very bad job of living at peace with their neighbours.

... then for sure they haven't reached agreement yet -- Alice will doubtless want to suggest that it's a small fraction of Muslims killing people and making war and so on, while Bob will doubtless want to say that there are Islamic teachings that explicitly endorse violence, etc. -- but they have made a breakthrough because now they can talk about actual facts rather than merely about definitions, and even if they never agree they will have a much clearer idea what they disagree about.

And of course each may still think the other's usage of "religion of peace" untenable, but again they have a clearer idea what they're disagreeing about there, and ought to be able to see e.g. that they can dispute the best definition of "religion of peace" separately from disputing how well any given definition applies to Islam.

In practice, Alice and Bob may well be too cross at one another to have so productive a conversation. But if, e.g., instead of Alice and Bob we have two ideas fighting it out within one person's mind, the practice of separating definitional questions from factual ones is likely to be very helpful.

comment by Bound_up · 2016-03-22T15:33:27.062Z · LW(p) · GW(p)

If everyone understood the disagreement as merely semantic, few would care (once they got used to not thinking of it as a defense of religion or of secularism or whatever).

comment by MrMind · 2016-03-22T09:13:54.778Z · LW(p) · GW(p)

How about "black people are less intelligent"/"asians are more intelligent"?

comment by username2 · 2016-03-21T11:48:39.948Z · LW(p) · GW(p)

What does Nassim Taleb think about existential risks and existential risk research? He seems like the kind of person who might be interested in such things.

Replies from: Daniel_Burfoot, SanguineEmpiricist
comment by Daniel_Burfoot · 2016-03-21T23:26:59.048Z · LW(p) · GW(p)

My guess is that he is worried about existential risks, but of the Black Swan type: risks that can't be predicted or theorized about far in advance.

comment by SanguineEmpiricist · 2016-03-24T21:02:38.131Z · LW(p) · GW(p)

He liked Bostrom's new institute dedicated to existential risks. He doesn't think AI is a ruin-style risk yet, saying it requires "risk vigilance", and that he would be willing to reconsider later.

He has his own risk initiative called the "Extreme Risk Initiative".

comment by DataPacRat · 2016-03-21T02:07:42.862Z · LW(p) · GW(p)

Seeking ideas: Stupid Em Tricks

To help with one of my story projects; how many (useful, interesting, other) things can an uploaded mind do that a meat-based person can't?

I've got a GDoc with an initial set of basic ideas here, and I've temporarily turned on worldwide editing and commenting. I'd appreciate all the useful suggestions you can think of, there or here.

Replies from: username2
comment by username2 · 2016-03-24T10:19:49.187Z · LW(p) · GW(p)

Separate itself into multiple personalities

comment by JohnGreer · 2016-03-23T02:13:24.151Z · LW(p) · GW(p)

Has there been any response to Brett Hall's critique of Bostrom's Superintelligence? What do y'all think? http://www.bretthall.org/superintelligence.html

Replies from: Vaniver
comment by Vaniver · 2016-03-23T14:39:48.178Z · LW(p) · GW(p)

I wish I had read the ending first; Hall is relying heavily on Deutsch to make his case. Deutsch has come up on LW before, most relevantly here. An earlier comment of mine still seems true: I think Deutsch is pointing in the right direction and diagnosing the correct problems, but I think Deutsch underestimates the degree to which other people have diagnosed the same problems and are working on solutions to address those problems.

Hall's critique is multiple parts, so I'm writing my response part by part. Horizontal lines distinguish breaks, like so:


---

It starts off with reasoning by analogy, which is generally somewhat suspect. In this particular analogy, you have two camps:

  1. Builders, who build ever-higher towers, hoping that they will one day achieve flight (though they don't know how that will work theoretically).

  2. Theorists, who think that they're missing something, maybe to do with air, and that the worries the builders have about spontaneous liftoff don't make sense, because height doesn't have anything to do with flight.

But note that when it comes to AI, the dividing lines are different. Bostrom gets flak for not knowing the details of modern optimization and machine learning techniques (and I think that flak is well-targeted), but Bostrom is fundamentally concerned about theoretical issues. It's the builders--the Ngs of the world who focus on adding another layer to their tower--who think that things will just work out okay instead of putting effort into ensuring that things will work out okay.

That is, the x-risk argument is the combination of a few pieces of theory: the Orthogonality Thesis, that intelligence can be implemented in silicon (the universality of intelligence), and that there aren't hard constraints to intelligence anywhere near the level of human intelligence.


---

One paragraph, two paragraphs, three paragraphs... when are we going to get to the substance?

Okay, 5 paragraphs in, we get the idea that "Bayesian reasoning" is an error. Why? Supposedly he'll tell us later.

The last paragraph is good, as a statement of the universality of computation.


---

And the first paragraph is one of the core disagreements. Hall correctly diagnoses that we don't understand human thought at the level needed to program it. (Once we do, then we have AGI, and we don't have AGI yet.) But Hall then seems to claim that, basically, unless we're already there we won't know when we'll get there. Which is true but immaterial; right now we can estimate when we'll get there, and that estimate determines how we should approach the problem.

And then the latter half of this section is just bad. There's some breakdown in communication between Bostrom and Hall; Bostrom's argument, as I understand it, is not that you get enough hardware and then the intelligence problem solves itself. (This is the "the network has become self-aware!" sci-fi model of AGI creation.) The argument is that there's some algorithmic breakthrough necessary to get to AGI, but that the more hardware you have, the smaller that breakthrough is.

(That is, suppose the root of intelligence was calculating matrix determinants. There are slow ways and fast ways to do that--if you have huge amounts of hardware, coming across Laplace Expansion is enough, but if you have small amounts of hardware, you can squeak by only if you have fast matrix multiplication.)
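
To make that toy analogy concrete (purely my own illustration, with a made-up matrix): cofactor expansion is the "obvious" insight but costs O(n!), while plain Gaussian elimination gets the same determinant in O(n^3), so extra compute can compensate for a weaker algorithm only up to a point.

```python
# Two ways to compute the same determinant: the simple insight (Laplace /
# cofactor expansion, O(n!)) versus a better algorithm (Gaussian elimination,
# O(n^3)). Illustrative only; the matrix below is made up.

def det_laplace(m):
    """Cofactor expansion along the first row; simple but factorial-time."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det_laplace(minor)
    return total

def det_elimination(m):
    """Gaussian elimination with partial pivoting; cubic-time."""
    a = [row[:] for row in m]
    n = len(a)
    det = 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det  # swapping rows flips the sign
        det *= a[i][i]
        for r in range(i + 1, n):
            factor = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= factor * a[i][c]
    return det

m = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
print(det_laplace(m), det_elimination(m))  # both print ~8.0
```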

One major point of contention between AI experts is, basically, how many software breakthroughs we have left until AGI. It could be the case that it's two; it could be the case that it's twenty. If it's two, then we expect it to happen fairly quickly; if it's twenty, then we expect it to happen fairly slowly. This uncertainty means we cannot rule out it happening quickly.

The claim that programs do not engage in creativity and criticism is simply wrong. This is the heart and soul of numerical optimization, metaheuristic programs in particular. Programs are creative and critical beyond the abilities of humans in the narrow domains that we've been able to communicate to those programs, but the fundamental math of creativity and criticism exists (in terms of sampling from a solution space, especially in ways that make use of solutions that we've already considered, and objective functions that evaluate those solutions). The question is how easily we will be able to scale from well-defined problems (like routing trucks or playing Go) to poorly-defined problems (like planning marketing campaigns or international diplomacy).
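
As a minimal sketch of that generate-and-evaluate loop (with a toy objective I made up, nothing to do with AGI): proposing variations of solutions we already have is the "creativity", and the objective function scoring them is the "criticism".

```python
# A (1+1)-style hill climber, the simplest metaheuristic: sample new
# candidates near known solutions, keep the ones the objective scores better.
import random

def objective(x):
    # "Criticism": score a candidate solution (higher is better).
    return -(x[0] - 3) ** 2 - (x[1] + 1) ** 2

def propose(x, step=0.5):
    # "Creativity": sample a new candidate near a solution we already have.
    return [xi + random.gauss(0, step) for xi in x]

random.seed(0)
best = [0.0, 0.0]
for _ in range(2000):
    candidate = propose(best)
    if objective(candidate) > objective(best):
        best = candidate

print(best)  # wanders toward the optimum at [3, -1]
```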


---

Part 4 is little beyond "I disagree with the Orthogonality Thesis." That is, it sees value disagreements as irrationality. Bonus points for declaring Bayesian reasoning false for little reason that I can see besides "Deutsch disagrees with it" (which, I think, is due to Deutsch's low familiarity with the math of causal models, which I think are the solution he correctly thinks is missing with EDT-ish Bayesian reasoning).


---

Not seeing anything worth commenting on in part 5.


---

Part 6 includes a misunderstanding of Arrow's Theorem. (Arrow's Theorem is a no-go theorem, but it doesn't rule out the thing Hall thinks it rules out. If the AI is allowed to, say, flip a coin when it's indifferent, Arrow's Theorem no longer applies.)

Replies from: JohnGreer, None, RaelwayScot, MrMind
comment by JohnGreer · 2016-03-23T21:01:08.432Z · LW(p) · GW(p)

Thanks for the in-depth response, Vaniver! I don't have a good grasp on these issues so it helps reading others' analyses.

Replies from: Vaniver
comment by Vaniver · 2016-03-23T21:04:12.636Z · LW(p) · GW(p)

You're welcome!

comment by [deleted] · 2016-03-23T15:36:25.098Z · LW(p) · GW(p)

If you have time for another, I'd be interested in your response to Goertzel's critique of Superintelligence:

http://jetpress.org/v25.2/goertzel.htm

Replies from: Vaniver
comment by Vaniver · 2016-03-23T17:31:17.715Z · LW(p) · GW(p)

Overall, very sensible. I'll ignore minor quibbles (a 'strong AI' and a 'thinking machine' seem significantly different to me, since the former implies recursion but the latter doesn't) and focus on the main points of disagreement.

The related question I care more about, though, is: In practice, which goals are likely to be allied with which kinds and levels of intelligence, in reality? What goals will very, very smart minds, existing in the actual universe rather than the domains of abstract mathematics and philosophy, be most likely to aim for?

Goertzel goes on to question how likely Omohundro's basic AI drives are to be instantiated. Might an AI that doesn't care for value-preservation outcompete an AI that does?

Overall this seems very worth thinking about, but I think Goertzel draws the wrong conclusions. If we have a 'race-to-the-bottom' of competition between AGI, that suggests evolutionary pressures to me, and evolutionary pressures seem to be the motivation for expecting the AI drives in the first place. Yes, an AGI that doesn't have any sort of continuity impulses might be able to create a more powerful successor than an AGI that does have continuity impulses. But that's the start of the race, not the end of the race--any AGI that doesn't value continuity will edit itself out of existence pretty quickly, whereas those that do won't.

The nightmare scenario, of course, is an AGI that improves rapidly in the fastest direction possible, and then gets stuck somewhere unpleasant for humans.

And since I used the phrase "nightmare scenario," a major disagreement between Goertzel and Bostrom is over the role of uncertainty when it comes to danger. Much later, Goertzel brings up the proactionary principle and precautionary principle.

Bostrom's emotional argument, matching the precautionary approach, seems to be "things might go well, they might go poorly, because there's the possibility it could go poorly we must worry until we find a way to shut off that possibility."

Goertzel's emotional argument, matching the proactionary approach, seems to be "things might go well, they might go poorly, but why conclude that they will go poorly? We don't know enough." See, as an example, this quote:

Maybe AGIs that are sufficiently more advanced than humans will find some alternative playground that we humans can’t detect, and go there and leave us alone. We just can’t know, any more than ants can predict the odds that a human civilization, when moving onto a new continent, will destroy the ant colonies present there.

Earlier, Goertzel correctly observes that we're not going to make a random mind, we're going to make a mind in a specific way. But the Bostromian counterargument is that because we don't know where that specific way leads us, we don't have a guarantee that it's different from making a random mind! It would be nice if we knew where safe destinations were, and how to create pathways to funnel intelligences towards those destinations.

Which also seems relevant here:

Many of Bostrom’s hints are not especially subtle; e.g. the title of Chapter 8 is “Is the default outcome doom?” The answer given in the chapter is basically “maybe – we can’t rule it out; and here are some various ways doom might happen.” But the chapter isn’t titled “Is doom a plausible outcome?”, even though this is basically what the chapter argues.

I view the Bostromian approach as saying "safety comes from principles; if we don't follow those principles, disaster will result. We don't know what principles will actually lead to safety." Goertzel seems to respond with "yes, not following proper principles could lead to disaster, but we might end up accidentally following them as easily as we might end up accidentally violating them." Which is on as solid a logical foundation as Bostrom's position that things like the orthogonality thesis are true "in principle," and which seems more plausible or attractive seems to be almost more a question of personal psychology or reasoning style than it is evidence or argumentation.

There are massive unknowns here, but it doesn’t seem sensible to simply assume that, for all these non-superintelligence threats, defenses will outpace offenses. It feels to me like Bostrom – in his choices of what to pay attention to, and his various phrasings throughout the book – downplays the risks of other advanced technologies and over-emphasizes the potential risks of AGI. Actually there are massive unknowns all around, and the hypothesis that advanced AGI may save humanity from risks posed by bad people making dangerous use of other technologies is much more plausible than Bostrom makes it seem.

This is, I think, a fairly common position--a decision on whether to risk the world on AGI should be made knowing that there are other background risks that the AGI might materially diminish. (Supposing one estimates that a particular AGI project is a 3 in a thousand chance of existential collapse, one still has work to do in determining whether or not that's a lower or higher risk than not doing that particular AGI project.)

I don't see any reason yet to think Bostrom's ability to estimate probabilities in this area is any better than Goertzel's, or vice versa; I think that the more AI safety research we do, the easier it is to pull the trigger on an AGI project, and the sooner we can do so. I agree with Goertzel that it's not obvious that AI research slowdown is desirable, let alone possible, but it is obvious to me that AI safety research speedup is desirable.

I think Goertzel overstates the benefit of open AI development, but agree with him that Bostrom and Yudkowsky overstate the benefit of closed AI development.

I haven't read about open-ended intelligence yet. My suspicion, from Goertzel's description of it, is that I'll find it less satisfying than the reward-based view. My personal model of intelligence is much more inspired by control theory. The following statement, for example, strikes me as somewhat bizarre:

But I differ from them in suspecting that these advances will also bring us beyond the whole paradigm of optimization.

I don't see how you get rid of optimization without also getting rid of preferences, or choosing a very narrow definition of 'optimization.'
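
As a toy illustration of the control-theory framing (my own sketch, not anything from Goertzel's paper): even the simplest feedback loop encodes a preference as a setpoint and spends every step shrinking its error against it, which is optimization in all but name.

```python
# A minimal proportional controller: the setpoint plays the role of a
# "preference", and the loop implicitly minimizes the error to it.
# The numbers are made up and purely illustrative.

def simulate(setpoint=1.0, gain=0.3, steps=30):
    state = 0.0
    for _ in range(steps):
        error = setpoint - state  # how far we are from what we "want"
        state += gain * error     # corrective action toward the setpoint
    return state

print(simulate())  # converges toward the setpoint, 1.0
```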


I think that there's something of a communication barrier between the Goertzelian approach of "development" and the Yudkowskyian approach of "value preservation." On the surface, the two of those appear to contradict each other--a child who preserves their values will never become an adult--but I think the synthesis of the two is the correct approach--value preservation is what it looks like when a child matures into an adult, rather than into a tumor. If value is fragile, most processes of change are not the sort of maturation that we want, but are instead the sort of degeneration that we don't want; and it's important to learn the difference between them and make sure that we can engineer that difference.

Biology has already (mostly) done that work for us, and so makes it look easy--which the Bostromian camp thinks is a dangerous illusion.

Replies from: None, None
comment by [deleted] · 2016-03-23T20:04:18.373Z · LW(p) · GW(p)

Thank you for taking the time to write that up. I strongly disagree, as you probably know, but it provided a valuable perspective into understanding the difference in viewpoint.

No two rationalists can agree to disagree... but pragmatists sometimes must.

Replies from: Vaniver
comment by Vaniver · 2016-03-23T20:22:41.641Z · LW(p) · GW(p)

You're welcome!

I strongly disagree, as you probably know

Did we meet at AAAI when it was in Austin, or am I thinking of another Mark? (I do remember our discussion here on LW, I'm just curious if we also talked outside of LW.)

Replies from: None
comment by [deleted] · 2016-03-23T21:25:02.873Z · LW(p) · GW(p)

No I'm afraid you're confusing me with someone else. I haven't had the chance yet to see the fair city of Austin or attend AAAI, although I would like to. My current day job isn't in the AI field so it would sadly be an unjustifiable expense.

To elaborate on the prior point, I have for some time engaged with not just yourself, but other MIRI-affiliated researchers as well as Nate and Luke before him. MIRI, FHI, and now FLI have been frustrating to me as their PR engagements have set the narrative and in some cases taken money that otherwise would have gone towards creating the technology that will finally allow us to end pain and suffering in the world. But instead funds and researcher attention are going into basic maths and philosophy that have questionable relevance to the technologies at hand.

However the precautionary vs proactionary description sheds a different light. If you think precautionary approaches are defensible, in spite of overwhelming evidence of their ineffectiveness, then I don't think this is a debate worth having.

I'll go back to proactively building AI.

Replies from: Vaniver
comment by Vaniver · 2016-03-24T02:24:41.786Z · LW(p) · GW(p)

in some cases taken money that otherwise would have gone towards creating the technology that will finally allow us to end pain and suffering in the world.

If one looks at AI systems as including machine learning development, I think the estimate is something like a thousand times as many resources are spent on development as on safety research. I don't think taking all of the safety money and putting it into 'full speed ahead!' would make much difference in time to AGI creation, but I do think transferring funds in the reverse direction may make a big difference for what that pain and suffering is replaced with.

I'll go back to proactively building AI.

So, in my day job I do build AI systems, but not the AGI variety. I don't have the interest in mathematical logic necessary to do the sort of work MIRI does. I'm just glad that they are doing it, and hopeful that it turns out to make a difference.

Replies from: None
comment by [deleted] · 2016-03-24T16:47:12.365Z · LW(p) · GW(p)

If one looks as AI systems as including machine learning development, I think the estimate is something like a thousand times as many resources are spent on development as on safety research.

Because everyone is working on machine learning, but machine learning is not AGI. AI is the engineering techniques for making programs that act intelligently. AGI is the process for taking those components and actually constructing something useful. It is the difference between computer science and a computer scientist. Machine learning is very useful for doing inference. But AGI is so much more than that, and there are very few resources being spent on AGI issues.

By the way, you should consider joining ##hplusroadmap on Freenode IRC. There's a community of pragmatic engineers there working on a variety of transhumanist projects, and your AI experience would be valued. Say hi to maaku or kanzure when you join.

comment by [deleted] · 2019-12-21T01:52:49.032Z · LW(p) · GW(p)

Vaniver, 4 years on and I wonder if your opinion on this issue has evolved in the time elapsed? I respect you for your clear and level-headed thinking on this issue. My own thinking has changed somewhat, and I have a new appreciation for the value of AI safety work. However this is for reasons that I think are atypical for the LW or MIRI orthodox community. I wonder if your proximity to the Berkeley AI safety crowd and your ongoing work in narrow AI have caused your opinion to change since 2016?

Replies from: Vaniver
comment by Vaniver · 2019-12-21T22:54:58.331Z · LW(p) · GW(p)

Thanks!

Vaniver, 4 years on and I wonder if your opinion on this issue has evolved in the time elapsed?

My opinion definitely has more details than it did 4 years ago, but I don't see anything in the grandparent (or great-grandparent) comment that I disagree with. I will confess to not keeping up with things Goertzel has published in the meantime, but I'd be happy to take a look at something more recent if there's anything you recommend. I hear Deutsch is working on a paper that addresses whether or not Solomonoff Induction resolves the problem of induction, but to the best of my knowledge it's not out yet.

However this is for reasons that I think are atypical for the LW or MIRI orthodox community. 

I'd be interested in hearing about those reasons; one of the things that has happened is talking to many more people about their intuitions and models both for and against risk (or for or against risk being shaped particular ways). 

Replies from: None
comment by [deleted] · 2019-12-22T12:51:28.977Z · LW(p) · GW(p)

I wasn’t actually asking about your views on Goertzel per se. In fact I don’t even know if he has published anything more recent, or what his current view are. Sorry for the confusion there.

I was wondering about your views on the topic as a whole, including the prior probability of a “nightmare scenario” arising from developing a not-provably-Friendly AI before solving the control problem, or the proactionary vs precautionary principle as applied here, etc. You are one of the few people I’ve met online or in person (we met at a CFAR-for-ML workshop some years back, if you recall) that is able to comprehend and articulate reasonable steelmans of both Bostrom and Goertzel’s views. In your comment above you seemed generally on the fence in terms of the hard evidence. Given that I’m puzzling though a few large updates to my own mental model on this subject, anything that has caused you to update in the time since would be highly relevant to me. So I thought I’d ask.

> However this is for reasons that I think are atypical for the LW or MIRI orthodox community.
I’d be interested in hearing about those reasons

Okay. I’m concerned there’s a large inferential gap. Let’s see if I can compactly cross it, and let me know if any steps don’t make sense. My apologies for the length.

First, I only ever came to care about AGI because of the idea of the Singularity. I personally want to live a hundred billion years, to explore the universe and experience the splintering of humanity into uncountable different diverse cultures and ways of life. (And I want to do so without some nanny-AGI enforcing some frozen extrapolated ideal human ethics we exist today.) To personally experience that requires longevity escape velocity, and to achieve that in the few decades remaining of my current lifetime requires something like a Vernor Vinge-style Singularity.

I also want to end all violent conflict, cure all diseases, create abundance so everyone can live their full life potential, and stop Death from snatching my friends and loved ones. But I find it more honest and less virtue signaling to focus on my selfish reasons, which are that I read too much sci-fi as a kid and want to see it happen for myself.

So far, so good. I expect that’s not controversial or even unusual around here. But the point is that my interest in AGI is largely instrumental. I need the Singularity, and the Singularity is started by the development of true artificial general intelligence, in the standard view.

Second, I’m actually quite concerned that if any AGI were to “FOOM” (even and perhaps especially a so-called “Friendly” AI), then we would be stuck in what is, by my standards, a less than optimal future where a superintelligence infringes on our post-human freedom to self-modify, creating the unconstrained, diverse shards of humanity I mentioned earlier. Wishing for a nanny-AGI to solve our problems is like wishing to live in a police state, just one where the police are trustworthy and moral. But it’s still a police state. I need a frontier to be happy.

It’s on the above second point that I anticipate disagreement. That my notion of Friendliness is off, that negative utility outcomes are definitionally impossible when guided by a so-called Friendly AI, etc. Because I don’t want this to go too long, I will merely point out that there is a difference between individual utility functions and (extrapolated, coherent) societal utility functions. Maybe, just maybe, it’s not possible for everyone to achieve maximal happiness, and some must suffer for the good of the many. As chronic iconoclast, I fear being stomped by the boot of progress. In any case, if you object on this point then please don’t get stuck here. Just presume it and move on; it is important but not a lynchpin of my position.

So as the reasoning goes, I need superintelligent tool AI. And Friendly AI, which is necessarily agent-y, is actually an anti-goal.

So the first question on my quest: is it possible to create tool AGI, without the world ending as a bunch of smart people on LW seem to think would happen? I dove deep into this and came to the conclusion of: “No, it is quite possible to build AGI that does not destroy the world without it being provably Friendly. There are outlines of adequate safety measures that once fully fleshed out could be employed to safeguard so-called tool/oracle AI that is used to jumpstart a Singularity, but still leaves humans, or our transhuman descendants at the top of the metaphorical food chain.”

Again, I’m sorry that I’m skipping justification of this point, but this is a necro comment to a years-old discussion thread not a full post, or the sequence of posts that would be required. When I later decided that LW’s largely non-evidential approach to philosophy was what had obscured reality here, I decided to leave and go about building this AI rather than discussing it further.

It was not long after when I belatedly discovered the obvious fact that the arguments I made against the possibility of a “FOOM” moving fast enough to cause existential risk also argued against the utility of AGI for jumpstarting a real Singularity, of the world-altering Vernor Vinge type, which I had decided was my life’s purpose.

“Oops…”

…is that sound we make when we realize we've wasted years of our lives on an important-sounding problem that turned out to be actually irrelevant to our cause. Oh well. Back to working on the problem of medical nanotechnology directly.

But upon leaving LW and pronouncing the Sequences to be info hazards, I had set a 4-year timer to remind myself to come back and re-evaluate that decision. My inner jury is still out on that point, but in reviewing some posts related to AI safety it occurred to me that solving the control problem also solves most of the mundane problems that I expect AGI projects to encounter.

One of my core objections to the “nightmare scenario” of UFAI is that AGI approaches which are likely to be tried in practice (as opposed to abstract models like AIXI) are far more likely to get stuck early, far far before they reach anything near take-over-the-world levels of power. Probably before they even reach the "figure out how to brew coffee" level. Probably before they even know what coffee is. Debugging an AI in such a stuck state would require manual intervention, which is both a timeline extender and a strong safety property. Doing something non-trivial with the first AGI is likely to take years of iterated development, with plenty of opportunity to introspect and alter course.

However, a side effect of solving the control problem is that it necessarily involves being able to reason about the effects of self-modification on future behavior... which lets the AI avoid getting stuck at all!

If true, this is both good news and bad news.

The good is that a Vingeian Singularity is back on the table! We can solve the world's problems and usher in an age of abundance and post-human states of being in one generation with the power of AI.

The bad is that there is a weird sort of uncanny-valley like situation where AI today is basically safe, but once a partial solution is found to the tiling problem, and perhaps a few other aspects of the AI safety problem, it does become possible to write an UFAI that can “FOOM” with unpredictable consequences.

So I still think the AI x-risk crowd has seriously overblown the issue today. Deepmind’s creations are not going to take over the world and turn us all into paperclips. But, ironically, if MIRI is at least partially successful in their research, then that work could be applied to make a real Clippy-like entity with all the scary consequences.

That said, I don’t expect this to seriously alter my prediction that tool/oracle AI is achievable. So UFAI + partial control solution could be deployed with appropriate boxing safeguards to get us the Singularity with humans at the helm. But I’m still in the midst of a deep cache purge to update my own feelings of likelihood here.

But yeah, I doubt many at MIRI are working on the control problem explicitly because it is necessary to create the scary kind of UFAI (albeit also the kind that can assist humans to hastily solve their mass of problems!).

comment by RaelwayScot · 2016-03-25T01:13:24.238Z · LW(p) · GW(p)

Deutsch briefly summarized his view on AI risks in this podcast episode: https://youtu.be/J21QuHrIqXg?t=3450 (Unfortunately there is no transcript.)

What are your thoughts on his views apart from what you've touched upon above?

comment by MrMind · 2016-03-24T08:35:06.729Z · LW(p) · GW(p)

Thank you for sparing us the trouble of sifting through the garbage... Why the hell must any discussion about AGI start with a bad flight analogy?

comment by Stefan_Schubert · 2016-03-22T14:14:30.956Z · LW(p) · GW(p)

I have a maths question. Suppose that we are scoring n individuals on their performance in an area where there is significant uncertainty. We are categorizing them into a low number of categories, say 4. Effectively we're thereby saying that for the purposes of our scoring, everyone with the same score performs equally well. Suppose that we say that this means that all individuals with that score get assigned the mean actual performance of the individuals with that score. For instance, if there were three people who got the highest score, and their performance equals 8, 12 and 13 units, the assigned performance is 11 units.

Now suppose that we want our scoring system to minimise information loss, so that the assigned performance is on average as close as possible to the actual performance. The question is: how do we achieve this? Specifically, how large a proportion of all individuals should fall into each category, and how does that depend on the performance distribution?

It would seem that if performance is linearly increasing as we go from low to high performers, then all categories should have the same number of individuals, whereas if the increase is exponential, then the higher categories should have a smaller number of individuals. Is there a theorem that proves this, and which exactly specifies how large the categories should be for a given shape of the curve? Thanks.

Replies from: Vitor, philh, johnlawrenceaspden
comment by Vitor · 2016-03-22T16:18:04.547Z · LW(p) · GW(p)

Your problem is called a clustering problem. First of all, you need to answer how you measure your error (information loss, as you call it). Typical error norms used are l1 (sum of absolute errors), l2 (sum of squares of errors, penalizes larger errors more) and l-infinity (maximum error).

Once you select a norm, there always exists a partition that minimizes your error, and to find it there are a bunch of heuristic algorithms, e.g. k-means clustering. Luckily, since your data is one-dimensional and you have very few categories, you can just brute force it (for 4 categories you need to correctly place 3 boundaries, and naively trying all possible positions takes only n^3 runtime)
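
A minimal brute-force sketch of that, assuming the l2 norm and some made-up performance data; with 4 categories it just tries every placement of the 3 boundaries on the sorted values and keeps the split with the lowest squared error to each group's mean.

```python
# Brute-force the k-1 boundaries for k categories on sorted 1-D data,
# scoring each split by squared error to the group mean (l2).
from itertools import combinations

def sq_error(group):
    mean = sum(group) / len(group)
    return sum((x - mean) ** 2 for x in group)

def best_split(values, k=4):
    xs = sorted(values)
    n = len(xs)
    best_err, best_groups = None, None
    for cuts in combinations(range(1, n), k - 1):  # all boundary placements
        bounds = (0,) + cuts + (n,)
        groups = [xs[a:b] for a, b in zip(bounds, bounds[1:])]
        err = sum(sq_error(g) for g in groups)
        if best_err is None or err < best_err:
            best_err, best_groups = err, groups
    return best_err, best_groups

performance = [8, 12, 13, 2, 5, 7, 21, 22, 3, 9, 15, 18]  # made-up data
err, groups = best_split(performance)
for g in groups:
    print(g, "assigned performance:", sum(g) / len(g))
```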

Hope this helps.

Replies from: Stefan_Schubert, gjm
comment by Stefan_Schubert · 2016-03-22T20:45:20.908Z · LW(p) · GW(p)

Thanks a lot! Yes, super-useful.

comment by gjm · 2016-03-22T16:41:08.149Z · LW(p) · GW(p)

A possibly relevant paper for anyone wanting to do this in one dimension to a dataset large enough that they care about efficiency.

comment by philh · 2016-03-22T16:00:21.659Z · LW(p) · GW(p)

If I'm understanding this correctly, it sounds like you're performing k-means clustering.

comment by johnlawrenceaspden · 2016-03-22T15:56:35.014Z · LW(p) · GW(p)

You'd minimize information loss by giving the actual scores.

The argument is 'grading on the curve' vs 'ABCDEF'. The first way is fair, but it promotes extreme competition to be in the top 1% (or 'The Senior Wrangler', as we used to call it), which may not be desirable. The second way hands out random bonuses and penalties to individuals near the arbitrary boundaries.

I was in the top 25% of my year in terms of marks, I believe. I was a 'Senior Optime', or 'got a second'. A class that stretched from around 25%-75%.

Not bitter, or anything.

comment by SanguineEmpiricist · 2016-03-24T20:38:40.580Z · LW(p) · GW(p)

Excellent piece of epistemology from Yudkowsky, someone put this in main right now.

https://www.facebook.com/yudkowsky/posts/10154067130774228

AllLivesMatterButBlackLivesAreEspeciallyLikelyToBeEndedByPolice AndItsOkayForNationalPoliticsToFocusOnThatPartForAWhile

Running this through my parser I was able to extract the statement "All lives matter but black lives are especially likely to be ended by police and it's okay for national politics to focus on that part for a while".

http://imgur.com/sbxLjPb

comment by [deleted] · 2016-03-22T19:44:25.778Z · LW(p) · GW(p)

Containment thread

1. Negotiating self defense

Stranger A: I'm going to beat you up

Stranger B: I have a photographic memory and illustrate professionally.

Stranger A: I could kill, blind or maim you.

Stranger B: I could escape, then submit your likeness to the authorities.

2. Experimental philosophy

Is anyone here an ethical intuitionist or not?

3. Happy home

Observational studies suggest distance to work is inversely associated with happiness. I moved about 15km closer to work and now walk there, and guess what, I'm a lot happier! Potential confounders: moved out of family home, family home was hostile environment.

4. Unhappy babies

Babies frequently cry and are generally helpless. What if they are suffering beyond our comprehension and our shallow memories are our respite from that early trauma?

5. Techno v.s. natural reincarnation

People compare the possibility of resurrection after cryopreservation to the possibility of resurrection without cryopreservation as a slim chance v.s. no chance. I'm skeptical that brain death would imply information theoretic death, when you consider the sheer size of the universe, and the possible arrangements of matter over time that could arrange themselves into the form that materialises you, in some stable way, again. Reincarnation, if you will. My reservation against cryonics is not the slim chance, it's that you're paying for only a slight increase in chance.

Replies from: WalterL, Dagon, Elo
comment by WalterL · 2016-03-22T20:01:12.394Z · LW(p) · GW(p)

I don't understand what's going on here.

What does it mean that this is a "containment thread"?

What is going on with this dialog?

Replies from: Lumifer
comment by Lumifer · 2016-03-22T20:43:22.708Z · LW(p) · GW(p)

Clarity has a habit of shotgunning shards of his mind dumps across multiple comments. He has been trying to limit the spread, though, and place most of these shards within a "containment thread" the purpose of which is to be a designated place for the shards and thus constrain the collateral damage.

Replies from: None
comment by [deleted] · 2016-03-25T09:45:06.883Z · LW(p) · GW(p)

Confirmed

comment by Dagon · 2016-03-24T15:49:39.976Z · LW(p) · GW(p)

negotiating self-defense

B is likely in a poor negotiating position to start with, and better off just giving A the wallet or running. A's strong position comes from the precommitment strategy of poor impulse control: "I don't care if you can get me in trouble, I'm too involved in this attack to think beyond the next few minutes".

Replies from: Lumifer
comment by Lumifer · 2016-03-24T16:03:05.834Z · LW(p) · GW(p)

I'm sure A will listen to Reason.

comment by Elo · 2016-03-23T00:16:53.647Z · LW(p) · GW(p)
  1. You might be motivating an upgrade from a simple "punch him and steal his wallet" to "kill him and get away with it". Might not want to do that.

comment by Gunnar_Zarncke · 2016-03-20T19:55:21.032Z · LW(p) · GW(p)

Meta discussion goes here

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-20T19:56:37.222Z · LW(p) · GW(p)

Apparently no Open Thread was created this week yet. I guess it makes sense to make this one span two weeks. Or does that break some automation here or there?

Replies from: Douglas_Knight, Elo, MrMind
comment by Douglas_Knight · 2016-03-20T20:13:30.871Z · LW(p) · GW(p)

The title is a duplicate of this post, if you really think this open thread is late. Alternatively, it is ~12 hours early.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-21T11:33:27.751Z · LW(p) · GW(p)

I did check for that. Anyway. I renamed it to 21st to 27th

comment by Elo · 2016-03-20T22:27:04.457Z · LW(p) · GW(p)

confused. Douglas_knight is right. I am going to treat this as the 21st->27th open thread; you should change the title.

Replies from: MrMind
comment by MrMind · 2016-03-21T08:05:40.501Z · LW(p) · GW(p)

Exactly. We should PM Gunnar, or create another thread entirely.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-21T11:32:54.403Z · LW(p) · GW(p)

Renamed to 21-27th.

comment by MrMind · 2016-03-21T08:05:17.457Z · LW(p) · GW(p)

Ah, but there's no automation. Only people creating Open Threads out of their own good will!
I still think Open Threads should be weekly... if nobody has created one, you can create one following the customs...

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-03-21T11:32:22.234Z · LW(p) · GW(p)

I know that they are not created automatically. But I wondered whether they are used (indexed, listed,...) in some automatic way that depends on the title or one post per week.

Replies from: philh
comment by philh · 2016-03-21T11:45:45.883Z · LW(p) · GW(p)

IIRC the sidebar used to have a link to the latest open thread, which I think was based on the open_thread tag. That seems to have vanished now.

Replies from: Vaniver
comment by Vaniver · 2016-03-21T13:21:13.345Z · LW(p) · GW(p)

I'm still seeing it, and it is tag-based, I believe. Changing the name seems to have made the links somewhat weird, though (it looks like both open_thread_march_14_march_20_2016 and open_thread_march_21_march_27_2016 might work?).

Replies from: philh
comment by philh · 2016-03-21T13:53:19.337Z · LW(p) · GW(p)

Oh, it shows up on /r/discussion/new, but not on /r/all/recentposts.

Weird. I used to have a page that would redirect you to the latest open thread, finding it through the sidebar API. I took it down a month or so back because the API had vanished, but now it's apparently back.

The important part of the URL of this thread is /nf7/. The stuff after that is intended for human use, you can replace it arbitrarily.

Replies from: Vaniver
comment by Vaniver · 2016-03-21T13:58:53.416Z · LW(p) · GW(p)

The important part of the URL of this thread is /nf7/. The stuff after that is intended for human use, you can replace it arbitrarily.

Good to know!

comment by bogus · 2016-03-24T23:37:15.471Z · LW(p) · GW(p)

Apparently, Microsoft is now hopping on the "friendly AI" bandwagon. Let's just say that their first attempt did not work very well.

Replies from: gjm
comment by gjm · 2016-03-25T00:37:14.644Z · LW(p) · GW(p)

I think it is useful to distinguish between "friendly AI" and "patronizing AI".

comment by [deleted] · 2016-03-24T19:59:06.420Z · LW(p) · GW(p)

Is there a continuum of realizing that you are dreaming? I ask because I sometimes dream of the city where I live, and I would go, 'oh, this is my Dream Kyiv, with steep wooded slopes and broken bridges and a cathedral of The College (all somewhat resembling real places), let's see what we'll get now...' and when I wake up I often remember the overall image.

Replies from: PipFoweraker, polymathwannabe
comment by PipFoweraker · 2016-03-27T00:41:29.969Z · LW(p) · GW(p)

There is a continuum that moves from complete dream-obliviousness (not being aware one has dreamed upon waking) all the way up to comprehensively lucid dreaming, where a dreamer is able to create and control their dream environment at will and then retain an accurate memory upon waking.

There are obvious problems with the self-reporting of dreams and dream recall, so the exact definitions of the continuum are fuzzy, but I'm not aware of anyone seriously disputing the continuum exists.

Also making matters more interesting are the mechanics of dreaming, in terms of what frames of reference the brain uses to create the imagery of the dream. It's not surprising that people dream about places similar to their environments if we think in terms of the raw data in the brain as it dreams.

comment by polymathwannabe · 2016-03-27T04:57:27.157Z · LW(p) · GW(p)

At age 17 I had the common experience of dreaming of my recently deceased mother, but my brain didn't take long to realize that seeing her was not possible, and I realized it was a dream. For some years I kept that ability to quickly see the inconsistencies in the dream world, but as of now my asleep brain is back to normal gullibility. Because I have a strong preference for living in the real world, I very strongly (verbally, actually) forbade my mind from showing me my dead mother again, and it obeyed.

comment by Elo · 2016-03-22T23:51:39.278Z · LW(p) · GW(p)

http://setosa.io/ev/conditional-probability/

Visualisation of conditional probability. It makes it very clear and takes a very short time to understand.
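
As a quick numeric companion to the visual (the events below are made up): conditioning on B just means throwing away the trials where B didn't happen before counting.

```python
# Monte Carlo estimate of P(A | B) = P(A and B) / P(B) for two toy events.
import random

random.seed(1)
trials = 100_000
count_b = count_a_and_b = 0
for _ in range(trials):
    x = random.random()    # a point landing uniformly in [0, 1)
    a = x < 0.5            # event A: the left half
    b = 0.3 < x < 0.8      # event B: a middle band
    if b:
        count_b += 1
        if a:
            count_a_and_b += 1

print(count_a_and_b / count_b)  # ~0.4, since P(A and B)/P(B) = 0.2/0.5
```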

comment by cousin_it · 2016-03-22T13:23:08.839Z · LW(p) · GW(p)

Politically charged comment coming in the wake of the Brussels attack.

I just realized that the deep-sounding question "should we be tolerant of intolerance" is trivially solved by game theory. It's exactly the same question as "should I play Cooperate against someone who plays Defect". The right answer is "no". The people who answer "yes" are just trying to appear virtuous.
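
A minimal sanity check, using standard prisoner's dilemma payoffs (my numbers, purely illustrative): against a player committed to Defect, the payoff-maximizing reply is Defect.

```python
# Best response against an opponent who always defects, with the usual
# (assumed) prisoner's dilemma payoff ordering: T=5 > R=3 > P=1 > S=0.
payoff = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

their_move = "D"
best = max(["C", "D"], key=lambda my_move: payoff[(my_move, their_move)])
print(best)  # "D"
```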