Open thread, Mar. 20 - Mar. 26, 2017

post by MrMind · 2017-03-20T08:01:08.320Z · LW · GW · Legacy · 208 comments


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

208 comments

Comments sorted by top scores.

comment by Viliam · 2017-03-24T23:59:59.945Z · LW(p) · GW(p)

Okay, so I recently made this joke about future Wikipedia article about Less Wrong:

[article claiming that LW opposes feelings and supports neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of a monster you have become. All LW members will be fired from their jobs.

A few days later I actually looked at the Wikipedia article about Less Wrong:

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, calling it "stupid". Discussion of Roko's basilisk was banned on LessWrong for several years before the ban was lifted in October 2015.

The majority of the LessWrong userbase identifies as atheist, consequentialist, white and male.

The neoreactionary movement is associated with LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology. In the 2014 self-selected user survey, 29 users representing 1.9% of survey respondents identified as "neoreactionary". Yudkowsky has strongly repudiated neoreaction.

Well... technically, the article admits that at least Yudkowsky considers the basilisk stupid, and disagrees with neoreaction. Connotationally, it suggests that the basilisk and neoreaction are 50% of what is worth mentioning about LW, because that's the fraction of the article these topics got.

Oh, and David Gerard is actively editing this page. Why am I so completely unsurprised? His contributions include:

  • making a link to a separate article for Roko's basilisk (link), which luckily didn't materialize;
  • removing suggested headers "Rationality", "Cognitive bias", "Heuristic", "Effective altruism", "Machine Intelligence Research Institute" (link) saying that "all of these are already in the body text"; but...
  • adding a header for Roko's basilisk (link);
  • shortening a paragraph on LW's connection to effective altruism (link) -- by the way, the paragraph is completely missing from the current version of the article;
  • an edit war emphasising that it is finally okay to talk on LW about the basilisk (link, link, link, link, link);
  • restoring the deleted section on the basilisk (link) saying that it's "far and away the single thing it's most famous for";
  • adding neoreaction as one of the topics discussed on LW (link), later removing other topics competing for attention (link), and adding a quote that LW "attracted some readers and commenters affiliated with the alt-right and neoreaction, that broad cohort of neofascist, white nationalist and misogynist trolls" (link);

...in summary, removing or shortening mentions of cognitive biases and effective altruism, and adding or developing mentions of the basilisk and neoreaction.

Sigh.

EDIT: So, looking back at my prediction that...

Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games.

...I'd say I was (1) right about the basilisk; (2) partially right about the white supremacism, which at this moment is not mentioned explicitly (yet! growth mindset), but the article says that the userbase is mostly white and male, and discusses eugenics; and (3) wrong about the computer games. 50% success rate!

Replies from: Elo, TheAncientGeek
comment by Elo · 2017-03-25T00:45:18.198Z · LW(p) · GW(p)

can we fix this please?

Edit: I will work on it.

Replies from: Viliam
comment by Viliam · 2017-03-25T15:14:37.919Z · LW(p) · GW(p)

I'd suggest being careful about your approach. If you lose this battle, you may not get another chance. David Gerard most likely has 100 times more experience with wiki battling than you. Essentially, when you make up a strategy, sleep on it, and then try imagining how a person already primed against LW would read your words.

For example, expect that any edit made by anyone associated with LW will be (1) traced back to their identity and LW account, and consequently (2) reverted, as a conflict of interest. And everyone will be like "ugh, these LW guys are trying to manipulate our website", so the next time they are not going to even listen to any of us.

Currently my best idea -- I haven't made any steps yet, just thinking -- is to post a reaction to the article's Talk page, without even touching the article. This would have two advantages: (1) No one can accuse me of hiding my partiality, because that's what I would openly disclose first, and because I would plainly say that as a person with a conflict of interest I shouldn't edit the article myself. Kinda establishing myself as the good guy who follows the Wikipedia rules. (2) A change to the article could simply be reverted by David, but he is not allowed to remove my reaction from the talk page, unless I make a mistake and break some other rule. That means that even if I lose the battle, people editing the article in the future will be able to see my reaction. This is a meta move: the goal is not to change the article, but to convince the impartial Wikipedia editors that it should be changed. If I succeed in convincing them, I don't have to do the edit myself; someone else will. On the other hand, if I fail to convince them, any edit would likely be reverted by David, and I have neither time nor will to play wiki wars.

What would be the content of the reaction? Let's start with the assumption that on Wikipedia no one gives a fuck about Less Wrong, rationality, AI, Eliezer, etc.; to most people this is just an annoying noise. By drawing their attention to the topic, you are annoying them even more. And they don't really care about who is right, only who is technically correct. That's the bad news. The good news is that they equally don't give a fuck about RationalWiki or David. What they do care about is Wikipedia, and following the rules of Wikipedia. Therefore the core of my reaction would be this: David Gerard has a conflict of interest about this topic; therefore he should not be allowed to edit it, and all his previous edits should be treated with suspicion. The rest is simply preparing my case, as well as I can, for the judge and the jury, who are definitely not Bayesians, and want to see "solid", not probabilistic arguments.

The argument for David's conflict of interest is threefold. (1) He is a representative (admin? not sure) of RationalWiki, which in some sense is LessWrong's direct competitor, so it's kinda like having a director of Pepsi Cola edit the article on Coca Cola, only at a million times smaller scale. How are these two websites competitors? They both target the same niche, which is approximately "a young intelligent educated pro-science atheist, who cares a lot about his self-image as 'rational'". They have "rational" in their name, we have it pretty much everywhere except in the name; we compete for being the online authorities on the same word. (2) He has a history of, uhm, trying to associate LW with things he does not like. He made (not sure about this? certainly contributed a lot) the RW article on Roko's Basilisk several years ago; LW complained about RW already in 2012. Note: It does not matter for this point whether RW or LW was actually right or wrong; I am just trying to establish that these two have several years of mutual dislike. (3) This would be most difficult to prove, but I believe that most sensational information about LW was actually inspired by RW. I think most mentions of Roko's Basilisk could be traced back to their article. So what David is currently doing on Wikipedia is somewhat similar to citogenesis... he writes something on his website, media find it and include it in their sensationalist reports, then he "impartially" quotes the media for Wikipedia. On some level, yes, the incident happened (there was one comment, which was once deleted by Eliezer -- as if nothing similar ever happened on any online forum), but the whole reason for its "notability" is, well, David Gerard; without his hard work, no one would give a fuck.

So this is the core, and then there are some additional details. Such as, it is misleading to tell the readers what 1% of LW survey respondents identify as, without even mentioning the remaining 99%. Clearly, "1% neoreactionaries" is supposed to give it a right-wing image, which adding "also, 4% communists, and 20% socialists" (I am just making the numbers up at the moment) would immediately disprove. And the general pattern of David's edits, of increasing the length of the parts talking about the basilisk and neoreaction, and decreasing the length of everything else.

My thoughts so far. But I am quite a noob as far as wiki wars are concerned, so maybe there is an obvious flaw in this that I haven't noticed. Maybe it would be best if a group of people could cooperate in precise wording of the comment (probably at a bit more private place, so that parts of the debate couldn't be later quoted out of context).

Replies from: ChristianKl, David_Gerard, David_Gerard, David_Gerard
comment by ChristianKl · 2017-03-27T08:27:00.599Z · LW(p) · GW(p)

It's worth noting that David Gerard was a LW contributor with a significant amount of karma: http://lesswrong.com/user/David_Gerard/

comment by David_Gerard · 2017-04-23T01:06:43.482Z · LW(p) · GW(p)

This isn't what "conflict of interest" means at Wikipedia. You probably want to review WP:COI, and I mean "review" it in a manner where you try to understand what it's getting at rather than looking for loopholes that you think will let you do the antisocial thing you're contemplating. Your posited approach is the same one that didn't work for the cryptocurrency advocates either. (And "RationalWiki is a competing website therefore his edits must be COI" has failed for many cranks, because it's trivially obvious that their true rejection is that I edited at all and disagreed with them, much as that's your true rejection.) Being an advocate who's written a post specifically setting out a plan, your comment above would, in any serious Wikipedia dispute on the topic, be prima facie evidence that you were attempting to brigade Wikipedia for the benefit of your own conflict of interest. But, y'know, knock yourself out in the best of faith, we're writing an encyclopedia here after all and every bit helps. HTH!

If you really want to make the article better, the guideline you want to take to heart is WP:RS, and a whacking dose of WP:NOR. Advocacy editing like you've just mapped out a detailed plan for is a good way to get reverted, and blocked if you persist.

Replies from: Viliam
comment by Viliam · 2017-04-24T10:19:12.556Z · LW(p) · GW(p)

Is any of the following not true?

  • You are one of the 2 or 3 most vocal critics of LW worldwide, for years, so this is your pet issue, and you are far from impartial.

  • A lot of what the "reliable sources" write about LW originates from your writing about LW.

  • You are cherry-picking facts that describe LW in a certain light: For example, you mention that some readers of LW identify as neoreactionaries, but fail to mention that some of them identify as e.g. communists. You keep adding Roko's basilisk as one of the main topics about LW, but remove mentions of e.g. effective altruism, despite the fact that there is at least 100 times more debate on LW about the latter than about the former.

Replies from: David_Gerard
comment by David_Gerard · 2017-04-25T07:27:26.177Z · LW(p) · GW(p)

The first two would suggest I'm a subject-matter expert, and particularly the second if the "reliable sources" consistently endorse my stuff, as you observe they do. This suggests I'm viewed as knowing what I'm talking about and should continue. (Be careful your argument makes the argument you think it's making.) The third is that you dislike my opinion, which is fine, but also irrelevant. The final sentence fails to address any WP:RS-related criterion. HTH!

Replies from: eternal_neophyte, Viliam
comment by eternal_neophyte · 2017-04-25T12:40:38.063Z · LW(p) · GW(p)

The first two would suggest I'm a subject-matter expert

Why? Are the two or three most vocal critics of evolution also experts? Does the fact that newspapers quote Michio Kaku or Bill Nye on the dangers of global warming make them climatology experts?

comment by Viliam · 2017-04-25T09:54:54.436Z · LW(p) · GW(p)

Oh, I see, it's one of those irregular words:

I am a subject-matter expert
you have a conflict of interest

Replies from: TheAncientGeek, David_Gerard
comment by TheAncientGeek · 2017-04-25T11:16:11.310Z · LW(p) · GW(p)

He is a paid shill

Replies from: David_Gerard
comment by David_Gerard · 2017-04-25T12:08:14.796Z · LW(p) · GW(p)

despite hearing that one a lot at RationalWiki, it turns out the big Soros bucks are thinner on the ground than many a valiant truthseeker thinks

Replies from: gjm
comment by gjm · 2017-04-25T15:53:59.103Z · LW(p) · GW(p)

In case it wasn't obvious (it probably was, in which case I apologize for insulting your intelligence, or more precisely I apologize so as not to insult your intelligence), TheAncientGeek was not in fact making a claim about you or your relationship with deep-pocketed malefactors but just completing the traditional "irregular verb" template.

Replies from: David_Gerard
comment by David_Gerard · 2017-04-25T18:42:50.453Z · LW(p) · GW(p)

That's fine :-) It ties in with what I commented above, i.e. conspiracists first assuming that disagreement must be culpable malice.

Replies from: gjm
comment by gjm · 2017-04-25T21:32:06.124Z · LW(p) · GW(p)

I think you must somehow have read what I wrote as the exact reverse of what I intended. (Unless you are calling yourself a conspiracist.) TAG is not assuming that anything must be culpable malice, he is just finishing off a joke left 2/3 done.

Replies from: David_Gerard
comment by David_Gerard · 2017-04-25T23:09:49.482Z · LW(p) · GW(p)

That's the joke, when a conspiracist calls one a "paid shill".

Replies from: gjm
comment by gjm · 2017-04-26T01:09:49.237Z · LW(p) · GW(p)

No one called anyone a paid shill.

Perhaps I am just being particularly dim at the moment. Perhaps you're being particularly obtuse for some reason. Either way, probably best if I drop this now.

comment by David_Gerard · 2017-04-25T11:57:25.125Z · LW(p) · GW(p)

Or just what words mean in the context in question, keeping in mind that we are indeed speaking in a particular context.

[here, let me do your homework for you]

In particular, expertise does not constitute a Wikipedia conflict of interest:

https://en.wikipedia.org/wiki/Wikipedia:Conflict_of_interest#External_roles_and_relationships

While editing Wikipedia, an editor's primary role is to further the interests of the encyclopedia. When an external role or relationship could reasonably be said to undermine that primary role, the editor has a conflict of interest. (Similarly, a judge's primary role as an impartial adjudicator is undermined if she is married to the defendant.)

Any external relationship—personal, religious, political, academic, financial or legal—can trigger a COI. How close the relationship needs to be before it becomes a concern on Wikipedia is governed by common sense. For example, an article about a band should not be written by the band's manager, and a biography should not be an autobiography or written by the subject's spouse.

Subject-matter experts are welcome to contribute within their areas of expertise, subject to the guidance on financial conflict of interest, while making sure that their external roles and relationships in that field do not interfere with their primary role on Wikipedia.

Note "the subject doesn't think you're enough of a fan" isn't listed.

Further down that section:

COI is not simply bias

Determining that someone has a COI is a description of a situation. It is not a judgment about that person's state of mind or integrity.[5] A COI can exist in the absence of bias, and bias regularly exists in the absence of a COI. Beliefs and desires may lead to biased editing, but they do not constitute a COI. COI emerges from an editor's roles and relationships, and the tendency to bias that we assume exists when those roles and relationships conflict.[9] COI is like "dirt in a sensitive gauge."[10]

On experts:

https://en.wikipedia.org/wiki/Wikipedia:Expert_editors

Expert editors are cautioned to be mindful of the potential conflict of interest that may arise if editing articles which concern an expert's own research, writings, discoveries, or the article about herself/himself. Wikipedia's conflict of interest policy does allow an editor to include information from his or her own publications in Wikipedia articles and to cite them. This may only be done when the editors are sure that the Wikipedia article maintains a neutral point of view and their material has been published in a reliable source by a third party. If the neutrality or reliability are questioned, it is Wikipedia consensus, rather than the expert editor, that decides what is to be done. When in doubt, it is good practice for a person who may have a conflict of interest to disclose it on the relevant article's talk page and to suggest changes there rather than in the article. Transparency is essential to the workings of Wikipedia.

i.e., don't blatantly promote yourself, run it past others first.

You're still attempting to use the term "conflict of interest" when what you actually seem to mean is "he disagrees with me therefore should not be saying things." That particular tool, the term "conflict of interest", really doesn't do what you think it does.

The way Wikipedia deals with "he disagrees with me therefore should not be saying things" is to look at the sources used. Also, "You shouldn't use source X because its argument originally came from Y which is biased" is not generally a winning argument on Wikipedia without a lot more work.

Before you then claim bias as a reason, let me quote again:

https://en.wikipedia.org/wiki/Wikipedia:Identifying_reliable_sources#Biased_or_opinionated_sources

Wikipedia articles are required to present a neutral point of view. However, reliable sources are not required to be neutral, unbiased, or objective. Sometimes non-neutral sources are the best possible sources for supporting information about the different viewpoints held on a subject.

Common sources of bias include political, financial, religious, philosophical, or other beliefs. Although a source may be biased, it may be reliable in the specific context. When dealing with a potentially biased source, editors should consider whether the source meets the normal requirements for reliable sources, such as editorial control and a reputation for fact-checking. Editors should also consider whether the bias makes it appropriate to use in-text attribution to the source, as in "Feminist Betty Friedan wrote that...", "According to the Marxist economist Harry Magdoff...," or "Conservative Republican presidential candidate Barry Goldwater believed that...".

So if, as you note, the Reliable Sources regularly use me, that would indicate my opinions would be worth taking note of - rather than the opposite. As I said, be careful you're making the argument you think you are.

(I don't self-label as an "expert", I do claim to know a thing or two about the area. You're the one who tried to argue from my opinions being taken seriously by the "reliable sources".)

Replies from: gjm
comment by gjm · 2017-04-25T17:22:17.829Z · LW(p) · GW(p)

No one is actually suggesting that either "expertise" or "not being enough of a fan" constitutes a conflict of interest, nor are those the attributes you're being accused of having.

On the other hand, the accusations actually being made are a little unclear and vary from occasion to occasion, so let me try to pin them down a bit. I think the ones worth taking seriously are three in number. Only one of them relates specifically to conflicts of interest in the Wikipedia sense; the others would (so far as I can see) not be grounds for any kind of complaint or action on Wikipedia even if perfectly correct in every detail.

So, they are: (1) That you are, for whatever reasons, hostile to Less Wrong (and the LW-style-rationalist community generally, so far as there is such a thing) and keen to portray it in a bad light. (2) That as a result of #1 you have in fact taken steps to portray Less Wrong (a.t.Lsr.c.g.s.f.a.t.i.s.a.t.) in a bad light, even when that has required you to be deliberately misleading. (3) That your close affiliation with another organization competing for mindshare, namely RationalWiki, constitutes a WP:COI when writing about Less Wrong.

Note that #3 is quite different in character from a similar claim that might be made by, say, a creationist organization; worsening the reputation of the Institute for Creation Research is unlikely to get more people to visit RationalWiki and admire your work there (perhaps even the opposite), whereas worsening the reputation of Less Wrong might do. RW is in conflict with the ICR, but (at least arguably) in competition with LW.

For the avoidance of doubt, I am not endorsing any of those accusations; just trying to clarify what they are, because it seems like you're addressing different ones.

Replies from: David_Gerard
comment by David_Gerard · 2017-04-25T18:34:16.529Z · LW(p) · GW(p)

I already answered #3: the true rejection seems to be not "you are editing about us on Wikipedia to advance RationalWiki at our expense" (which is a complicated and not very plausible claim that would need all its parts demonstrated), but "you are editing about us in a way we don't like".

Someone from the IEET tried to seriously claim (COI Noticeboard and all) that I shouldn't comment on the deletion nomination for their article - I didn't even nominate it, just commented - on the basis that IEET is a 501(c)3 and RationalWiki is also a 501(c)3 and therefore in sufficiently direct competition that this would be a Wikipedia COI. It's generally a bad and terrible claim and it's blitheringly obvious to any experienced Wikipedia editor that it's stretching for an excuse.

Variations on #3 are a perennial of cranks of all sorts who don't want a skeptical editor writing about them at Wikipedia, and will first attempt not to engage with the issues and sources, but to stop the editor from writing about them. (My favourite personal example is this Sorcha Faal fan who revealed I was editing as an NSA shill.) So it should really be considered an example of the crackpot offer, and if you find yourself thinking it then it would be worth thinking again.

(No, I don't know why cranks keep thinking implausible claims of COI are a slam dunk move to neutralise the hated outgroup. I hypothesise a tendency to conspiracist thinking, and first assuming malfeasance as an explanation for disagreement. So if you find yourself doing that, it's another one to watch out for.)

Replies from: gjm
comment by gjm · 2017-04-25T22:58:03.753Z · LW(p) · GW(p)

I already answered #3

No, you really didn't, you dismissed it as not worth answering and proposed that people claiming #3 can't possibly mean it and must be using it as cover for something else more blatantly unreasonable.

I understand that #3 may seem like an easy route for anyone who wants to shut someone up on Wikipedia without actually refuting them or finding anything concrete they're doing wrong. It is, of course, possible that Viliam is not sincere in suggesting that you have a conflict of interest here, and it is also possible (note that this is a separate question) that if he isn't sincere then his actual reason for suggesting that you have is simply that he wishes you weren't saying what you are and feels somehow entitled to stop you for that reason alone. But you haven't given any, y'know, actual reasons to think that those things are true.

Unless you count one of these: (1) "Less Wrong is obviously a nest of crackpots, so we should expect them to behave like crackpots, and saying COI when they mean 'I wish you were saying nice things about us' is a thing crackpots do". Or (2) "This is an accusation that I have a COI, and obviously I don't have one, so it must be insincere and match whatever other insincere sort of COI accusation I've seen before". I hope it's clear that neither of those is a good argument.

Someone from the IEET tried to seriously claim [...]

I read the discussion. The person in question is certainly a transhumanist but I don't see any evidence he is or was a member of the IEET, and the argument he made was certainly bad but you didn't describe it accurately at all. And, again, the case is not analogous to the LW one: conflict versus competition again.

first assuming malfeasance as an explanation for disagreement

I agree, that's a bad idea. I don't quite understand how you're applying it here, though. So far as I can tell, your opponents (for want of a better word) here are not troubled that you disagree with them (e.g., they don't deny that Roko's basilisk was a thing or that some neoreactionaries have taken an interest in LW); they are objecting to your alleged behaviour: they think you are trying to give the impression that Roko's basilisk is important to LWers' thinking and that LW is a hive of neoreactionaries, and they don't think you're doing that because you sincerely believe those things.

So it's malfeasance as an explanation for malfeasance, not malfeasance as an explanation for disagreement.


I repeat that I am attempting to describe, not endorsing, but perhaps I should sketch my own opinions lest that be thought insincere. So here goes; if (as I would recommend) you aren't actually concerned about my opinions, feel free to ignore what follows unless they do become an issue.

  • I do have the impression that you wish LW to be badly thought of, and that this goes beyond merely wanting it to be viewed accurately-as-you-see-it. I find this puzzling because in other contexts (and also in this context, in the past when your attitude seemed different) the evidence available to me suggests that you are generally reasonable and fair. (Yes, I have of course considered the possibility that I am puzzled because LW really is just that bad and I'm failing to see it. I'm pretty sure that isn't the case, but I could of course be wrong.)

  • I do not think the case that you have a WP:COI on account of your association with RationalWiki, still less because you allegedly despise LW, is at all a strong one, and I think that if Viliam hopes that making that argument would do much to your credibility on Wikipedia then his hopes would be disappointed if tested.

  • I note that Viliam made that suggestion with a host of qualifications about how he isn't a Wikipedia expert and was not claiming with any great confidence that you do in fact have a COI, nor that it would be a good idea to say that you do.

  • I think his suggestion was less than perfectly sincere in the following sense: he made it not so much because he thinks a reasonable person would hold that you have a conflict of interest, as because he thinks (sincerely) that you might have a COI in Wikipedia's technical sense, and considers it appropriate to respond with Wikipedia technicalities to an attack founded on Wikipedia technicalities.

  • The current state of the Wikipedia page on Less Wrong doesn't appear terribly bad to me, and to some extent it's the way it is because Wikipedia's notion of "reliable sources" gives a lot of weight to what has attracted the interest of journalists, which isn't your fault. But there are some things that seem ... odd. Here's the oddest:

    • Let's look at those two refs (placed there by you) for the statement that "the neoreactionary movement takes an interest in Less Wrong" (which, to be sure, could be a lot worse ... oh, I see that you originally wrote "is associated with Less Wrong" and someone softened it; well done, someone). First we have a TechCrunch article. Sum total of what it says is that "you may have seen" neoreactionaries crop up "on tech hangouts like Hacker News and Less Wrong". I've seen racism on Facebook; is Facebook "associated with racism" in any useful sense? Second we have a review of "Neoreaction: a basilisk" claiming "The embryo of the [neoreactionary] movement lived in the community pages of Yudkowsky’s blog LessWrong", which you know as well as I do to be flatly false (and so do the makers and editors of WP's page on neoreaction, which quite rightly doesn't even mention Less Wrong). These may be Reliable Sources in the sense that they are the kind of document that Wikipedia is allowed to pay attention to. They are not reliable sources for the claim that neoreaction and Less Wrong have anything to do with one another, because the first doesn't say that and the second says it but is (if I've understood correctly) uncritically reporting someone else's downright lie.

    • I have to say that this looks exactly like the sort of thing I would expect to see if you were trying to make Less Wrong look bad without much regard for truth, and using Wikipedia's guiding principles as "cover" rather than as a tool for avoiding error. I hope that appearance is illusory. If you'd like to convince me it is, I'm all ears.

Replies from: David_Gerard
comment by David_Gerard · 2017-04-25T23:13:58.285Z · LW(p) · GW(p)

Viliam started with a proposal to brigade Wikipedia. This was sufficiently prima facie bad faith that I didn't, and still don't, feel any obligation to bend over backwards to construct a kernel of value from his post. You certainly don't have to believe me that his words 100% pattern match to extruded crank product from my perspective, but I feel it's worth noting that they do.

I feel answering his call to brigade with a couple of detailed link- and quote-heavy comments trying to explain what the rules actually are and how they actually work constituted a reasonable effort to respond sincerely and helpfully on my part, and offer guidance on how not to 100% pattern match to extruded crank product in any prospective editor's future Wikipedia endeavours.

If you have problems with the Wikipedia article, these are best addressed on the article talk page, and 0% here. (Readers attempting this should be sure to keep to the issues and not attempt to personalise issues as being about other editors.)

Anything further will be repeating ourselves, I think.

Replies from: gjm
comment by gjm · 2017-04-26T01:15:09.716Z · LW(p) · GW(p)

Viliam started with a proposal to brigade Wikipedia.

No, he didn't. He started with a description of something he might do individually. Literally the only things he says about anyone else editing Wikipedia are (1) to caution someone who stated an intention of doing so not to rush in, and (2) to speculate that if he does something like this it might be best for a group of people to cooperate on figuring out how to word it.

comment by David_Gerard · 2017-04-23T01:26:36.376Z · LW(p) · GW(p)

(More generally as a Wikipedia editor I find myself perennially amazed at advocates for some minor cause who seem to seriously think that Wikipedia articles on their minor cause should only be edited by advocates, and that all edits by people who aren't advocates must somehow be wrong and bad and against the rules. Even though the relevant rules are (a) quite simple conceptually (b) say nothing of the sort. You'd almost think they don't have the slightest understanding of what Wikipedia is about, and only cared about advocating their cause and bugger the encyclopedia.)

comment by David_Gerard · 2017-04-23T01:29:45.442Z · LW(p) · GW(p)

but in the context of Wikipedia, you should after all keep in mind that I am an NSA shill.

comment by TheAncientGeek · 2017-03-26T08:58:56.424Z · LW(p) · GW(p)

Yikes. The current version of the WP article is a lot less balanced than the RW one!

Also, the edit warring is two-way... someone wholesale deleted the Rs B section.

Replies from: Viliam
comment by Viliam · 2017-03-27T09:15:06.758Z · LW(p) · GW(p)

Also, the edit warring is two-way... someone wholesale deleted the Rs B section.

Problem is, this is probably not good news for LW. Tomorrow, the RB section will most likely be back, possibly with a warning on the talk page that the evil cultists from LW are trying to hide their scandals.

comment by [deleted] · 2017-03-24T22:20:18.129Z · LW(p) · GW(p)

PhD acquired.

Replies from: Regex
comment by Regex · 2017-03-25T01:05:37.112Z · LW(p) · GW(p)

Now people have to call you doctor CellBioGuy

comment by turchin · 2017-03-23T11:12:46.618Z · LW(p) · GW(p)

Link on "discussion" disappeared from the lesswrong.com. Is it planned change? Or only for me?

Replies from: Elo
comment by Elo · 2017-03-23T11:31:11.873Z · LW(p) · GW(p)

Accidental CSS pull that caused unusual things. It's being worked on. Apologies.

comment by tristanm · 2017-03-20T22:54:00.643Z · LW(p) · GW(p)

Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.

Firstly, what do I mean by 'anti-rationality'? I don't mean that in particular people will criticize LessWrong. I mean it in the general sense of skepticism towards science / logical reasoning, skepticism towards technology, and a hostility to rationalistic methods applied to things like policy, politics, economics, education, and things like that.

And there are a few things I think we will observe first (some of which we are already observing) that will act as a catalyst for this. Number one, if economic inequality increases, I think a lot of the blame for this will be placed on the elite (as it always is), but in particular the cognitive elite (which makes up an ever-increasing share of the elite). Whatever the views of the cognitive elite are will become the philosophy of evil from the perspective of the masses. Because the elite are increasingly made up of very high intelligence people, many of whom have a connection to technology or Silicon Valley, we should expect that the dominant worldview of that environment will increasingly contrast with the worldview of those who haven't benefited, or at least do not perceive themselves to benefit, from the increasing growth and wealth driven by those people. What's worse, it seems that even if economic gains benefit those at the very bottom too, if inequality still increases, that is the only thing that will get noticed.

The second issue is that as technology improves, our powers of inference increase, and privacy defenses become weaker. It's already the case that we can predict a person's behavior to some degree and use that knowledge to our advantage (if you're trying to sell something to them, give them / deny them a loan, judge whether they would be a good employee, or predict whether or not they will commit a crime). There's already a push-back against this, in the sense that certain variables correlate with things we don't want them to, like race. This implies that the standard definition of privacy, in the sense of simply not having access to specific variables, isn't strong enough. What's desired is not being able to infer the values of certain variables, either, which is a much, much stronger condition. This is a deep, non-trivial problem that is unlikely to be solved quickly - and it runs into the same issues as all problems concerning discrimination do, which is how to define 'bias'. Is reducing bias at the expense of truth even a worthy goal? This shifts the debate towards programmers, statisticians and data scientists who are left with the burden of never making a mistake in this area. "Weapons of Math Destruction" is a good example of the way this issue gets treated.
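
(A toy sketch of the inference point, using synthetic data, hypothetical feature names, and scikit-learn as an assumed tool: even after the sensitive variable is dropped from a dataset, a simple model can often recover it from correlated proxies.)

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic example: the "sensitive" attribute is removed from the features,
    # but two remaining variables correlate with it and give it away.
    rng = np.random.default_rng(0)
    n = 5000
    sensitive = rng.integers(0, 2, size=n)            # the attribute we "removed"
    zip_area = sensitive * 2.0 + rng.normal(size=n)   # proxy features that merely
    income = sensitive * 1.5 + rng.normal(size=n)     # correlate with it
    X = np.column_stack([zip_area, income])

    X_train, X_test, y_train, y_test = train_test_split(X, sensitive, random_state=0)
    clf = LogisticRegression().fit(X_train, y_train)
    print("accuracy recovering the 'removed' attribute:", clf.score(X_test, y_test))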

We will also continue to observe a lot of ideas from postmodernism being adopted as part of the political ideology of the left. Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme. And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought. So if a lot of the above problems get worse, I think there is a chance that rationalism will get blamed, as it has been in the framework of postmodernism.

The summary of this is: As politics becomes warfare between worldviews rather than arguments for and against various beliefs, populist hostility gets directed towards what is perceived to be the worldview of the elite. The elite tend to be more rationalist, and so that hostility may get directed towards rationalism itself.

I think a lot more can be said about this, but maybe that's best left to a full post, I'm not sure. Let me know if this was too long / short or poorly worded.

Replies from: username2, satt, TheAncientGeek, Viliam
comment by username2 · 2017-03-21T03:08:48.244Z · LW(p) · GW(p)

(I thought the post was reasonably written.)

Can you say a word on whether (and how) this phenomenon you describe ("populist hostility gets directed towards what is perceived to be the worldview of the elite") is different from the past? It seems to me that this is a force that is always present, often led to "problems" (eg, the Luddite movement), but usually (though not always) the general population came around more in believing the same things as "the elites".

Replies from: tristanm
comment by tristanm · 2017-03-21T20:48:54.020Z · LW(p) · GW(p)

The process is not different from what occurred in the past, and I think this was basically the catalyst for anti-Semitism during the post-Industrial Revolution era. You observe a characteristic of a group of people who seem to be doing a lot better than you, in that case a lot of them happened to be Jewish, and so you then associate their Jewishness with your lack of success and unhappiness.

The main difference is that society continues to modernize and technology improves. Bad ideas for why some people are better off than others become unpopular. Actual biases and unfairness in the system gradually disappear. But despite that, inequality remains and in fact seems to be rising. What happens is that the only thing left to blame is instrumental rationality. I imagine that people will look as hard as they can for bias and unfairness for as long as possible, and will want to see it in people who are instrumentally rational.

In a free society, (and even more so as a society becomes freer and true bigotry disappears) some people will be better off just because they are better at making themselves better off, and the degree to which people vary in that ability is quite staggering. But psychologically it is too difficult for many to accept this, because no one wants to believe in inherent differences. So it's sort of a paradoxical result of our society actually improving.

comment by satt · 2017-03-24T02:18:56.526Z · LW(p) · GW(p)

I think a lot more can be said about this, but maybe that's best left to a full post, I'm not sure. Let me know if this was too long / short or poorly worded.

Writing style looks fine. My quibbles would be with the empirical claims/predictions/speculations.

Is the elite really more of a cognitive elite than in the past?

Strenze's 2007 meta-analysis (previously) analyzed how the correlations between IQ and education, IQ and occupational level, and IQ and income changed over time. The first two correlations decreased and the third held level at a modest 0.2.

Will elite worldviews increasingly diverge from the worldviews of those left behind economically?

Maybe, although just as there are forces for divergence, there are forces for convergence. The media can, and do, transmit elite-aligned worldviews just as they transmit elite-opposed worldviews, while elites fund political activity, and even the occasional political movement.

Would increasing inequality really prevent people from noticing economic gains for the poorest?

That notion sounds like hyperbole to me. The media and people's social networks are large, and can discuss many economic issues at once. Even people who spend a good chunk of time discussing inequality discuss gains (or losses) of those with low income or wealth.

For instance, Branko Milanović, whose standing in economics comes from his studies of inequality, is probably best known for his elephant chart, which presents income gains across the global income distribution, down to the 5th percentile. (Which percentile, incidentally, did not see an increase in real income between 1988 and 2008, according to the chart.)

Also, while the Anglosphere's discussed inequality a great deal in the 2010s, that seems to me a vogue produced by the one-two-three punch of the Great Recession, the Occupy movement, and the economist feeding frenzy around Thomas Piketty's book. Before then, I reckon most of the non-economists who drew special attention to economic inequality were left-leaning activists and pundits in particular. That could become the norm once again, and if so, concerns about poverty would likely become more salient to normal people than concerns about inequality.

Will the left continue adopting lots of ideas from postmodernism?

This is going to depend on how we define postmodernism, which is a vexed enough question that I won't dive deeply into it (at least TheAncientGeek and bogus have taken it up). If we just define (however dodgily) postmodernism to be a synonym for anti-rationalism, I'm not sure the left (in the Anglosphere, since that's the place we're presumably really talking about) is discernibly more postmodernist/anti-rationalist than it was during the campus/culture wars of the 1980s/1990s. People tend to point to specific incidents when they talk about this question, rather than try to systematically estimate change over time.

Granted, even if the left isn't adopting any new postmodern/anti-rationalist ideas, the ideas already bouncing around in that political wing might percolate further out and trigger a reaction against rationalism. Compounding the risk of such a reaction is the fact that the right wing can also operate as a conduit for those ideas — look at yer Alex Jones and Jason Reza Jorjani types.

Is politics becoming more a war of worldviews than arguments for & against various beliefs?

Maybe, but evidence is needed to answer the question. (And the dichotomy isn't a hard and fast one; wars of worldviews are, at least in part, made up of skirmishes where arguments are lobbed at specific beliefs.)

comment by TheAncientGeek · 2017-03-22T11:56:51.118Z · LW(p) · GW(p)

Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme.

Rationalists (Bay area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make; it's a different set of concerns, goals and approaches.

And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought.

And what's worse is that Bay area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarski's theory, and a few others without settling on a single correct theory).

Bay area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truths) claims are utterly false, but it's an attitude, not a proof. What's worse still is that sceptical and relativistic claims can be supported using the toolkit of rationality. "Postmodernists" tend to be sceptics and relativists, but you don't have to be a "postmodernist" to be a relativist or sceptic. As non-bay-area, mainstream, rationalists understand well. If rationalism is to win over "postmodernism", then it must win rationally, by being able to demonstrate its superiority.

[*] "Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.

Replies from: bogus, tristanm
comment by bogus · 2017-03-22T13:41:46.950Z · LW(p) · GW(p)

"Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.

Not quite. "Poststructuralism" is an ex-post label and many of the thinkers that are most often identified with the emergence of "postmodern" ideas actually rejected it. (Some of them even rejected the whole notion of "postmodernism" as an unhelpful simplification of their actual ideas.) "Continental philosophy" really means the 'old-fashioned' sort of philosophy that Analytic philosophers distanced themselves from; you can certainly view postmodernism as encompassed within continental philosophy, but the notions are quite distinct. Similarly, "critical theory" exists in both 'modernist'/'high modern' and 'postmodern' variants, and one cannot understand the 'postmodern' kind without knowing the 'modern' critical theory it's actually referring to, and quite often criticizing in turn.

All of which is to say that, really, it's complicated, and that while describing postmodernism as a "different set of concerns, goals and approaches" may hit significantly closer to the mark than merely caricaturing it as an antithesis to rationality, neither really captures the worthwhile ideas that 'postmodern' thinkers were actually developing, at least when they were at their best. (--See, the big problem with 'continental philosophy' as a whole is that you often get a few exceedingly worthwhile ideas mixed in with heaps of nonsense and confused thinking, and it can be really hard to tell which is which. Postmodernism is no exception here!)

comment by tristanm · 2017-03-22T18:28:32.025Z · LW(p) · GW(p)

Rationalists (Bay area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make, it's a different set of concerns, goals and approachs.

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth. And the 'goal' of postmodernism is to break apart and criticize everything that claims to be able to do those things. You would be hard pressed to find a better example of something diametrically opposed to rationalism. (I'm going to guess that with high likelihood I'll get accused of not understanding postmodernism by saying that).

And what's worse is that bay area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarki's theory, and a few others without resolving on a single correct theory).

Well yeah, being able to unequivocally define anything is difficult, no argument there. But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things. Then what happens is they get accused by postmodernists of claiming to have the One and Only True and Correct Definition of Truth and Correctness, and of claiming that we have access to the Objective Reality. The point is that as soon as you allow for any leeway in this at all (some in-between area between having 100% access to a true objective reality and having 0% access to it), you basically obtain rationalism. Not because the principles it derives from are that there is an objective reality that is possible to Truly Know, or that there are facts that we know to be 100% true, but only that there are sets of claims we have some degree of confidence in, and other sets of claims we might want to calculate a degree of confidence in based on the first set of claims.

Bay area rationalism is the attitude that that sceptical (no truth) and relativistic (multiple truth) claims are utterly false, but it's an attitude, not a proof.

It happens to be an attitude that works really well in practice, but the other two attitudes can't actually be used in practice if you were to adhere to them fully. They would only be useful for denying anything that someone else believes. I mean, what would it mean to actually hold two beliefs to be completely true but also that they contradict? In probability theory you can have degrees of confidence that are non-zero that add up to one, but it's unclear if this is the same thing as relativism in the sense of "multiple truths". I would guess that it isn't, and multiple truths really means holding two incompatible beliefs to both be true.

If rationalist is to win over "postmodernism", then it must win rationally, by being able to demonstrate it's superioritiy.

Except that you can't demonstrate superiority of anything within the framework of postmodernism. Within rationalism it's very easy and straightforward.

I imagine the reason that some rationalists might find postmodernism to be useful is in the spirit of overcoming biases. This in and of itself I have no problem with - but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

Replies from: bogus, TheAncientGeek
comment by bogus · 2017-03-22T23:58:46.302Z · LW(p) · GW(p)

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

The actual ground-level stance is more like: "If you think that you know some sort of objective reality, etc., it is overwhelmingly likely that you're in fact wrong in some way, and being deluded by cached thoughts." This is an eminently rational attitude to take - 'it's not what you don't know that really gets you into trouble, it's what you know for sure that just ain't so.' The rest of your comment has similar problems, so I'm not going to discuss it in depth. Suffice it to say, postmodern thought is far more subtle than you give it credit for.

Replies from: tristanm
comment by tristanm · 2017-03-23T00:18:32.659Z · LW(p) · GW(p)

If someone claims to hold a belief with absolute 100% certainty, that doesn't require a gigantic modern philosophical edifice in order to refute. It seems like that's setting a very low bar for what postmodernism actually hopes to accomplish.

Replies from: bogus
comment by bogus · 2017-03-23T06:55:30.063Z · LW(p) · GW(p)

If someone claims to hold a belief with absolute 100% certainty, that doesn't require a gigantic modern philosophical edifice in order to refute.

The reason why postmodernism often looks like that superficially is that it specializes in critiquing "gigantic modern philosophical edifice[s]" (emphasis on 'modern'!). It takes a gigantic philosophy to beat a gigantic philosophy, at least in some people's view.

comment by TheAncientGeek · 2017-03-22T20:58:54.283Z · LW(p) · GW(p)

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

Citation needed.

Well yeah, being able to unequivocally define anything is difficult, no argument there

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things.

Engineers use an intuitive and pragmatic definition of truth that allows them to actually do things. Rationalists are more in the philosophy business.

It happens to be an attitude that works really well in practice,

For some values of "work". It's possible to argue in detail that predictive power actually doesn't entail correspondence to ultimate reality, for instance.

I mean, what would it mean to actually hold two beliefs to be completely true but also that they contradict?

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

Except that you can't demonstrate superiority of anything within the framework of postmodernism

That's not what I said.

but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

There's no such thing as postmodernism and I'm not particularly in favour of it. My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

Replies from: tristanm
comment by tristanm · 2017-03-22T23:04:11.485Z · LW(p) · GW(p)

Citation needed.

Citing it is going to be difficult; even the Stanford Encyclopedia of Philosophy says "That postmodernism is indefinable is a truism." I'm forced to cite philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

There's no such thing as postmodernism

You know exactly what I mean when I use that term, otherwise there would be no discussion. It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc.

If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense. This is mostly the sense in which I refer to it in my original post.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-03-25T19:39:47.248Z · LW(p) · GW(p)

Citing it is going to be difficult,

To which the glib answer is "that's because it isn't true".

" I'm forced to site philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

Dennett gives a concise definition because he has the same simplistic take on the subject as you. What he is not doing is showing that there is an actual group of people who describe themselves as postmodernists and have those views. The use of the term "postmodernist" is a bad sign: it's a term that works like "infidel" and so on, a label for an outgroup, and an ingroup's views on an outgroup are rarely bedrock reality.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

When we, the ingroup, can't define something, it's OK; when they, the outgroup, can't define something, it shows how bad they are.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

People are quite psychologically capable of having compartmentalised beliefs; that sort of thing is pretty ubiquitous, which is why I was able to find an example from the rationalist community itself. Relativism without contextualisation probably doesn't make much sense, but who is proposing it?

There's no such thing as postmodernism

You know exactly what I mean when I use that term, otherwise there would be no discussion.

As you surely know, I mean there is no group of people who both call themselves postmodernists and hold the views you are attributing to postmodernists.

It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc.

It's kind of diffuse. But you can talk about scepticism, relativism, etc, if those are the issues.

If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense.

There's some terrible epistemology on the left, and on the right, and even in rationalism.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?

I mean Yudkowsky's approach, which flies under the flag of Bayesianism but doesn't make much use of formal Bayesianism.
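(For reference, "formal Bayesianism" here presumably means explicit probabilistic updating via Bayes' theorem,

$$ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, $$

i.e. keeping numerical priors and likelihoods and updating them on evidence, as opposed to invoking Bayes as a qualitative slogan.)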

comment by Viliam · 2017-03-21T13:31:28.627Z · LW(p) · GW(p)

I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points, if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality.

That means that the real risk for rationality is not that everyone will attack it. As soon as the main political players all turn against rationality, fighting rationality will become less important for them, because attacking things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad, it's only rationality as practiced by our political opponents that leads to horrible things".

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as: 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)

Replies from: tristanm, bogus, dogiv, Lumifer
comment by tristanm · 2017-03-21T20:27:51.317Z · LW(p) · GW(p)

I don't think it's necessary for 'rationality' to be used as an applause light for this to happen. The only things needed, in my mind, are:

  • A group of people who adopt rationality and are instrumentally rationalist become very successful, wealthy and powerful because of it.
  • This group makes up an increasing share of the wealthy and powerful, because they are better at becoming wealthy and powerful than the old elite.
  • The remaining people who aren't as wealthy or successful or powerful, who haven't adopted rationality, make observations about what the successful group does and associate whatever they do / say with the tribal characteristics and culture of the successful group. The fact that they haven't adopted rationality makes them more likely to do this.

And because the final bullet point is always what occurs throughout history, the only difference - and really the only thing necessary for this to happen - is that rationalists make up a greater share of the elite over time.

comment by bogus · 2017-03-21T17:54:48.904Z · LW(p) · GW(p)

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as: 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party.

Somewhat ironically, this is exactly the sort of cargo-cultish "rationality" that originally led to the emergence of postmodernism, in opposition to it and calling for some much-needed re-evaluation and skepticism around all "cached thoughts". The moral I suppose is that you just can't escape idiocy.

Replies from: tristanm
comment by tristanm · 2017-03-21T20:09:21.347Z · LW(p) · GW(p)

Not exactly. What happened at first was that Marxism - which, in the early 20th century, became the dominant mode of thought for Western intellectuals - was based on rationalist materialism, until it was empirically shown to be wrong by some of the largest social experiments mankind is capable of running. The question for intellectuals who were unwilling to give up Marx after that time was how to save Marxism from empirical reality. The answer to that was postmodernism. You'll find that in most academic departments today, those who identify as Marxists are almost always postmodernists (and you won't find them in economics or political science, but rather in the English, literary criticism and social science departments). Marxists of the rationalist type are pretty much extinct at this point.

Replies from: bogus
comment by bogus · 2017-03-22T03:42:40.454Z · LW(p) · GW(p)

I broadly agree, but you're basically talking about the dynamics that resulted in postmodernism becoming an intellectual fad, devoid of much of its originally-meaningful content. Whereas I'm talking about what the original memeplex was about - i.e what people like the often-misunderstood Jacques Derrida were actually trying to say. It's even clearer when you look at Michel Foucault, who was indeed a rather sharp critic of "high modernity", but didn't even consider himself a post-modernist (whereas he's often regarded as one today). Rather, he was investigating pointed questions like "do modern institutions like medicine, psychiatric care and 'scientific' criminology really make us so much better off compared to the past when we lacked these, or is this merely an illusion due to how these institutions work?" And if you ask Robin Hanson today, he will tell you that we're very likely overreliant on medicine, well beyond the point where such reliance actually benefits us.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-03-23T05:13:49.291Z · LW(p) · GW(p)

postmodernism becoming an intellectual fad, devoid of much of its originally-meaningful content. Whereas I'm talking about what the original memeplex was about

So you concede that everyone you're harassing is 100% correct, you just don't want to talk about postmodernism? So fuck off.

comment by dogiv · 2017-03-21T17:34:02.276Z · LW(p) · GW(p)

This may be partially what has happened with "science" but in reverse. Liberals used science to defend some of their policies, conservatives started attacking it, and now it has become an applause light for liberals--for example, the "March for Science" I keep hearing about on Facebook. I am concerned about this trend because the increasing politicization of science will likely result in both reduced quality of science (due to bias) and decreased public acceptance of even those scientific results that are not biased.

Replies from: username2
comment by username2 · 2017-03-22T00:31:42.041Z · LW(p) · GW(p)

I agree with your concern, but I think that you shouldn't limit your fear to party-aligned attacks.

For example, the Thirty-Meter Telescope in Hawaii was delayed by protests from a group of people who are most definitely "liberal" on the "liberal/conservative" spectrum (in fact, "ultra-liberal"). The effect of the protests is definitely significant. While it's debatable how close the TMT came to cancelation, the current plan is to grant no more land to astronomy atop Mauna Kea.

Replies from: dogiv
comment by dogiv · 2017-03-22T17:06:49.910Z · LW(p) · GW(p)

Agreed. There are plenty of liberal views that reject certain scientific evidence for ideological reasons--I'll refrain from examples to avoid getting too political, but it's not a one-sided issue.

comment by Lumifer · 2017-03-21T17:13:42.310Z · LW(p) · GW(p)

As soon as the main political players all turn against rationality, fighting rationality will become less important for them, because attacking things the others consider sacred will be more effective.

So, do you want to ask the Jews how that theory worked out for them?

comment by Vaniver · 2017-03-23T08:04:05.037Z · LW(p) · GW(p)

Front page being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).

comment by Bound_up · 2017-03-22T13:36:56.916Z · LW(p) · GW(p)

Maybe there could be some high-profile positive press for cryonics if it became standard policy to freeze endangered species' seeds or DNA for later resurrection.

Replies from: ChristianKl
comment by moridinamael · 2017-03-20T15:09:47.971Z · LW(p) · GW(p)

What is the steelmanned, not-nonsensical interpretation of the phrase "democratize AI"?

Replies from: fubarobfusco, Lumifer, username2, WalterL
comment by fubarobfusco · 2017-03-20T17:59:58.639Z · LW(p) · GW(p)

One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

Replies from: Lumifer, qmotus
comment by Lumifer · 2017-03-20T18:24:55.230Z · LW(p) · GW(p)

Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

s/AI/capital/

Now, where have I heard this before..?

Replies from: Viliam, fubarobfusco, bogus
comment by Viliam · 2017-03-21T16:01:58.987Z · LW(p) · GW(p)

And your point is...?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn't stolen is used very inefficiently.

But on a smaller scale... companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone. Just not all the capital; and besides the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).

Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers a thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting each other, it should be able not to fuck up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, simply by doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so that soon the whole system ran on completely fake data. Not being bound by ideology, if the AI found out that it is better to leave something for humans to do (quite unlikely IMHO, but let's assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don't think anyone would be able to get any job in a world where the scalable superintelligence is your direct competitor), the former option seems better to me, and I think even Elon Musk wouldn't mind... especially considering that going for the former option will make people much more willing to cooperate with him.

Replies from: Lumifer
comment by Lumifer · 2017-03-21T16:38:58.128Z · LW(p) · GW(p)

And your point is...?

Is it really that difficult to discern?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead.

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone

Capital is not just money. You tax, basically, production (=creation of value) and production is not a "benefit of capital".

In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly and that strikes me as a rather bad idea.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve

Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?

Replies from: Viliam
comment by Viliam · 2017-03-21T16:57:44.450Z · LW(p) · GW(p)

Is it really that difficult to discern?

You mean this one?

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI... sure.

Although calling that "communism" is about as much of a central example as calling the paperclip maximizer scenario "capitalism".

production is not a "benefit of capital".

Capital is a factor in production, often a very important one.

no one should own AI technology. As always, this means a government monopoly

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And "as always" does not seem like a good argument for Singularity scenarios.

In which realistic scenarios do you think this will be a choice that someone faces?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Replies from: Lumifer
comment by Lumifer · 2017-03-21T17:08:27.492Z · LW(p) · GW(p)

this one

That too :-) I am a big fan of this approach.

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work.

But conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work? In particular, the economy will work?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Aaaaand let me quote you yourself from just a sentence back:

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete.

One of the arms of your choice involves Elon Musk (or equivalent) owning the singularity AI, the other gives every human 1/7B ownership share of the same AI. How does that work, exactly?

Besides, I thought that when Rapture comes...err... I mean, when the Singularity happens, humans will not decide anything any more -- the AI will take over and will make the right decisions for them -- isn't that so?

Replies from: gjm
comment by gjm · 2017-03-21T18:05:39.353Z · LW(p) · GW(p)

conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work?

If we're talking about a Glorious Post-Singularity Future then presumably the superintelligent AIs are not only ruling the country and making economic decisions but also doing all the work, and they probably have magic nanobot spies everywhere so it's hard to lie to them effectively. That probably does get rid of the more obvious failure modes of a communist economy.

(If you just put the superintelligent AIs in charge of the top-level economic institutions and leave everything else to be run by the same dishonest and incompetent humans as normal, you're probably right that that wouldn't suffice.)

Replies from: Lumifer
comment by Lumifer · 2017-03-21T18:19:48.579Z · LW(p) · GW(p)

Actually, no, we're (at least, I am) talking about pre-Singularity situations where you still have to dig in the muck to grow crops and make metal shavings and sawdust to manufacture things.

Viliam said that the main problem with communism is that the people at the top are (a) incompetent; and (b) corrupt. I don't think that's true with respect to the economy. That is, I agree that communism leads to incompetent and corrupt people rising to the top, but that is not the primary reason why communist economy isn't well-functioning.

I think the primary reason is that communism breaks the feedback loop in the economy where prices and profit function as vital dynamic indicators for resource allocation decisions. A communist economy is like a body where the autonomic nervous system is absent and most senses function slowly and badly (but the brain can make the limbs move just fine). Just making the bureaucrats (human-level) competent and honest is not going to improve things much.

Replies from: gjm
comment by gjm · 2017-03-22T01:07:20.820Z · LW(p) · GW(p)

Maybe I misunderstood the context, but it looked to me as if Viliam was intending only to say that post-Singularity communism might work out OK on account of being run by superintelligent AIs rather than superstupid meatsacks, and any more general-sounding things he may have said about the problems of communism were directed at that scenario.

(I repeat that I agree that merely replacing the leaders with superintelligent AIs and changing nothing else would most likely not make communism work at all, for reasons essentially the same as yours.)

Replies from: Lumifer
comment by Lumifer · 2017-03-22T01:13:04.897Z · LW(p) · GW(p)

post-Singularity communism

I have no idea what this means.

Replies from: gjm
comment by gjm · 2017-03-22T03:10:19.977Z · LW(p) · GW(p)

It seems you agree with Viliam: see the second paragraph below.

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI... sure.

Although calling that "communism" is about as much of a central example as calling the paperclip maximizer scenario "capitalism".

Replies from: Lumifer
comment by Lumifer · 2017-03-22T14:38:52.678Z · LW(p) · GW(p)

Right, but I am specifically interested in Viliam's views about the scenario where there is no AI, but we do have honest and competent rulers.

Replies from: Viliam, gjm
comment by Viliam · 2017-03-23T10:17:26.064Z · LW(p) · GW(p)

That is completely irrelevant to debates about AI.

But anyway, I object to the premise being realistic. Humans run on "corrupted hardware", so even if they start out honest, competent, rational, and well-meaning, that usually changes very quickly. In the long term, they also get old and die, so what you would actually need is an honest and competent elite group, able to raise and filter its next generation to be at least equally honest, competent, rational, well-meaning, and skilled at raising and filtering the next generation for the same qualities.

In other words, you would need to have a group of rulers enlightened enough that they are able to impartially and precisely judge whether their competitors are equally good or somewhat better on the relevant criteria, and in such a case would voluntarily transfer their power to those competitors. -- Which goes completely against what evolution teaches us: that if your opponent is better than you, you should use your power to crush him, preferably immediately, while you still have the advantage of power, and before other tribe members notice his superiority and start offering to ally with him against you.

Oh, and this perfect group would also need to be able to overthrow the current power structures and get themselves into positions of power, without losing any of its qualities in the process. That is, they have to be competent enough to overthrow an opponent with orders of magnitude more power (imagine someone who owns the media and police and army and secret service, and can also use illegal methods to kidnap their members, torture them to extract their secrets, and kill them afterwards), without having to compromise on their values. So, in addition, the members of this elite group must have perfect mental resistance to torture and blackmail, and be numerous enough that they can easily replace their fallen brethren and continue with the original plan.

Well... there doesn't seem to be a law of physics that would literally prevent this; it just seems very unlikely.

With a less elite group, there are many things that can possibly go wrong, and evolutionary pressures in favor of things going wrong as quickly as possible.

comment by gjm · 2017-03-22T16:36:03.462Z · LW(p) · GW(p)

Fair enough; I just wanted to make it explicit that that question has basically nothing to do with anything else in the thread. I mean, Viliam was saying "so it might be a good idea to do such-and-such about superhumanly capable AI" and you came in and said "aha, that kinda pattern-matches to communism. Are you defending communism?" and then said oh, by the way, I'm only interested in communism in the case where there is no superhumanly capable AI.

But, well, trolls gonna troll, and you've already said trolling is your preferred mode of political debate.

Replies from: Lumifer
comment by Lumifer · 2017-03-22T16:42:21.588Z · LW(p) · GW(p)

Well, the kinda-sorta OP phrased the issue this way:

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve

...and that set the tone for the entire subthread :-P

comment by fubarobfusco · 2017-03-20T18:36:37.452Z · LW(p) · GW(p)

String substitution isn't truth-preserving; there are some analogies and some disanalogies there.

comment by bogus · 2017-03-21T18:03:21.081Z · LW(p) · GW(p)

Sure, but capital is a rather vacuous word. It basically means "stuff that might be useful for something". So yes, talking about democratizing AI is a whole lot more meaningful than just saying "y'know, it would be nice if everyone could have more useful stuff that might help em achieve their goals. Man, that's so deeeep... puff", which is what your variant ultimately amounts to!

Replies from: Lumifer
comment by Lumifer · 2017-03-21T18:22:03.763Z · LW(p) · GW(p)

capital is a rather vacuous word. It basically means "stuff that might be useful for something"

Um. Not in economics where it is well-defined. Capital is resources needed for production of value. Your stack of decade-old manga might be useful for something, but it's not capital. The $20 bill in your wallet isn't capital either.

Replies from: satt, gjm, bogus
comment by satt · 2017-03-24T00:55:43.271Z · LW(p) · GW(p)

Um. Not in economics where it is well-defined. Capital is resources needed for production of value.

While capital is resources needed for production of value, it's a bit misleading to imply that that's how it's "well-defined" "in economics", since the reader is likely to come away with the impression that capital = resources needed to produce value, even though not all resources needed for production of value are capital. Economics also defines labour & land* as resources needed for production of value.

* And sometimes "entrepreneurship", but that's always struck me as a pretty bogus "factor of production" — as economists tacitly admit by omitting it as a variable from their production functions, even though it's as free to vary as labour.

Replies from: Lumifer, g_pepper
comment by Lumifer · 2017-03-24T15:27:28.682Z · LW(p) · GW(p)

Sure, but that's all Econ 101 territory and LW isn't really a good place to get some education in economics :-/

comment by g_pepper · 2017-03-24T01:43:15.661Z · LW(p) · GW(p)

The way I remember it from my college days was that the inputs for the production of wealth are land, labor and capital (and, as you said, sometimes entrepreneurship is listed, although often this is lumped in with labor). Capital is then defined as wealth used towards the production of additional wealth. This formulation avoids the ambiguity that you identified.

comment by gjm · 2017-03-22T01:11:16.416Z · LW(p) · GW(p)

None the less, "capital" and "AI" are extremely different in scope and I see no particular reason to think that if "let's do X with capital" turns out to be a bad idea then we can rely on "let's do X with AI" also being a bad idea.

In a hypothetical future where the benefits of AI are so enormous that the rest of the economy can be ignored, perhaps the two kinda coalesce (though I'm not sure it's entirely clear), but that hypothetical future is also one so different from the past that past failures of "let's do X with capital" aren't necessarily a good indication of similar future failure.

comment by bogus · 2017-03-21T18:51:58.631Z · LW(p) · GW(p)

Capital is resources needed for production of value.

And that stack of decade-old manga is a resource that might indeed provide value (in the form of continuing enjoyment) to a manga collector. That makes it capital. A $20 bill in my wallet is ultimately a claim on real resources that the central bank commits to honoring, by preserving the value of the currency - that makes it "capital" from a strictly individual perspective (indeed, such claims are often called "financial capital"), although it's indeed not real "capital" in an economy-wide sense (because any such claim must be offset by a corresponding liability).

Replies from: Lumifer
comment by Lumifer · 2017-03-21T19:03:33.342Z · LW(p) · GW(p)

Sigh. You can, of course, define any word any way you like, but I have my doubts about the usefulness of such endeavours. Go read.

comment by qmotus · 2017-03-21T09:47:44.684Z · LW(p) · GW(p)

I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).

comment by Lumifer · 2017-03-20T15:17:03.211Z · LW(p) · GW(p)

Why do you think one exists?

Replies from: moridinamael
comment by moridinamael · 2017-03-20T15:55:33.243Z · LW(p) · GW(p)

I try not to assume that I am smarter than everybody if I can help it, and when there's a clear cluster of really smart people making these noises, I at least want to investigate and see whether I'm mistaken in my presuppositions.

To me, "democratize AI" makes as much sense as "democratize smallpox", but it would be good to find out that I'm wrong.

Replies from: bogus, Lumifer
comment by bogus · 2017-03-20T18:26:02.077Z · LW(p) · GW(p)

To me, "democratize AI" makes as much sense as "democratize smallpox", but it would be good to find out that I'm wrong.

Isn't "democratizing smallpox" a fairly widespread practice, starting from the 18th century or so - and one with rather large utility benefits, all things considered? (Or are you laboring under the misapprehension that the kinds of 'AIs' being developed by Google or Facebook are actually dangerous? Because that's quite ridiculous, TBH. It's the sort of thing for which EY and Less Wrong get a bad name in machine-learning- [popularly known as 'AI'] circles.)

Replies from: moridinamael
comment by moridinamael · 2017-03-20T21:30:57.876Z · LW(p) · GW(p)

Not under any usual definition of "democratize". Making smallpox accessible to everyone is no one's objective. I wouldn't refer to making smallpox available to highly specialized and vetted labs as "democratizing" it.

Google and/or Deepmind explicitly intend on building exactly the type of AI that I would consider dangerous, regardless of whether or not you would consider them to have already done so.

comment by Lumifer · 2017-03-20T15:57:26.399Z · LW(p) · GW(p)

Links to the noises?

Replies from: moridinamael
comment by moridinamael · 2017-03-20T16:03:12.995Z · LW(p) · GW(p)

It's mainly an OpenAI noise but it's been parroted in many places recently. Definitely seen it in OpenAI materials, and I may have even heard Musk repeat the phrase, but can't find links. Also:

YCombinator.

Our long-term goal is to democratize AI. We want to level the playing field for startups to ensure that innovation doesn’t get locked up in large companies like Google or Facebook. If you’re starting an AI company, we want to help you succeed.

which is pretty close to "we don't want only Google and Facebook to have control over smallpox".

Microsoft in context of partnership with OpenAI.

At Microsoft, we believe everyone deserves to be able to take advantage of these breakthroughs, in both their work and personal lives.

In short, we are committed to democratizing AI and making it accessible to everyone.

This is a much more nonstandard interpretation of "democratize". I suppose by this logic, Henry Ford democratized cars?

Replies from: Lumifer
comment by Lumifer · 2017-03-20T16:22:57.998Z · LW(p) · GW(p)

Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.

Microsoft means that they want Cortana/Siri/Alexa/Assistant/etc. on every machine and in every home. That's just marketing speak.

Both expressions have nothing to do with democracy, of course.

Replies from: tristanm
comment by tristanm · 2017-03-20T19:08:04.226Z · LW(p) · GW(p)

Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.

There are other ways that AI research can become a monopoly without any use of patents or purchases of competitors. For example, a fair bit of research can only be done through heavy computing infrastructure. In some sense places like Google will have an advantage no matter how much of their code is open-sourced (and a lot of it is open source already). Another issue is data, which is a type of capital -- though unlike money, there is a limit to how much value you can extract from it, and that limit depends on your computing resources. These are barriers that I think probably can't be lowered even in principle.

Replies from: Lumifer
comment by Lumifer · 2017-03-20T19:23:36.277Z · LW(p) · GW(p)

Having advantages in the field of AI research and having a monopoly are very different things.

a fair bit of research can only be done through heavy computing infrastructure

That's not self-evident to me. A fair bit of practical applications (e.g. Siri/Cortana) require a lot of infrastructure. What kind of research can't you do if you have a few terabytes of storage and a couple dozen GPUs? What would a research university be unable to do?

Another issue is data

Data is an interesting issue. But first, the difference between research and practical applications is relevant again, and second, data control is mostly fought over at the legal/government level.

Replies from: tristanm
comment by tristanm · 2017-03-20T21:06:13.488Z · LW(p) · GW(p)

It's still the case that a lot of problems in AI and data analysis can be broken down into parallel tasks and massively benefit from just having plenty of CPUs/GPUs available. In addition, a lot of the research work at major companies like Google has gone into making sure that the infrastructure advantage is used to the maximum extent possible. But I will grant you that this may not represent an actual monopoly on anything (except perhaps search). Hardware is still easily available to those who can afford it. But in the context of "democratizing AI", I think we should expect that the firms with the most resources should have significant advantages over small startups in the AI space with not much capital. If I have a bunch of data I need analyzed, will I want to give that job to a new, untested player who may not even have the infrastructure depending on how much data I have, or someone established who I know has the capability and resources?

The issue with data isn't so much about control / privacy; it's mainly the fact that if you give me a truckload of a thousand 2 TB hard drives, each containing potentially useful information, there's really not much I can do with it. Now if I happened to have a massive server farm, that would be a different situation. There's a pretty big gulf in value for certain objects depending on my ability to make use of them, and I think data is a good example of those kinds of objects.

Replies from: Lumifer
comment by Lumifer · 2017-03-20T21:16:56.645Z · LW(p) · GW(p)

we should expect that the firms with the most resources should have significant advantages over small startups

So how this is different from, say, manufacturing? Or pretty much any business for the last few centuries?

Replies from: tristanm, tristanm
comment by tristanm · 2017-03-24T19:54:28.346Z · LW(p) · GW(p)

I think I would update my position here to say that AI is different from manufacturing, in that you can have small-scale manufacturing operations (like 3D printing, as username2 mentioned) that satisfy some niche market, whereas I sort of doubt that there are any niche markets in AI.

I've noticed this a lot with "data science" and AI startups - in what way is their product unique? Usually it's not. It's usually a team of highly talented AI researchers and engineers who need to showcase their skills until they get acqui-hired, or they develop a tool that gets really popular for a while and then it also gets bought. You really just don't see "disruption" (in the sense that Peter Thiel defines it) in the AI vertical. And you don't see niches.

Replies from: Lumifer
comment by Lumifer · 2017-03-24T20:04:52.918Z · LW(p) · GW(p)

I sort of doubt that there are any niche markets in AI

Hold on. Are you talking about niche markets, or are we talking about the capability to do some sort of AI at small-to-medium scale (say, startup to university size)?

You really just don't see "disruption" (in the sense that Peter Thiel defines it) in the AI vertical. And you don't see niches.

Um. I don't think the AI vertical exists. And what do you mean about niches? Wouldn't, I dunno, analysis of X-rays be a niche? high-frequency trading another niche? forecasting of fashion trends another niche? etc. etc.

Replies from: tristanm
comment by tristanm · 2017-03-24T23:35:50.637Z · LW(p) · GW(p)

Well, niche markets in AI aren't usually referred to as such; they're usually just companies that do task X with the help of statistics and machine learning. In that sense nearly all technology and finance companies could be considered AI companies.

AI in the generalist sense is rare (Numenta, Vicarious, DeepMind), and usually gets absorbed by the bigger companies. In the specialist sense, if task X is already well-known or identified, you still have to go against the established players who have more data and have people who have been working on only that problem for decades.

Thinking more about what YC meant in their "democratize AI" article, it seems they were referring to startups that want to use ML to solve problems that haven't traditionally been solved using ML yet. Or more generally, they want to help tech companies enter markets that usually aren't served by a tech company. That's fine. But I also get the feeling they really mean helping market certain companies by using the AI / ML hype train even if they don't, strictly speaking, use AI to solve a given task. A lot of "AI" startups just do basic statistical analysis but have a really fancy GUI on top of it.

comment by tristanm · 2017-03-20T22:08:48.306Z · LW(p) · GW(p)

Well, I don't think it is. If someone said "let's democratize manufacturing" in the same sense as YC, would that sound silly to you?

Replies from: Lumifer, username2
comment by Lumifer · 2017-03-21T16:16:46.250Z · LW(p) · GW(p)

Generally speaking, yes, silly, but I can imagine contexts where the word "democratize" is still unfortunate but points to an actual underlying issue -- monopoly and/or excessive power of some company (or e.g. a cartel) over the entire industry.

comment by username2 · 2017-03-20T22:40:39.925Z · LW(p) · GW(p)

No, it would sound like a 3D printing startup (and perfectly reasonable).

comment by username2 · 2017-03-20T16:17:54.021Z · LW(p) · GW(p)

Open sourcing all significant advancements in AI and releasing all code under GNU GPL.

Replies from: Viliam, username2
comment by Viliam · 2017-03-21T16:05:52.485Z · LW(p) · GW(p)

Tiling the whole universe with small copies of GNU GPL, because each nanobot is legally required to contain the full copy. :D

comment by username2 · 2017-03-20T22:11:23.411Z · LW(p) · GW(p)

*GNU AGPL, preferably

comment by WalterL · 2017-03-20T15:43:48.584Z · LW(p) · GW(p)

"Make multiple AIs that can restrain one another instead of one tyrannical MCP"?

comment by -necate- · 2017-03-25T08:53:39.433Z · LW(p) · GW(p)

Hello guys, I am currently writing my master's thesis on biases in the investment context. One sub-sample that I am studying is people who are educated about biases in a general context, but not in the investment context. I guess LW is the right place to find some of those, so I would be very happy if some of you would participate, since people who are aware of biases are hard to come by elsewhere. Also I explicitly ask for activity in the LW community in the survey, so if enough LWers participate I could analyse them as an individual subsample. Would be interesting to know how LWers perform compared to psychology students, for example. Also I think this is related enough to LW that I could post a link to the survey in discussion, right? If so I would be happy about some karma, because I just registered and can't post yet. The link to the survey is: https://survey.deadcrab.de/

Replies from: Elo
comment by Elo · 2017-03-25T09:12:19.690Z · LW(p) · GW(p)

Look up a group called "The Trading Tribe" by Ed Seykota.

comment by Lumifer · 2017-03-24T19:58:29.247Z · LW(p) · GW(p)

I, for one, welcome our new paperclip Overlord.

comment by dglukhov · 2017-03-21T21:05:13.537Z · LW(p) · GW(p)

Not the first criticism of the Singularity, and certainly not the last. I found this on reddit, just curious what the response will be here:

"I am taking up a subject at university, called Information Systems Management, and my teacher is a Futurologist! He refrains from even teaching the subject just to talk about technology and how it will solve all of our problems and make us uber-humans in just a decade or two. He has a PhD in A.I. and has already talked to us about nanotechnology getting rid of all diseases, A.I. merging with us, smart cities that are controlled by A.I. like the Fujisawa project, and a 20 minute interview to Ray Kurzweil about how the singularity will make us all immortal by 2045.

Now, I get triggered as fuck whenever my teacher opens his mouth, because not only does he sell these claims with no other basis than "technology is growing exponentially", but he also implies that all of our problems can and will be solved by it, empowering us to keep fucking up things along the way. But I prefer to stay in silence, because most idiots in my class are beyond saving anyway and I don't get off on confronting others, but that is beside the point.

I wanted to make a case for why the singularity is beyond the limits of this current industrial civilization, and I will base my assessment on these pillars:

-Declining Energy Returns: We are living in a world where the return for oil is what, a tenth of what it used to be last century? Not to mention that even this lower-quality oil is facing depletion, at least from profitable sources. Renewables are at an extremely early stage as to even hope they run an industrial, exponentially growing civilization like ours at this point, and there are some physical laws that limit the amount of energy that can be actually absorbed from the sun, along with what can be efficiently stored in batteries, not to mention intermittency issues, transport costs, etc. One would think that more complex civilizations require more and more energy, especially at exponential growth rates, but the only argument that futurists spew out is some free market bullshit about solar, or like my teacher did, only expect the idea will come true because humans are awesome and technology is increasing at exponential rates. These guys think applied science and technology exist in a vacuum, which brings me to the next point.

-Economic feasibility: I know it is easy to talk about the wonders of tech and the bright future ahead of us, when one lives in the developed world, and is part of a privileged socio-economic class, being as such isolated from 99% of the misery of this planet. There are people today that cannot afford clean water. In fact, most people that are below the top 20% of the population in terms of income probably won't be able to afford many of the new technological developments more than they do today. In fact, if the wealth gap keeps increasing, only the top 1% would be able to turn into cyborgs or upload their minds into robots or whatever it is that these guys preach. I think the argument of a post-scarcity era is a lot less compelling once you realize it will only benefit a portion of the populations of developed countries.

-Political resistance and corruption: Electric cars have been a thing ever since the 20th century, and who knows what technologies have been hidden and lobbied against by the big corporations that rule this capitalist system. Yet the only hope for the singularity is that it is somehow profitable for the stockholders. Look at planned obsolescence. We could have products that are 100 times more durable, that are more efficient, that are safer, that pollute less, but then where would profits go? Who is to tell you that they won't do the same in the future? In fact, a big premise of smart cities is that they will reduce crime by constant surveillance; in Fujisawa every lightpost triggered a motion camera and houses had centralized information centers that could be easily turned into Orwellian control devices, which sounds terrifying to me. We will have to wait and see how the middle class and below react to automation taking many jobs, and how the UBI experiment is carried out, if at all.

-Time constraints: Finally, people hope for the Singularity to reach us by 2045. That would imply that we need around 30 years of constant technological development, disregarding social decline, resource depletion, global warming, crop failures, droughts, etc. If civilization collapses before 2045, which I think is very likely, then that won't come around and save us, and as far as I know, there is no other hope from futurologists other than a major breakthrough in technology at this point. Plus, like the video "Are humans smarter than bacteria?" very clearly states, humans need time to figure out the problems we face, then we need some more time to design some solution, then we need even more time to debate, lobby and finally implement some form of the original solution, and hope no other problems arise from it, because as we know technology is highly unpredictable and many times it creates more problems than it solves. Until we do all that, on a global scale, without destroying civil liberties, I think we will all be facing severe environmental problems, and developing countries may very well have fallen apart long before that.

What do you think? Am I missing something? What is the main force that will stop us reaching the Singularity in time? "

Replies from: cousin_it, knb, ChristianKl
comment by cousin_it · 2017-03-21T22:38:36.885Z · LW(p) · GW(p)

I think most people on LW also distrust blind techno-optimism, hence the emphasis on existential risks, friendliness, etc.

comment by knb · 2017-03-23T04:40:01.821Z · LW(p) · GW(p)

Like a lot of reddit posts, it seems like it was written by a slightly-precocious teenager. I'm not much of a singularity believer but the case is very weak.

"Declining Energy Returns" is based on the false idea that civilization requires exponential increases in energy input, which has been wrong for decades. Per capita energy consumption has been stagnant in the first world for decades, and most of these countries have stagnant or declining populations. Focusing on EROI and "quality" of oil produced is a mistake. We don't lack for sources of energy; the whole basis of the peak oil collapse theory was that other energy sources can't replace oil's vital role as a transport fuel.

"Economic feasability" is non-sequitur concerned with whether gains from technology will go only to the rich, not relevant to whether or not it will happen.

"Political resistance and corruption" starts out badly as the commenter apparently believes in the really dumb idea that electric cars have always been a viable competitor to internal combustion but the idea was suppressed by some kind of conspiracy. If you know anything about the engineering it took to make electric cars semi-viable competitors to ICE, the idea is obviously wrong. Even without getting into the technical aspect, there are lots of countries which had independent car industries and a strong incentive to get off oil (e.g. Germany and Japan before and during WW2).

Replies from: dglukhov
comment by dglukhov · 2017-03-23T13:37:23.837Z · LW(p) · GW(p)

"Declining Energy Returns" is based on the false idea that civilization requires exponential increases in energy input, which has been wrong for decades. Per capita energy consumption has been stagnant in the first world for decades, and most of these countries have stagnant or declining populations. Focusing on EROI and "quality" of oil produced is a mistake. We don't lack for sources of energy; the whole basis of the peak oil collapse theory was that other energy sources can't replace oil's vital role as a transport fuel.

This seems relevant. These statistics do not support your claim that energy consumption per capita has been stagnant. Did I miss something? Perhaps you're referring strictly to stagnation in per capita use of fossil fuels? Do you have different sources of support? After all, this is merely one data point.

I'm not particularly sure where I stand with regards to the OP; part of the reason I brought it up was that this post sorely needed evidence brought to the table, and I see none.

I suppose this lack of support gives a reader the impression of naiveté, but I was hoping members here would clarify with their own, founded claims. Thank you for the debunks; I'm sure there's plenty of literature to link to as such, which is exactly what I'm after. The engineering behind electric cars, and perhaps its history, will be a topic I'll be investigating myself in a bit. If you have any preferred sources for teaching purposes, I'd love a link.

Replies from: knb
comment by knb · 2017-03-25T05:54:42.255Z · LW(p) · GW(p)

This seems relevant. These statistics do not support your claim that energy consumption per capita has been stagnant. Did I miss something?

Yep, your link is for world energy use per capita; my claim is that it was stagnant for the first world. E.g. in the US it peaked in 1978 and has since declined by about a fifth. The developed world is more relevant because that's where cutting-edge research and technological advancement happens. Edit: here's a graph from the source you provided showing the energy consumption history of the main developed countries, all of which follow the same pattern.

I don't really have a single link to sum up the difference between engineering an ICE car with adequate range and refuel time and a battery-electric vehicle with comparable range/recharge time. If you're really interested I would suggest reading about the early history of motor vehicles and then reading about the decades-long development history of lithium-ion batteries before they became a viable product.

comment by ChristianKl · 2017-03-22T10:57:47.427Z · LW(p) · GW(p)

It seems to me like a long essay for a reasonable position written by someone who doesn't make a good case.

Solar does get exponentially cheaper at a rate of doubling efficiency every 7 years. It's a valid answer to the question of where the energy will come from if the timeline is long enough. The article gives the impression that the poor in the third world stay poor. That's a popular misconception; in reality the fight against global poverty has made substantial progress. Much more than the top 20% of this planet has mobile phones. Most people benefit from technologies like smart phones.
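(To make the compounding claim concrete -- taking the stated seven-year doubling period purely as an assumption from the comment above, not a verified figure -- exponential improvement means the relevant cost/performance factor scales as

$$ f(t) = f(0) \cdot 2^{t/7}, $$

so over roughly 21 years it would improve about eightfold.)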

The "planned obsolescence" conspiracy theory narrative also doesn't really help with understanding how technology get's deployed.

Replies from: dglukhov
comment by dglukhov · 2017-03-22T14:37:08.845Z · LW(p) · GW(p)

Much more than the top 20% of this planet has mobile phones. Most people benefit from technologies like smart phones.

I wouldn't cherry-pick one technological example and make a case for the rest of available technological advancements as conducive to closing the financial gap between people. Tech provides for industry, industry provides for shareholders, shareholders provide for themselves (here's one data point in a field of research exploring the seemingly direct relationship between excess resource acquisition and antisocial tendencies; I will work on finding more, if any). I am necessarily glossing over the extraneous details, but since the corporate incentive system provides for a whole host of advantages, and since it has power over top-level governments (lobbying success statistics come to mind), this incentive system is necessarily prevalent and of major interest when tech advances are the topic of discussion. Those with power get tech benefits first; if any benefits exist beyond that point, fantastic. If not, the obsolescence conspiracy seems the likely next scenario. I have no awareness of an incentive system that dictates that those with money and power need necessarily provide for everyone else. If there was one, I wouldn't be the only unaware one, since clearly the OP isn't aware of such a thing either.

Are there any technological advancements you can think of that necessarily trickle down the socio-economic scale and help those poorest of the poor? My first idea would be agricultural advancements, but then I'd have to go and collect statistics on rates of food acquisition for the poorest subset of the world population, with maybe a start in the world census data for agriculture, which may not even have the data I'd need. Any ideas of your own?

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2017-03-23T10:48:14.925Z · LW(p) · GW(p)

I wouldn't cherry-pick one technological example and make a case for the rest of available technological advancements as conducive to closing the financial gap between people.

That sentence is interesting. The thing I care about is improving the lives of the poor.

I have no awareness of an incentive system that dictates that those with money and power need necessarily provide for everyone else.

If you look at Bill Gates and Warren Buffett, they see purpose in helping the poor.

In general employing poor people to do something for you and paying them a wage is also a classic way poor people get helped.

I wouldn't cherry-pick one technological example and make a case for the rest of available technological advancements as conducive to closing the financial gap between people.

The great thing about smart phones is that they allow for software to be distributed with little cost for additional copies. Having a smart phone means that you can use Duolingo to learn English for free.

Are there any technological advancements you can think of that necessarily trickle down the socio-economic scale and help those poorest of the poor?

We are quite successful in reducing the numbers of the poorest of the poor. We reduced them both in relative and in absolute numbers. It's debatable how much of that is due to new technology and how much is through other factors, but we now have fewer people in extreme poverty.

Replies from: dglukhov
comment by dglukhov · 2017-03-23T12:48:28.798Z · LW(p) · GW(p)

If you look at Bill Gates and Warren Buffett, they see purpose in helping the poor. In general employing poor people to do something for you and paying them a wage is also a classic way poor people get helped.

I'm happy that these people have taken actions to support such stances. However, I'm more interested in the incentive system, not a few outliers within it. Both of these examples hold about $80 billion in net worth; these are paltry numbers compared to the amount of money circulating in the world today, with world GDP estimated at around $74 trillion. I am therefore still unaware of an incentive system that helps the poor until I see the majority of this amount of money being circulated and distributed in the manner Gates and Buffett propose.

The great thing about smart phones is that they allow for software to be distributed with little cost for additional copies. Having a smart phone means that you can use Duolingo to learn English for free.

Agreed, and unfortunately utilizing a smartphone to its full benefit isn't necessarily obvious to somebody poor. While one could use it to learn English for free, they could also use it inadvertently as an advertising platform with firms soliciting sales from the user, or just as a means of contact with others willing to stay in contact with them (other poor people, most likely). A smartphone would be an example of a technology that managed to trickle down the socio-economic ladder and help poor people, but it can do harm as well as good, or have no effect at all.

We are quite successful in reducing the numbers of the poorest of the poor. We reduced them both in relative and in absolute numbers. It's debatable how much of that is due to new technology and how much is through other factors, but we now have fewer people in extreme poverty.

Please show me these statistics. Are they adjusted for and normalized relative to population increase?

A cursory search gave me contradictory statistics. http://www.statisticbrain.com/world-poverty-statistics/

I'd like to know where you get such sources, because a growing income gap between rich and poor necessarily implies one of three things: the rich are getting richer, the poor are getting poorer, or both.

Note: are we discussing relative poverty, or absolute poverty? I'd like to keep it to absolute poverty, since meeting basic human needs is a solid baseline as long as you trust nutritional data sources and research with regards to health. If you do not trust our current understanding of human health, then relative poverty is probably the better topic to discuss.

EDIT: I found something to support your conclusion; the first chart shows the decrease in the population of people in the lowest economic tier. These figures are not up to date, only comparing statistics from 2001 to 2011. I'm having a hard time finding anything more recent.

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2017-03-24T23:13:48.251Z · LW(p) · GW(p)

I'm happy that these people have taken actions to support such stances. However, I'm more interested in the incentive system, not a few outliers within it.

When basic needs are fulfilled many humans tend to want to satisfy needs around contributing to making the world a better place. It's a basic psychological mechanism.

Replies from: dglukhov
comment by dglukhov · 2017-03-25T01:50:40.783Z · LW(p) · GW(p)

When basic needs are fulfilled many humans tend to want to satisfy needs around contributing to making the world a better place. It's a basic psychological mechanism.

This completely ignores my previous point. A few people who managed to self-actualize within the current global economic system will not change that system. As I previously mentioned, I am not interested in outliers, but rather systematic trends in economic behavior.

Replies from: ChristianKl
comment by ChristianKl · 2017-03-25T08:51:03.013Z · LW(p) · GW(p)

Bill Gates and Warren Buffett aren't only outliers with respect to donating but also in being among the wealthiest people. Both of them basically believe that it makes more sense to use their fortunes for the public good than to leave them to their children.

To the extent that this belief spreads (and it does, with the Giving Pledge), you see more money being used this way.

comment by ChristianKl · 2017-03-24T14:38:34.766Z · LW(p) · GW(p)

they could also use it inadvertently as an advertising platform with firms soliciting sales from the user, or just as a means of contact with others willing to stay in contact with them (other poor people, most likely)

The ability to stay in contact with other poor people is valuable. If you can send the person in the next village a message you don't have to walk to them to communicate with them.

Please show me these statistics. Are they adjusted for and normalized relative to population increase?

What have the millennium development goals achieved?

MDG 1: The number of people living on less than $1.25 a day has been reduced from 1.9 billion in 1990 to 836 million in 2015
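As a rough illustration that the reduction holds in relative terms as well, here is a back-of-the-envelope check; the world population figures below are approximate round numbers I am assuming, not official statistics:

    # Rough illustration: share of world population in extreme poverty,
    # using the MDG figures above and approximate world population numbers.
    poor_1990, pop_1990 = 1.9e9, 5.3e9   # ~5.3 billion people in 1990 (approximate)
    poor_2015, pop_2015 = 836e6, 7.3e9   # ~7.3 billion people in 2015 (approximate)

    print(poor_1990 / pop_1990)  # ~0.36, roughly a third of humanity in 1990
    print(poor_2015 / pop_2015)  # ~0.11, roughly one in nine in 2015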

Replies from: dglukhov
comment by dglukhov · 2017-03-25T01:55:46.024Z · LW(p) · GW(p)

The ability to stay in contact with other poor people is valuable.

It is also dangerous; people are unpredictable and, similarly to my point about phones, can cause good, harm, or nothing at all.

A phone is not inherently, intrinsically good; it merely serves as a platform for any number of things, good, bad, or neutral.

What have the millennium development goals achieved?

I hope this initiative continues to make progress and that policy doesn't suddenly turn upside-down anytime soon. Then again, Trump is president, Brexit is a possibility, and economic collapse is an ever-present looming threat.

Replies from: ChristianKl
comment by ChristianKl · 2017-03-25T08:51:59.759Z · LW(p) · GW(p)

A phone is not inherently, intrinsically good; it merely serves as a platform for any number of things, good, bad, or neutral.

That's similar to saying that a car is not intrinsically good. Both technologies enable a lot of other actions.

Replies from: dglukhov
comment by dglukhov · 2017-03-27T13:44:35.311Z · LW(p) · GW(p)

Cars also directly involve people in motor vehicle accidents, one of the leading causes of death in the developed world. Cars, and motor vehicles in general, also contribute to an increasingly alarming concentration of emissions into the atmosphere, with adverse effects to follow, most notably global warming. My point still stands.

A technology is only inherently good if it solves more problems than it causes, with each problem weighted by its impact on the world.

Replies from: Elo
comment by Elo · 2017-03-27T13:58:40.402Z · LW(p) · GW(p)

Cars are net positive.

Edit: ignoring global warming because it's really hard to quantify. Just comparing deaths to global productivity increase because of cars. Cars are a net positive.

Edit 2:

Edit: ignoring global warming because it's really hard to quantify

Clarification - it's hard to quantify the direct relationship of cars to global warming. Duh there's a relationship, but I really don't want to have a debate here. Ignoring that factor for a moment, net value of productivity of cars vs productivity lost by some deaths. Yea. Let's compare that.

Replies from: dglukhov, dglukhov
comment by dglukhov · 2017-03-28T13:05:09.925Z · LW(p) · GW(p)

Clarification - it's hard to quantify the direct relationship of cars to global warming

It is easy to illustrate that carbon dioxide, the major byproduct of internal combustion found in most car models today, causes global warming directly. If you look at this graph, you'll notice that solar radiation spans a large range of wavelengths of light. Most of these wavelengths of light get absorbed by our upper atmosphere according to the chemical composition of said atmosphere, except for certain wavelengths in the UV region of the spectrum (that's the part of the spectrum most commercial sunscreens are designed to block). Different chemicals have different ranges of wavelengths over which their stable forms can be excited. Carbon dioxide, as it turns out, can be excited by radiation over a portion of the spectrum in the IR range, in the region around wavenumber 2351 cm⁻¹. When light is absorbed by carbon dioxide, it causes vibration in the molecule, which gets dissipated as heat, since this is technically an excitation of the molecule. This is why carbon dioxide is considered a greenhouse gas: it absorbs solar energy in the form of light as an input, then dissipates that energy after vibrational excitation as output.
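To put that band in more familiar units, here is a minimal sketch; the only input is the 2351 cm⁻¹ figure cited above:

    # Convert the CO2 absorption band near wavenumber 2351 (in cm^-1)
    # to a wavelength, to locate it in the infrared part of the spectrum.
    wavenumber_cm = 2351.0
    wavelength_um = 1e4 / wavenumber_cm  # 1 cm = 10^4 micrometres
    print(wavelength_um)  # ~4.25 micrometres, well within the infrared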

The amount of carbon dioxide in the atmosphere today far exceeds natural levels ever before seen on earth. There are, of course, natural fluctuations of these levels going up and down (according to natural carbon fixing processes), but the overall trend is very distinct, obvious, and significant. We are putting more carbon dioxide into the atmosphere through our combustion processes than the earth can fix out of the atmosphere.

The relationship has been quantified already. Please understand, there is absolutely no need to obscure this debate with claims that the relationship is hard to quantify. It is not; it has been done, and the body of research surrounding this topic is quite robust, similar to how robust the body of research around CFCs is. I will not stand idly by while people continue to misunderstand the situation. Your urge to ignore this factor indicates either a misunderstanding of the situation or an aversion to a highly politicized topic. In either case, it does not excuse the claim you made. The less obscurity there is on the topic, the better.

Replies from: Lumifer, Elo
comment by Lumifer · 2017-03-28T14:45:31.136Z · LW(p) · GW(p)

It is easy to illustrate that carbon dioxide ... causes global warming directly.

Actually, not that easy because the greenhouse effect is dominated by water vapor. CO2 certainly is a greenhouse gas and certainly contributes to global warming, but the explanation is somewhat more complicated than you make it out to be.

The amount of carbon dioxide in the atmosphere today far exceeds natural levels ever before seen on earth.

This is not true.

The relationship has been quantified already.

Demonstrate, please.

Replies from: dglukhov
comment by dglukhov · 2017-03-28T15:19:12.654Z · LW(p) · GW(p)

This is not true.

According to what sources, and how did they verify? Do you distrust the sampling techniques used to gather data on carbon dioxide levels before recorded history?

Demonstrate, please.

What more could you possibly need? I just showed you evidence pointing to an unnatural amount of carbon dioxide in the atmosphere. Disturb that balance and you cause warming. This cascades into heavier rainfall, higher levels of water vapor and other greenhouse gases, and you get a sort of runaway reaction.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T15:22:17.407Z · LW(p) · GW(p)

According to what sources

Will Wikipedia suffice?

What more could you possibly need?

You did use the word "quantify", did you not? Do you know what it means?

Replies from: dglukhov
comment by dglukhov · 2017-03-28T15:42:25.194Z · LW(p) · GW(p)

You did use the word "quantify", did you not? Do you know what it means?

Putting data on the table to back up claims. Back up your idea of what is going on in the world with observations, notably observations you can put a number on.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T15:58:24.494Z · LW(p) · GW(p)

Putting data on the table to back up claims.

Turns out you don't know. The word means expressing your claims in numbers and, by itself, does not imply support by data.

Usually "quantifying" is tightly coupled to being precise about your claims.

Replies from: dglukhov
comment by dglukhov · 2017-03-28T16:05:53.618Z · LW(p) · GW(p)

Turns out you don't know. The word means expressing your claims in numbers and, by itself, does not imply support by data.

Usually "quantifying" is tightly coupled to being precise about your claims.

I'm confused. You wouldn't have claims to make before seeing the numbers in the first place. You communicate this claim to another, they ask you why, you show them the numbers. That's the typical process of events I'm used to; how is it wrong?

Replies from: Lumifer
comment by Lumifer · 2017-03-28T16:13:31.571Z · LW(p) · GW(p)

You wouldn't have claims to make before seeing the numbers in the first place.

LOL. Are you quite sure this is how humans work? :-)

You communicate this claim to another, they ask you why, you show them the numbers.

I want you to quantify the claim, not the evidence for the claim.

Replies from: dglukhov
comment by dglukhov · 2017-03-28T16:20:55.575Z · LW(p) · GW(p)

LOL. Are you quite sure this is how humans work? :-)

They don't, that's something you train to do.

I want you to quantify the claim, not the evidence for the claim.

Why? Are you asking me to write out the interpretation of the evidence I see as a mathematical model instead of a sentence in English?

Replies from: Lumifer
comment by Lumifer · 2017-03-28T16:38:17.262Z · LW(p) · GW(p)

Are you asking me to write out the interpretation of the evidence I see as a mathematical model

Not evidence. I want you to make a precise claim.

For example, "because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event" is a not-quantified claim. It's not precise enough to be falsifiable (which is how a lot of people like it, but that's a tangent).

A quantified equivalent would be something along the lines of "We expect the increase in atmospheric CO2 from 300 to 400 ppmv to lead to the increase of the average global temperature by X degrees spread over the period of Z years so that we forecast the average temperature in the year YYYY as measured by a particular method M to be T with the standard error of E".

Note that this is all claim, no evidence (and not a model, either).

Replies from: Good_Burning_Plastic, dglukhov
comment by Good_Burning_Plastic · 2017-03-28T17:25:15.301Z · LW(p) · GW(p)

It's not precise enough to be falsifiable

Yes it is. For example, if CO2 concentrations and/or global temperatures went down by much more than the measurement uncertainties, the claim would be falsified.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T17:51:33.668Z · LW(p) · GW(p)

I said:

For example, "because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event" is a not-quantified claim.

The claim doesn't mention any measurement uncertainties. Moreover, the actual claim is "CO2 cascades into a warming event" and, y'know, it's just an event. Maybe it's an event with a tiny magnitude, maybe another event happens which counterbalances the CO2 effect, maybe the event ends, who knows...

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2017-03-29T08:12:43.702Z · LW(p) · GW(p)

The claim doesn't mention any measurement uncertainties.

That's why I said "much more". If I claimed "X is greater than Y" and it turned out that X = 15±1 and Y = 47±1, would my claim not be falsified because it didn't mention measurement uncertainties?

comment by dglukhov · 2017-03-28T16:48:14.227Z · LW(p) · GW(p)

Well, at this point I'd concede it's not easy to make a claim with standards fit for such an example.

I'll see what I can do.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T17:08:52.464Z · LW(p) · GW(p)

The general test is whether the claim is precise enough to be falsifiable -- is there an outcome (or a set of data, etc) which will unambiguously prove that claim to be wrong, with no wiggle room to back out?

And, by the way, IPCC reports are, of course, full of quantified claims like the one I mentioned. There might be concerns with data quality, model errors, overconfidence in the results, etc. etc, but the claims are well-quantified.

Replies from: dglukhov
comment by dglukhov · 2017-03-28T17:46:56.882Z · LW(p) · GW(p)

That is fair, so why was the claim that cars are a net positive not nearly as thoroughly scrutinized as my counterargument? I can't help but notice some favoritism here...

Was such an analysis done? Recently? Is this such common knowledge that nobody bothered to refute it?

Edit: my imagination only stretches so far as to see climate change being the only heavy counterargument to the virtue of cars. Anything else seems relatively minor, e.g. deaths from motor accidents, etc.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T17:56:12.397Z · LW(p) · GW(p)

why was the claim that cars are a net positive not nearly as thoroughly scrutinized as my counterargument?

Because there is a significant prior to overcome. Whenever people get sufficiently wealthy, they start buying cars. Happened in the West, happened in China, Russia, India, etc. etc. Everywhere. And powers-that-be are fine with that. So to assert that cars are a net negative you need to assert that everyone is wrong.

Replies from: dglukhov
comment by dglukhov · 2017-03-28T18:10:09.018Z · LW(p) · GW(p)

Just out of curiosity, what is your stance on the impact of cars on climate change? And if cars are too narrow, then what is your stance on fossil fuel consumption and its impact on climate change?

You linked to parts of the debate I've never been exposed to, so I'm curious if there's more to know.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T18:45:27.212Z · LW(p) · GW(p)

tl;dr It's complicated :-)

Generally speaking, the issue of global warming is decomposable into several questions with potentially different answers. E.g.:

  • Have we observed general warming throughout the XX and early XXI century? That's a question about facts and can be answered relatively easily.

  • Does emitting very large amounts of CO2 into the atmosphere affect climate? That's a question about a scientific theory and by now it's relatively uncontested as well (note: quantifying the impact of CO2 on climate is a different thing. For now the issue is whether such an impact exists).

  • Are there other factors affecting climate on decade- and century- scales? Also a question about scientific theories and again the accepted answer is "yes", but quantifying the impact (or agreeing on a fixed set of such factors) is not so simple.

  • What do we expect the global temperatures to be in 20/50/100 years under certain assumptions about the rate of CO2 emissions? Ah, here we enter the realm of models and forecasts. Note: these are not facts. Also note that here the "complicated" part becomes "really complicated". For myself, I'll just point out that I distrust the confidence put by many people into these models and the forecasts they produce. By the way, there are a LOT of these models.

  • What consequences of our temperature forecasts do we anticipate? Forecasting these consequences is harder than forecasting temperatures, since these consequences are conditional on temperature forecasts. Some things here are not very controversial (it's unlikely that glaciers will stop retreating), some are (will hurricanes become weaker? stronger? more frequent? Umm....)

  • What should we do in response to global warming? At this point we actually leave the realm of science and enter the world of "should". For some reason many climate scientists decided that they are experts in economics and politics and so know what the response should be. Unfortunately for them, it's not a scientific question. It's a question of making a series of uncertain trade-offs where what you pick is largely decided by your values and your preferences. I expect the outcome to be as usual: "The strong do what they can and the weak suffer what they must".

Replies from: gjm
comment by gjm · 2017-03-29T12:32:18.670Z · LW(p) · GW(p)

By the way, there are a LOT of these models.

What inference are you expecting readers to draw from that?

Inferences I draw from it: 1. Looks like researchers are checking one another's results, developing models that probe different features, improving models as time goes on, etc.; those are all good things. 2. It would be good to know how well these models agree with one another on what questions. I don't know how well they do, but I'm pretty sure that if the answer were "they disagree wildly on central issues" then the denier/skeptic[1] camp would be shouting it from the rooftops. Unless, I guess, the disagreement were because some models predict much worse futures than currently expected. So my guess is that although there are such a lot of models, they pretty much all agree that e.g. under business-as-usual assumptions we can expect quite a lot more warming over the coming decades.

[1] I don't know of any good terms that indicate without value judgement that a given person does or doesn't broadly agree that global warming is real, substantially caused by human activities, and likely to continue in the future.

Replies from: Lumifer
comment by Lumifer · 2017-03-29T14:57:38.702Z · LW(p) · GW(p)

What inference are you expecting readers to draw from that?

That there is no single forecast that "the science" converged on and everyone is in agreement about what will happen.

It would be good to know how well these models agree with one another on what questions.

If you're curious, IPCC reports will provide you with lots of data.

we can expect quite a lot more warming over the coming decades

A nice example of a non-quantified claim :-P

any good terms that indicate without value judgement

"Sceptic" implies value judgement? I thought being a sceptic was a good thing, certainly better than being credulous or gullible.

Replies from: gjm
comment by gjm · 2017-03-29T15:32:24.602Z · LW(p) · GW(p)

That there is no single forecast that "the science" converged on

I confess it never occurred to me that anyone would ever think such a thing. (Other than when reading what you wrote, when I thought "surely he can't be attacking such a strawman".)

I mean, I bet there are people who think that or say it. Probably when someone says "we expect 2 degrees of warming by such-and-such a date unless something changes radically" many naive listeners take it to mean that all models agree on exactly 2 degrees of warming by exactly that date. But people seriously claiming that all the models agree on a single forecast? Even halfway-clueful people seriously believing it? Really?

A nice example of a non-quantified claim

Do please show us all the quantified claims you have made about global warming, so that we can compare. (I can remember ... precisely none, ever. But perhaps I just missed them.)

Not that I think there's anything very bad about non-quantified claims -- else I wouldn't go on making them, just like everyone else does. I simply think you're being disingenuous in complaining when other people make such claims, while avoiding making any definite claims to speak of yourself and leaving the ones you do make non-quantified. From the great-grandparent of this comment I quote: "relatively easily", "relatively uncontested", "really complicated", "I distrust the confidence ...", "a LOT of these models", "not very controversial", (implicitly in the same sentence) "very controversial".

But, since you ask: I think it is broadly agreed (among actual climate scientists) that business as usual will probably (let's say p=0.75 or thereabouts) mean that by 2100 global mean surface temperature will be at least about 2 degrees C above what it was before 1900. (Relative to the same baseline, we're currently somewhere around 0.9 degrees up from then.)

"Sceptic" implies value judgement? I thought being a sceptic was a good thing

I paraphrase: "How silly to suggest that 'sceptic' implies a value judgement. It implies a positive value judgement."

Being a skeptic is a good thing. I was deliberately using one word that suggests a positive judgement ("skeptic") alongside one that suggests a negative one ("denier"). Perhaps try reading what I write more charitably?

Replies from: Lumifer
comment by Lumifer · 2017-03-29T15:42:38.636Z · LW(p) · GW(p)

I confess it never occurred to me that anyone would ever think such a thing.

I recommend glancing at some popular press. There's "scientific consensus", dontcha know? No need to mention specific numbers, but all right-thinking men, err... persons know that Something Must Be Done. Think of the children!

Do please show us all the quantified claims you have made about global warming

Is this a competition?

I don't feel the need to make quantified claims because I'm not asking anyone to reduce their carbon footprint or introduce carbon taxes, or destroy their incandescent bulbs, or tar-and-feather coal companies...

Let me quote you some Richard Feynman: "I have approximate answers and possible beliefs and different degrees of uncertainty about different things, but I am not absolutely sure of anything".

Being a skeptic is a good thing

It is to me and, seems like, to you. I know people who think otherwise: a sceptic is a malcontent, a troublemaker who's never satisfied, one who distrusts what honest people tell him.

Replies from: gjm
comment by gjm · 2017-03-29T17:06:29.770Z · LW(p) · GW(p)

I recommend glancing at some popular press. There's "scientific consensus", dontcha know?

Yeah, there's all kinds of crap in the popular press. That's why I generally don't pay much attention to it. Anyway, what do the deficiencies of the popular press have to do with the discussion here?

Is this a competition?

No, it's a demonstration of your insincerity.

I don't feel the need to make quantified claims because I'm not [...]

Status quo bias.

In (implicitly) asking us not to put effort into reducing carbon footprints, introducing carbon taxes, etc., etc., you are asking us (or our descendants) to accept whatever consequences that may have for the future.

I fail to see why causing short-term inconvenience should require quantified claims, but not causing long-term possible disaster.

(I am all in favour of the attitude Feymnan describes. It is mine too. If there is any actual connection between that and our discussion, other than that you are turning on the applause lights, I fail to see it.)

I know people who think otherwise

Sure. So what?

Replies from: Lumifer
comment by Lumifer · 2017-03-29T17:39:09.646Z · LW(p) · GW(p)

Anyway, what do the deficiencies of the popular press have to do with the discussion here?

Because my original conversation was with a guy who, evidently, picked up some of his ideas about global warming from there.

In (implicitly) asking us not to put effort into reducing carbon footprints

LOL. I am also implicitly asking not to stop sex-slave trafficking, not to prevent starvation somewhere in Africa, and not to thwart child abuse. A right monster am I!

In any case, I could not fail to notice certain... rigidities in your mind with respect to certain topics. Perhaps it will be better if I tap out.

Replies from: gjm
comment by gjm · 2017-03-29T23:02:32.545Z · LW(p) · GW(p)

LOL. I am also implicitly asking not to [...]

Er, no.

I could not fail to notice certain... rigidities in your mind

I'm sorry to hear that. I would gently suggest that you consider the possibility that the rigidity may not be where you think it is, but I doubt there's much point.

comment by Elo · 2017-03-28T14:06:27.313Z · LW(p) · GW(p)

The relationship has been quantified already.

.

misunderstand the situation

Fourth clarification

IT IS HARD TO QUANTIFY THE EXACT PROPORTION OF GLOBAL WARMING THAT IS CAUSED BY CARS AS OPPOSED TO OTHER SOURCES OF GLOBAL WARMING, SAY EVERY OTHER REASON THAT CARBON DIOXIDE ENDS UP IN THE ATMOSPHERE, AND, AS AN ABSTRACTION FROM THAT, HOW MUCH OF GLOBAL WARMING IS LITERALLY CAUSED BY CARS AND THEREFORE HOW MUCH DAMAGE TO PRODUCTIVITY CARS CAUSE BY MAKING GLOBAL WARMING SOME FRACTION HIGHER THAN IT WOULD OTHERWISE HAVE BEEN.

Tapping out.

comment by dglukhov · 2017-03-27T18:02:45.042Z · LW(p) · GW(p)

ignoring global warming because it's really hard to quantify

Oh really? Since when?

Edit: Just in case you weren't convinced.

If you go into the sampling and analysis specifics, the chemistry is sound. There are a few assumptions made, as with any data sampling technique, but if you want to dispute such details, you may as well dispute the technical specifics and register your objection there. Otherwise, I don't see where your claim holds; this is one of the better-documented global disputes (which makes sense, since so much is at stake with regards to both the industry involved and the alleged consequences of climate change).

I can say that global productivity increase doesn't mean anything if it cannot be sustained.

Replies from: Lumifer
comment by Lumifer · 2017-03-27T18:26:32.323Z · LW(p) · GW(p)

Oh really?

Please illustrate, then. What is the net cost of "cars and motor vehicles in general" with respect to their "emissions into the atmosphere"? Use numbers and show your work.

Replies from: dglukhov
comment by dglukhov · 2017-03-27T18:46:19.163Z · LW(p) · GW(p)

Okay, consider this an IOU for a future post on an analysis. I'm assuming you'd want an analysis of emissions relative to automobile use, correct? Wouldn't an emissions analysis based on fossil fuel consumption in general be more comprehensive?

Edit: In the meantime, reading this analysis that's already been done may help establish a better understanding on the subject of quantifying emissions costs.

Also, please understand that what you're asking for is something whole analytical chemistry organizations spend vast amounts of their funding on. To say that I alone will try to provide something anywhere close to the quality provided by these organizations would be to exercise quite a bit of hubris on my part.

That said, my true rejection of Elo's comment wasn't that global warming isn't hard to quantify. My true rejection is that it seems entirely careless to discard global warming from a discussion of the virtue (or lack thereof) of motor vehicles and other forms of transportation.

Replies from: Lumifer
comment by Lumifer · 2017-03-27T19:12:27.778Z · LW(p) · GW(p)

We are talking about the cost-benefit analysis of cars and similar motor vehicles (let's define them as anything that moves and has an internal combustion engine). Your point seems to be that cars are not net beneficial -- is that so? A weaker claim -- that cars have costs and not only benefits -- is obvious and I don't think anyone would argue with it.

In particular, you pointed out that some of the costs involved have to do with global warming and -- this is the iffy part -- that this cost is easy to quantify. Since I think that such cost would be very-difficult-to-impossible to quantify, I'm curious about your approach.

Your link is to an uncritical Gish Gallop ("literature review" might be a more charitable characterization) through all the studies which said something on the topic.

Re update:

Cost is an economics question. Analytical chemistry is remarkably ill-equipped to answer such questions.

As to "careless to discard global warming", well, I believe Elo's point was that it's hard to say anything definite about the costs of cars in this respect (keep in mind, for example, that humans do need transportation so in your alternate history where internal-combustion-engine motor vehicles don't exist or are illegal, what replaces them?)

Replies from: dglukhov, dglukhov
comment by dglukhov · 2017-03-27T19:41:08.616Z · LW(p) · GW(p)

Cost is an economics question. Analytical chemistry is remarkably ill-equipped to answer such questions.

Analytical chemistry is well equipped to handle and acquire the data to show, definitively, that global warming is caused by emissions. To go further and say that we cannot use these facts to decide whether the automotive infrastructure is worth augmenting, because it's too hard to make a cost-benefit analysis in light of the potential costs associated with global warming and air pollution, is careless. Coastal flooding is a major cost (with rising oceans), as are extreme weather patterns (the recent flooding in Peru comes to mind), as well as the inevitable mass migrations (or deaths) resulting from these phenomena. I'm not aware of such figures, but this is a start.

keep in mind, for example, that humans do need transportation so in your alternate history where internal-combustion-engine motor vehicles don't exist or are illegal, what replaces them?

Though I'm not asking for a replacement of motor vehicles (although electric cars come to mind), I am asking for augmentation. Why take the risk?

Replies from: Lumifer
comment by Lumifer · 2017-03-27T20:17:25.533Z · LW(p) · GW(p)

Analytical chemistry is well equipped to handle and acquire the data to show, definitively, that global warming is caused by emissions.

I was not aware that analytical chemists make climate models and causal models, too...

Coastal flooding is a major cost

You are confused about tenses. Coastal flooding, etc. (note the present tense) is not a major cost. Coastal flooding might become a cost in the future, but that is a forecast. Forecasts are different from facts.

electric cars come to mind

Electric batteries do not produce energy, they merely store energy. If the energy to charge these batteries comes from fossil fuels, nothing changes.

I am asking for augmentation

What do you mean by augmentation?

Replies from: dglukhov
comment by dglukhov · 2017-03-27T22:07:13.404Z · LW(p) · GW(p)

I was not aware that analytical chemists make climate models and causal models, too...

They can. Though the people who came up with the infrared spectroscopy technique may not have been analytical chemists by trade. Mostly physicists, I believe. Why is this relevant? Because the same physics that makes infrared spectroscopy work is also the reason why emissions cause warming.

You are confused about tenses. Coastal flooding, etc. (note the present tense) is not a major cost. Coastal flooding might become a cost in the future, but that is a forecast. Forecasts are different from facts.

Coastal flooding damages infrastructure built on said coasts (unless said infrastructure was designed to mitigate said damage). That is a fact. I don't see what the problem is here.

Electric batteries do not produce energy, they merely store energy. If the energy to charge these batteries comes from fossil fuels, nothing changes.

Agreed. So let me rephrase. Solar energy comes to mind. Given enough time, solar panels that were built using tools and manpower powered by fossil fuels will eventually outproduce the energy spent to build them. This does change things if that energy can then be stored, transferred, and used for transportation purposes, since our current infrastructure still relies on such transportation technology.
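To make the "eventually outproduce" point concrete, here is a back-of-the-envelope sketch; every number below is a hypothetical placeholder I am assuming for illustration, not a measured value:

    # Rough, illustrative sketch of the "energy payback" idea
    # (all numbers are hypothetical placeholders, not real measurements).
    embodied_energy_kwh = 2500.0   # assumed energy cost to manufacture one panel
    annual_output_kwh = 400.0      # assumed yearly production of that panel
    lifetime_years = 25            # assumed service life

    payback_years = embodied_energy_kwh / annual_output_kwh
    net_energy_kwh = annual_output_kwh * lifetime_years - embodied_energy_kwh

    print(payback_years)   # 6.25 years until the panel has "paid back" its build energy
    print(net_energy_kwh)  # 7500 kWh of net surplus over its assumed lifetime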

This is what I mean by augmentation. Change the current infrastructure to support and accept renewable energy sources over fossil fuels. We cannot do this yet globally, though some regions have managed to beat those odds.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T14:31:02.273Z · LW(p) · GW(p)

They can. Though the people who came up with the infrared spectroscopy technique may not have been analytical chemists by trade.

You are confused between showing that CO2 is a greenhouse gas and developing climate models of the planet Earth.

Coastal flooding damages infrastructure

Yes, but coastal flooding is a permanent feature of building on the coasts. Your point was that coastal flooding (and mass migrations and deaths) are (note: present tense) the result of global warming. This is (note: present tense) not true. There are people who say that this will become (note: future tense) true, but these people are making a forecast.

Solar energy comes to mind

At which point we are talking about the whole energy infrastructure of the society and not about the costs of cars.

Replies from: dglukhov
comment by dglukhov · 2017-03-28T15:41:32.926Z · LW(p) · GW(p)

You are confused between showing that CO2 is a greenhouse gas and developing climate models of the planet Earth.

What other inferential steps does a person need to be shown to tell them that because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event?

There are people who say that this will become (note: future tense) true, but these people are making a forecast.

The recent weather anomalies hitting earth imply the future is here.

At which point we are talking about the whole energy infrastructure of the society and not about the costs of cars.

Indeed, so why not debate at the metalevel of the infrastructure, and see where the results of that debate lead in terms of their impacts on the automotive industry? It is a massive industry, worth trillions of dollars globally; any impacts on energy infrastructure will have lasting impacts on the automotive industry.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T15:52:19.716Z · LW(p) · GW(p)

What other inferential steps does a person need to be shown to tell them that because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event?

Look up a disagreement between two chaps, Svante Arrhenius and Knut Ångström :-)

Here is the argument against your position (there is a counter-argument to it, too):

water vapor, which is far more abundant in the air than carbon dioxide, also intercepts infrared radiation. In the infrared spectrum, the main bands where each gas blocked radiation overlapped one another. How could adding CO2 affect radiation in bands of the spectrum that H2O (not to mention CO2 itself) already made opaque?

.

The recent weather anomalies hitting earth imply the future is here.

Like the remarkable hurricane drought in North America? Or are you going to actually argue that weather is climate?

so why not debate at the metalevel of the infrastructure

Sure, but it's a different debate.

Replies from: dglukhov
comment by dglukhov · 2017-03-28T16:42:20.967Z · LW(p) · GW(p)

there is a counter-argument to it, too

What was his counter-argument? I can't read German.

Like the remarkable hurricane drought in North America? Or are you going to actually argue that weather is climate?

Well clearly we need to establish a time range. Most sources for weather and temperature records I've seen span a couple of centuries. Is that not a range large enough to talk about climate instead of weather?

Sure, but it's a different debate.

It's a related debate, especially relevant if conclusions in the debate a metalevel lower are unenlightened.

Replies from: Lumifer
comment by Lumifer · 2017-03-28T17:03:34.041Z · LW(p) · GW(p)

What was his counter-argument?

Here

comment by dglukhov · 2017-03-27T19:15:47.696Z · LW(p) · GW(p)

Noted, edited.

The description of the link is entirely unfair. It provides a (relatively) short summary of the language of the debate, as well as a slew of data points to overview. To frame the source as you describe it is entirely an exercise in poisoning the well.

Replies from: Lumifer
comment by Lumifer · 2017-03-27T19:34:20.080Z · LW(p) · GW(p)

The source is a one-guy organization which doesn't even pretend it's unbiased.

Replies from: dglukhov
comment by dglukhov · 2017-03-27T19:53:50.096Z · LW(p) · GW(p)

Ironic, since you just asked me to do my own analysis on the subject, yet you are unwilling to read the "one-guy organization" and what it has to say on the subject.

The merits (or lack thereof) of said organization have nothing to do with how true or false the source is. This is ad hominem.

Replies from: Lumifer
comment by Lumifer · 2017-03-27T20:20:10.187Z · LW(p) · GW(p)

I glanced at your source. The size is relevant because you told me that

what you're asking for is something whole analytical chemistry organizations spend vast amounts of their funding on

...and the lack of bias (or lack of lack) does have much to do with how one treats sources of information.

Replies from: dglukhov
comment by dglukhov · 2017-03-27T20:35:23.848Z · LW(p) · GW(p)

If you'd filter out a one-man firm as a source not worth reading, you'd filter out any attempt at an analysis on my part as well.

I am concerned about quality here, not so much whom the sources come from. This, necessarily, requires more than just a glance at the material.

comment by Lumifer · 2017-03-22T14:41:23.537Z · LW(p) · GW(p)

I have no awareness of an incentive system that dictates that those with money and power need necessarily provide for everyone else.

It's called a survival instinct.

Replies from: dglukhov
comment by dglukhov · 2017-03-22T14:55:10.163Z · LW(p) · GW(p)

Good luck coalescing that into any meaningful level of resistance. History shows that leaders haven't been very kind to revolutions, and the success rate for such movements isn't necessarily high given the technical limitations.

I say this only because I'm seeing a slow tendency towards an absolution of leader-replacement strategies and sentiments.

Replies from: Lumifer
comment by Lumifer · 2017-03-22T16:18:15.488Z · LW(p) · GW(p)

coalescing that into any meaningful level of resistance

Resistance on whose part to what?

History shows that leaders haven't been very kind to revolutions

Revolutions haven't been very kind to leaders, too -- that's the point. When the proles have nothing to lose but their chains, they get restless :-/

an absolution of leader-replacement strategies

...absolution?

Replies from: Viliam, dglukhov
comment by Viliam · 2017-03-23T10:31:26.861Z · LW(p) · GW(p)

When the proles have nothing to lose but their chains, they get restless :-/

Is this empirically true? I am not an expert, but it seems to me that many revolutions are caused not by consistent suffering -- which makes people adjust to the "new normal" -- but rather by situations where the quality of life increases a bit -- which gives people expectations of improvement -- and then either fails to increase further, or even falls back a bit. That is when people explode.

A child doesn't throw a tantrum because she never had a chocolate, but she will if you give her one piece and then take away the remaining ones.

Replies from: Lumifer
comment by Lumifer · 2017-03-24T15:25:34.037Z · LW(p) · GW(p)

seems to me that many revolutions are caused not by consistent suffering

The issue is not the level of suffering; the issue is what you have to lose. What's the downside to burning the whole system to the ground? If not much, well, why not?

That is when people explode

The middle class doesn't explode. Arguably that's the reason why revolutions (and popular uprisings) in the West have become much rarer than, say, a couple of hundred years ago.

Replies from: gjm, Viliam, MaryCh
comment by gjm · 2017-03-24T18:06:53.781Z · LW(p) · GW(p)

The American revolution seems to have been a pretty middle-class affair. The Czech(oslovakian) "Velvet Revolution" and the Estonian "Singing Revolution" too, I think. [EDITED to add:] In so far as there can be said to be a middle class in a communist state.

Replies from: Lumifer
comment by Lumifer · 2017-03-24T19:29:47.849Z · LW(p) · GW(p)

Yeah, Eastern Europe / Russia is an interesting case. First, as you mention, it's unclear to what degree we can speak of a middle class there during the Soviet times. Second, some "revolutions" there were velvet primarily because the previous power structures essentially imploded, leaving a vacuum in their place -- there was no one to fight. However, not all of them were, and the notable post-Soviet power struggle in the Ukraine (the "orange revolution") was protracted and somewhat violent.

So... it's complicated? X-)

comment by Viliam · 2017-03-24T22:52:04.749Z · LW(p) · GW(p)

The issue is not the level of suffering; the issue is what you have to lose.

More precisely, it is what you believe you have to lose. And humans seem to have a cognitive bias of taking all advantages of the current situation for granted, if they have existed for at least a decade.

So when people see more options, they are going to be like: "Worst case, we fail and everything stays like it is now. Best case, everything improves. We just have to try." Then they sometimes get surprised, for example when millions of them starve to death, learning too late that they actually had something to lose.

In some sense, Brexit or Trump are revolutions converted by the mechanism of democracy into mere dramatic elections. People participating in them seem to have the "we have nothing to lose" mentality. I am not saying they are going to lose something as a consequence, only that the possibility of such an outcome certainly exists. I wouldn't bother trying to convince them of that, though.

comment by MaryCh · 2017-03-24T16:43:47.644Z · LW(p) · GW(p)

(Yes it does.)

comment by dglukhov · 2017-03-22T17:05:55.919Z · LW(p) · GW(p)

Resistance on whose part to what?

Resistance of those without resources against those with amassed resources. We can call them rich vs. poor, leaders vs. followers, advantaged vs. disadvantaged. The advantaged groups tend to be characteristically small, the disadvantaged large.

Revolutions haven't been very kind to leaders, too -- that's the point. When the proles have nothing to lose but their chains, they get restless :-/

Restlessness is useless when it is condensed and exploited to empower those chaining them. For example, rebellion is an easily bought commercial product, a socially/tribally recognized garb you can wear. It is far easier to look the part of a revolutionary than to actually do anything that could potentially defy the oppressive regime you might be a part of. There are other examples, which leads me to my next point.

...absolution?

It would be in leaders' best interest to optimize for a situation where rebellion cannot ever arise; that is the single threat any self-interested leader with the goal of continuing their reign needs to worry about. Whether it involves mass surveillance, economic manipulation, or simply despotic control is largely irrelevant; the idea behind them is what counts. Now, when you bring up the subject of technology, any smart leader with a stake in the length of their reign will immediately seize any opportunity to extend it. Set up a situation that creates technology which necessarily mitigates the potential for rebellion to arise, and you get to rule longer.

This is a theoretical scenario. It is a scary one, and the prevalence of conspiracy theories arising from such a theory simply plays to biases founded in fear. And of course, with bias comes the inevitable rationalist backlash to such ideas. But I'm not interested in this political discourse; I just want to highlight something.

The scenario establishes an optimization process. Optimization for control. It is always more advantageous for a leader to worry more about their reign and extend it than to be benevolent, a sort of tragedy of the commons for leaders. The natural in-system solution for this optimization problem is to eliminate all potential sources of competition. The out-system solution for this optimization problem is mutual cooperation and control-sharing to meet certain needs and goals.

There exists no out-system incentive that I am currently aware of. Rationality doesn't count, since it still leads to in-system outcomes (benevolent leaders).

EDIT: I just thought of an ironic situation. The most prevalent current solution to the tragedy of the commons is government regulation. This is only a Band-Aid, since you get a recursion issue of figuring out who's gonna govern the government.

Replies from: Lumifer
comment by Lumifer · 2017-03-22T18:28:42.854Z · LW(p) · GW(p)

Restlessness is useless when it is condensed and exploited to empower those chaining them.

And when it's not? Consider Ukraine. Or, if you want to go a bit further back in time, the whole collapse of the USSR and its satellites.

It is always more advantageous for a leader to worry more about their reign and extend it than to be benevolent

I don't see why. It is advantageous for a leader to have satisfied and so complacent subjects. Benevolence can be a good tool.

Replies from: dglukhov
comment by dglukhov · 2017-03-22T20:29:26.715Z · LW(p) · GW(p)

And when it's not? Consider Ukraine. Or, if you want to go a bit further back in time, the whole collapse of the USSR and its satellites.

Outcompeted by economic superpowers. Purge people all you want; if there are advantages to being integrated into the world economic system, the people who explicitly leave will suffer the consequences. China did not choose such a fate, but neither is it rebelling.

I don't see why. It is advantageous for a leader to have satisfied and so complacent subjects. Benevolence can be a good tool.

Benevolence is expensive. You will always have an advantage in paying your direct subordinates (generals, bankers, policy-makers, etc.) rather than the bottom rung of the economic ladder. If you endorse those who cannot keep you in power, those that would normally keep you in power will simply choose a different leader (who's probably going to endorse them more than you do). Of course, your subordinates are inevitably dealing with the exact same problem, and chances are they too will optimize by supporting those who can keep them in power. There is no in-system incentive to be benevolent. You could argue a traditional republic tries to circumvent this by empowering those at the bottom to work better (which can only improve living conditions), but the amount of uncertainty for the leader increases, and leaders in this system do not enjoy extended reigns. To optimize around this problem, you absolve rebellious sentiment.

Convince your working populace that they are happy (whether they're happy or not), and your rebellion problem is gone. There is, therefore, still no in-system incentive to be benevolent (this is just a Band-Aid); the true incentive is to get rid of uncertainty as to the loyalty of your subordinates.

Side-note: analysis of the human mind scares me in a way. Knowing precisely how to manipulate the human mind makes this goal much easier to attain. For example, take any data analytics firm that sells its services for marketing purposes. It can collaborate with social media companies such as Facebook (which currently has over 1.7 billion monthly active users as data points, though perhaps more since this is old data), where you freely give away your personal information, and get a detailed understanding of population clusters in regions with access to such services.

comment by markan · 2017-03-20T18:29:30.959Z · LW(p) · GW(p)

I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI

Replies from: ChristianKl, dogiv, turchin
comment by ChristianKl · 2017-03-22T12:16:43.309Z · LW(p) · GW(p)

A good metaphor is a cliff. A cliff poses a risk in that it is physically possible to drive over it. In the same way, it may be physically possible to build a very dangerous AI. But nobody wants to do that, and—in my view—it looks quite avoidable.

That sounds naive and gives the impression that you haven't taken the time to understand the AI risk concerns. You provide no arguments besides the fact that you don't see the problem of AI risk.

The prevailing wisdom in this community is that most GAI designs are going to be unsafe and a lot of the unsafety isn't obvious beforehand. There's the belief that if the value alignment problem isn't solved before human level AGI, that means the end of humanity.

comment by dogiv · 2017-03-20T18:58:18.597Z · LW(p) · GW(p)

The idea that friendly superintelligence would be massively useful is implicit (and often explicit) in nearly every argument in favor of AI safety efforts, certainly including EY and Bostrom. But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development. I am not convinced.

Your argument rests on the proposition that current research on AI is so specific that its contribution toward human-level AI is very small, so small that the modest efforts of EAs (compared to all the massive corporations working on narrow AI) will speed things up significantly. In support of that, you mainly discuss vision--and I will agree with you that vision is not necessary for general AI, though some form of sensory input might be. However, another major focus of corporate AI research is natural language processing, which is much more closely tied to general intelligence. It is not clear whether we could call any system generally intelligent without it.

If you accept that mainstream AI research is making some progress toward human-level AI, even though it's not the main intention, then it quickly becomes clear that EA efforts would have greater marginal benefit in working on AI safety, something that mainstream research largely rejects outright.

Replies from: MrMind
comment by MrMind · 2017-03-22T11:07:09.407Z · LW(p) · GW(p)

But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development.

This is almost the inverse Basilisk argument.

comment by turchin · 2017-03-20T19:12:00.762Z · LW(p) · GW(p)

If you prove that HLAI is safer than a narrow AI jumping into a paperclip maximiser, it is a good EA case.

If you prove that the risks of synthetic biology are extremely high if we do not create HLAI in time, it would also support your point of view.

comment by mortal · 2017-03-25T13:11:42.478Z · LW(p) · GW(p)

What do you think of the idea of 'learning all the major mental models' - as promoted by Charlie Munger and FarnamStreet? These mental models also include cognitive fallacies, one of the major foci of Lesswrong.

I personally think it is a good idea, but it doesn't hurt to check.

Replies from: ChristianKl
comment by ChristianKl · 2017-03-27T08:31:18.176Z · LW(p) · GW(p)

Learning different mental models is quite useful.

On the other hand I'm not sure that it makes sense to think that there's one list with "the major mental models". Many fields have their own mental models.

comment by PhilGoetz · 2017-03-25T04:37:54.247Z · LW(p) · GW(p)

The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-03-26T09:00:57.885Z · LW(p) · GW(p)

Yep.

comment by username2 · 2017-03-24T11:06:18.036Z · LW(p) · GW(p)

Something happened to the main page. It no longer contains links to Main and Discussion.

Replies from: username2, Elo
comment by username2 · 2017-03-24T12:14:55.846Z · LW(p) · GW(p)

Preparing for the closure of the discussion forums? "Management" efforts to kickstart things with content-based posts seem to have stalled after the flurry in Nov/Dec.

comment by Elo · 2017-03-24T12:14:36.083Z · LW(p) · GW(p)

yes, we are working on it.

comment by Bound_up · 2017-03-20T23:49:25.496Z · LW(p) · GW(p)

Suppose there are 100 genes which figure into intelligence, the odds of getting any one being 50%.

The most common result would be for someone to get 50/100 of these genes and have average intelligence.

Some smaller number would get 51 or 49, and a smaller number still would get 52 or 48.

And so on, until at the extremes of the scale, such a small number of people get 0 or 100 of them that no one we've ever heard of or has ever been born has had all 100 of them.

As such, incredible superhuman intelligence would be manifest in a human who just got lucky enough to have all 100 genes. If some or all of these genes could be identified and manipulated in the genetic code, we'd have unprecedented geniuses.
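For concreteness, a quick sketch of the arithmetic in this thought experiment (purely illustrative; the 100-gene, 50%-each model is the hypothetical above, not a claim about real genetics):

    # Treat the number of "intelligence genes" a person gets as Binomial(100, 0.5).
    from math import comb

    n, p = 100, 0.5
    prob = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)

    print(prob(50))   # ~0.0796, the most common outcome
    print(prob(60))   # ~0.0108, already an order of magnitude rarer
    print(prob(100))  # ~7.9e-31, effectively never observed in any population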

Replies from: Viliam, philh, MrMind, Qiaochu_Yuan
comment by Viliam · 2017-03-21T16:19:33.202Z · LW(p) · GW(p)

Let me be the one to describe this glass as half-empty:

If there are 100 genes that participate in IQ, it means that there exists an upper limit to human IQ, i.e. when you have all 100 of them. (Ignoring the possibility of new IQ-increasing mutations for the moment.) Unlike the mathematical bell curve which -- mathematically speaking -- stretches into infinity, this upper limit of human IQ could be relatively low; like maybe IQ 200, but definitely no Anasûrimbor Kellhus.

It may turn out that to produce another Einstein or von Neumann, you need a rare combination of many factors, where having IQ close to the upper limit is necessary but not sufficient, and the rest is e.g. nutrition, personality traits, psychological health, and choices made in life. So even if you genetically produce 1000 people with the max IQ, barely one of them becomes functionally another Einstein. (But even then, 1 in 1000 is much better than 1 per generation globally.)

(Actually, this is my personal hypothesis of IQ, which -- if true -- would explain why different populations have more or less the same average IQ. Basically, let's assume that having all those IQ genes gives you IQ 200, and that all lower IQ is a result of mutational load, and IQ 100 simply means a person with average mutational load. So even if you populated a new island with Mensa members, in a few generations some of them would receive bad genes not just by inheritance but also by random non-fatal mutations, gradually lowering the average IQ to 100. On the other hand, if you populated a new island with retards, as long as all the IQ genes are present in at least some of them, in a few generations natural selection would spread those genes through the population, gradually increasing the average IQ to 100.)

Replies from: Lumifer, gathaung
comment by Lumifer · 2017-03-21T16:26:16.057Z · LW(p) · GW(p)

it means that there exists an upper limit to human IQ

I'm pretty sure that there is an upper limit to the IQ capabilities of a blob of wetware that has to fit inside a skull.

comment by gathaung · 2017-03-27T16:04:15.820Z · LW(p) · GW(p)

AFAIK (and wikipedia tells), this is not how IQ works. For measuring intelligence, we get an "ordinal scale", i.e. a ranking between test-subjects. An honest reporting would be "you are in the top such-and-so percent". For example, testing someone as "one-in-a-billion performant" is not even wrong; it is meaningless, since we have not administered one billion IQ tests over the course of human history, and have no idea what one-in-a-billion performance on an IQ test would look like.

Because IQ is designed by people who would try to parse HTML with a regex (I cannot think of a worse insult here), it is normalized to a normal distribution. This means that one applies the inverse error function, with an SD of 15 points, to the percentile data. Hence, IQ is Gaussian-by-definition. To compare, use e.g. Python as a handy pocket calculator:

    from math import *

    iqtopercentile = lambda x: erfc((x-100)/15)/2

    iqtopercentile(165)
    # -> 4.442300208692339e-10

So we see that a claim that any human has an IQ of 165+ is statistically meaningless. Even extrapolating to all of human history, an IQ of 180+ is meaningless:

    iqtopercentile(180)
    # -> 2.3057198811629745e-14

Yep, by the current definition you would need to test 10^14 humans to get one who manages an IQ of 180. If you test 10^12 humans and one god-like super-intelligence, then the super-intelligence gets an IQ of maybe 175 -- because you should not apply the inverse error function to an ordinal scale, since ordinal scales cannot capture bimodal distributions. Trying to do so invites eldritch horrors onto our plane who will parse HTML with a regex.

Replies from: Good_Burning_Plastic, Viliam
comment by Good_Burning_Plastic · 2017-03-28T15:47:01.893Z · LW(p) · GW(p)

    iqtopercentile = lambda x: erfc((x-100)/15)/2

The 15 should actually be (15.*sqrt(2)), resulting in iqtopercentile(115) = 0.16 as it should be, rather than the 0.079 your expression gives; iqtopercentile(165) = 7.3e-6 (i.e. about 7 such people on average in a city with 1 million inhabitants), and iqtopercentile(180) = 4.8e-8 (i.e. several hundred such people in the world).

(Note also that in Python 2, (x-100)/15 returns an integer whenever x is an integer.)
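
For reference, a sketch of the corrected snippet with the sqrt(2) factor included (and a float literal so the division also behaves under Python 2):

    from math import erfc, sqrt

    iqtopercentile = lambda x: erfc((x - 100) / (15. * sqrt(2))) / 2

    print(iqtopercentile(115))  # ~0.16
    print(iqtopercentile(165))  # ~7.3e-6
    print(iqtopercentile(180))  # ~4.8e-8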

comment by Viliam · 2017-03-28T10:50:36.290Z · LW(p) · GW(p)

Yeah, I agree with everything you wrote here. For extra irony, I also have a Mensa-certified IQ of 176. (Which would put me 1 IQ point above the godlike superintelligence. Which is why I am waiting for Yudkowsky to build his artificial intelligence, which will become my apprentice, and together we will rule the galaxy.)

Ignoring the numbers, my point, which I probably didn't explain well, was this:

  • There is an upper limit to biological human intelligence (ignoring new future mutations), i.e. getting all the intelligence genes right.

  • It is possible that people with this maximum biological intelligence are actually less impressive than what we would expect. Maybe they are at an "average PhD" level.

  • And what we perceive as geniuses, e.g. Einstein or von Neumann, that's actually a combination of high biological intelligence and many other traits.

  • Therefore, a genetic engineering program creating thousand new max-intelligence humans could actually fail to produce a new Einstein.

Replies from: gathaung, Lumifer
comment by gathaung · 2017-03-28T15:20:01.771Z · LW(p) · GW(p)

Congrats! This means that you are a Mensa-certified very one-in-a-thousand-billion-special snowflake! If you believe in the doomsday argument then this ensures either the continued survival of bio-humans for another thousand years or widespread colonization of the solar system!

On the other hand, this puts quite the upper limit on the (institutional) numeracy of Mensa... a rough guess suggests that at least one in 10^3 people has sufficient numeracy to be incapable of certifying an IQ of 176 with a straight face, which would give us an upper bound on the NQ (numeracy quotient) of Mensa of 135.

(sorry for the snark; it is not directed at you but at the clowns at Mensa, and I am not judging anyone for having taken these guys seriously at a younger age)

Regarding your serious points: Obviously you are right, and equally obviously luck (living at the right time and encountering the right problem that you can solve) also plays a pretty important role. It is just that we do not have sensible definitions for "intelligence".

IQ is by design incapable of describing outliers, and IMHO mostly nonsense even in the bulk of the distribution (but reasonable people may disagree here). Also, even if you somehow constructed a meaningful linear scale for "intelligence", I strongly suspect that the distribution would be very far from Gaussian at the tails (trivially so at the lower end, nontrivially so at the upper end). And again, applying the inverse error function to ordinal scales... why?

Replies from: gjm, Viliam
comment by gjm · 2017-03-29T12:20:32.035Z · LW(p) · GW(p)

On the other hand, any regular reader of LW will (1) be aware that LW folks as a population are extremely smart and (2) notice that Viliam is demonstrably one of the smartest here, so the Mensa test got something right.

Of course any serious claim to be identifying people five standard deviations above average in a truly normally-distributed property is bullshit, but if you take the implicit claim behind that figure of 176 to be only "there's a number that kinda-sorta measures brainpower, the average is about 100, about 2% are above 130, higher numbers are dramatically rarer, and Viliam scored 176 which means he's very unusually bright" then I don't think it particularly needs laughing at.

Replies from: gathaung
comment by gathaung · 2017-05-16T15:49:51.540Z · LW(p) · GW(p)

It was not my intention to make fun of Viliam; I apologize if my comment gave this impression.

I did want to make fun of the institution of Mensa, and stand by them deserving some good-natured ridicule.

I agree with your charitable interpretation about what an IQ of 176 might actually mean; thanks for stating this in such a clear form.

comment by Viliam · 2017-03-29T00:08:02.375Z · LW(p) · GW(p)

Well, Mensa has sucked at numbers since its very beginning. The original plan was to select the most intelligent 1% of people, but by mistake they made it 2%, and when they found out later, they decided to just keep it as it was.

"More than two sigma, that means approximately 2%, right?" "Yeah, approximately." Later: "You meant, 2% at both ends of the curve, so 1% at each, right?" "No, I meant 2% at each." "Oh, shit."

Replies from: tut
comment by tut · 2017-03-29T11:51:26.138Z · LW(p) · GW(p)

What? 2 sigma means 2.5% at each end.

Replies from: Lumifer, Viliam
comment by Lumifer · 2017-03-29T15:07:52.070Z · LW(p) · GW(p)

2 sigma means 2.5% at each end

That sentence is imprecise.

If you divide a standard Gaussian at the +2 sigma boundary, the probability mass to the left will be 97.5% and to the right ("the tail") -- 2.5%.

So two sigmas don't mean 2.5% at each end, they mean 2.5% at one end.

On the other hand, if you use a 4-sigma interval from -2 sigmas to +2 sigmas, the probability mass inside that interval will be 95% and both tails together will make 5% or 2.5% each.
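
For what it's worth, a quick check with the same math module used upthread (exactly 2 sigma gives about 2.3% in one tail; the rounded 2.5% figure corresponds to 1.96 sigma):

    from math import erfc, sqrt

    one_tail = erfc(2 / sqrt(2)) / 2   # mass above +2 sigma: ~0.023
    two_tail = erfc(2 / sqrt(2))       # mass outside [-2, +2] sigma: ~0.046
    print(one_tail, two_tail)
    print(erfc(1.96 / sqrt(2)) / 2)    # ~0.025 -- the conventional "2.5% in one tail"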

comment by Viliam · 2017-03-29T13:34:47.979Z · LW(p) · GW(p)

Apparently, Mensa hasn't gotten any better at math since then. As far as I know, they still use "2 sigma" and "top 2%" as synonyms. Well, at least those of them who know what "sigma" means.
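
For completeness, a quick look at what the two conventions actually imply, using the standard normal quantile function (values approximate):

    from statistics import NormalDist  # Python 3.8+

    z_top2 = NormalDist().inv_cdf(0.98)  # ~2.05 sigma -> IQ ~131 on an SD-15 scale
    z_top1 = NormalDist().inv_cdf(0.99)  # ~2.33 sigma -> IQ ~135 on an SD-15 scale
    print(z_top2, z_top1)

So "top 2%" is not exactly "2 sigma" either; the two conventions only coincide approximately.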

comment by Lumifer · 2017-03-28T14:47:00.396Z · LW(p) · GW(p)

Therefore, a genetic engineering program creating thousand new max-intelligence humans could actually fail to produce a new Einstein.

Only if what makes von Neumanns and Einsteins is not heritable. Once you have a genetic engineering program going, you are not limited to adjusting just IQ genes.

comment by philh · 2017-03-21T12:36:24.465Z · LW(p) · GW(p)

You're also assuming that the genes are independently distributed, which isn't true if intelligent people are more likely to have kids with other intelligent people.

comment by MrMind · 2017-03-21T08:20:44.033Z · LW(p) · GW(p)

Well, yes. You have re-discovered the fact that a binomial distribution resembles, in the limit, a normal distribution.

comment by Qiaochu_Yuan · 2017-03-21T04:33:40.009Z · LW(p) · GW(p)

I mean, yes, of course. You might be interested in reading about Stephen Hsu.