Open thread, Nov. 30 - Dec. 06, 2015

post by MrMind · 2015-11-30T08:05:47.654Z · LW · GW · Legacy · 105 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

105 comments

Comments sorted by top scores.

comment by Richard_Kennaway · 2015-11-30T11:36:25.897Z · LW(p) · GW(p)

I request the attention of a moderator to the wiki editing war that began a week ago between Gleb_Tsipursky and VoiceofRa, regarding the article on Intentional Insights. So far, VoiceofRa has deleted it twice, and Gleb_Tsipursky has restored it twice.

Due to the way the editing to remove the page was done, to see the full editing history it is necessary to look also at the pseudo-article titled Delete.

I do not care whether there is an article on Intentional Insights or not, but I do care about standards for editing the wiki.

Replies from: Elo, gjm, Elo
comment by Elo · 2015-11-30T20:07:08.713Z · LW(p) · GW(p)

Thank you for raising this.

I suggest Gleb not be permitted to edit the page, as he is motivated not to be impartial. I also suggest Ra equally not edit the page, and that we leave it to others to modify. (I hate saying "others will do it", but at worst I will.)

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-01T09:20:29.167Z · LW(p) · GW(p)

Perhaps also best to add that Intentional Insights is not officially affiliated with LW?

comment by gjm · 2015-11-30T11:45:52.046Z · LW(p) · GW(p)

I think there surely should be an article on Intentional Insights but it should be as neutrally written as possible. Deleting it seems like mere vandalism to me.

Replies from: gjm
comment by gjm · 2015-11-30T15:05:58.298Z · LW(p) · GW(p)

I've put a new version of the page in place. It is much less uncritical than what Gleb wrote.

Replies from: gjm
comment by gjm · 2015-11-30T19:34:34.408Z · LW(p) · GW(p)

... Er, but now that seems to be gone and I don't even see any record of my edits. Perhaps I needed to do something different on account of the page having been deleted. (I just visited the page with redirect=no or redirect=off or whatever it is, edited its contents to replace the magic #REDIRECT[[Delete]] or whatever it was, and saved -- was that wrong?)

Anyway, it looks as if Gleb has now recreated the page again, so it exists but is rather one-sidedly promotional.

[EDITED to add:] No, wait, I was seeing an old cached version. I think the wiki must be serving up pages with misleading cache-control headers or something. I think it's all OK.
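
For the record, a quick way to check what the wiki is actually sending is to inspect the response headers directly. A minimal sketch using Python's requests library (whether the Cache-Control header is in fact misleading is exactly the open question):

    # Inspect the caching-related headers the wiki serves for the page.
    import requests

    url = "https://wiki.lesswrong.com/wiki/Intentional_Insights"
    response = requests.get(url)
    for header in ("Cache-Control", "Expires", "Last-Modified", "Age"):
        print(f"{header}: {response.headers.get(header)}")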

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-11-30T20:15:03.855Z · LW(p) · GW(p)

This is rather strange. If I go to https://wiki.lesswrong.com/wiki/Intentional_Insights, I see Gleb's last version. If I go to https://wiki.lesswrong.com/index.php?title=Intentional_Insights, I see gjm's version. However, clicking on the history tab of the latter page lists no edits since Gleb's of 19 November.

On the "Delete" page, the history at https://wiki.lesswrong.com/index.php?title=Delete&action=history shows a third attempt by VoiceofRa at 18:49 on 30 November 2015 (timezone unspecified but probably UDT) to delete the InIn material.

There seems to be some sort of inconsistency in the wiki. VoiceofRa's misuse of redirection does not help.

Replies from: gjm
comment by gjm · 2015-11-30T22:07:39.235Z · LW(p) · GW(p)

Try doing control-f5 to force a full reload of the page. Until I did that, I found that I saw "my" version when logged in and Gleb's version when logged out. I think I saw weird behaviour in the histories too, but I forget exactly what.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-11-30T22:53:15.742Z · LW(p) · GW(p)

I'm using Safari on a Mac, so ctrl-F5 isn't a meaningful keystroke. Trying a different browser, one I had never accessed these pages with before, gave the same behaviour. That browser (Firefox) has ctrl-shift-R for a forced reload, and that made the wiki page for the article itself show your version. However, the history and discussion pages weren't up to date until I did more forced reloads.

Now even more weirdness is happening. In Firefox, if I force-reload https://wiki.lesswrong.com/wiki/Intentional_Insights, I get the latest version. If I do an ordinary reload, I get Gleb's old version. Force reload -- new. Ordinary reload -- old. I have never seen this behaviour before. Clearly something is wrong somewhere, but what?

Back in Safari, I cleared all history and cookies. Same behaviour as before: one URL gets Gleb's old version, one gets your version. This happens whether I'm logged in or out.

I see that the history has a few minor edits by Gleb around 06:38, 1 December 2015. UTC right now is 22:53, 30 November 2015. What timezone does the wiki run on?

Replies from: Elo
comment by Elo · 2015-12-01T00:04:50.241Z · LW(p) · GW(p)

Looks like this happened:

By taking the old InIn page and renaming it "Delete", the move carried the page's history over to a page named "Delete".

Assuming Gleb then made a new InIn page, that would create a second set of history, and make a mess of the wiki in general.

Nothing fancy; probably not purposefully done to hide the edit history, but rather to add permanence to the act of trying to delete the page (with the side effect of messing up the edit history as well).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-12-01T00:09:24.630Z · LW(p) · GW(p)

Thank you.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2015-12-01T04:33:18.343Z · LW(p) · GW(p)

Thanks for figuring this out, all! I have little wiki editing experience, so this is quite helpful knowledge.

Replies from: None
comment by [deleted] · 2015-12-01T17:07:56.643Z · LW(p) · GW(p)

You need to stop editing that article; GJM had it in a good place.

Replies from: NancyLebovitz, Gleb_Tsipursky
comment by NancyLebovitz · 2015-12-01T19:26:08.896Z · LW(p) · GW(p)

I think GJM was too harsh.

Replies from: IlyaShpitser, Vaniver
comment by Vaniver · 2015-12-01T21:48:46.496Z · LW(p) · GW(p)

I think the argument for you editing the wiki page on InIn to correct gjm's version is much better than the argument for Gleb editing it; there's a reason Wikipedia has a rule against autobiographies outside of user pages.

comment by Gleb_Tsipursky · 2015-12-01T17:10:44.763Z · LW(p) · GW(p)

NancyLebovitz made a post about it here, so you can share your perspective there.

comment by Elo · 2015-11-30T23:10:36.685Z · LW(p) · GW(p)

It's worth considering: https://wiki.lesswrong.com/wiki/Help:User_Guide

Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away.

Maybe we should keep the contents of that page to a minimum, link to InIn, and let it stand for itself.

Worth noting as well - the following affiliates do not have their own pages:

  • MealSquares
  • Beeminder
  • PredictionBook
  • Omnilibrium

While that does not necessarily mean they should not, InIn would do well to stand with its peers.

Suggested text:

Intentional Insights (link) is a nonprofit started by the LessWrong user Gleb_Tsipursky (link) (gleb[at]intentionalinsights.org).

It has been targeted towards a relatively lowbrow audience, and has been criticized for doing so (optional link to discussions).

They have successfully published content in popular venues and, while taking the critical feedback on board, will likely continue to publish for the foreseeable future.

More information can be found on their website.

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-12-01T03:04:12.468Z · LW(p) · GW(p)

Worth noting as well - the following affiliates do not have their own pages:

  • MealSquares
  • Beeminder
  • PredictionBook
  • Omnilibrium

This is a good point; certainly PredictionBook and Omnilibrium are much more worthy of a wiki page than Gleb's pseudo-rationality site.

Replies from: Elo
comment by Elo · 2015-12-01T06:27:13.205Z · LW(p) · GW(p)

Such is the nature of a wiki: if no one has put in the effort to make a page, it won't be there.

Given that Gleb demonstrates his motivation to publish, it comes as no surprise that he is trying to put InIn there; however, I would have expected Beeminder (as a for-profit entity) to be motivated to grow similarly (although not as voraciously).

I have reached out to Beeminder about putting themselves onto the wiki. Would you be interested in putting Omnilibrium in?

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-12-02T00:43:08.690Z · LW(p) · GW(p)

Unfortunately the end result is likely to be the wiki getting taken over by shameless self-promoters.

Replies from: polymathwannabe
comment by polymathwannabe · 2015-12-02T00:55:09.171Z · LW(p) · GW(p)

It doesn't have to be a bad thing. LW itself was born from EY's self-promotion.

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-12-02T06:26:45.833Z · LW(p) · GW(p)

True, however, most shameless self promoters are BS artists.

comment by Viliam · 2015-11-30T11:05:20.040Z · LW(p) · GW(p)

For those of you who always wanted to know what it is like to put your head in a particle accelerator when it's turned on...

On 13 July 1978, Anatoli Petrovich Bugorski was checking a malfunctioning piece of the largest Soviet particle accelerator, the U-70 synchrotron, when the safety mechanisms failed. Bugorski was leaning over the equipment when he stuck his head in the path of the 76 GeV proton beam. Reportedly, he saw a flash "brighter than a thousand suns" but did not feel any pain.

The left half of Bugorski's face swelled up beyond recognition and, over the next several days, started peeling off, revealing the path that the proton beam (moving near the speed of light) had burned through parts of his face, his bone and the brain tissue underneath. However, Bugorski survived and even completed his Ph.D. There was virtually no damage to his intellectual capacity, but the fatigue of mental work increased markedly. Bugorski completely lost hearing in the left ear and only a constant, unpleasant internal noise remained. The left half of his face was paralyzed due to the destruction of nerves. He was able to function well, except for the fact that he had occasional complex partial seizures and rare tonic-clonic seizures.

Bugorski continued to work in science and held the post of coordinator of physics experiments. In 1996, he applied unsuccessfully for disabled status to receive his free epilepsy medication. Bugorski showed interest in making himself available for study to Western researchers but could not afford to leave Protvino.

comment by Lumifer · 2015-11-30T19:33:02.241Z · LW(p) · GW(p)

A paper.

Abstract:

Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.
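
The stimuli were built by dropping random buzzwords into a grammatical frame. Here is a minimal illustrative sketch in Python of that kind of construction (the word lists are invented for illustration, not taken from the paper's generators):

    # Pseudo-profound bullshit generator: syntactically valid,
    # semantically vacuous. Word lists are invented, not the paper's.
    import random

    NOUNS = ["wholeness", "intention", "awareness", "potential", "stillness"]
    VERBS = ["quiets", "transforms", "nurtures", "transcends", "awakens"]
    OBJECTS = ["infinite phenomena", "hidden meaning", "inner truth",
               "the cosmos", "unbounded possibility"]

    def bullshit_statement():
        # Noun + verb + object, as in "Wholeness quiets infinite phenomena".
        return (f"{random.choice(NOUNS).capitalize()} "
                f"{random.choice(VERBS)} {random.choice(OBJECTS)}.")

    for _ in range(3):
        print(bullshit_statement())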

Replies from: Soothsilver, VincentYu
comment by Soothsilver · 2015-11-30T21:23:56.045Z · LW(p) · GW(p)

I liked this part:

"Participants were also given an attention check. For this, participants were shown a list of activities (e.g., biking, reading) directly below the following instructions: “Below is a list of leisure activities. If you are reading this, please choose the “other” box below and type in ‘I read the instructions’”. This attention check proved rather difficult with 35.4% of the sample failing (N = 99). However, the results were similar if these participants were excluded. We therefore retained the full data set."

comment by VincentYu · 2015-12-01T03:52:24.144Z · LW(p) · GW(p)

Nice paper.

p. 558 (Study 4):

Participants also completed a ten item personality scale (Gosling, Rentfrow & Swann, 2003) [the TIPI; an alternative is Rammstedt and John's BFI-10] that indexes individual differences in the Big Five personality traits (extraversion, agreeableness, conscientiousness, emotional stability, and openness). These data will not be considered further.

It's strange not to say why the data will not be considered further. The data are available, the reduction is clean, but the keys look a bit too skeletal given that copies of the original surveys don't seem to be available (perhaps because Raven's APM and possibly some other scales are copyrighted). Still, it's great of the journal and the authors to provide the data. Anyway, I'll take a look.

The supplement contains the statements and the corresponding descriptive statistics for their profundity ratings. It's an entertaining read.

ETA: For additional doses of profundity, use Armok_GoB's profound LW wisdom generator.

comment by skeptical_lurker · 2015-12-01T09:04:34.269Z · LW(p) · GW(p)

I'm not sure what the best way is to add new data to an old debate - going back to post in the original thread means that only one person will see it - so I thought I'd post it here.

Anyway, the new data pertains to my previous debates with VoiceOfRa over gay rights and fertility rates. I just found out that Singapore bans male homosexuality (though lesbianism is legal), yet its women have only 1.29 children each, while the similar countries Hong Kong and Japan have legal homosexuality and fertility rates of 1.3 and 1.41.

Now, obviously three countries are not a statistically significant sample, and it could be that Singapore would have an even lower birth rate if it legalised homosexuality. But it still seems unlikely that legalisation would have much impact, if any; and for someone who cares a lot about increasing the birth rate, sexuality is a distraction from the real issue, which is that careers are higher status than raising children.
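
To put a number on "not statistically significant": with only three countries, an exact permutation test over which country is the one with the ban can never produce a p-value below 1/3. A sketch, using the TFR figures quoted above:

    # Exact permutation test on three countries: the smallest possible
    # p-value is 1/3, so nothing here can clear the usual 0.05 bar.
    tfr = {"Singapore": 1.29, "Hong Kong": 1.30, "Japan": 1.41}

    def gap(ban_country):
        # Ban country's TFR minus the mean TFR of the other two.
        others = [v for k, v in tfr.items() if k != ban_country]
        return tfr[ban_country] - sum(others) / len(others)

    observed = gap("Singapore")  # the actual ban country
    p_value = sum(gap(c) <= observed for c in tfr) / len(tfr)
    print(p_value)  # 0.333...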

Replies from: Vaniver
comment by Vaniver · 2015-12-01T16:11:42.982Z · LW(p) · GW(p)

I'm not sure what the best way is to add new data to an old debate - going back to post in the original thread means that only one person will see it - so I thought I'd post it here.

It also shows up in 'Recent Comments.' In general, I think it's better to continue old conversations in the old spot, rather than disconnecting them.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-12-01T19:28:21.476Z · LW(p) · GW(p)

On the other hand, a new comment is more likely to be seen if it's in the current open thread. I'm not sure whether keeping a conversation in one place is more important.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-01T22:19:01.324Z · LW(p) · GW(p)

Maybe a very specific conversation (such as giving a specific person advice) should be kept in one place, whereas a conversation of more general interest is inevitably going to be discussed repeatedly in different threads.

comment by tog · 2015-11-30T08:50:02.713Z · LW(p) · GW(p)

I'd like to draw your attention to this year's Effective Altruism Survey, which was recently released and which Peter Hurford linked to in LessWrong Main. As he says there:

This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.

Take the EA Survey

comment by Soothsilver · 2015-11-30T18:25:45.625Z · LW(p) · GW(p)

When will the next LessWrong census be, and who will run it?

Replies from: Elo
comment by Elo · 2015-12-01T06:29:44.567Z · LW(p) · GW(p)

I will run it if no one else has piped up. Scheduled for Feb if no one else takes it on. I will post asking for questions in a month.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2015-12-01T11:28:04.600Z · LW(p) · GW(p)

I was going to email Scott, i.e., Yvain, if he needed any help, and/or if he was planning on running it at all. I never bothered to carry that out. I will email him now, though, unless you've confirmed he won't be doing it for 2015, or early 2016. Anyway, if you'll ultimately be doing it, let me know if you need help, and I can pitch in.

Replies from: Elo
comment by Elo · 2015-12-01T14:14:08.826Z · LW(p) · GW(p)

Will give it a few days for other replies; I will PM you when I am ready.

Replies from: philh
comment by philh · 2015-12-01T14:57:47.828Z · LW(p) · GW(p)

http://slatestarscratchpad.tumblr.com/post/134165170061/do-you-plan-on-doing-an-lwssc-survey-this-year

Do you plan on doing an LW/SSC survey this year?

Yes, but I will probably be a few months late due to other projects.

comment by HungryHobo · 2015-11-30T16:57:54.100Z · LW(p) · GW(p)

I've occasionally seen lists of people's favorite Sequences articles or similar, but is there an inverse? Articles or parts of sequences on LessWrong which contain errors, or which are probably misleading or poorly written, that anyone would like to point to?

Replies from: Soothsilver, skeptical_lurker, Jiro
comment by Soothsilver · 2015-11-30T22:20:43.268Z · LW(p) · GW(p)

I understand that the quantum physics sequence is controversial even within LessWrong.

Generally, though, all of the sequences could benefit from annotations.

comment by skeptical_lurker · 2015-12-01T09:06:35.239Z · LW(p) · GW(p)

Apparently the metaethics sequence confused everyone.

Replies from: Manfred
comment by Manfred · 2015-12-01T17:25:45.188Z · LW(p) · GW(p)

I definitely didn't get it the first time I read it, but currently I think it's quite good. Maybe it's written in a way that's confusing if you don't already know the punchline (or maybe metaethics confuses people).

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-01T18:45:05.170Z · LW(p) · GW(p)

I know the punchline - CEV. To me, it seemed to belabour points that felt obvious to me, while skipping over, or treating as obvious, points that are really confusing.

Regardless of whether CEV is the correct ethical system, it seems to me that CEV or CV is a reasonably good Schelling point, so that could be a good argument to accept it on pragmatic grounds.

Replies from: Lumifer, Manfred
comment by Lumifer · 2015-12-01T18:56:12.058Z · LW(p) · GW(p)

it seems to me that CEV or CV is a reasonably good Schelling point

How could it be a Schelling point when no one has any idea what it is?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-01T19:27:52.998Z · LW(p) · GW(p)

I meant 'program the FAI to calculate CEV' might be a reasonably good Schelling point for FAI design. I wasn't suggesting that you or I could calculate it to inform everyday ethics.

Replies from: Lumifer
comment by Lumifer · 2015-12-01T19:57:33.889Z · LW(p) · GW(p)

Um, doesn't the same objection apply?

How could programming the FAI to calculate CEV be a Schelling point when no one has any idea what CEV is? It is not the case that we only don't know how to calculate it -- we have no good idea what it is.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-01T22:11:01.429Z · LW(p) · GW(p)

It's, you know, human values.

My impression is that the optimistic idea is that people have broadly similar, or at least compatible, fundamental values, and that if people disagree strongly in the present, this is due to misunderstandings which would be extrapolated away. We all hold values like love, beauty and freedom, so the future would hold these values.

I can think of various pessimistic outcomes, such as one of the most fundamental values is the desire not to be ruled over by an AI, and so the AI immediately turns itself off, or that status games make fulfilling everyone's values impossible.

Anyway, since I've heard a lot about CEV (on LW), and empathic AI (when FAI is discussed outside LW) and little about any other idea for FAI, it seems that CEV is a Schelling point, regardless of whether or not it should be.

Personally, I'm surprised I haven't heard more about a 'Libertarian FAI' that implements each person's volition separately, as long as it doesn't non-consensually affect anyone else. Admittedly, there are problems involving, for instance, what limits should be placed on people creating sentient beings to prevent contrived infinite torture scenarios, but I would have thought that, given the libertarian bent of transhumanists, someone would be advocating this sort of idea.

Replies from: Lumifer
comment by Lumifer · 2015-12-01T22:33:48.163Z · LW(p) · GW(p)

Anyway, since I've heard a lot about CEV ... it seems that CEV is a Schelling point

Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.

But even ignoring this, CEV is just too vague to be a Schelling point. It's essentially defined as "all of what's good and none of what's bad" which is suspiciously close to the definition of God in some theologies. Human values are simply not that consistent -- which is why there is an "E" that allows unlimited handwaving.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-01T23:36:08.110Z · LW(p) · GW(p)

Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.

I realise that it's not a function of what I know; what I meant is that, given that I have heard a lot about CEV, it seems that a lot of people support it.

Still, I think I am using 'Schelling point' wrongly here - what I mean is that maybe CEV is something people could agree on with communication, like a point of compromise.

Human values are simply not that consistent -- which is why there is an "E" that allows unlimited handwaving.

Do you think that it is impossible for an FAI to implement CEV?

Replies from: Lumifer
comment by Lumifer · 2015-12-02T01:12:58.513Z · LW(p) · GW(p)

A Schelling point, as I understand it, is a choice that has value only because of the network effect. It is not "the best" by some criterion, it's not a compromise, in some sense it's an irrational choice from equal candidates -- it's just that people's minds are drawn to it.

In particular, a Schelling point is not something you agree on -- in fact, it's something you do NOT agree on (beforehand) :-)

Do you think that it is impossible for an FAI to implement CEV?

I don't know what CEV is. I suspect it's an impossible construct. It came into being as a solution to a problem EY ran his face into, but I don't consider it satisfactory.

comment by Manfred · 2015-12-02T07:29:20.573Z · LW(p) · GW(p)

I know the punchline - CEV.

Hmm, that's not what I think is the punchline :P I think it's something like "your morality is an idealized version of the computation you use to make moral decisions."

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-02T12:46:03.669Z · LW(p) · GW(p)

Really? That seems almost tautological to me, and about as helpful as 'do what is right'.

Replies from: Manfred
comment by Manfred · 2015-12-02T20:10:57.885Z · LW(p) · GW(p)

Well, perhaps the controversy is that that's it. That it's okay that there's no external morality and no universally compelling moral arguments, and that we can and should act morally in what turns out to be a fairly ordinary way, even though what we mean by "should" and "morally" depends on ourselves.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-12-07T21:28:27.577Z · LW(p) · GW(p)

It all adds up to normality, and don't worry about it.

See, I can sum up an entire sequence in one sentence!

This also doesn't seem like the most original idea; in fact, I think this "you create your own values" notion is the central idea of existentialism.

comment by Jiro · 2015-11-30T22:33:36.353Z · LW(p) · GW(p)

http://lesswrong.com/lw/i5/bayesian_judo/

This one where Eliezer seems to be bragging about using the Chewbacca defense.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-12-01T20:15:05.688Z · LW(p) · GW(p)

That's not the Chewbacca defense. It's going on the offense against something he disagrees with by pointing out its implications. The Aumann bit is just throwing his hands up in the air.

Replies from: Jiro
comment by Jiro · 2015-12-01T22:18:17.798Z · LW(p) · GW(p)

The Aumann bit is him quoting something which doesn't actually prove what he's quoting it to prove, but which he knows his opponent can't refute because he's never heard of it. It isn't him throwing his hands up in the air--it's an argument, just a fallacious one.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-12-01T22:45:18.125Z · LW(p) · GW(p)

Both times he used it, he's giving up on getting somewhere and is just screwing with the guy; it's not part of his main argument.

The first time, he's trying to stop him from weaseling out. Plus, Aumann's theorem doesn't mean that, taken in its literal form; but it applies indirectly, aspirationally: try to be rational, share relevant information, and so on, so as to approximate the conditions under which it would apply. Indeed, the most reasonable interpretation of the other man's suggestion to agree to disagree is that they both stop trying to be more right than they are (because it's uncomfortable, a can of worms, etc.). That's the opposite of the rationalist approach, and going against that is exactly how he used it - 'if they disagree, someone is doing something wrong' is not very wrong.

The second time, it's just 'Screw this, I'm out of here'.

Replies from: Jiro
comment by Jiro · 2015-12-03T16:10:33.027Z · LW(p) · GW(p)

Both times he used it, he's giving up on getting somewhere and is just screwing with the guy; it's not part of his main argument.

It's worded like an argument, and the opponent and the bystanders would, on hearing it, believe that Eliezer had made an argument that nobody was able to refute. The impact of Eliezer's words depends on deceiving them into thinking it is, and was intended as, a valid argument.

In one sense this is a matter of semantics. If you knowingly state something that sounds like an argument, but is fallacious, for the purposes of tricking someone, does that count as "making a bad argument" (in which case Eliezer is using the Chewbacca Defense) or "not making an argument at all" (in which case he isn't)?

comment by username2 · 2015-11-30T14:19:20.832Z · LW(p) · GW(p)

There is a LW post about rational home buying. But how does one rationally buy a car?

Replies from: Gavin, Elo, Lumifer
comment by Gavin · 2015-11-30T17:55:24.617Z · LW(p) · GW(p)

I don't have the knowledge to give a full post, but I absolutely hate car repair. And if you buy a used car, there's a good chance that someone is selling it because it has maintenance issues. This happened to me, and no matter how many times I took the car to the mechanic it just kept having problems.

On the other hand, new cars have a huge extra price tag just because they're new. So the classic advice is to never buy a new car, because the moment you drive it off the lot it loses a ton of value instantly.

Here are a couple ideas for how to handle this:

  1. Buy a car that's just off a 2 or 3 year lease. It's probably in great shape and is less likely to be a lemon. There are companies that only sell off-lease cars.

  2. Assume a lease that's in its final year (at http://www.swapalease.com/lease/search.aspx?maxmo=12, for example). Then you get a trial period of 4-12 months, and will have the option to buy the car. This way you'll know whether you like the car and whether it has any issues. The important thing to check is that the "residual price" they charge for buying the car is reasonable (see the sketch below). See this article for more info on that: http://www.edmunds.com/car-leasing/buying-your-leased-car.html
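
A minimal sketch of that residual-price check (the numbers are invented for illustration, not advice):

    # Buy-vs-walk check at lease end: buying makes sense only if the
    # residual plus fees undercuts the open-market price. Numbers invented.
    residual_price = 14_000  # buyout price written into the lease
    purchase_fee = 350       # lease company's buyout paperwork fee
    market_value = 16_500    # what comparable cars sell for locally

    savings = market_value - (residual_price + purchase_fee)
    if savings > 0:
        print(f"Buying the leased car saves about ${savings:,}")
    else:
        print("Walk away; the residual is above market value")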

There are a ton of articles out there on how to negotiate a car deal, but one suggestion that might be worth trying is to negotiate and then leave and come back the next day to make the purchase. In the process of walking out you'll probably get the best deal they're going to offer. You can always just come back ten minutes later and make the purchase--they're not going to mind and the deal isn't going to expire (even if they say it is).

Replies from: drethelin
comment by drethelin · 2015-11-30T21:51:48.818Z · LW(p) · GW(p)

If you really hate repairs, doesn't it make much more sense just to lease yourself?

Replies from: Gavin, Elo
comment by Gavin · 2015-12-01T15:34:37.916Z · LW(p) · GW(p)

My real solution was not to own a car at all. Feel free to discount my advice appropriately!

comment by Elo · 2015-11-30T22:57:56.784Z · LW(p) · GW(p)

A lease will usually end up more expensive, but you pay by the month, so it can be affordable to some people. (That's how lease companies profit.)

comment by Elo · 2015-12-01T06:34:43.436Z · LW(p) · GW(p)

I wonder if this will help:

http://lesswrong.com/r/discussion/lw/mv8/general_buying_considerations/

I wrote it to be general; if you consider whether or not each point applies to you, then you will be a lot closer to a set of criteria.

comment by Lumifer · 2015-11-30T18:06:31.897Z · LW(p) · GW(p)

In the usual way: figure out your budget, figure out your requirements, look at cars in the intersection of budget and requirements and, provided the subset is not null, pick one which you like the most.

Replies from: Elo
comment by Elo · 2015-12-01T06:34:36.487Z · LW(p) · GW(p)

I suspect

in the usual way is an unhelpful line.

However, the rest:

figure out your budget, figure out your requirements, look at cars in the intersection of budget and requirements and, provided the subset is not null, pick one which you like the most.

seems sound.

Replies from: Lumifer
comment by Lumifer · 2015-12-01T16:14:54.099Z · LW(p) · GW(p)

is an unhelpful line

Its intent was to point out that purchasing cars is not qualitatively different from purchasing any other thing and that the usual heuristics one uses when buying, say, a computer, apply to buying cars as well.

Replies from: Jiro
comment by Jiro · 2015-12-01T17:02:47.854Z · LW(p) · GW(p)

Purchasing cars often requires haggling. Purchasing computers rarely does.

Also, cars are often bought used, and it is in the interest of the salesman to conceal information about the used car from you, such as hidden problems. Computers are more rarely bought used, rarely have hidden problems that can impair their long-term functioning without being obvious, and when bought used are often not bought for a price high enough that it's even worth the seller's effort to deceive you about the status of the computer. Furthermore, computers can be bought in pieces and cars cannot.

comment by [deleted] · 2015-12-06T21:57:05.539Z · LW(p) · GW(p)

It seems to me lately that commute time is actually pretty comfortably spent thinking about problems which require 'holding off on proposing solutions' (I don't drive). I used to misspend it by going over stuff in circles, but now I actually look forward to it and compose lists of things I have to do/buy/wash etc. (Also, I spend far less of it belowground, which is still - years after I moved - a palpable relief.) I had tried listening to podcasts, but it made my ears hurt after a while, and simply 'disconnecting' during the 'stupid commute' made me disgruntled. Apparently thinking doesn't feel too bad! :)

comment by signal · 2015-11-30T17:55:06.032Z · LW(p) · GW(p)

Can somebody point out textbooks or other sources that lead to an increased understanding of how to influence more than one person (the books I know address only 1:1 interactions, or presentations)? There are books on how to run successful businesses, etc., but is there overarching knowledge that covers successful states, parties, NGOs, religions, and other social groups (this would also be of interest for how best to spread rationality...)? In the Yvain framework: taking Moloch as a given, what are good resources that describe how to optimally influence a Moloch of many self-interested agents, with, for example, its inherent game-theoretic problems, as long as AI is not up to the task?

Replies from: Lumifer
comment by Lumifer · 2015-11-30T17:58:43.598Z · LW(p) · GW(p)

how to influence more than one person

I believe the usual term for this is "politics". This is one classic reference.

Replies from: signal, None
comment by signal · 2015-12-02T09:04:16.371Z · LW(p) · GW(p)

Thanks Lumifer. The Prince is worth reading. However, transferring his insights regarding princedoms to designing and spreading memeplexes in the 21st century does have its limits. Any more suggestions?

Replies from: Lumifer
comment by Lumifer · 2015-12-02T16:03:40.679Z · LW(p) · GW(p)

I suspect that at the more detailed level you have to be more specific -- do you want to run a party, an NGO, or a cult, or design nerd-culture memes, or what? Things become different.

Beyond the usual recommendation -- go into a large B&N and browse (or chase "similar" links on Amazon) -- I can point out an interesting book about building political movements from scratch in the XXI century.

comment by [deleted] · 2015-12-01T17:12:11.170Z · LW(p) · GW(p)

I believe the usual term for this is "politics". This is one classic reference.

There are actually a few others, such as Group Psychology, Marketing, Economics and Mechanism Design.

In general, I see this as a big problem that requires many different frameworks to have an effect.

Replies from: signal
comment by signal · 2015-12-02T09:05:16.797Z · LW(p) · GW(p)

Can you point out your 3-5 favorite books/frameworks?

comment by Panorama · 2015-12-04T18:42:05.404Z · LW(p) · GW(p)

How to build a better PhD

There are too many PhD students for too few academic jobs — but with imagination, the problem could be solved.

comment by Elo · 2015-12-03T22:36:06.865Z · LW(p) · GW(p)

https://blog.todoist.com/2015/11/30/ultimate-guide-personal-productivity-methods/

This seems like a good list of productivity systems; the ones I know of sound reasonable. Worth looking them over and considering them for yourself.

comment by Elo · 2015-12-03T01:09:09.872Z · LW(p) · GW(p)

I am interested in an analysis of words/day or letters/day published on the LW forums over time (as comments, separately from posts). Can someone point me in the direction of accessing the full database of comments in an easily hand-analysable format - a CSV or something... Or can someone with that access make a graph of characters published per day?

My intention is to approximate a value for characters/time or words/time (i.e. typing speed) to get a grasp of approximately how much "human-equivalent" time is spent on the forums, and how that has changed over the lifetime of the forum.
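
For whoever has database access, a minimal sketch of the analysis I mean, assuming a hypothetical comments.csv with a timestamp column posted_at and a text column body (both names are assumptions):

    # Characters published per day, converted to rough "human-hours"
    # at an assumed typing speed. Column names are assumptions.
    import pandas as pd

    comments = pd.read_csv("comments.csv", parse_dates=["posted_at"])
    comments["chars"] = comments["body"].str.len()
    daily_chars = comments.set_index("posted_at")["chars"].resample("D").sum()

    # Assume ~200 characters/minute (roughly 40 wpm) of typing.
    daily_hours = daily_chars / (200 * 60)
    print(daily_hours.describe())
    daily_hours.plot(title="Approx. human-hours of comments per day")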

comment by passive_fist · 2015-12-01T00:14:57.248Z · LW(p) · GW(p)

How to gather like-minded enthusiasts together?

How do you go about finding people who share your goals and are willing to cooperate with you on working to attain those goals? I haven't been very successful with this so far. It seems that there should be thousands of people around the world who think like me yet I've only been able to find a few.

Replies from: Lumifer, Elo, ChristianKl
comment by Lumifer · 2015-12-01T01:08:21.139Z · LW(p) · GW(p)

and are willing to cooperate with you

Try looking for people you are willing to cooperate with.

comment by Elo · 2015-12-01T06:28:39.300Z · LW(p) · GW(p)

The question is quite general. Can you be more specific about what steps you have taken so far and exactly where you are being held up?

comment by ChristianKl · 2015-12-01T01:16:46.198Z · LW(p) · GW(p)

Depends very much on the goal. For some goals it's "start a company". For others it might be: "Start a local LW meetup"

Judging by your LW presence, you don't use your real-life name. You don't have the country and city in which you live in your profile. That makes it harder for people to find you. Finding other people is a lot easier if you are open about who you are instead of hiding.

comment by CronoDAS · 2015-11-30T18:59:22.460Z · LW(p) · GW(p)

For various reasons, I don't listen to podcasts. Is there any reasonable way to get a text version of a podcast when none has been provided? (Pretend I'm totally deaf.)

Replies from: Vaniver
comment by Vaniver · 2015-12-01T17:01:37.951Z · LW(p) · GW(p)

There are a number of speech-to-text (transcription) services out there. Kalzumeus uses CastingWords. The cheapest option (price determines speed of transcription) runs a dollar a minute, so it'll be pricey for most podcasts--but you could consider it a donation and send the finished transcript on to the podcast creator, ideally convincing them to start producing their own transcripts.

There's also a lot of transcription software out there, but the quality might not be as good as you'd like.
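
If you'd rather try it yourself first, there's a rough DIY route via the Python SpeechRecognition package (a sketch; assumes the package and the offline pocketsphinx engine are installed, and quality will be well below a human transcriptionist's):

    # Rough DIY transcription sketch with the SpeechRecognition package
    # (pip install SpeechRecognition pocketsphinx).
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("podcast_episode.wav") as source:  # WAV/AIFF/FLAC
        audio = recognizer.record(source)

    # Offline engine; recognizer.recognize_google(audio) is an online
    # alternative with its own limits.
    print(recognizer.recognize_sphinx(audio))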

comment by Panorama · 2015-12-04T18:37:56.504Z · LW(p) · GW(p)

'My father had one job in his life, I've had six in mine, my kids will have six at the same time'

In the ‘gig’ or ‘sharing’ economy, say the experts, we will do lots of different jobs as technology releases us from the nine to five. But it may also bring anxiety, insecurity and low wages

Replies from: Elo
comment by Elo · 2015-12-05T14:45:58.325Z · LW(p) · GW(p)

This is still industry-dependent: the hospitality industry has a short time-span per job, while health industry workers don't change jobs.

comment by Panorama · 2015-12-04T19:00:09.706Z · LW(p) · GW(p)

User behaviour: Websites and apps are designed for compulsion, even addiction. Should the net be regulated like drugs or casinos?

When I go online, I feel like one of B F Skinner’s white Carneaux pigeons. Those pigeons spent the pivotal hours of their lives in boxes, obsessively pecking small pieces of Plexiglas. In doing so, they helped Skinner, a psychology researcher at Harvard, map certain behavioural principles that apply, with eerie precision, to the design of 21st‑century digital experiences.

Replies from: username2
comment by username2 · 2015-12-04T22:48:58.199Z · LW(p) · GW(p)

I can't see any plausible scenario where regulating the internet like drugs or casinos would lead to a net positive outcome.

comment by Elo · 2015-12-03T01:38:59.675Z · LW(p) · GW(p)

Have updated my list of common human goals article.

It now includes: Improve the tools available - sharpen the axe, write a new app that can do the thing you want, invent systems that work for you, prepare for when the rest of the work comes along (in the sense of making a good workplace, sharpening the tools, knolling), and more...

comment by NancyLebovitz · 2015-12-02T18:06:17.829Z · LW(p) · GW(p)

What people have done which has helped them when they had no hope. Check out the livejournal link -- it's also got good comments.

This is very specifically a discussion for personal accounts, not advice.

I'm willing to bet that a formal study would turn up much the same sort of thing -- what helped was very varied, even contradictory between one person and another, though with some overlap.

Replies from: Elo
comment by Elo · 2015-12-03T22:52:01.773Z · LW(p) · GW(p)

That was a lot less helpful than I expected it to be.

TL;DR - anything and everything. Do small things. Do something with structure or regularity, or you will want to anyway (this worked for several people). Consider a terrible mantra about how everything is hopeless but you are still above zero - "I lived through today" or something. Pets help some; not others.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-12-03T23:04:34.810Z · LW(p) · GW(p)

The takeaway might be that at least some people find things that help, and you need to find something that suits you.

What were you hoping for from the links?

Replies from: Elo
comment by Elo · 2015-12-04T00:16:47.901Z · LW(p) · GW(p)

The posts were not very well organised. They would have been more helpful if they were ordered like: problem, feelings, what was tried, what worked.

They were mostly a mess of all the above in any order, along with a lot of "hope things get better for you".

Maybe it's all in my head, expecting content to be more ordered and rationally oriented.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-12-04T00:26:04.872Z · LW(p) · GW(p)

Maybe it would make sense for you to ask here for the kind of content you want.

Replies from: Elo
comment by Elo · 2015-12-04T01:11:35.324Z · LW(p) · GW(p)

I don't have a need right now, but strategies are always good to have in preparation for when things go drastically wrong (even for advising other people in their circumstances).

I was hoping for more, is all. Not a big deal.

comment by Lumifer · 2015-12-02T17:22:39.685Z · LW(p) · GW(p)

Iterating Grace, a curious little art book.

Subtitle: Heartfelt Wisdom and Disruptive Truths from Silicon Valley's Top Venture Capitalists. But it's really about being trampled by llamas.

comment by username2 · 2015-12-02T12:59:46.754Z · LW(p) · GW(p)

Who are the equivalents of olympic champions for soft/social skills? What occupations do they usually hold?

I am aware of a show format in America in which a host invites a guest to a news show and chats with them. I would assume that requires the host to be able to strike up a conversation with pretty much anyone.

Or the insurance salesman who signs a deal in well over half of cases.

... or the cult leader.

What is the present day equivalent of the Byzantine courtier managing to turn friendships into grudges, and making lovers stab each other in the back?

Or, generally, the best performers in the cynical management of people to further their own goals?

I would like to know who the crème are and, most importantly, which occupations select for such abilities (this in the hope that there exist textbooks/playbooks/workshops titled Handbook of /Occupation/ which allow one to acquire these skills).

I am most interested in the first (being able to strike up a conversation with anyone, from truck driver to scientist, and continue it for a reasonable time).

I recall Kaj_Sotala writing about a friend of his who was able to do pretty much that, and about his drawing a mind-map of conversation topics; generally, that conversation is just like anything else: you ought to put effort into it and prepare if you want better outcomes, and whatever goes for any complex activity goes for this too. On the IRC channel, when there was talk about AI box experiments, ridiculously long pre-prepared scripts were mentioned. Debate teams also do this.

Any other (play/hand)books I should check out? This seems like a search query which would turn up too many red herrings, so I am writing here first, hoping you have a better understanding of this area.

Replies from: polymathwannabe, ChristianKl, Lumifer
comment by polymathwannabe · 2015-12-02T14:19:08.611Z · LW(p) · GW(p)

I no longer remember where I read this idea:

There have always been people highly attuned to identifying whom they need to flatter to obtain favors. Before modern democracy, when 'sovereign' simply meant 'king', the sycophants trained in the art of ingratiating themselves with the sovereign were the courtiers you could always see surrounding the monarch. Now that the sovereign in most Western countries is "We the People," you can find the same sycophants using the same arts to gain favors from the sovereign---they're the PR consultants, political strategists, and most candidates for public office.

comment by ChristianKl · 2015-12-02T18:37:14.503Z · LW(p) · GW(p)

Who are the equivalents of olympic champions for soft/social skills? What occupations do they usually hold?

There are many different social skills. The skills of a good coach are different from the skills of a good salesperson.

Trump speaks about how he would put people like Carl Icahn in charge of negotiating deals. I think Icahn is at the top of the league. I would put Oprah into the Olympic champion category as well, but she's very different from Icahn.

I recall Kaj_Sotala writing up about a friend of his, who was able to do pretty much that, and his drawing a mind-map of conversation topics, and generally, that conversation is just like anything else: you ought to put effort into it, prepare if you want better outcomes, and anything which goes for any complex activity, so goes for this. On the IRC channel when there was talk about AI box experiments, ridiculously long pre-prepared scripts were mentioned. Debate teams also do this

Of course you can have conversations by drawing out possible conversation topics beforehand. On the other hand, I doubt that the quality of those conversations is very high. Having high quality conversations is a lot about opening up on an emotional level.

I went into my last 4-day personal development workshop with the expectation that, while I was in emotional resonance with the workshop, people would likely approach me more in public. After the first day I was travelling home (15 minutes walking + 20 minutes riding the train) and two people approached me for navigation advice.

On the second day one person approached me. I thought to myself, "This is crazy." That was enough to make me close down, and nobody approached me the next two days.

There's nothing I could do based on reading a book that would put me into that state. It's all the emotional effect of opening up. Does that mean that I'm generally highly skilled at having conversations? No, I'm not top tier. Most of my days in the last year I was acting rather introverted.

I think you make a mistake if you try to focus on mental work such as mapping out conversation topics instead of dealing with emotional and physical issues.

comment by Lumifer · 2015-12-02T15:48:45.004Z · LW(p) · GW(p)

Who are the equivalents of olympic champions for soft/social skills? What occupations do they usually hold?

I would expect to find them among lawyers, marketers, fixers/facilitators/deal-makers. Red-pill people probably want to be there, too :-/

What is the present day equivalent of the Byzantine courtier

A political operator, playing the usual games in the corridors of power.

the best performers in the cynical management of people so they further their goals?

Steve Jobs was said to be very very good at this.

occupations which select for such abilities

Salesman is the obvious one. There are LOTS of books on how to Win Friends and Influence People :-) starting from the Carnegie classic. Check out your local B&N or browse Amazon.

comment by [deleted] · 2015-12-06T09:13:40.283Z · LW(p) · GW(p)

I just started gathering some intelligence on potential competitors before entering a 'new industry'. Looks like there is basically just one major competitor. I just discovered they got a multimillion-dollar government grant! I feel cheated; it's so unfair. How can I compete, or be a value-adding collaborator at another stage of the value chain, now? Anyone have experience applying for government grants as a startup, including for startups that aren't internet-based?

Replies from: Elo
comment by Elo · 2015-12-06T22:45:21.587Z · LW(p) · GW(p)

Write to the company and ask to be involved? Offer your services?

Do you care about making money off the thing? Or getting the thing done? (This question might help guide your future thinking.)

comment by Gunslinger (LessWrong1) · 2015-12-05T18:59:50.015Z · LW(p) · GW(p)

Is there any correlation between facial recognition and computer-driven cars? Just a strange idea, inspired by this article, that got into my head, along with cached knowledge that software recognition performs roughly on par with humans - and because it's cached knowledge, I'm not sure how reliable it is. Anyone more familiar with this?

I'm making a comparison between facial recognition and recognition of everything else, and I'm not sure how good a comparison it is, although both are fundamentally the same kind of task.

tl;dr: if human recognition ≈ software recognition, how can computer-driven cars be safer?

Replies from: Manfred
comment by Manfred · 2015-12-07T05:22:20.818Z · LW(p) · GW(p)

Sure, there's some correlation, but a correlation can just mean that if one's getting better, the other probably is too. Just knowing that a correlation exists doesn't help us much.

The reason why self-driving cars could be almost perfect even if face recognition still had problems is that self-driving cars don't need to have great detection rates on people - it's enough to know where the road is and where other stuff is, the nature of that other stuff is only of secondary concern. To find out where stuff is, self-driving cars don't have to use still images of the environment - they can use things like multiple cameras, fancy laser range-finders, and motion parallax.
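
To make the range-finding point concrete: with two cameras a known distance apart, depth follows from pixel disparity alone, without recognizing what the object is. A toy sketch with invented numbers (not from any real car):

    # Toy stereo range calculation: depth = focal_length * baseline / disparity.
    FOCAL_LENGTH_PX = 1400.0  # camera focal length, in pixels (invented)
    BASELINE_M = 0.3          # distance between the two cameras, metres

    def depth_from_disparity(disparity_px):
        # Distance to an object whose image shifts disparity_px pixels
        # between the left and right camera views.
        return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

    for d in (40.0, 20.0, 10.0):
        print(f"disparity {d:>4} px -> {depth_from_disparity(d):.1f} m")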