Open thread, July 16-22, 2013

post by David_Gerard · 2013-07-15T20:13:13.041Z · LW · GW · Legacy · 305 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Given the discussion thread about these, let's try calling this a one-week thread, and see if anyone bothers starting one next Monday.

305 comments

Comments sorted by top scores.

comment by FiftyTwo · 2013-07-15T23:43:57.570Z · LW(p) · GW(p)

Given our known problems with actively expressing approval for things, I'd like to mention that I approve of the more frequent open threads.

Replies from: Qiaochu_Yuan, Metus, Rukifellth
comment by Qiaochu_Yuan · 2013-07-16T00:51:02.346Z · LW(p) · GW(p)

I approve of your approval! I also object-level approve of this thread.

comment by Metus · 2013-07-15T23:45:37.750Z · LW(p) · GW(p)

I want to express my approval, too.

comment by Rukifellth · 2013-07-16T09:54:09.269Z · LW(p) · GW(p)

Me too, the biweeklies grew too bloated.

comment by gwern · 2013-07-20T23:34:14.800Z · LW(p) · GW(p)

While reading a psychology paper, I ran into the following comment:

Unfamiliar things are distrusted and hard to process, overly familiar things are boring, and the perfect object of beauty lies somewhere in between (Sluckin, Hargreaves, & Colman, 1983). The familiar comes as standard equipment in every empirical paper: scientific report structure, well-known statistical techniques, established methods. In fact, the form of a research article is so standardized that it is in danger of becoming deathly dull. So the burden is on the author to provide content and ideas that will knock the reader’s socks off—at least if the reader is one of the dozen or so potential reviewers in that sub-subspecialty.

Besides the obvious connection to Schmidhuber's esthetics, it occurred to me that this has considerable relevance to LW/OB. Hanson has in the past counseled contrarians like us to pick our battles and conform in most ways while not conforming in a few carefully chosen ones (eg Dear Young Eccentric, Against Free Thinkers, Even When Contrarians Win, They Lose); this struck me as obviously correct, and suggested that one could think of oneself as having a "budget", where non-conforming on dress and language and ideas all at once blows one's credit with people / discredits oneself.

This idea about familiarity suggests a different way to think of it, in terms of novelty and familiarity: ideas like existential risk are highly novel compared to regular politics or charities. But if these ideas are highly novel, then they are likely "distrusted and hard to process" (which certainly describes many people's reactions to things on LW/OB), and any additional novelty, like that of vocabulary or formatting or style, is more likely to damage reception or push readers past some critical limit than it would be if applied to some standard familiar boring thing like evolution, where, thanks to sufficient familiarity, idiosyncratic or novel aspects will not damage reception but instead improve it. Consider the different reactions to Nick Bostrom and Eliezer Yudkowsky, who write about many of the exact same ideas and problems - but no one puts on Broadway plays or YouTube videos mocking Bostrom or accusing him of being a sinister billionaire's tool in a plot against all that is good and just - while on the other hand, Hofstadter's GEB is dearly beloved for its diversity of novel forms and expressions, even though it's all directed toward exposition of pretty standard, unshocking topics like Gödel's theorems or GOFAI.

This line of reasoning suggests a simple strategy for writing: the novelty of a story or essay's content should be inverse to the novelty of its form.

If one has highly novel, perhaps even outright frightening, ideas about the true nature of the multiverse or the future of humanity, the format should be as standard and dry as possible. Conversely, if one is discussing settled science like genetics, one should spice it up with little parables, stories, unexpected topics and applications, etc.

Does this predict the success of existing writings? Well, let's take Eliezer as an example, since he has a very particular style of writing. Three of his longest fictions so far are the Ultra Mega Crossover, "Three Worlds Collide", and MoR. Keeping in mind that the first two were targeted at OB and the last at a general audience on FF.net, they seem to fit well: the Crossover was confusing in format and introduced many obscure characters and allusions, in service of a computationally-oriented multiverse that only really made sense if you had already read Permutation City - highly novel in both form & content - so naturally no one ever mentions it or recommends it to other people; "Three Worlds Collide" took a standard space-opera SF short-story style with stock archetypes like "the Captain", and saved its novelty for its meta-ethical content and world-building, and accordingly I see it linked and discussed both on LW and off; MoR, as fanfiction, adapts a world wholesale, reducing its novelty considerably for millions of people, and inside this almost-"boring" framework introduces its audience to a panoply of cognitive biases, transhuman tropes like anti-deathism, existential risks, the scientific method, Bayesian-style reasoning, etc. - and MoR has been tremendously successful on and off LW (I saw someone recommend it yesterday on HN).

Of course this is just 3 examples, but it does match the vibe I get when reading why people dislike Eliezer or LW: they seem to have little trouble with his casual informal style when it's applied to topics like cognitive biases or evolution, where the topic is familiar to relatively large numbers of people, but are horribly put off by the same style or novel forms when applied to obscurer topics like subjective Bayesianism (like the Bayesian Conspiracy short stories - actually, especially the Conspiracy-verse stories) or cryonics. Of course, I suppose this could just reflect that more popular topics tend to be less controversial, and that what I'm actually noticing is people disliking marginal minority theories; but things like global warming are quite controversial, and I suspect Eliezer blogging about global warming would not trigger the same reaction as, say, his "you're a bad parent if you don't sign kids up for cryonics" post that a lot of people hate.

Have I seen this "golden mean" effect in my own writing? I'm not sure. Unfortunately, my stuff seems to generally adopt a vaguely academic format or tone in proportion to how mainstream a topic is, and a great deal of traffic is driven by interest in the topic and not my work specifically; so for example, my Silk Road page is not in any particularly boring format but interest in the topic is too high for that to matter either way. It's certainly something for me to keep in mind, though, when I write about stranger topics.

EDIT: put links at https://www.gwern.net/docs/psychology/novelty/index

Replies from: gwern, gwern, None, gwern, None, gwern, gwern, gwern, Lumifer
comment by gwern · 2018-05-15T17:24:35.145Z · LW(p) · GW(p)

Speaking of Schmidhuber, he serves as a good example: he spends weirdness points like they're Venezuelan bolivars. Despite him and his lab laying more of the groundwork for the deep learning revolution than perhaps anyone and being right about many things decades before everyone else, he is probably the single most disliked researcher in DL. Not only is he not unfathomably rich or in charge of a giant lab like DeepMind, he is the only DL/RL researcher I know of who regularly gets articles in major media outlets written in large part about how he has alienated people: eg https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html or https://www.bloomberg.com/news/features/2018-05-15/google-amazon-and-facebook-owe-j-rgen-schmidhuber-a-fortune And this is solely because of his personal choices and conduct. It's difficult to think of other examples of a technologist inventing so much important stuff and then missing out on the gains because of being so entirely unnecessarily unpleasant and hard to bear (William Shockley and the Traitorous Eight come to mind; maybe David Chaum & Digicash too).

comment by gwern · 2013-10-10T16:42:00.215Z · LW(p) · GW(p)

Though there’s probably no perfect way, the recent research mined keywords generated by users of the website the Internet Movie Database (IMDb), which contains descriptions of more than 2 million films. When summarizing plots, people on the site are prompted to use keywords that have been used to describe previous movies, yielding tags that characterize particular genres (cult-film), locations (manhattan-new-york), or story elements (tied-to-a-chair). Each keyword was given a score based on its rarity when compared to previous work. If some particular plot point – like, say, beautiful-woman – had appeared in many movies that preceded a particular film, it was given a low novelty value. But a new element – perhaps martial-arts, which appeared infrequently in films before the ’60s – was given a high novelty score when it first showed up. The scores ranged from zero to one, with the least novel being zero. Lining up the scores chronologically showed the evolution of film culture and plots over time. The results appeared Sept. 26 in Nature Scientific Reports.

...Unsurprisingly, the research also suggests that unfamiliar combinations of themes or plots that haven’t been encountered before (something like sci-fi-western) often have the highest novelty scores. “I think this reinforces this idea that novelty is often variations on a theme,” said Sreenivasan. “You use familiar elements broadly, and then combine them in novel ways.”

Sreenivasan’s analysis shows trends within particular genres as well. Action movies are essentially redefined in 1962 with the release of the first James Bond movie. Science-fiction films, on the other hand, show no similar creative uptick during the same period. According to the analysis, novelty in sci-fi has declined essentially since the genre first made it into movies. It’s possible that this has to do with early science-fiction films codifying the major tropes seen in these movies.

Another part of the analysis seems to correspond to theories put forth by social scientists about how much we enjoy novelty in creative works, said Sreenivasan. In general, humans enjoy new things. More specifically, there’s a tendency for people to look at and like things that are new but not too new. “If it’s way out there, it’s hard to palate,” said Sreenivasan. “And if it’s too familiar, then it seems boring.”

A model known as the Wundt-Berlyne curve illustrates this result. The amount of pleasure someone derives from a creative piece goes up as its novelty increases. But at a certain point, there is a maximum of enjoyment. After that, something becomes too unfamiliar to stomach anymore. Using the revenue generated by different films as a measure of their mass appeal, Sreenivasan found that more novel films sold more tickets until they reached a score of about 0.8. Afterwards, they appeared to decline in popularity and revenue.

(From the standard errors & shuffled results, the decline in revenue from 0.8 to 1.0 happens very fast, so one probably wants to undershoot novelty and avoid the catastrophic risk of overshoot.)
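
To make the scoring scheme concrete, here is a minimal sketch (my own illustration; the exact rarity formula and the shape of the preference curve are assumptions, not taken from Sreenivasan's paper) of a tag-based novelty score plus a toy Wundt-Berlyne-style inverted-U peaking near 0.8:

```python
from collections import Counter

def novelty_score(tags, prior_tag_counts, prior_film_count):
    """Toy rarity-based novelty: a tag seen in many earlier films scores near 0,
    a never-before-seen tag scores 1; the film's score is the mean over its tags.
    (Illustrative only -- the paper's actual formula may differ.)"""
    if not tags or prior_film_count == 0:
        return 1.0
    per_tag = [1.0 - prior_tag_counts.get(t, 0) / prior_film_count for t in tags]
    return sum(per_tag) / len(per_tag)

def wundt_berlyne_appeal(novelty, peak=0.8):
    """Toy inverted-U: appeal rises with novelty up to `peak`, then falls off
    sharply, mirroring the reported ~0.8 revenue maximum."""
    if novelty <= peak:
        return novelty / peak
    return max(0.0, 1.0 - (novelty - peak) / (1.0 - peak))

# Usage: score films in chronological order, updating prior tag counts as we go.
films = [
    ("Film A", ["beautiful-woman", "manhattan-new-york"]),
    ("Film B", ["beautiful-woman", "martial-arts"]),
    ("Film C", ["martial-arts", "sci-fi-western"]),
]
prior = Counter()
for i, (title, tags) in enumerate(films):
    n = novelty_score(tags, prior, i)
    print(f"{title}: novelty={n:.2f}, predicted appeal={wundt_berlyne_appeal(n):.2f}")
    prior.update(set(tags))
```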

Replies from: gwern
comment by gwern · 2014-11-21T22:26:09.194Z · LW(p) · GW(p)

"The Shazam Effect: Record companies are tracking download and search data to predict which new songs will be hits. This has been good for business—but is it bad for music?"

...But here’s the catch: if you give people too much say, they will ask for the same familiar sounds on an endless loop, entrenching music that is repetitive, derivative, and relentlessly played out. Now that the Billboard rankings are a more accurate reflection of what people buy and play, songs stay on the charts much longer. The 10 songs that have spent the most time on the Hot 100 were all released after 1991, when Billboard started using point-of-sale data—and seven were released after the Hot 100 began including digital sales, in 2005. “It turns out that we just want to listen to the same songs over and over again,” Pietroluongo told me. Because the most-popular songs now stay on the charts for months, the relative value of a hit has exploded. The top 1 percent of bands and solo artists now earn 77 percent of all revenue from recorded music, media researchers report. And even though the amount of digital music sold has surged, the 10 best-selling tracks command 82 percent more of the market than they did a decade ago. The advent of do-it-yourself artists in the digital age may have grown music’s long tail, but its fat head keeps getting fatter. Radio stations, meanwhile, are pushing the boundaries of repetitiveness to new levels. According to a subsidiary of iHeartMedia, Top 40 stations last year played the 10 biggest songs almost twice as much as they did a decade ago. Robin Thicke’s “Blurred Lines,” the most played song of 2013, aired 70 percent more than the most played song from 2003, “When I’m Gone,” by 3 Doors Down. Even the fifth-most-played song of 2013, “Ho Hey,” by the Lumineers, was on the radio 30 percent more than any song from 10 years prior.

...The problem is not our pop stars. Our brains are wired to prefer melodies we already know. (David Huron, a musicologist at Ohio State University, estimates that at least 90 percent of the time we spend listening to music, we seek out songs we’ve heard before.) That’s because familiar songs are easier to process, and the less effort needed to think through something—whether a song, a painting, or an idea—the more we tend to like it. In psychology, this idea is known as fluency: when a piece of information is consumed fluently, it neatly slides into our patterns of expectation, filling us with satisfaction and confidence. “Things that are familiar are comforting, particularly when you are feeling anxious,” Norbert Schwarz, a psychology professor at the University of Southern California, who studies fluency, told me. “When you’re in a bad mood, you want to see your old friends. You want to eat comfort food. I think this maps onto a lot of media consumption. When you’re stressed out, you don’t want to put on a new movie or a challenging piece of music. You want the old and familiar.”

... One of the popular songs of this past summer, “Problem,” combined a dizzy sax hook, ’90s-pop vocals, a whispered chorus, and a female rap verse. It was utterly strange and, for a while, ubiquitous. Greta Hsu, an associate professor at the University of California at Davis, who has done research on genre-blending in Hollywood, told me that although mixing categories is risky, hybrids can become standout successes, because they appeal to multiple audiences as being somehow both fresh and familiar.

Music fans can also find comfort in the fact that data have not taken over the songwriting process. Producers and artists pay close attention to trends, but they’re not swimming in spreadsheets quite like the suits at the labels are. Perhaps one reason machines haven’t yet invaded the recording room is that listeners prefer rhythms that are subtly flawed. A 2011 Harvard study found that music performed by robotic drummers and other machines often strikes our ears as being too precise. “There is something perfectly imperfect about how humans play rhythms,” says Holger Hennig, the Harvard physics researcher who led the study. Hennig discovered that when experienced musicians play together, they not only make mistakes, they also build off these small variations to keep a live song from sounding pat.

Replies from: gwern
comment by gwern · 2018-04-05T17:41:52.991Z · LW(p) · GW(p)

Speaking of Billboard: "What Makes Popular Culture Popular? Product Features and Optimal Differentiation in Music" Askin & Mauskapf 2017:

In this article, we propose a new explanation for why certain cultural products outperform their peers to achieve widespread success. We argue that products' position in feature space significantly predicts their popular success. Using tools from computer science, we construct a novel dataset allowing us to examine whether the musical features of nearly 27,000 songs from Billboard's Hot 100 charts predict their levels of success in this cultural market. We find that, in addition to artist familiarity, genre affiliation, and institutional support, a song's perceived proximity to its peers influences its position on the charts. Contrary to the claim that all popular music sounds the same, we find that songs sounding too much like previous and contemporaneous productions - those that are highly typical - are less likely to succeed. Songs exhibiting some degree of optimal differentiation are more likely to rise to the top of the charts. These findings offer a new perspective on success in cultural markets by specifying how content organizes product competition and audience consumption behavior.
...We hypothesize that hit songs are able to manage a similarity-differentiation tradeoff. Successful songs invoke conventional feature combinations associated with previous hits while at the same time displaying some degree of novelty distinguishing them from their peers. This prediction speaks to the competitive benefits of optimal differentiation, a finding that reoccurs across multiple studies and areas in sociology and beyond (Goldberg et al. 2016; Lounsbury and Glynn 2001; Uzzi et al. 2013; Zuckerman 2016)
...Products must differentiate themselves from the competition to avoid crowding, yet they cannot differentiate to such an extent as to make themselves unrecognizable (Kaufman 2004). Research on consumer behavior suggests that audiences engage in this tradeoff as well. When choosing a product, audiences conform on certain identity-signaling attributes (e.g., a product's brand or category), while distinguishing themselves on others (e.g., color or instrumentation; see Chan, Berger, and Van Boven 2012). This tension between conformity and differentiation is central to our understanding of social identities (Brewer 1991), category spanning (Hsu 2006; Zuckerman 1999), storytelling (Lounsbury and Glynn 2001), consumer products (Lancaster 1975), and taste (Lieberson 2000). Taken together, this work signals a common trope across the social sciences: the path to success requires some degree of both conventionality and novelty (Uzzi et al. 2013)
  • Brewer 1991, ["The Social Self: On Being the Same and Different at the Same Time"](http://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Intergroup_Conflict/Brewer_1991_The_social_self.pdf)
  • Chan, Berger, and Van Boven 2012, ["Identifiable but Not Identical: Combining Social Identity and Uniqueness Motives in Choice"](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.462.8627&rep=rep1&type=pdf)
  • Goldberg et al 2016, ["What Does It Mean to Span Cultural Boundaries: Variety and Atypicality in Cultural Consumption"](http://dro.dur.ac.uk/16001/1/16001.pdf)
  • Hsu 2006, ["Jacks of All Trades and Masters of None: Audiences' Reactions to Spanning Genres in Feature Film Production"](https://cloudfront.escholarship.org/dist/prd/content/qt5p81r333/qt5p81r333.pdf)
  • Kaufman 2004, ["Endogenous Explanation in the Sociology of Culture"](https://sci-hub.tw/http://www.annualreviews.org/doi/abs/10.1146/annurev.soc.30.012703.110608)
  • Lieberson 2000, _A Matter of Taste: How Names, Fashions, and Culture Change_
  • Lounsbury & Glynn 2001, ["Cultural Entrepreneurship: Stories, Legitimacy, and the Acquisition of Resources"](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.199.3680&rep=rep1&type=pdf)
  • Uzzi et al 2013, ["Atypical Combinations and Scientific Impact"](https://pdfs.semanticscholar.org/488a/f28ee062c99330f4277d59ba886b4c065084.pdf)
  • Zuckerman 1999, ["The Categorical Imperative: Securities Analysts and the Illegitimacy Discount"](https://www.dropbox.com/s/50k36a9j9lwyl8e/1999-zuckerman.pdf?dl=0)
  • Zuckerman 2016, ["Optimal Distinctiveness Revisited: An Integrative Framework for Understanding the Balance between Differentiation and Conformity in Individual and Organizational Identities"](https://books.google.com/books?id=PVn0DAAAQBAJ&lpg=PA183&ots=v8QKB6HRXZ&lr&pg=PA183#v=onepage&q&f=false)
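
For concreteness, a hedged sketch of the kind of "typicality" measure the abstract describes: represent each song as a vector of audio features and score its average similarity to contemporaneous chart peers. The feature names and the cosine-similarity choice are assumptions for illustration, not Askin & Mauskapf's actual procedure:

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors; 0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def typicality(song, peers):
    """Mean similarity of a song's feature vector to its contemporaneous peers.
    High typicality = sounds like everything else on the chart; the paper's claim
    is that moderately atypical songs ('optimal differentiation') do best."""
    if not peers:
        return 0.0
    return sum(cosine(song, p) for p in peers) / len(peers)

# Hypothetical feature vectors (e.g. tempo, energy, danceability, acousticness),
# normalized to [0, 1]; a real analysis would use many more features.
chart_peers = [
    [0.62, 0.80, 0.75, 0.10],
    [0.58, 0.85, 0.70, 0.05],
    [0.65, 0.78, 0.72, 0.12],
]
new_song = [0.40, 0.60, 0.90, 0.30]
print(f"typicality = {typicality(new_song, chart_peers):.3f}")
```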
comment by [deleted] · 2013-07-21T13:44:57.258Z · LW(p) · GW(p)

Useful enough to be a discussion post.

comment by gwern · 2014-11-29T23:31:42.339Z · LW(p) · GW(p)

Some more discussion:

  • "You have a set amount of "weirdness points". Spend them wisely."
  • Idiosyncrasy credit

    Idiosyncrasy credit[1] is a concept in social psychology that describes an individual's capacity to acceptably deviate from group expectations. Idiosyncrasy credits are increased (earned) each time an individual conforms to a group's expectations, and decreased (spent) each time an individual deviates from a group's expectations. Edwin Hollander[2] originally defined idiosyncrasy credit as "an accumulation of positively disposed impressions residing in the perceptions of relevant others; it is… the degree to which an individual may deviate from the common expectancies of the group".

    (But the cited research in the Examples section seems weak, and social psychology isn't the most reliable area of psychology in the first place.) A toy sketch of the earn/spend mechanic follows below.
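
A purely illustrative sketch of the earn/spend bookkeeping in Hollander's definition (the numeric costs and the "rejected deviation" rule are assumptions, not part of the original concept):

```python
class IdiosyncrasyCredit:
    """Toy model of idiosyncrasy credit: conforming to group expectations earns
    credit, deviating spends it, and a deviation that would overdraw the balance
    is (in this sketch) rejected by the group."""

    def __init__(self, balance=0.0):
        self.balance = balance

    def conform(self, amount=1.0):
        self.balance += amount

    def deviate(self, cost=1.0):
        if cost > self.balance:
            return False  # deviation exceeds accumulated credit
        self.balance -= cost
        return True

# Usage: bank credit with small conformities before attempting a big deviation.
member = IdiosyncrasyCredit()
for _ in range(5):
    member.conform()
print(member.deviate(cost=3.0))  # True: enough credit banked
print(member.deviate(cost=3.0))  # False: would overdraw the remaining balance
```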

Replies from: gwern
comment by gwern · 2018-10-29T02:32:41.368Z · LW(p) · GW(p)

Bryan Caplan, "A Non-Conformist's Guide to Success in a Conformist World":

1. Don't be an absolutist non-conformist. Conforming in small ways often gives you the opportunity to non-conform in big ways. Being deferential to your boss, for example, opens up a world of possibilities.

2. Don't proselytize the conformists. Most of them will leave you alone if you leave them alone. Monitor your behavior: Are you trying to change them more often than they try to change you? Then stop. Saving time is much more helpful than making enemies.

3. In modern societies, most demands for conformity are based on empty threats. But not all. So pay close attention to societal sanctions for others' deviant behavior. Let the impulsive non-conformists be your guinea pigs.

10. Social intelligence can be improved. For non-conformists, the marginal benefit of doing so is especially big.

12. When faced with demands for conformity, silently ask, "What will happen to me if I refuse?" Train yourself to ponder subtle and indirect repercussions, but learn to dismiss most such ponderings as paranoia. Modern societies are huge, anonymous, and forgetful.

13. Most workplaces are not democracies. This is very good news, because as a non-conformist you'll probably never be popular. You can however make yourself invaluable to key superiors, who will in turn protect and promote you.

comment by [deleted] · 2013-12-06T19:44:29.540Z · LW(p) · GW(p)

(I saw someone recommend it yesterday on HN).

Were they a LW user? Every once in a while I'll be surprised when someone links a LW article, only to see that it's loup-valliant.

Replies from: gwern
comment by gwern · 2013-12-06T20:08:15.806Z · LW(p) · GW(p)

I don't remember. It might've been a LW user.

comment by gwern · 2013-08-02T02:53:01.605Z · LW(p) · GW(p)

See also Schank's Law.

comment by gwern · 2015-03-08T22:50:06.153Z · LW(p) · GW(p)

Katja offers 8 models of weirdness budgets in "The economy of weirdness"; #1 seems to fit the psychology and other research best.

comment by Lumifer · 2013-10-10T16:58:11.978Z · LW(p) · GW(p)

ideas like existential risk are highly novel

Why is that so? The end of the world is a strong element in major religions and is a popular theme in literature and movies. The global warming meme has made the idea that human activity can have significant planet-wide consequences universally accepted.

Replies from: gwern
comment by gwern · 2013-10-10T17:48:53.743Z · LW(p) · GW(p)

Existential risk due to astronomical or technological causes, as opposed to divine intervention, is pretty novel. No one thinks global warming will end humanity.

Replies from: Lumifer
comment by Lumifer · 2013-10-10T18:01:12.660Z · LW(p) · GW(p)

If you're well familiar with the idea of the world ending, the precise mechanism doesn't seem to be that important.

I think what's novel is the idea that humans can meaningfully affect that existential risk. However that's a lower bar / closer jump than the novelty of the whole idea of existential risk.

Replies from: gwern
comment by gwern · 2013-10-10T19:20:34.843Z · LW(p) · GW(p)

If you're well familiar with the idea of the world ending, the precise mechanism doesn't seem to be that important.

"If you're familiar with the idea of Christians being resurrected on Judgment Day, the precise mechanism of cryonics doesn't seem to be that important."

"If you're familiar with the idea of angels, the precise mechanism of airplanes doesn't seem to be that important."

Replies from: Lumifer
comment by Lumifer · 2013-10-10T19:26:44.060Z · LW(p) · GW(p)

"If you're familiar with the idea of Christians being resurrected on Judgment Day, the precise mechanism of cryonics doesn't seem to be that important."

For the purpose of figuring out whether an idea is so novel that people have trouble comprehending it, yes, familiarity with the concept of resurrection is useful.

"If you're familiar with the idea of angels, the precise mechanism of airplanes doesn't seem to be that important."

People are familiar with birds and bats. And yes, the existence of those was a major factor in accepting the possibility of heavier-than-air flight and trying to develop various flying contraptions.

comment by Qiaochu_Yuan · 2013-07-17T05:20:08.111Z · LW(p) · GW(p)

Awesome job, whoever made this "latest open thread," "latest rationality diary," and "latest rationality quote" thing happen!

Replies from: Eliezer_Yudkowsky, Benito, David_Gerard, None, Adele_L
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-17T19:11:54.214Z · LW(p) · GW(p)

Brought to you by Lucas Sloan.

comment by Ben Pace (Benito) · 2013-07-19T09:04:38.551Z · LW(p) · GW(p)

But what's the 'Karma Awards'?

comment by David_Gerard · 2013-07-17T12:13:10.800Z · LW(p) · GW(p)

How are these triggered? Automagically or someone updating the link by hand?

comment by [deleted] · 2013-07-17T15:00:20.093Z · LW(p) · GW(p)

The "latest" rationality diary isn't the most recent one (July 15-31), for whatever reason.

Edit: It's been fixed now.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-17T19:13:13.911Z · LW(p) · GW(p)

I tried adding the group_rationality_diary tag to it, but I don't know how/if/when these things reload.

Replies from: LucasSloan
comment by LucasSloan · 2013-07-18T00:56:07.858Z · LW(p) · GW(p)

Needs the tag group_rationality_diary; they reload every time there's a new comment, or every 12 hours.

Replies from: komponisto
comment by komponisto · 2013-07-18T01:21:14.817Z · LW(p) · GW(p)

Where did the "Top Contributors -- All Time" go?

Replies from: LucasSloan
comment by LucasSloan · 2013-07-18T01:52:18.215Z · LW(p) · GW(p)

They will be on the about page shortly.

comment by Adele_L · 2013-07-17T14:57:37.141Z · LW(p) · GW(p)

Agreed. I am glad to see those links.

comment by Tenoke · 2013-07-15T23:26:36.990Z · LW(p) · GW(p)

Some #lesswrong regulars who are currently learning to code have made a channel for that purpose on freenode - #lw-prog

Anyone who is looking for a place to learn some programming alongside fellow lesswrongers is welcome to join.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-07-16T00:01:03.298Z · LW(p) · GW(p)

Thanks for the heads up.

comment by fubarobfusco · 2013-07-16T07:13:48.673Z · LW(p) · GW(p)

One of the most salient differences between groups that succeed and groups that fail is the group members' ability to work well with one another.

A corollary: If you want a group to fail, undermine its members' ability to work with each other. This was observed and practiced by intelligence agencies in Turing's day, and well before then.

Better yet: Get them to undermine it themselves.

By using the zero-sum conversion trick, we can ask ourselves: What ideas do I possess that the Devil¹ approves of me possessing because they undermine my ability to accomplish my goals?


¹ "The Devil" is shorthand for a purely notional opponent whose values are the opposite of mine.

Replies from: Viliam_Bur, None
comment by Viliam_Bur · 2013-07-19T12:58:40.157Z · LW(p) · GW(p)

One Devil's tool against cooperation is reminding people that cooperation is cultish, and if they cooperate, they are sheep.

But there is a big exception! If you work for a corporation, then you are expected to be a team player, and you have to participate in various team-building activities, which are like cult activities, just a bit less effective. You are expected to be a sheep if you are asked to be one, and to enjoy it. -- It's just somehow wrong to use the same winning strategy outside the corporation, for yourself or your friends.

So we get the interesting result that most people are willing to cooperate if it is for someone else's benefit, but have an aversion to cooperating for their own. If I were trying to brainwash people into obedient masses, I would be proud to achieve this.

This said, I am not sure what exactly caused this. It could be a natural result of a thousand small-scale interactions: people winning locally by undermining their nearest competitors' agency, and losing globally by polluting the common meme-space. And the people who overcome this and become able to optimize for their own benefit probably find it much easier to attract followers than peers; thus they get out of the system, but don't change the system.

Replies from: sixtimes7, sixtimes7
comment by sixtimes7 · 2013-07-21T15:41:53.082Z · LW(p) · GW(p)

Can you give an example of how people resist cooperation? I'm having difficulty identifying such a trend in my past interactions.

P.S. It seems I accidentally double-posted. Sorry about that.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-21T17:20:45.670Z · LW(p) · GW(p)

The first example in my mind when I wrote that were the negative reactions about "rationalist rituals" (some comments were deleted). An alternative explanation is that it was mostly trolling.

At the recent LW meetup I organized, I tried to start the topic of becoming stronger: where we would individually want to become stronger, and how we could help each other with some specific goals. The whole topic was sabotaged (other sources later confirmed it was done intentionally) and turned into idle chatter by a participant, who happens to be a manager in a corporation. An alternative explanation is that this specific person simply has an aversion to this specific topic.

A few times it has happened to me that when I approached people with "we could do this as a group together", I was refused, but when I said "I want to do this, and I need you to do this", people complied. (Once it was about compiling a DVD with information from different sources; the second time about making a computer application.) People are more willing to obey than to cooperate as equals, perhaps because this is what they are taught. Most likely, in other situations I react the same way. An alternative explanation is that people don't want to be responsible for coordination, motivating others, etc.

I know a few people with hobbies that could be combined to make something greater. For example: writing stories + drawing pictures = making an illustrated story book. When I tried to bring them together, they refused (without even having seen each other). Based on the previous experiences, I suspect that if I inserted myself as the boss, and told each person "I want to do this, and I need you to do this", they would be more likely to agree, although I am otherwise not needed in the process.

Uhm, perhaps other people can add more convincing examples?

comment by sixtimes7 · 2013-07-21T15:40:41.103Z · LW(p) · GW(p)

Can you give an example of how people resist cooperation? I'm having difficulty identifying such a trend in my past interactions.

comment by [deleted] · 2013-07-21T00:49:46.605Z · LW(p) · GW(p)

This was observed and practiced by intelligence agencies in Turing's day, and well before then.

Source?

Replies from: gwern
comment by gwern · 2013-07-21T01:04:39.400Z · LW(p) · GW(p)

Enigma comes to mind. IIRC, to camouflage it, the Brits specifically leaked messages claiming that it was due to some moles in Germany, not just explaining away how data kept leaking but actively impeding German operations. This was also seen in the Cold War where you had Soviet defectors who tried to discredit each other as agents sent to throw the CIA into confusion, and I've seen accusations that James Jesus Angleton was a spy or otherwise manipulated into his endless mole hunts by Russia specifically to destroy all agency effectiveness. For a more recent example, Assange's Wikileaks was based on this theory, which he put forth in a short paper around that time: enabling easy leaking would sow distrust and dissension in networks that depended on secrecy, forcing compartmentalization and degrading efficiency compared to more 'open' organizations. EDIT: and appropriately, this is exactly what is happening in the NSA now - they are claiming that Snowden was leaking materials which had been made available to much of NSA, to assist in coordination, and they are locking down the material, adding more logging, and restricting sysadmins' accesses, none of which is going to make the NSA more efficient than before... Similar to how State etc had to lock down and add friction to internal processes after Manning.

I don't know if the tactic has any name or handy references, but certainly intelligence agencies are aware of the value of witch hunts and internal dissension.

Replies from: None
comment by [deleted] · 2013-07-24T00:18:55.048Z · LW(p) · GW(p)

The Assange paper in question: State and Terrorist Conspiracies. Written considerably prior to Wikileaks entering the spotlight (dated 2006 in that PDF).

Various leaks from Anonymous indicate the FBI (and probably local LEA) uses similar tactics against Occupy and other groups.

comment by IsTheLittleLion · 2013-07-18T20:09:05.484Z · LW(p) · GW(p)

My friend and I are organizing a new meetup in Zagreb, but I don't have enough karma to make an announcement here. Thanks!

comment by philh · 2013-07-16T22:40:48.671Z · LW(p) · GW(p)

[Meta] Most meetup threads have no comments. It seems like it would be useful for people to post to say "I'm coming", both for the organiser and for other people to judge the size of the group. Would this be a good social norm to cultivate? I worry slightly that it would annoy people who follow the recent comments feed, but I can't offhand think of other downsides.

Replies from: JoshuaZ, Vaniver, Dorikka
comment by JoshuaZ · 2013-07-16T23:02:04.413Z · LW(p) · GW(p)

Suggested alternative to reduce the recent comment clutter issue: Have a poll attached to each meetup with people saying if they are coming. Then people can get a quick glance at how many people are probably coming, and if one wants to specifically note it (say one isn't a regular) then mention that in the comment thread.

comment by Vaniver · 2013-07-17T21:10:38.434Z · LW(p) · GW(p)

Many meetup attendees don't have LW accounts, so it may not be a very good measure.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-18T02:05:18.025Z · LW(p) · GW(p)

and even the ones who do will likely not bother to vote every single week for regular meetups.

Replies from: drethelin
comment by drethelin · 2013-07-18T18:14:57.296Z · LW(p) · GW(p)

this is what I found when I tried to use Facebook: many of the people who go to meetups, even among those who have Facebook accounts, don't bother responding.

comment by Dorikka · 2013-07-20T18:26:26.388Z · LW(p) · GW(p)

Another suggestion is to set up something that e-mails past attendees with a quick poll of whether they are coming to the next meetup (one extra e-mail per week is likely worth it), with an updating thingy in the LW post that shows accepted/tentative/declined vs the total number on the list and the time to the next meetup.

I don't know which parts of this would be difficult to implement, but it (working with the final product, not necessarily setting it up) is easier than having people answer an LW poll given the complications posted in other comments below.

comment by Decius · 2013-07-17T17:26:00.051Z · LW(p) · GW(p)

If you're missing a lot of flights, you should arrive at the airport sooner.

comment by Martin-2 · 2013-07-16T04:33:24.993Z · LW(p) · GW(p)

Here is some verse about steelmanning I wrote to the tune of Keelhauled. Compliments, complaints, and improvements are welcome.

*dun-dun-dun-dun*

Steelman that shoddy argument

Mend its faults so they can't be seen

Help that bastard make more sense

A reformulation to see what they mean

Replies from: RomeoStevens, skeptical_lurker
comment by RomeoStevens · 2013-07-16T05:56:11.415Z · LW(p) · GW(p)

To whoever downvoted parent: Please don't downvote methods for providing epistemic rationality techniques with better mental handles so they actually get used. Different tricks are useful for different people.

comment by skeptical_lurker · 2013-07-16T21:17:08.398Z · LW(p) · GW(p)

Alestorm are a very rationalist band. I particularly like the lyrics:

You put your faith in Odin and Thor, We put ours in cannons and whores!

It's about how a religious society can never achieve what technology can.

comment by [deleted] · 2013-07-17T19:26:15.858Z · LW(p) · GW(p)

Being in Seattle has taught me something I never would have thought of otherwise:

Working in a room with a magnificent view has a positive effect on my productivity.

Is this true for other people, as well? I normally favor ground-level apartments and small villages, but if the multiplier is as consistent as it's been this past week, I may have to rethink my long-term plans.

Replies from: Qiaochu_Yuan, army1987, army1987
comment by Qiaochu_Yuan · 2013-07-17T20:19:04.341Z · LW(p) · GW(p)

It could be just the novelty of such a view. I suspect that any interesting modification to your working environment leads to a short-term productivity boost, but these things don't necessarily persist in the long term. In any case, it seems like the VoI of exploring different working environments is high.

Replies from: wadavis
comment by wadavis · 2013-07-17T21:43:06.017Z · LW(p) · GW(p)

The under-utilized conference room with a great view has become the unofficial thinking room at work.

There is a whole list of little factors that contribute to the success of the thinking room, but the major contributors include both the view and the novelty.

comment by A1987dM (army1987) · 2013-07-21T16:19:01.646Z · LW(p) · GW(p)

I dunno -- on one hand, I'd be more tempted to slack off by looking outside; on the other hand, it'd be easier for me to recharge my willpower, by looking outside. I think the former would be a larger effect for me, but I'm not sure.

comment by A1987dM (army1987) · 2013-07-21T16:18:53.003Z · LW(p) · GW(p)

I dunno -- on one hand, I'd be more tempted to slack off by looking outside; on the other hand, it'd be easier for me to recharge my willpower, by looking outside. I think the former would be a larger effect for me, but I'm not sure.

comment by David_Gerard · 2013-07-20T10:07:08.028Z · LW(p) · GW(p)

Question: Who coined the term "steelman" or "steelmanning", and when?

I was surprised not to find it in the wiki, but the term is gaining currency outside LessWrong.

Also, I'd be surprised if the concept were new. Are there past names for it? Principle of charity is pretty close, but not as extreme.

Replies from: shminux
comment by shminux · 2013-07-20T19:37:49.056Z · LW(p) · GW(p)

Google search with a date restriction and a few other tricks to filter out late comments on earlier blog posts suggests Luke's post Better disagreement as the first online reference, though the first widely linked reference is quite recent, from the Well Spent Journey blog.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-20T23:17:45.139Z · LW(p) · GW(p)

Yes, but Luke refers to it as a term already in use.

Replies from: shminux
comment by shminux · 2013-07-21T00:51:52.987Z · LW(p) · GW(p)

But apparently not anywhere online accessible to search robots.

comment by pop · 2013-07-20T04:06:34.162Z · LW(p) · GW(p)

Saw this on twitter. Hilarious: "Ballad of Big Yud"

http://www.youtube.com/watch?v=nXARrMadTKk

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-20T11:04:32.741Z · LW(p) · GW(p)

There is another video from the same author explaining his opinions on LW. It takes 2 minutes before it even starts talking about LW, so here are the important parts:

The Sequences are hundreds and hundreds of blog posts, written by one man. They are like a catechism, and teach strange vocabulary like "winning", "paying rent", "mindkilling", "being Bayesian".

The claim that Bayes theorem, which is just a footnote in statistics textbooks, has the power to reshape your thinking so that you can maximize the outcomes of your life... has no evidence. You can't simplify the complexity of life into simple probabilities. EY is a high-school dropout and he has no peer-reviewed articles.

People on LW say that criticism of LW is upvoted. Actually, that "criticism" does not disagree with anything -- it just asks MIRI to be more specific. Is that LW's best defense against accusations of cultishness?

LW community believes in Singularity, which again, has no evidence, and the scientific community does not support it. MIRI asks your money, and does not say how specifically it will be used to save the world.

LW claims that politics is the mindkiller, yet EY admits that he is a libertarian. Most of MIRI's money comes from Peter Thiel -- a right-wing libertarian billionaire.

Roko's basilisk...

...and these guys pretend to be skeptics?

Now let's look at CFAR. They have EY on their board, and they force you to read the Sequences if you want to join them.

Julia Galef is a rising star in the skeptical movement; she has a podcast "Rationally Speaking". But she is connected with LW, she believes in Bayes theorem, and she only criticizes the political left. She is obviously used as the face of the LW movement because she is pretty! -- This is sexism on LW's part, because men at LW agree in comments that Julia is pretty. If they weren't sexist, they would talk about how smart she is.

People like this are not skeptics and should not be invited to Skepticon!

Replies from: Mitchell_Porter, Viliam_Bur, CAE_Jones, bogus, Kawoomba
comment by Mitchell_Porter · 2013-07-20T12:00:48.247Z · LW(p) · GW(p)

There's a user at RationalWiki, one of the dedicated LW critics there, called "Baloney Detection". I often wondered who it was. The image at 5:45 in this video, and the fact that "Baloney Detection" also edited the "Julia Galef" page at RW to decry her association with LW, tell me this is him...

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-20T15:20:26.542Z · LW(p) · GW(p)

By the way, the RW article about LW now seems more... rational... than the last time I checked. (Possibly because our hordes of cultists sponsored by the right-wing extremist conspiracy fixed it, hoping to receive the promised 3^^^3 robotic virgins in singularitarian paradise as a reward.) You can't say the same thing about the talk pages, though.

It's strange. Now I should probably update towards "a criticism of LW found online probably somehow comes from two or three people on RW". On their talk pages, Aris Katsaris sounds like a lonely sane voice in a desert of... I guess it's supposed to be "rationality with a snarky point of view", which works like this -- I can say anything, and if you catch me lying, I say I was exaggerating to make it funnier.

Some interesting bits from the (mostly boring) talk page:

Yudkowsky is an uneducated idiot because there simply can't be 3^^^3 distinct people

A proper skeptical argument about why "Torture vs Dust Specks" is wrong.

what happened is that they hired Luke Muehlhauser who doesn't know about anything technical but can adequately/objectively research what a research organization would look like, and then push towards outwards appearance of such

This is why LW people care about Löb's Theorem, in case you (LW cultists not belonging to the inner circle) didn't know.

Using Thiel's money to list yourself as co-author is very weak evidence of competence.

An ad-hoc explanation is being prepared. Criticising Eliezer for being a high school dropout who never published in a peer-reviewed journal is so much fun... but if he someday publishes in a peer-reviewed journal and gets citations or whatever recognition by the scientific establishment, RationalWiki already knows the true explanation -- the right-wing conspiracy bribed the scientists. (If the day comes that RW starts criticizing scientists for supporting LW, I will be laughing and munching popcorn.)

Holden Karnofsky's critique had a significant number of downvotes as well - being high profile, they didn't want to burn the bridges, so it wasn't deleted, and a huge number of non-regulars upvoted it.

How do you know what you know? Specifically, where are those data about who upvoted and downvoted Holden coming from? (Or is it an alternative explanation-away? LW does not accept criticism and censors everything, but this one time the power of popular opinion prevented them from deleting it.)

And finally a good idea:

This talk page is becoming one of the central coordination points for LW/SI's critic/stalkers. Maybe that should be mentioned on the page too?

I vote yes.

Replies from: David_Gerard, RomeoStevens, None
comment by David_Gerard · 2013-07-27T13:56:55.271Z · LW(p) · GW(p)

The article was improved 'cos AD (a RW regular who doesn't care about LW) rewrote it.

comment by RomeoStevens · 2013-07-21T00:29:36.350Z · LW(p) · GW(p)

It was disappointing to see Holden's posts get any down votes.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-21T08:49:58.040Z · LW(p) · GW(p)

I agree, but we are speaking about approximately 13 downvotes from 265 total votes. So we have at least 13 people on LessWrong who oppose a high-quality criticism.

The speculation about regulars downvoting and non-regulars upvoting is without any evidence; it could just as well have been the other way round. We also had a few trolls and crazy people here in the past. And by the way, it's not like people from RationalWiki couldn't create throw-away accounts here. So, with the same zero evidence, I could propose an alternative hypothesis that Holden was actually downvoted by people from RW who smartly realized that his "criticism" of LW is actually no criticism. But that would just be silly.

Replies from: wedrifid
comment by wedrifid · 2013-07-27T15:48:54.909Z · LW(p) · GW(p)

I agree, but we are speaking about approximately 13 downvotes from 265 total votes. So we have at least 13 people on LessWrong who oppose a high-quality criticism.

Or there are approximately 13 people who believe the post is worth a mere 250 votes, not 265 and so used their vote to push it in the desired direction. Votes needn't be made or considered to be made independently of each other.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-27T20:32:26.525Z · LW(p) · GW(p)

Or there are approximately 13 people who believe the post is worth a mere 250 votes, not 265 and so used their vote to push it in the desired direction.

One data point: I used to do that kind of thing before the “% positive” thing was implemented, but I no longer do that, at least not deliberately.

comment by [deleted] · 2013-07-21T00:12:15.115Z · LW(p) · GW(p)

I am pleasantly surprised that they didn't get overwhelmed by the one or two LW trolls that swamped them a couple months back.

Looking through the talk pages, it seems those guys partially ran out of steam, which let cooler heads prevail.

comment by Viliam_Bur · 2013-07-20T11:46:50.962Z · LW(p) · GW(p)

My own thoughts:

I wonder how much "hundreds of blog posts written by one man" is the true rejection. I mean, would the reaction be different if it were a book instead of hundreds of blog posts? Would it be different if the Sequences were on a website separate from LessWrong? -- The intuition is that a "website by one man" would seem more natural than a "website mostly by one man". Because people do have their personal blogs, and it's not controversial. Even if a personal blog gets hundreds of comments, it still feels like a personal blog, not like a movement.

(Note: I am not recommending any change here. Just thinking loudly whether there is something about the format of the website that provokes people, or whether it is mere "I dislike you, therefore I dislike anything you do".)

Having peer-reviewed articles (not just conference papers) or otherwise being connected with the scientific establishment would obviously be a good argument. I'm not saying it should be high priority for Eliezer, but if there is a PR department in MIRI/CFAR, it should be a priority for them. (Actually, I can imagine some CFAR ideas published in a pedagogical journal -- that also counts as official science, and could be easier.)

The cultish stuff is the typical "did you stop beating your wife?" pattern. Anything you respond... is exactly what a cult would do. (Because being cultish is evidence of being a cult, but not being cultish is also evidence of being a cult, because cults try to appear not cultish. And by the way, using the word "evidence" is evidence of being a brainwashed LW follower.)

What is the relation between politics and skepticism? I mean, do all skeptics have to be perfectly politically neutral? Or is left-wing politics compatible with skepticism and only right-wing politics incompatible? (I am not sure which of these was the author's opinion.) How about things like "Atheism Plus"? And here is a horrible thought... if some research showed a non-zero correlation between atheism and a position on the political spectrum, would that mean atheists are also forbidden from the skeptical movement?

I appreciate the spin of saying that Julia is just a pretty face, and then suddenly attributing this opinion to LW. I mean, that's a nice Dark Arts move -- say something offensive, and then pretend it was actually your opponent who believes that, not you. (The author is mysteriously silent about his own opinion. Does he believe that Julia is not smart? Or does he believe that she is smart, but that this is completely incidental to the fact that she represents LW at Skepticon? Either choice would be very suspicious, so he just does not specify it. And he turns off the comments on YouTube, so we cannot ask.)

Replies from: David_Gerard
comment by David_Gerard · 2013-07-27T13:58:11.923Z · LW(p) · GW(p)

If it was a book, it'd be twice the size of Lord Of The Rings.

comment by CAE_Jones · 2013-07-20T11:26:36.305Z · LW(p) · GW(p)

The only point I feel the need to contest is "EY admits he is libertarian". What I remember is EY admitting that he was previously libertarian, then stopped.

Well, and "EY is a high school dropout with no peer reviewed articles", not because it's untrue, but because neither of those is all that important.

The rest is sound criticism, so far as I can tell.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-20T12:04:40.649Z · LW(p) · GW(p)

What I remember is EY admitting that he was previously libertarian, then stopped.

Here is a comment (from 2007) about it:

I started my career as a libertarian, and gradually became less political as I realized that (a) my opinions would end up making no difference to policy and (b) I had other fish to fry. My current concern is simply with the rationality of the disputants, not with their issues - I think I have something new to say about rationality.

It could be interpreted as Eliezer no longer being libertarian, but also as Eliezer remaining libertarian, just moving more meta and focusing on more winnable topics.

"EY is a high school dropout with no peer reviewed articles", not because it's untrue, but because neither of those is all that important.

Sure, but why does it feel (I mean, at least to the author) important? I guess it is the heuristic "if you are not a scientist, and you speak a lot about science, you got it wrong". Which may be generally correct, if people obsessed with science usually become either scientists or pseudoscientists.

The rest is sound criticism, so far as I can tell.

The part about Julia didn't sound fair to me -- but perhaps you should see the original, not my interpretation. It starts at 8:50.

Otherwise, yes, he has some good points; he is just very selective about the evidence he considers. I was most impressed by the part about Holden's non-criticism. (More meta, I wonder how he would interpret this agreement with his criticism. Possibly as something unimportant, or something that a cult would do to try to appear non-cultish.)

Replies from: Alejandro1
comment by Alejandro1 · 2013-07-20T14:22:14.091Z · LW(p) · GW(p)

In 2011, he describes himself as "a very small-‘l’ libertarian” in this essay at Cato Unbound.

comment by bogus · 2013-07-20T13:01:45.384Z · LW(p) · GW(p)

Julia Galef is a rising star in the skeptical movement; she has a podcast "Rationally Speaking". But she is connected with LW, she believes in Bayes theorem, and she only criticizes the political left. She is obviously used as the face of the LW movement because she is pretty! -- This is sexism on LW's part, because men at LW agree in comments that Julia is pretty. If they weren't sexist, they would talk about how smart she is

I think what this is really saying is that Galef is socially popular especially among skeptics (she has a popular blog, co-hosts multiple podcasts, and all that), but she's not necessarily smarter, or even more involved in LW activities (presumably, MIRI/CFAR has a reputation of very smart folks being involved, hence the confusion), compared to many other LW folks, e.g. Eliezer, etc. So, the argument goes, it's not really clear why she should get to be the public face of LW, but it's certainly convenient in that, again, LW is made to look less like a cult than it really is.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-20T14:30:26.046Z · LW(p) · GW(p)

I hope I am not mistaken about this, but it seems to me that MIRI and CFAR were separated because the former focuses on "Friendly AI" and the latter on "raising the sanity waterline". It's not just a difference in topic, but the topic also determines tools and strategy. -- To research Friendly AI, you need to find good mathematicians, develop a mathematical theory, convince AI researchers about its seriousness, publish in peer-reviewed journals, and ultimately develop the machine. To raise the sanity waterline, you need to find good teachers, develop a curriculum, educate people, and measure the impact. -- Obviously, Eliezer cares mostly about the former, and I believe even the author of the video would agree with that.

So, pretty likely, Eliezer is not the most involved person in CFAR. I don't know enough about the internal workings of CFAR to say precisely who that person is. Perhaps there are many people contributing significantly in ways that can't be directly compared; is it more important to research the curriculum, write the textbooks, test the curriculum, connect people, or keep everything running smoothly? Maybe it's not Julia, but that doesn't mean it's Eliezer.

I guess CFAR could also send Anna Salamon, Michael Smith, Andrew Critch, or anyone else from their team to Skepticon. Would that be better? Or, unless it is Eliezer personally, will it always seem like the dark overlord Eliezer is hiding behind someone else's face? (Actually, I wouldn't mind if Eliezer went to Skepticon, if he thought this was the best way to use his time.) How about all of them going to Skepticon together -- would that be acceptable? Or is it: anyone but Julia?

By the way, I really liked Julia's Straw Vulcan lecture, and sent a few people the hyperlink. So she has some interesting things to say, too. And those things are completely relevant to CFAR's goals.

comment by Kawoomba · 2013-07-21T09:48:44.381Z · LW(p) · GW(p)

Chorus ... We should help him read the sequences ... shambles forward

The anti-LW'ers have become quite the community themselves; the video is referencing XiXiDu and others.

It's thoroughly entertaining, the music especially.

Edit: I must say I found this statement by the video's author illuminating indeed, in regard to his strong discounting of Bayesian reasoning:

Math has always been a weakness for me. Source

To his benefit, Dmytry explained it to him, and now all is well again.

comment by [deleted] · 2013-07-16T22:27:31.866Z · LW(p) · GW(p)

Could I get some career advice?

I'd like to work in software. I can graduate next year with a math degree and look for work, or I can study for additional CS-specific credentials (two or three extra years for a Master's degree).

On the one hand, I'm told online that programming is unusually meritocratic, and that formal education and credentials matter very little if you can learn and demonstrate competency in other ways, like writing your own software or contributing to open-source projects.

On the other hand, mid-career professionals in other fields (mostly engineering) have told me that education credentials are an inevitable filter for raises, hiring, layoffs, and just getting interesting work. They say that getting a graduate degree will be worthwhile even if I could have learned equally valuable skills by other means.

I think I would enjoy and do well in graduate school, but if it makes little career difference, I don't think I would go. I'm skeptical that marginal credentials are unimportant (or will remain unimportant in ten years), but I don't know any programmers in person who I could ask.

Any thoughts or experiences here?

Replies from: fubarobfusco, oooo
comment by fubarobfusco · 2013-07-17T02:03:30.445Z · LW(p) · GW(p)

What programming have you done so far? Have you worked on any open-source projects? Run your own web site?

I know a lot of people with math degrees working in software engineering or site reliability in Silicon Valley. So it's definitely possible ... but you have to have the skills.

So tell me about your skills. :)

Replies from: None
comment by [deleted] · 2013-07-17T07:29:53.176Z · LW(p) · GW(p)

In school, some of my math courses have been programming-intensive (bioinformatics and statistics, all sorts of numerical methods and optimization courses). I've taken most of the CS curriculum as well, but scheduling the remaining class (a senior project) for a double major would take an extra year.

On my own, I've written a couple of Android apps, mostly video games. But that's about it. No websites and no open-source work.

Replies from: gwillen
comment by gwillen · 2013-07-18T04:04:47.081Z · LW(p) · GW(p)

I have a BS in computer science. I worked at Google for four years. I would guess that your credentials -- with a BS in math -- would be no bar to getting a programming job. I would focus on direct programming experience instead of further credentialing. Graduate degrees in computer science are generally not required, and not necessarily even useful, for programming jobs in industry. Master's degrees in computer science are especially suspect, because they are often less rigorous than undergraduate degrees in the field. This is especially true of coursework (non-research-oriented) Master's degrees.

comment by oooo · 2013-07-22T07:24:04.069Z · LW(p) · GW(p)

What type of work in software would you like to do? The rest of my comment will assume that you mean the software technology industry, and not programming specifically.

There are many individual contributor roles in technology companies. Being a developer is one of them. Others may include field deployment specialist, system administrator, pre-sales engineer, sales, or the now-popular "data scientist".

I agree that credentials help with hiring and promotions. When I evaluate staff with little work experience, graduate credentials play a role in my evaluation.

They say that getting a graduate degree will be worthwhile even if I could have learned equally valuable skills by other means.

If you could have learned equally valuable skills by other means, then the graduate degree almost always comes out on top due to the signalling/credentialing factor. However, usually this isn't the case. Usually the graduate degree is framed as a trade-off: the actual signalling factor, coursework, research, and graduate institution on one side, versus work experience directly relevant to your particular domain of expertise on the other. There are newer alternative graduate degree programs, such as a Masters of Financial Engineering* or a Masters in Data Science, that may be more useful to you given your strong undergraduate mathematics base, and that offer a different route to obtaining an interesting job in the software industry without necessarily going through a more "traditional" CS graduate program.

I think I would enjoy and do well in graduate school, but if it makes little career difference, I don't think I would go.

Much will depend on the pedigree of the graduate school and the work that you can showcase (a portfolio of sorts) upon completion; these will determine the magnitude of the career impact.

If you are dead set on being a programmer for the next 10 years, please consider why. The reason I bring this up is that some college seniors I've talked to can clearly visualize working as a developer, but find it harder to visualize what it's like doing other jobs in the technology industry, or, worse, have uninformed and incorrect stereotypes of the types of work involved with different roles (the canonical example is technology sales roles, where anybody technical seems to have a distaste for salespeople).

If you are still firmly aiming to be a developer, it may help to narrow down what type of programming you'd like to do, such as web, embedded, systems, tooling, etc., and also spend a bit of time at least trying to imagine companies you'd like to work for, evaluated on different dimensions (e.g. industry, departmental function, Fortune 500, billing/security/telco infrastructure/mobile, etc.).

One additional point to consider: why not do both, by working full-time and immediately embarking on a part-time graduate degree? Granted, some graduate degrees (at certain institutions, or with certain program structures) don't allow for part-time enrollment, but it's at least something to consider. That way you cover both bases.

* Google MFE or "Masters Financial Engineering" -- many US programs have sprung up over the past several years

EDIT: I apologize in advance for the US-centric links in case you are outside of N. America.

comment by JoshuaZ · 2013-07-21T21:00:03.277Z · LW(p) · GW(p)

I've recently noticed a new variant of a failure mode in political discussions. It seems to be most common in discussions where the participants are already almost all Blues or almost all Greens. It goes like this:

Blue 1: "Hey look at this silly thing said by random silly Green. See this website here."

Blue 2, Blue 3... up to Blue n: "Haha! What evil idiots."

Blue n+1 (or possibly Blue sympathizer or outright interloper or maybe even a Red or a Yellow): "Um, the initial link given by Blue 1 is a parody. That website does satire."

Large subset of Blue 2 through Blue n: "Wow, the fact that we can't tell that's a parody shows how ridiculous the Greens are."

Now at this point, the actual failure of rationality happened with the Blues, not the Greens. But somehow the Blues will then count this as further evidence against the Greens. Is there any way to politely get Blues to understand the failure mode that has occurred in this context?

Replies from: linkhyrule5, Eugine_Nier
comment by linkhyrule5 · 2013-07-22T05:15:05.785Z · LW(p) · GW(p)

This isn't entirely a fallacy: if you can't tell a signal from random noise, either you're bad at seeing signals or there's not a whole lot of information in that signal.

Maybe presenting it in that format? "It's possible the Greens really are that stupid, but alternatively it's possible that you just missed a perfectly readable signal?"

comment by Eugine_Nier · 2013-07-25T03:29:59.072Z · LW(p) · GW(p)

Another failure mode I noticed is that of a particularly rational Blue noticing that his fellow Blues frequently exhibit failure mode X and concluding that the same is true of Greens.

comment by Rukifellth · 2013-07-16T23:36:18.980Z · LW(p) · GW(p)

What with the popularity of rationalist!fanfiction, I feel like there's an irresistible opportunity for anyone familiar with The Animorphs books.

Imagine it! A book series where sentient slugs control people's bodies, yet can communicate with their hosts. To borrow from the AI Box experiments, the Yeerks are the Gatekeepers, and the Controlled humans are the AIs! One could use the resident black-sheep character David Hunting as the rationalist!character; he was introduced in the middle of the series, removed three books later, and didn't really do anything important. I couldn't write such a thing, but it would be wicked if someone else did.

Replies from: None
comment by Error · 2013-07-16T11:50:18.061Z · LW(p) · GW(p)

I've run into a roadblock on the Less Wrong Study Hall reprogramming project. I've been writing against Google Hangouts, but it seems that there's no way to have a permanent, public hangout URL that also runs a specified application. (That is, I can get a fixed URL, or a hangout that runs an app for all users, but I can't do both.)

Any of the programmers here know a way around that? At the moment it's looking like I'll have to go back to square zero and find an entirely different approach.

Replies from: malcolmocean, Mqrius, tondwalkar
comment by MalcolmOcean (malcolmocean) · 2013-07-16T22:47:27.398Z · LW(p) · GW(p)

Could you have a server that knows where the dynamic URL is at all times, and provides a redirect? So I'd hit up lwsh.me and it would redirect me to https://plus.google.com/hangouts/_/etc ...that would create an effectively permanent URL, even though the hangout itself would change URLs.

Looking at the Hangouts API, it appears that when the app is initialized you could call getHangoutUrl() and then pipe it back to the server. This could probably be used in a pretty dynamic manner too... like whenever anyone uses the app, it connects with the main server and adds that chat to the list of active chats...
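
A minimal sketch of the redirect-server half of that idea (assuming a small Node/Express service; the lwsh.me name comes from the previous paragraph, the route name, auth token, and port are placeholders I made up, and getHangoutUrl() is the Hangouts API call mentioned above rather than something I've verified):

```typescript
// Sketch only: a tiny server the Hangout app could report its URL to,
// and that visitors to lwsh.me get redirected from.
import express from "express";

const app = express();
app.use(express.text()); // the app would POST its URL as plain text

// Last hangout URL reported by the app; empty until someone opens the app.
let currentHangoutUrl: string | null = null;

// The Hangout app, after calling getHangoutUrl() on startup, would POST the
// result here. A shared secret (hypothetical) keeps random visitors from
// overwriting the address.
app.post("/hangout-url", (req, res) => {
  if (req.get("x-auth-token") !== process.env.LWSH_TOKEN) {
    res.status(403).end();
    return;
  }
  currentHangoutUrl = String(req.body).trim();
  res.status(204).end();
});

// Anyone hitting the root URL gets bounced to whichever hangout is live.
app.get("/", (_req, res) => {
  if (currentHangoutUrl) {
    res.redirect(currentHangoutUrl);
  } else {
    res.status(503).send("No study hall hangout seems to be running right now.");
  }
});

app.listen(8080);
```

Extending this to the "list of active chats" idea would just mean keeping an array of reported URLs (with timestamps, so stale ones can be dropped) instead of a single variable.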

comment by Mqrius · 2013-07-21T10:50:10.098Z · LW(p) · GW(p)

To get a permanent URL, the workaround was that you could schedule a hangout very far in the future. Are you saying that you can't run a specified application on that?

Replies from: Error
comment by Error · 2013-07-21T19:08:01.508Z · LW(p) · GW(p)

A qualified "yes, exactly": I haven't found a way to do it, which is different from saying a way doesn't exist.

comment by tondwalkar · 2013-07-16T16:54:06.488Z · LW(p) · GW(p)

I'm not sure what you mean by "runs an app for all users". Are you writing a separate app that you want the hangout to automatically open on entry? Doesn't it make more sense to do this the other way around?

Replies from: malcolmocean, Error
comment by MalcolmOcean (malcolmocean) · 2013-07-16T22:26:03.694Z · LW(p) · GW(p)

The app runs within Google Hangouts (like Drive, chat, YouTube, and effects), which is part of the draw of using that platform.

comment by Error · 2013-07-16T18:28:44.955Z · LW(p) · GW(p)

Of course it does, but reality in this case does not appear to make sense. :-(

Replies from: DreamFlasher
comment by DreamFlasher · 2014-04-16T08:49:27.638Z · LW(p) · GW(p)

Adding apps to permanent Google Hangouts works for me - shouldn't we revisit this option?

Replies from: Error
comment by Error · 2014-04-16T15:41:25.787Z · LW(p) · GW(p)

Possibly. I know it used to be possible and the capability was lost in a change, so maybe they changed it back while I wasn't looking. I also got a PM recently noting that lightirc supports webcams; that might be an even better option since it would give us server control.

I'm busy being sick right now, but I'll take a new look at things once I'm functional again.

comment by FiftyTwo · 2013-07-15T23:37:58.551Z · LW(p) · GW(p)

What are good sources for "rational" (or at least not actively harmful) advice on relationships?

Replies from: Vaniver, Bill_McGrath, Manfred, None, Xachariah, Dorikka
comment by Vaniver · 2013-07-16T00:22:14.770Z · LW(p) · GW(p)

What are good sources for "rational" (or at least not actively harmful) advice on relationships?

What sort of relationships? Business? Romantic? Domestic? Shared hobby?

The undercurrent that runs through good advice for most of them is "make your presence a pleasant influence in the other person's life." (This is good advice for only some business relationships.)

Replies from: Dorikka, FiftyTwo
comment by Dorikka · 2013-07-16T01:56:24.269Z · LW(p) · GW(p)

If you know of a reference of similar quality to the one I mention here but for platonic relationships, I would appreciate the referral. The book that I mentioned touches on such, but I think it intends to somewhat focus on romance.

Replies from: Vaniver
comment by Vaniver · 2013-07-16T22:00:41.263Z · LW(p) · GW(p)

I don't, but I do appreciate your referral of that book.

comment by FiftyTwo · 2013-07-16T17:37:58.796Z · LW(p) · GW(p)

I was implicitly referring to romantic ones. I imagine a lot of the advice would overlap, but the quality of advice for those is particularly bad.

comment by Bill_McGrath · 2013-07-16T22:16:48.766Z · LW(p) · GW(p)

The Captain Awkward advice blog. They're not currently taking questions but the archives cover lots of material, and I found just reading the various responses on many different problems, even ones that were in no way similar to mine, allowed me to approach my issues from a new perspective.

comment by Manfred · 2013-07-16T02:39:47.530Z · LW(p) · GW(p)

A book on "nonviolent communication" is also handy rationality advice.

comment by [deleted] · 2013-07-17T05:42:43.803Z · LW(p) · GW(p)

Will and Divia talk about rational relationships.

Athol Kay for ev-psych aware long-term relationship advice. (Holy crap it works).

Seconding nonviolent communication

Replies from: Multiheaded
comment by Multiheaded · 2013-07-17T07:20:54.651Z · LW(p) · GW(p)

Athol Kay for ev-psych aware long-term relationship advice. (Holy crap it works).

That guy's stuff has been said to have a shitload of mistrust, manipulation and misogyny which poisons reasonable everyday advice about getting along.

Check out the comments there on how the overall attitude to relationships that he (and other stereotypical PUA writers) presents can be so nasty, despite the grains of common sense it contains. Seriously, would you enjoy playing the part of a cynical, paranoid control freak with a person whom you want to be your life partner?

Replies from: None, None, kgalias, bogus
comment by [deleted] · 2013-07-23T15:38:50.166Z · LW(p) · GW(p)

Athol's advice is useful; he does excellent work advising couples with very poor marriages. So far I have not encountered anything that is more unethical than any mainstream relationship advice. Indeed, I think it less toxic than mainstream relationship advice.

As to misogyny, this is a bit awkward: I actually cite him as an example of a very much not-woman-hating red pill blogger. Call Roissy a misogynist and I will nod. Call Athol one and I will downgrade how bad misogyny is.

Replies from: pragmatist, Multiheaded
comment by pragmatist · 2013-07-23T18:11:29.053Z · LW(p) · GW(p)

Athol's advice is useful; he does excellent work advising couples with very poor marriages.

Is there evidence that he is more successful at this than the typical "Blue Pill" marriage counselor/relationship expert? Even better would be evidence that he is more successful than the top tier of Blue Pill experts. I realize these are hard things to measure, and I don't expect to see scientific studies, but I'm wondering what you're basing your claim of his excellence on. Is it just testimonials? Personal experience?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-24T09:35:19.709Z · LW(p) · GW(p)

I guess nobody measured Athol's counselling scientifically; we only have self-reports of people who say it helped them (on his web page), which is an obvious selection effect.

Maybe someone measured Blue Pill counselling. I would be curious about the results. For starters, whether it is better or worse than no counselling. (I don't have any data on this, not even the positive self-reports, but that's mostly a fact about my ignorance.)

comment by Multiheaded · 2013-07-23T17:40:52.099Z · LW(p) · GW(p)

Oh, he is not a misogynist, all right; I just said that he frames his stuff in language that's widely used and abused by misogynists. Geeks can't appreciate how important proper connotations are in all social matters! We've talked about that before! The comments I linked to say as much; that might be some decent advice, but why frame it like that?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-24T10:16:41.111Z · LW(p) · GW(p)

he frames his stuff in language that's widely used and abused by misogynists

He is reclaiming the language! (Half-seriously.)

Look, there are some unsympathetic people everywhere. "Red Pill" people have Roissy. Feminists had Solanas. Comparing these two, at least Roissy didn't try to kill anyone, nor does he recommend killing, so let's cut him some slack. The difference is that Roissy is popular now, while Solanas is mostly forgotten. Well, ten years later maybe nobody will know about Roissy, if saner people become more popular than him and the ideas enter the mainstream. Try to silence Athol Kay, and then all you have left are the Roissys. Because the idea is already out there and it's not going to disappear; it fits many people's experiences too well. (Mine, for example.)

Connotations of ideas are a matter of political power. If you have the power, you can create positive connotations for your keywords and negative connotations for your opponents' keywords. You can make your ideas mainstream, and for many people mainstream equals good. Currently, feminism has the power, so it has the power to create the connotations. And it has the power to demonize its opponents. And you are exercising this power right now. (You take a boo word "misogynist" and associate it with someone, and you have a socially valid argumentum ad hominem. If I tried to do the same thing using the word "misandrist", I wouldn't get anywhere, because people are not conditioned about that word, so they would just laugh at that kind of argument.)

Someone else could try to give the same advice while avoiding the sensitive words. Which means that for many words he would simply have to invent synonyms. Which would be academically dishonest, because it is a way to use someone's research without giving them credit. But it would be technically possible. Maybe even successful. The question is whether other people would connect the old words with the new words. Some words, like "Red Pill", are not necessary. With some other words, the offensive part is the concept (for example, that female attraction is predictable, and that this is specifically how it works).

Fun fact: There is a RedPillWomen group on Reddit. Are those women misogynists too? (Here is a thread about hating women and their choices; here is a thread about feminism versus the Red Pill.)

Replies from: Multiheaded
comment by Multiheaded · 2013-07-24T15:47:12.791Z · LW(p) · GW(p)

Fun fact: There is a RedPillWomen group on Reddit. Are those women misogynists too?

No shit, Sherlock. Internalized sexism exists. Luckily, one lady who just wanted "traditional gender roles" in her relationship, and less of the fucked-in-the-headedness, has escaped that goddamn cesspool and reported her experience:
http://www.reddit.com/r/TheBluePill/comments/1hh5z5/changed_my_view/

Also:
http://www.reddit.com/r/TheBluePill/comments/1gapim/trp_why_i_actually_believed_this_shit_for_a_month/

comment by [deleted] · 2013-07-17T19:47:20.937Z · LW(p) · GW(p)

I disagree that his outlook is toxic. He uses a realistic model of the people involved and recommends advice that would achieve what you want under that model. He repeatedly states that it is a mistake to make a negative moral judgement of your partner just because they are predictable in certain ways. His advice is never about manipulation, instead being win-win improvements that your partner would also endorse if they were aware of all the details, and he suggests that they should be made aware of such details.

I see nothing to be outraged about, except that things didn't turn out to actually be how we previously imagined them. In any case, that's not his fault, and he does an admirable job of recommending ethical relationship advice in a world where people are actually physical machines that react in predictable ways to stimuli.

Seriously, would you enjoy playing the part of a cynical, paranoid control freak with a person whom you want to be your life partner?

Drop the adjectives. I strive to be self-aware, and to act in the way that works best (in the sense of happiness, satisfaction, and all the other things we care about) for me and my wife, given my best model of the situation.

I do occasionally use his advice with my wife, and she is fully aware of it, and very much appreciates it when I do. We really don't care what a bunch of naive leftists on the internet think of how we model and do things.

Someone asked for rational relationship advice, and IMO, Athol's advice is right on the money for that. Keep your politics out of it, please.

Replies from: bogus
comment by bogus · 2013-07-17T19:53:07.117Z · LW(p) · GW(p)

He repeatedly states that it is a mistake to make a negative moral judgement of your partner just because they are predictable in certain ways.

If this is the case, he is doing serious damage by associating with the "Red Pill" brand of misogynists and misanthropes. If he actually wants to further these stated objectives, he should drop this association pronto.

Replies from: None, None
comment by [deleted] · 2013-07-17T20:05:46.304Z · LW(p) · GW(p)

Serious damage to who? Idiots who fail to adopt his advice because he calls it a name that is associated with other (even good) ideas that other idiots happen to be attracted to? That's a tragedy, of course, but it hardly seems pressing.

Seems to me that people should be able to judge ideas on their quality, not on which "team" is tangentially associated with them. Maybe that's asking too much, though, and writers should just assume the readers are morally retarded, like you suggest.

Replies from: bogus
comment by bogus · 2013-07-17T20:36:23.896Z · LW(p) · GW(p)

Maybe that's asking too much, though, and writers should just assume the readers are morally retarded, like you suggest.

You're not familiar with the whole "Red Pill" meme cluster/subculture, I take it? It strongly promotes misanthropic attitudes which most people would consider morally wrong, and it selects for these attitudes in its adherents.

Replies from: None, David_Gerard
comment by [deleted] · 2013-07-18T02:28:41.522Z · LW(p) · GW(p)

I'm somewhat familiar. My impression is that the steelman version of it is a blanket label for views that reject the controversial empirical and philosophical claims of the left-wing mainstream:

  • Everyone is cognitively equal across race and sex and such
  • Cognition and desire are not embodied in predictable biology
  • Blank slate atomic agent model of relationships and such
  • (Various conspiracy theories)
  • Democracy is awesome
  • etc.

Pointing out that an idea has stupid people who believe it is not really a good argument against that idea. Hitler was a vegetarian and a eugenicist, but those ideas are still OK.

It selects for these attitudes in its adherents

So?

Here's why that's true: "Red Pill" covers empirical revisionism of mainstream leftism. What kind of people do you expect to be attracted to such a label without considering which ideas are correct? I would expect bitter social outcasts, people who fail to ideologically conform, a few unapologetic intellectuals, and people who reject leftism for other reasons.

Then how are those people going to appear to someone who is "blue pilled" (i.e. a reasonable mainstream progressive, for lack of a better word)? They are going to appear like the enemy. The observer has been brought up with the assumption that anyone who disagrees on points X, Y, and Z is evil. Along comes a label that covers exactly disagreement with the mainstream on X, Y, and Z, so of course the people who identify with that label are going to appear evil.

Note that I've offered a plausible explanation for the existence of idiots and jerks in the red-pill cluster, and their appearance of evil without reference to the factual or moral accuracy of the "red-pill" claims. Your impressions are orthogonal to the facts.

Now of course, by the selection effect you mention and I explain, the "red pill" space is going to be actually filled with idiots and evil people, who will tend to influence things a lot. But I'm from 4chan, so I have the nasty habit of filtering out the background noise of idiots and evil to find the good stuff, and the "red-pill" space has a lot of good stuff in it, once you start ignoring the misogynists, conspiracy theorists, misanthropes, and antisocial idiots.

Replies from: gothgirl420666, bogus
comment by gothgirl420666 · 2013-07-18T05:01:20.239Z · LW(p) · GW(p)

I've been reading a lot of red pill stuff lately (while currently remaining agnostic), and my impression is that most of the prominent "red pill" writers are in fact really nasty. They seem to revel in how offensive their beliefs are to the general public and crank it up to eleven just to cause a reaction. Roissy is an obvious example. About one third of his posts don't even have any point; they're just him ranting about how much he hates fat women. Moldbug bafflingly decides to call black people "Negroes" (while offering some weird historical justification for doing so). Regardless of the actual truth of the red pill movement's literal beliefs, I think they bring most of their misanthropic, hateful reputation on themselves.

I haven't read Athol Kay, so I don't know what his deal is.

Replies from: bogus, wedrifid, Viliam_Bur
comment by bogus · 2013-07-18T05:37:02.735Z · LW(p) · GW(p)

Moldbug bafflingly decides to call black people "Negroes" (while offering some weird historical justification for doing so). ...

It's not that baffling if you know where Moldbug's ideas come from. Since he is effectively restating the ideas of Thomas Carlyle and other 19th century conservatives (admittedly in modernized terms), it's quite fitting in a way that he should lift some of their lexicon as well.

comment by wedrifid · 2013-07-18T05:58:57.049Z · LW(p) · GW(p)

Moldbug bafflingly decides to call black people "Negroes" (while offering some weird historical justification for doing so).

What is baffling to me is that it is OK to call black people black people. Both terms amount to labelling a race based on the same exaggerated description of a visible difference, and in general using Latinate terms is higher status than using common English words. Prior to specific (foreign) cultural exposure, I would expect "black people" to be an offensive label, and so would avoid it.

Replies from: gothgirl420666, NancyLebovitz
comment by gothgirl420666 · 2013-07-18T06:05:28.743Z · LW(p) · GW(p)

The euphemism treadmill is basically arbitrary most of the time. For example, "people of color" is very PC right now, but "colored people" is considered KKK-language. It is what it is.

Also, "black people" is a kind of strange term. Pretty much all black people are okay with it, but a lot of white people are weirdly afraid of saying it, especially in formal settings.

Replies from: Qiaochu_Yuan, ModusPonies, wedrifid
comment by Qiaochu_Yuan · 2013-07-18T21:01:27.394Z · LW(p) · GW(p)

Black is a useful term for referring to people of African descent who aren't African-American, e.g. Caribbean-Americans.

comment by ModusPonies · 2013-07-18T20:55:35.188Z · LW(p) · GW(p)

"People of color" currently means anyone other than white people, not black people exclusively.

comment by wedrifid · 2013-07-18T06:08:38.882Z · LW(p) · GW(p)

For example, "people of color" is very PC right now

Really? That is even more surprising to me.

Replies from: taelor
comment by taelor · 2013-07-19T03:35:28.313Z · LW(p) · GW(p)

My experience is that it is the preferred term for non-white people among the Social Justice crowd on Tumblr and other websites.

comment by NancyLebovitz · 2013-07-20T12:00:09.438Z · LW(p) · GW(p)

Language can be pretty arbitrary. It's not as though science fiction reliably has any science in it, even fake science.

comment by Viliam_Bur · 2013-07-19T13:39:08.120Z · LW(p) · GW(p)

Isn't a similar dynamic involved wherever people are developing an idea that offensively contradicts the belief of a majority?

We could similarly ask why some atheists are so aggressive, and whether it wouldn't be better for others to avoid using the "atheist" label to avoid the association with these people; otherwise they deserve all the religious backlash.

There are two strategies to become widely popular: say exactly the mainstream thing, or say the most shocking thing. The former strategy cannot be used if you want to argue against the mainstream opinion. Therefore the most famous writers of non-mainstream opinions will be the shocking ones. Not because the idea is necessarily shocking, but because of a selection effect -- if you have a non-mainstream idea and you are not shocking, you will not become popular worldwide.

I may sometimes disagree with how Richard Dawkins chooses his words, but avoiding the successful "atheist" label would be a losing strategy. I disagree with a lot of what Roissy says, but "red pill" is a successful meme, and he is not the only one using it.

There are words which have both positive and negative connotations to different people. To insist that the negative connotation is the true one often simply means that the person dislikes the idea (otherwise they would be more likely to insist that the positive connotation is the true one).

Replies from: bogus, gothgirl420666
comment by bogus · 2013-07-19T18:55:39.141Z · LW(p) · GW(p)

Isn't a similar dynamic involved wherever people are developing an idea that offensively contradicts the belief of a majority?

This looks like begging the question to me. Whether an idea offensively contradicts mainstream beliefs has a lot to do with the connotations that happen to be associated with it. Lots of reasonably popular ideas contradict mainstream beliefs, but are not especially offensive. Obviously, once an idea becomes popular enough to be part of the mainstream, this whole distinction no longer makes sense.

We could similarly ask why some atheists are so aggressive, and whether it wouldn't be better for others to avoid using the "atheist" label to avoid the association with these people ...

Indeed, this explains why many non-theistic people steadfastly refuse to self-identify as atheists (some of them may call themselves agnostics or non-believers). It also partially explains why the movements "Atheism Plus" and "Atheism 2.0" have started gaining currency.

Similarly, any useful and non-offensive content of "red pill" beliefs may be easily found and developed under other labels, such as "seduction community", "game"/"PUA", "ev psych" and the like.

Therefore the most famous writers of non-mainstream opinions will be the shocking ones.

It's not clear why we should care whether a writer of non-mainstream opinions is famous, especially when such fame correlates poorly with truth-seeking and/or the opinions are gratuitously made socially unpopular for the sake of "controversy".

There are words which have both positive and negative connotations to different people.

Serious question, name a positive connotation of "The Red Pill" - which is not shared by "Game"/"PUA"/"seduction community" or "ev psych".

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-19T20:42:47.851Z · LW(p) · GW(p)

I agree with your explanation about some people's preference for the label "agnostic". The "atheism plus" on the other hand feels to me like "atheism plus political correctness" -- it is certainly not focused on not offending religious people. (So an equivalent would be a Game blog that cares about not offending... for example Muslims. That's not the same as a Game blog trying not to offend feminists.)

Serious question, name a positive connotation of "The Red Pill" - which is not shared by "Game"/"PUA"/"seduction community" or "ev psych".

Anyone who liked the movie The Matrix? (Unless all of them are already in the seduction community.) I could imagine using the same word as a metaphor for... for example, early retirement, or any similar activity that requires you to go against the stereotypical beliefs of most people. I admit I never saw the word used in this context; I just feel like it would fit there perfectly. (Also, it would fit most conspiracy theories perfectly.)

Replies from: pragmatist
comment by pragmatist · 2013-07-24T12:29:14.584Z · LW(p) · GW(p)

The "atheism plus" on the other hand feels to me like "atheism plus political correctness" -- it is certainly not focused on not offending religious people.

I don't have that much knowledge of the Atheism Plus movement, but I have read some stuff that suggests they are concerned about how prominent atheists talk about Islam, at least. I also wouldn't be at all surprised if they had expressed opposition to Dawkins' description of religious upbringing as child abuse. I do know some feminists who were/are pissed about that.

comment by gothgirl420666 · 2013-07-19T18:12:46.778Z · LW(p) · GW(p)

I'm not necessarily disagreeing that the red pill writers are pursuing an effective strategy in disseminating their beliefs. To be honest, I can see it either way. On the one hand, offending people gets them to notice you, and emotionally charged arguments are more interesting. On the other hand, some of the rhetoric might needlessly alienate people, and to a certain extent it can discredit the ideas (e.g. someone recommends Athol Kay, someone says "isn't he one of those red pill guys? I saw Roissy's blog and it was appalling, no way I can listen to one of them"). I definitely don't think that being deliberately offensive is literally the only way to spread a contrarian belief.

But I don't think the red pill movement should be able to have their cake and eat it too. You can't deliberately make your writing as offensive and obnoxious as possible in order to try to get it to spread, and then turn around and say "People are offended? This just shows that anyone who doesn't think like the mainstream becomes a public enemy!"

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-19T20:30:51.091Z · LW(p) · GW(p)

Some movements are able to have their cake and eat it too. If, a hundred years ago, someone had told the early feminists to be extra careful about not offending people, would they have listened? Would it have been a winning strategy?

I agree that it feels like people should choose between having their cake and eating it. But is this a description of how the world really works, or merely a just world fallacy? As a competing hypothesis, maybe it is all about power -- if you can crush your enemies (for example make them unemployed) and give positions of power (and grant money) to your allies, then people will celebrate you as the force of good, because everyone wants to join the winner. And if you fail, the only difference between being polite and impolite is whether you will be forgotten or despised.

Let's imagine that Athol Kay stopped using forbidden words like "Red pill" et cetera. What about the rest of his message? Would it stop feeling offensive to the "Blue pill" people, or not? If the blog were successful, they would notice, and they would attack him anyway. (The linked article reacted to Athol's description of a "red pill woman", but would it be different if he just called her e.g. a "perfect woman"?) And if the blog were obscure enough to avoid being noticed, then... it wouldn't really matter what's written there.

Compared with most blogs discussing the topic on either side, Athol Kay is extra polite. We can criticize him for not being perfect, while conveniently forgetting that neither is anyone else.

Replies from: bogus
comment by bogus · 2013-07-19T22:52:32.103Z · LW(p) · GW(p)

Let's imagine that Athol Kay stopped using forbidden words like "Red pill" et cetera. What about the rest of his message? Would it stop feeling offensive to the "Blue pill" people, or not?

Um, the issue is not that he's using "the red pill" or any other forbidden words, but that he's expressly associating with and supporting a subculture of misanthropes, losers and misanthropic losers who happen to be using "The Red Pill" as their badge of honor. And yes, some people might still be offended by his other messages, even if he stopped providing this kind of enablement. But he would be taking their strongest argument against him off the table.

Replies from: Viliam_Bur, Viliam_Bur
comment by Viliam_Bur · 2013-07-21T12:06:03.079Z · LW(p) · GW(p)

he's expressly associating with and supporting a subculture of misanthropes, losers and misanthropic losers

Just thinking... is "loser" a gendered word or not? Would you feel comfortable describing a group of women as losers on a public forum?

If not, then what would be the proper way to describe a subculture of women who are not satisfied with how society works now, who feel their options are limited by society, who discuss endlessly on their blogs how society should be changed, and who use some keywords as their badge of honor?

Replies from: bogus
comment by bogus · 2013-07-21T13:05:27.342Z · LW(p) · GW(p)

Would you feel comfortable describing a group of women as losers on a public forum?

That's an interesting question - I actually can't think of any group where that would be an accurate description, so I don't really have a good answer here. Sorry about that.

If not, then what would be the proper way to describe a subculture of women who are not satisfied with how society works now, who feel their options are limited by society, who discuss endlessly on their blogs how society should be changed, and who use some keywords as their badge of honor?

People who may or may not be on to something? Sure, lots of folks blame the failings of society for their comparative lack of success, and that's sometimes unhelpful. But even that is a lot better than just complaining about how all other people - most specifically including women as well as 'alpha male' other guys - are somehow evil and stupid. That's called sour grapes, and IMHO it is a highly blameworthy attitude, not least since it perpetuates and deepens the originally poor outcomes.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-21T15:56:14.716Z · LW(p) · GW(p)

I actually can't think of any group where that would be an accurate description

No real group, or even an imaginary group? I mean, take the "misanthropic losers" you described (and for the sake of debate, let's assume your description of them is completely accurate), and imagine exactly the same group with genders reversed. Would it be okay to call those women publicly "losers"?

Or perhaps "loser" is a gendered slur. (Something like the word "slut" that you can use to offend women, but if you try it to describe a sexually adventurous man, it somehow does not have the same shaming power.) In which case, saying that the "Red Pill" readers are losers contains almost as much information as saying that they are men.

Sure, lots of folks blame the failings of society for their comparative lack of success, and that's sometimes unhelpful. But even that is a lot better than just complaining about how all other people - most specifically including women as well as 'alpha male' other guys - are somehow evil and stupid.

Complaining achieves nothing, and people who complain without achieving anything are, yeah, losers.

How about a group that achieves real results? For example, there is a controversial movement, in an obscure part of the "manosphere", behind the blogger Valentine Solarius, often criticized by feminists for writing things like "to be female is to be deficient, cognitively limited"; "the female is completely egocentric, trapped inside herself, incapable of empathizing or identifying with others, or love, friendship"; "her intelligence is a mere tool in the services of her drives and needs"; "the female has one glaring area of superiority over the male - public relations; she has done a brilliant job of convincing millions of men that women are men and men are women"; "every woman, deep down, knows she's a worthless piece of shit". -- He writes a lot about his desire to kill women. Actually, he attacked and almost killed one woman for not responding to his e-mail, but she survived, so he only spent three years in prison. He seems to be a popular person among some men politically influential in the Republican party... so, let's assume that his friends really succeed in creating a political movement, change the way society perceives women, change the laws as they want them, etc. Then, they would no longer be losers, would they? Now, would that be better than merely blogging about the "Red Pill"? (See his blog for some more crazy ideas.)

Replies from: bogus
comment by bogus · 2013-07-21T17:05:46.602Z · LW(p) · GW(p)

No real group, or even an imaginary group? I mean, take the "misanthropic losers" you described (and for the sake of debate, let's assume your description of them is completely accurate), and imagine exactly the same group with genders reversed. Would it be okay to call those women publicly "losers"?

Well, we can imagine anything we want to. It's not hard to think of a possible world where some loose subculture or organized group of women could be fairly characterized as "losers" on a par with redpillers. You could basically get there if, say, radical feminism was a lot more dysfunctional than it actually is. No such luck, though.

How about a group that achieves real results?

Perhaps you're missing the point here? By "achieving real results", I obviously don't mean committing assault. Even successfully influencing politics would be a dubious achievement, as long as their basic ideology remains what it is. However, it is indeed a stylized fact in politics and social science that such nasty subcultures and movements generally appeal to people who are quite low either in self-perceived status/achievement, or in their level on Maslow's scale of human needs.

Your quotes from the Manosphere blogger were quite sobering indeed, but I'll be fair here - you can find such crazies in any extreme movement, so perhaps that's not what's most relevant after all. If most redpillers stuck to what they might perhaps be said to do best, e.g. social critiques about the pervasive influence of feminized thinking, the male's unrecognized role as an economic provider and the like, as well as formulating reform proposals (however extreme they might be), I don't think they would be so controversial. Who knows, they might even become popular in some underground circles who are quite fascinated by out-of-the-box thinking.

comment by Viliam_Bur · 2013-07-20T11:20:31.500Z · LW(p) · GW(p)

So, is it more about the fact that he has loser friends than about what he writes? And by losers, I mean Greens.

Replies from: bogus
comment by bogus · 2013-07-20T21:26:49.426Z · LW(p) · GW(p)

Perhaps so, to some extent: you may like it or not, but guilt by association is a successful political tactic. But the problem is made even worse by the fact that his writings occasionally support the Greens' nasty attitudes.

To take the analogy even further, imagine a respected scientist writing approvingly about "deep ecologists" and "Soylent Greens", who believe in the primacy of natural wilderness, and argue that human societies are inherently evil and inimical to true happiness, excepting "naturally co-evolved" bands and tribes of low-impact hunter-gatherers. Such a belief might even be said to be supported by evolutionary psychology, in some sense. But many people would nonetheless oppose it and describe it as nasty - notably including more moderate Greens, who might perhaps turn to other sciences such as economics, and think more favorably of "sustainable development" or even "natural capitalism".

comment by bogus · 2013-07-18T03:07:39.308Z · LW(p) · GW(p)

My impression is that the steelman version of it ...

ALERT. Fully General Counterargument detected in line 1.

Seriously, how many people would actually refer to thoughtful critique and even rejection of mainstream views as "Red Pill" material? Basically nobody would, unless they are already committed to the "Red Pill" identity for unrelated reasons. That's just not what Red Pill means in the first place.

And yes, the 'Red Pill' thing attracts jerks and losers, but that's the least of its problems. A very real issue is that this ensures that ideas in the Red Pill space achieve memetic success not by their practical usefulness or adherence to truth-seeking best practices, but by shock value and by being most acceptable or even agreeable to jerks and losers.

Yes, you can go looking for diamonds in the mud: there's nothing wrong with that and sometimes it works. But that does not require you, or anyone else, to provide enablement to such a deeply toxic and ethically problematic subculture.

Replies from: None
comment by [deleted] · 2013-07-18T03:53:11.510Z · LW(p) · GW(p)

Seriously, how many people would actually refer to thoughtful critique and even rejection of mainstream views as "Red Pill" material?

  • Mencius Moldbug
  • Athol Kay
  • High quality PUA
  • etc.

Arguing about what a term means is bound to go nowhere, but in my experience, "red pill" has been associated with useful and interesting ideas. Maybe that's just me and my experience isn't valid though.

I don't think it's fair to characterize an entire space of ideas by its strawest members (shock-value-seeking "edgy" losers). I could use that technique to dismiss any given space of ideas. See for example Yvain's analysis of how mainstream ideas migrate to crazytown by runaway signalling games.

I think there is a high proportion of valuable ideas in the part of "redpillspace" that I've been exposed to. Maybe we are looking at different things that happen to be called the same name, though.

But based on your terminology and attitude here, I think you are cultivating hatred and negativity, which is harmful IMO. In general, I think it is much better to actively look for the good aspects of things and try to like more things rather than casting judgement and being outraged at more things.

ALERT. Fully General Counterargument detected in line 1.

Correct, I attempt to see the good parts of things and ignore the crud with full generality.

Replies from: bogus
comment by bogus · 2013-07-18T04:36:33.300Z · LW(p) · GW(p)

Mencius Moldbug

This is beside the point, IMHO; Moldbug's references to "taking the red pill" are well explained by his peculiar writing style. I think they are mostly unrelated to how Athol Kay, reddit!TheRedPill and others use the term. OTOH, Multiheaded's comment upthread provides proof that Kay's views are genuinely problematic, in a way that's closely related to and explained by his involvement in TheRedPill meme cluster. For the time being, I make no claim one way or the other about other "high quality PUAs".

Do also note that I really am criticizing a subculture and meme cluster here. AIUI, this has nothing to do with idea spaces in a more general sense, or even factual claims about the real world. Again, connotations and attitudes are what's most relevant here. Moreover, I'm not sure what gave you the feeling that I am "cultivating hatred and negativity", of all things. It's quite true that I am genuinely concerned about this subculture, because of... well, you said it already: the real issue here is Kay's providing enablement to it, with the attendant bad effects. (Of course, this may also apply to other self-styled PUAs.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-19T13:50:28.157Z · LW(p) · GW(p)

Multiheaded's comment upthread provides proof that Kay's views are genuinely problematic

If you refer to the linked article, and by "proof" you mean "strawmanning and non-sequitur"...

Seriously: Imagine a comment or an article written in a similar tone on LW. How many votes would it get?

An example:

Athol Kay: [A Red Pill Woman] understands that there is a sexual marketplace, and that women have an earlier peak of sexual desirability than men do.

Man Boobz: Presumably if she forgets this, her manospherian swain will happily neg her back to a properly less-positive assessment of her rapidly decaying beauty as a woman over the age of 14.

Where exactly in Athol's article, or even anywhere on his website, did anyone say anything about women's decaying beauty over the age of 14? Citation needed!

Athol Kay: [A Red Pill Woman] understands that divorce sucks and is more akin to getting treatment for cancer than having cosmetic surgery.

Man Boobz: I sort of agree with this one, actually: for women married to Athol Kay’s followers, getting divorced would be a lot like removing a malignant tumor.

Yeah, this is the argumentation style we refer to when saying "raising the sanity waterline"... not!

Who exactly is the manipulative hateful douchebag in this article? Are you sure it was Athol Kay?

Replies from: bogus
comment by bogus · 2013-07-19T19:25:46.171Z · LW(p) · GW(p)

Seriously: Imagine a comment or an article written in a similar tone on LW. How many votes would it get?

Um, I think this is a silly argument, honestly. As the name makes reasonably clear, Man Boobz is a humor and satire website. Unlike most articles posted here at LW, they do not claim to meet any standard of rational argument. What's useful about them is their pointing to some of Athol Kay's published opinions, and perhaps pointing out some undesirable connotations of these opinions.

Athol Kay: [A Red Pill Woman] understands that there is a sexual marketplace, and that women have an earlier peak of sexual desirability than men do.

Let me try to steel-man MB's critique of this statement. Why is it especially important for a RPW to understand this - especially when the basic notion is clearly understood by any COSMO reader (which is a rather low standard)? Athol Kay does not explain how this understanding is supposed to pay rent in terms of improved results. And it is clear that, unless some special care is taken (which Athol Kay does not point out), a naïve interpretation of such "understanding" has unpleasant and unhelpful connotations.

Keep in mind that PUA/game works best when it manages to disrupt the mainstream understanding of "sexual market value" as opposed to accepting it uncritically, and the seduction community is successfully developing "girl game" methods which can allow women to be more successful in the market. By failing to point this out, Kay is under-serving Red Pill women especially badly.

Athol Kay: [A Red Pill Woman] understands that divorce sucks and is more akin to getting treatment for cancer than having cosmetic surgery.

This falls under Bastiat's fallacy of "what is seen and what is not seen". We see that divorce sucks; what we do not see is that divorce is nonetheless rational whenever not divorcing would suck even more.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-19T21:02:12.788Z · LW(p) · GW(p)

Strawmanning could be a technique used in humor and satire, but even then it isn't a "proof" that someone's views are "genuinely problematic".

Athol Kay does not explain how this understanding is supposed to pay rent in terms of improved results.

How about this: Two women in their 50s compare their husbands with the men who were attracted to them when they were 18, and both see that their husbands' "market value" is lower. Let's assume there is no other problem in the marriage; they just want to be maximizers, not merely satisficers.

One of them is a "Red pill woman"; she does not divorce and keeps a relatively good relationship. The other one is encouraged by success stories in popular media, gets a divorce... and then finds that the men who were interested in her when she was 18 are actually not interested anymore, and that she probably would have maximized her happiness by staying married. -- This is how the belief can pay its rent.

We see that divorce sucks; what we do not see is that divorce is nonetheless rational whenever not divorcing would suck even more.

I wouldn't advocate staying married in cases of domestic violence, for example, and I guess neither would Athol Kay. So we are speaking about "sucking" in the sense of "not being with the best partner one could be with", right? In that case, understanding one's "market value" is critical in determining whether staying or leaving is better. (By the way, a significant part of Athol's blog is about how men should increase their "market value", whether by exercise or making more money or whatever.)

And then, there is the impact on children. We should not expect that, even if mommy succeeds in getting a more attractive partner, it will automatically make them happy. This trade-off is often unacknowledged.

comment by David_Gerard · 2013-07-17T23:22:31.479Z · LW(p) · GW(p)

The Red Pill on Reddit. Is this the one you're talking about?

Replies from: bogus
comment by bogus · 2013-07-17T23:47:05.079Z · LW(p) · GW(p)

I'm not sure that the subreddit enjoys any sort of official status, but it's certainly representative of what I'm talking about. Do note that the central problem with the RP meme cluster is one of connotation and general attitude, although some factual claims can definitely be problematic as well.

Come to think of it, even the name "Red Pill" embodies all sorts of irrationality and negative attitudes. Apparently, it is based on the very ancient idea that female period blood is in some sense a magical substance - so one can fashion a "Red Pill" out of it using sympathetic magick, and thus acquire some sort of occult or arcane knowledge which is normally exclusive to women and disallowed to men.

Replies from: None, taelor
comment by [deleted] · 2013-07-18T02:02:22.131Z · LW(p) · GW(p)

Apparently, it is based on the very ancient idea that female period blood is in some sense a magical substance - so one can fashion a "Red Pill" out of it using sympathetic magick, and thus acquire some sort of occult or arcane knowledge which is normally exclusive to women and disallowed to men.

Have you seen The Matrix?

Replies from: bogus
comment by bogus · 2013-07-18T03:36:36.307Z · LW(p) · GW(p)

Well, you can find a lot of magickal or Neopagan symbolism in The Matrix if you know how to look for it. The word "matrix" itself means "something motherly" in Latin, and its use in the movie could be viewed as a reference to the Great Mother Goddess. (More specifically, the Great Mother is actually one archetype of the feminine Great Goddess of Neopaganism.)

comment by taelor · 2013-07-18T01:19:49.481Z · LW(p) · GW(p)

Apparently, it is based on the very ancient idea that female period blood is in some sense a magical substance - so one can fashion a "Red Pill" out of it using sympathetic magick, and thus acquire some sort of occult or arcane knowledge which is normally exclusive to women and disallowed to men.

Citation Requested.

comment by [deleted] · 2013-07-23T15:56:15.264Z · LW(p) · GW(p)

The association is not a matter of packaging but of content: the reductionist approach to one's social life, the model of male and female sexual psychology he uses, etc. If he dropped all the "Red Pill" or "PUA" markers such as vocabulary, links, or credits, he would still be identified with them by critics and advocates.

comment by kgalias · 2013-07-17T22:16:02.103Z · LW(p) · GW(p)

Can you point to some less blatantly biased commentary?

comment by bogus · 2013-07-17T19:25:39.829Z · LW(p) · GW(p)

This might seem surprising, but I broadly agree with this assessment, except that I can't tell what "stereotypical PUA writers" might mean in this context. The "Red Pill" is a very distinctive subculture which is characterized by wallowing in misogynistic - and most often, just plain misanthropic - attitude!cynicism (I'm using Robin Hanson's "meta-cynical" taxonomy of cynicism here) about gender relations, relationships and the like. Its memes may be inspired by mainstream PUA and ev-psych, but - make no mistake here - it's absolutely poisonous if you share the mainstream PUA goal of long-term self-improvement in such matters.

comment by Xachariah · 2013-07-16T04:16:58.769Z · LW(p) · GW(p)

Karen Pryor's Don't Shoot the Dog.

Just kidding... sorta (Spoiler: It's a book on behavior training.)

comment by Dorikka · 2013-07-16T01:54:17.907Z · LW(p) · GW(p)

I am reading the textbook mentioned here. I find it enjoyable reading and it seems useful, but I have not applied any of it yet.

I believe that this is the book being referred to. I know that two of the authors are missing in the Amazon link, but they are present here -- it appears that some of the authors were purged during the updating.

comment by Thomas · 2013-07-16T17:12:44.492Z · LW(p) · GW(p)

I wrote a (highly speculative) article on my blog about the conversion of negative energy into ordinary mass-energy.

http://protokol2020.wordpress.com/2013/07/07/the-menace-that-is-dark-energy/

I don't expect mercy, though.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-21T16:30:22.202Z · LW(p) · GW(p)

How much do you know about general relativity? (This is an honest question BTW -- I know the postulates behind it and some of the maths, but I've never studied its implications in detail, besides the Schwarzschild metric and the FLRW metric, so I have trouble telling the levels above mine apart.)

Replies from: Thomas
comment by Thomas · 2013-07-21T17:16:10.466Z · LW(p) · GW(p)

I will pass on discussing this here; I hope you understand that.

But please, feel free to engage me there, on my blog.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-22T17:59:59.389Z · LW(p) · GW(p)

I'm afraid I'm not knowledgeable enough for that -- I can't tell whether non-trivial claims about GR are valid any more reliably than by noticing whether or not the person who made them sounds like a crackpot.

Replies from: Thomas
comment by Thomas · 2013-07-22T20:57:53.682Z · LW(p) · GW(p)

Yes, these Relativity and Quantum Mechanics debates usually end like this: "I have not enough knowledge, but it seems you don't have it either ..."

Is this a reason to avoid them? Maybe not.

But this is why I decided to also write about that planet rotation thing, where the situation is very transparent. Thousands of top experts have no clue.

comment by Risto_Saarelma · 2013-07-16T12:23:17.566Z · LW(p) · GW(p)

Ben Goertzel will take your money and try to put an AGI inside a robot.

Trigger warning: those creepy semi-human robots whose jerky, human-imitating facial gestures will make anyone who hasn't spent months and months locked in a workshop building them recoil in horror.

Replies from: jmmcd, Sly, Richard_Kennaway
comment by jmmcd · 2013-07-16T16:24:29.196Z · LW(p) · GW(p)

That page mentions "common sense" quite a bit. Meanwhile, this is the latest research in common sense and verbal ability.

comment by Sly · 2013-07-17T08:52:26.117Z · LW(p) · GW(p)

That was hideous. Poor production values and a sloppy video that oozes incompetence.

comment by Richard_Kennaway · 2013-07-16T12:46:33.884Z · LW(p) · GW(p)

Um, wow.

My eyes would be on this sort of thing if I wanted to keep up to date on serious AI. Demo video of the hardware here.

comment by LanceSBush · 2013-07-22T15:24:40.971Z · LW(p) · GW(p)

Hey everyone, long-time lurker here (I ran a LW group in Ft. Lauderdale, FL for about a year) and this is my first comment. I would like to post a discussion topic on a proposal for potential low-hanging fruit: fixing up Wikipedia pages related to LessWrong's interests (existential risk, rationality, decision theory, cognitive biases, etc. and the organizations/people associated with them). I'd definitely be interested in getting some feedback on creating a wiki project that focuses on improving these pages.

comment by tim · 2013-07-18T18:49:43.907Z · LW(p) · GW(p)

Is there a (more well-known/mainstream) name for arguments-as-soldiers-bias?

More specifically, interpreting an explanation of why or how an event happened as approval of that event. Or claiming that someone who points out a flaw in an argument against X is a supporter of X. (maybe these have separate names?)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-18T20:18:35.128Z · LW(p) · GW(p)

Should we even call this a bias? They're both unfortunate, but they're also both reasonable Bayesian updates.

Replies from: tim
comment by tim · 2013-07-18T22:41:23.533Z · LW(p) · GW(p)

Good point. They are generally useful heuristics that sometimes lead to unnecessary conflicts.

comment by So8res · 2013-07-17T01:22:50.891Z · LW(p) · GW(p)

We've been having beautiful weather recently in my corner of the world, which is something of a rarity. I have a number of side projects and hobbies that I tinker with during the evenings, all of them indoors. The beautiful days were making me feel guilty about not spending time outside.

So I took to going on bike rides after work, dropping by the beach on occasion, and hiking on weekends. Unfortunately, during these activities, my mind was usually back on my side projects, planning what to do next. I'd often rush my excursions. I was trying to tick the "outdoors" box so I could get back to my passions without guilt.

This realization fueled the guilt. I began to wonder how I could actually enjoy the outdoors, if both staying inside and playing outside left me dissatisfied.

What I realized was this: You don't enjoy nice weather by forcing yourself outdoors. You enjoy nice weather by having an outdoor hobby, an outdoor passion that you pursue regardless of weather. Then when the weather is good, you enjoy it automatically and non-superficially.

Similarly:

You don't become a music star by trying. You become a music star by wanting to make music.

You don't become intelligent by trying. You become intelligent by wanting the knowledge.

It was a revelation to me that I can't always take a direct path to the type of person I want to be. If I want to change the type of person that I am, I may have to adopt new terminal goals.

Replies from: None, D_Malik
comment by [deleted] · 2013-07-17T05:26:15.748Z · LW(p) · GW(p)

If I want to change the type of person that I am, I may have to adopt new terminal goals.

Wat? Methinks you have that backwards. "X reliably leads to Y, which I like, so I should like X" is reasonable; "X reliably leads to Y, which I like, so I should adopt X as a terminal goal valuable regardless of what it gets me" is madness.

Mixing up your goal hierarchy is the path to the dark side.

Replies from: So8res
comment by So8res · 2013-07-17T05:47:17.916Z · LW(p) · GW(p)

Perhaps I did not adequately get my point across.

If you really want to be a music star, but you hate making music, you are in trouble. If after realizing this you still really want to be a music star, consider finding ways to modify your preferences concerning music creation.

Mixing up your goal hierarchy is the path to the dark side.

We're born with mixed up goal hierarchies. I'm merely pointing out that untangling your goal hierarchies can require changing your goals, and that some goals can be best achieved by driving towards something else.

Replies from: None, wedrifid
comment by [deleted] · 2013-07-17T05:50:38.765Z · LW(p) · GW(p)

If you really want to be a music star, but you hate making music, you are in trouble. If after realizing this you still really want to be a music star, consider finding ways to modify your preferences concerning music creation.

Ok, let's distinguish between your preferences as abstract ordering over lotteries over possible worlds, and preferences as physical facts about how you feel about something.

It is a bad idea to change the former for instrumental reasons. The latter are simply physical facts that you should change to be whatever the former thinks would be useful.

That probably clears up the confusion.

Replies from: So8res
comment by So8res · 2013-07-17T06:13:46.325Z · LW(p) · GW(p)

I would agree completely, if humans were perfect rationalists in full control of their minds. In my (admittedly narrow) experience, people who have the creation of art / attainment of knowledge as a terminal goal usually create better art / attain more knowledge than people who have similar instrumental goals.

I am indeed suggesting that the best way to achieve your current terminal goals may be to change your preference ordering over lotteries over possible worlds. If you are a young college student worried about the poor economy, and all you really want is a job, you should consider finding a passion.

Now, you could say that such people don't really have "get a job" as a terminal goal, that what they actually want is stability or something. But that's precisely my point: humans aren't perfect rationalists. Sometimes they have stupid end-games. (Think of all the people who just want to get rich.)

If you find yourself holding a terminal goal that should have been instrumental, you'd better change your terminal goals.

Replies from: None
comment by [deleted] · 2013-07-17T19:20:26.500Z · LW(p) · GW(p)

I am indeed suggesting that the best way to achieve your current terminal goals may be to change your preference ordering over lotteries over possible worlds. If you are a young college student worried about the poor economy, and all you really want is a job, you should consider finding a passion.

Ok. I disagree. I tried to separate what you want in the abstract from the physical fact of what this piece of meat you are sending into the future "wants", but then you went and re-conflated them. I'm tapping out.

Replies from: So8res
comment by So8res · 2013-07-17T20:29:13.712Z · LW(p) · GW(p)

For what it's worth, I don't think we disagree. In your terminology, my point is that people don't start with clearly separated "abstract wants" and "meat wants", and often have them conflated without realizing it. I hope we can both agree that if you find yourself thus confused, it's a good idea to adjust your abstract wants, no matter how many people refer to such actions as a "path to the dark side".

(Alternatively, I can understand rejecting the claim that abstract-wants and meat-wants can be conflated. In that case we do disagree, for it seems to me that many people truly believe and act as if "getting rich" is a terminal goal.)

comment by wedrifid · 2013-07-17T05:51:35.334Z · LW(p) · GW(p)

and that some goals can be best achieved by driving towards something else.

You used the phrase 'terminal goals'. This describes adopting an instrumental goal. Nyan's criticism applies.

Replies from: So8res
comment by So8res · 2013-07-17T05:55:01.342Z · LW(p) · GW(p)

I disagree. It seems to me that people who have music creation as a terminal goal are more likely to create good music than people who have music creation as an instrumental goal. Humans are not perfect rationalists, and human motivation is a fickle beast. If you want to be a music star, and you have control over your terminal goals, I strongly suggest adopting a terminal goal of creating good music.

Replies from: wedrifid
comment by wedrifid · 2013-07-17T08:00:58.944Z · LW(p) · GW(p)

I suggest that you abandon the word 'terminal' and simply speak of goals. You are using the word incorrectly and so undermining whatever other point you may have had.

Replies from: So8res
comment by So8res · 2013-07-17T14:23:14.928Z · LW(p) · GW(p)

What do you think the word "terminal" means in this context, and what do you think I think it means?

Edit: Seriously, I'm not being facetious. I think I am using the word correctly, and if I'm not, I'd like to know. The downvotes tell me little.

Replies from: hylleddin
comment by hylleddin · 2013-07-25T20:33:19.547Z · LW(p) · GW(p)

In local parlance, "terminal" values are a decision maker's ultimate values, the things they consider ends in themselves.

A decision maker should never want to change their terminal values.

For example, if a being has "wanting to be a music star" as a terminal value, then it should adopt "wanting to make music" as an instrumental value.

For humans, how these values feel psychologically is a different question from whether they are terminal or not.

See here for more information

Replies from: So8res
comment by So8res · 2013-07-25T20:50:11.461Z · LW(p) · GW(p)

Thanks. Looks like I was using the word as I intended to.

My point is that humans (who are imperfect decision makers and not in full control of their motivational systems) may actually benefit from changing their terminal goals, even though perfectly rational agents with consistent utility functions never would want to.

Humans are not always consistent, and making yourself consistent can involve dropping or acquiring terminal goals. (Consider a converted slaveowner acquiring a terminal goal of improving quality of life for all humans.)

My original point stems from two observations: Firstly, that many people seem to have lost purposes where their terminal goals should be. Secondly, that some humans may find it difficult to "trick" their goal system.

You may find it easier to achieve "future me is a music star" by sending a version of yourself with different terminal goals (wanting to make music) into the future, as opposed to sending a version of you who makes music for fame's sake. (The assumption here is that the music you make in the former case is better, and that you don't have access to it in the latter case, because humans find it difficult to trick their goal system.)

This is somewhat related to purchasing warm fuzzies. There are some things you cannot achieve by willpower alone. In order to achieve your current terminal goals, you may need to change your terminal goals.

I realize that this is a potentially uncomfortable conclusion, but I reject wedrifid's claim that I was misusing the word.

comment by D_Malik · 2013-07-17T21:29:18.658Z · LW(p) · GW(p)

Get some pot-plants and put a sunlamp on your desk. Then every day is a nice day, and you can stop this "outside" nonsense. :P

Replies from: David_Gerard
comment by David_Gerard · 2013-07-17T23:19:29.423Z · LW(p) · GW(p)

A really bright daylight-spectrum desk lamp does make things lovely.

comment by Metus · 2013-07-15T20:45:17.342Z · LW(p) · GW(p)

Anyone around here familiar with Stoicism and/or cognitive-behavioural therapy? I am reading this book and it seems vaguely like it would be of relevance to this site. Especially the focus on training the mind to make a habit of questioning whether something is ultimately in our control or not.

Also, I am kind of sad that there is nothing around here like a self-study guide that is easily accessible for the public.

And finally, I am confused again and again why there are so many posts about epistemic rationality and so few about instrumental rationality. The former helps me less to win than the latter. Or maybe I am wrong about the purpose of this site.

Post post scriptum: In light of current revelations about the NSA, I would be very happy if this site offered HTTPS to protect passwords and to obfuscate the specific content being viewed.

Replies from: David_Gerard, gothgirl420666, FiftyTwo, RomeoStevens
comment by David_Gerard · 2013-07-15T20:51:52.292Z · LW(p) · GW(p)

As a psychotherapy, CBT is the only one with evidence of working better than just talking with someone for the same length of time. (Not to denigrate the value of attention alone, but e.g. counselors are way cheaper than psychiatrists.) It seems to work well if it's guided, i.e. you have an actual therapist as well as the book to work through.

I don't know how it is for people who aren't coming to it with an actual problem to solve, but rather for self-knowledge as a philosophical end, or to gain the power of hacking themselves.

Replies from: Error
comment by Error · 2013-07-16T11:38:47.497Z · LW(p) · GW(p)

counselors are way cheaper than psychiatrists

Curiosity: How much cheaper?

I've felt like I could benefit from therapy from time to time, but I hate dealing with doctors and insurance.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-16T14:04:30.985Z · LW(p) · GW(p)

Hard to generalise internationally - but non-medical counselors charge like jobs that involve paying attention to someone, whereas psychiatrists charge like specialist doctors (which they are). I was mostly thinking in terms of public funding for medicine, where bang for the buck is an eternal consideration.

comment by gothgirl420666 · 2013-07-15T23:20:04.230Z · LW(p) · GW(p)

And finally, I am confused again and again why there are so many posts about epistemic rationality and so few about instrumental rationality.

Probably because teaching instrumental rationality isn't to the comparative advantage of anyone here. There's already tons of resources out there on improving your willpower, getting rich, becoming happier, being more attractive, losing weight, etc. You can go out and buy a CBT workbook written by a Phd psychologist on almost any subject - why would you want some internet user to write up a post instead?

Out of curiosity, what type of instrumental rationality posts would you like to see here?

Replies from: Metus, Viliam_Bur
comment by Metus · 2013-07-15T23:41:00.518Z · LW(p) · GW(p)

There's already tons of resources out there on improving your willpower, getting rich, becoming happier, being more attractive, losing weight, etc. You can go out and buy a CBT workbook written by a Phd psychologist on almost any subject - why would you want some internet user to write up a post instead?

Then linking to it would be interesting. I can't reasonably review the whole literature (that again reviews academic literature) to find the better or best books on the topics of my interest.

So many self-help books are either crap because their content is worthless or painful to read because they have such a low content-to-word ratio for any reasonable metric. I want just the facts. Take investing as an example: It can be summarized in this one sentence "Take as much money as you are comfortable with and invest it in a broad index fund, taking out money so as to come out with zero money at the moment of your death, unless you want to leave some money behind." And still there is a host of books from professional investors detailing technical analysis of the most obscure financial products.

Out of curiosity, what type of instrumental rationality posts would you like to see here?

Have reading groups reviewing books of interest. Post summaries of books of interest or reviews. Discuss the cutting edge of practical research, if relevant to our lives. This is staying with your observation that most practically interesting stuff is already written.

Moving on, we know about all kinds of biases. We also know that some of those biases are helped by simply knowing about them, some are not. For the latter you need some kind of behavioural change. I do not know about books helping with that.

I know that this post is not precise and it can't be, as it explores what could be. If I knew exactly what I wanted, I would already get it; it is a process of exploring.

Replies from: gothgirl420666, ChristianKl
comment by gothgirl420666 · 2013-07-16T00:00:59.217Z · LW(p) · GW(p)

So many self-help books are either crap because their content is worthless or painful to read because they have such a low content-to-word ratio for any reasonable metric. I want just the facts.

I've found that "just the facts" doesn't really work for self-help, because you need to a) be able to remember the advice b) believe on an emotional, not just rational level that it works and c) be actually motivated to implement the advice. This usually necessitates having the giver of advice drum it into you a whole bunch of different ways over the course of the eight hours or so spent reading the book.

Have reading groups reviewing books of interest. Post summaries of books of interest or reviews. Discuss the cutting edge of practical research, if relevant to our lives. This is staying with your observation that most practically interesting stuff is already written.

One problem with this is that "reviewing" self-help books is hard because ultimately the judge of a good self-help book is whether or not it helps you, and you can't judge that until a few months down the road. Plus there can be an infinity of confounding factors.

But I can see your point. Making practical instrumentality issues more of a theme of the conversation here is appealing to me. Cut down on the discussion of boring, useless things (to me, of course) like Newcomb's problem and utility functions and instead discuss how to be happy and how to make money.

However, I have seen a few people complain about how LessWrong's quality is deteriorating because the discussion is being overrun with "self-help". So not everyone feels this way, for whatever reason.

Replies from: Metus
comment by Metus · 2013-07-16T00:20:25.532Z · LW(p) · GW(p)

I've found that "just the facts" doesn't really work for self-help, because you need to a) be able to remember the advice b) believe on an emotional, not just rational level that it works and c) be actually motivated to implement the advice. This usually necessitates having the giver of advice drum it into you a whole bunch of different ways over the course of the eight hours or so spent reading the book.

Very true and a good observation. My reading of stoic practice informs this further: They had their sayings and short lists of "just the facts" but also put emphasis on their continuous practice. Indeed, my current critique of lesswrong is based on this impression. But to counter your point: I had things like Mister Money Moustache in mind where multiple screen pages are devoted to a single sentence of actual advice. I dislike that just as I don't like Eliezer's roundabout way of explaining things.

One problem with this is that "reviewing" self-help books is hard because ultimately the judge of a good self-help book is whether or not it helps you, and you can't judge that until a few months down the road. Plus there can be an infinity of confounding factors.

This can be helped by stating the criteria in advance. A few of the important criteria, at least for me, are correctness of advice, academic support, high information density and readability. So some kind of judgement can be readily made immediately after reading the book. Or a professional can review the book regarding its correctness.

But I can see your point. Making practical instrumentality issues more of a theme of the conversation here is appealing to me. Cut down on the discussion of boring, useless things (to me, of course) like Newcomb's problem and utility functions and instead discuss how to be happy and how to make money.

However, I have seen a few people complain about how LessWrong's quality is deteriorating because the discussion is being overrun with "self-help". So not everyone feels this way, for whatever reason.

My suggestion is/was to separate the discussion part of lesswrong into two parts: instrumental and epistemic. That way everyone gets their part without reading too much content that is, for them, unnecessary. But people are opposed to something like that, too. Fact is, the community here is changing and something has to be done about that. Usually people are very intelligent and informed around here, so I would love to hear their opinions on issues that matter to me.

Replies from: maia
comment by maia · 2013-07-16T11:58:22.369Z · LW(p) · GW(p)

Maybe we should have an "Instrumental Rationality Books" thread or something, similar to the "best textbooks" thread but with an emphasis on good self-help books or books that are otherwise useful in an everyday way.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-07-18T00:32:10.277Z · LW(p) · GW(p)

That sounds like a good idea. I might make it in the next few days if no one else does.

comment by ChristianKl · 2013-07-16T14:28:03.253Z · LW(p) · GW(p)

Take investing as an example: It can be summarized in this one sentence "Take as much money as you are comfortable with and invest it in a broad index fund, taking out money so as to come out with zero money at the moment of your death, unless you want to leave some money behind."

This assumes that you know when you will die and can predict in advance how interest rates will vary over the future. It also ignores akrasia issues.

comment by Viliam_Bur · 2013-07-19T13:15:34.668Z · LW(p) · GW(p)

why would you want some internet user to write up a post instead?

A group of internet users could discuss an existing book or a group of books, and say for example: "this part worked for me", "this part didn't work for me", "I did this meta action to not forget using this part", "here is a research that disproves one of the assumptions in the book" etc. They don't have to replace the books, just build on them further.

It seems that many books are optimized (more or less successfully) to be bestsellers. A book that actually changes your life will not necessarily be more popular than a book that impresses you and makes you recommend it to your friends, even if your life remains unchanged or if the only change is being more (falsely) optimistic about your future successes.

comment by FiftyTwo · 2013-07-15T23:35:32.080Z · LW(p) · GW(p)

I feel the same way about Stoicism as I do about Buddhism: there's some good stuff, but it's hard to separate out from the accumulated mystical detritus. The advantage of modern psychology is it tends to include the empirically supported parts of these traditions.

As for CBT, I've personally had extremely good experience with Introducing Cognitive Behavioural Therapy: A Practical Guide.

comment by RomeoStevens · 2013-07-15T21:27:17.092Z · LW(p) · GW(p)

There is moodgym.

comment by [deleted] · 2013-07-19T08:24:13.873Z · LW(p) · GW(p)

Hello and welcome to Phoenix Wright: Ace Economist.

Briefly, Phoenix Wright: Ace Attorney is a series of games where you play as Phoenix Wright, an attorney, who defends his client and solves crimes. Using a free online application that lets you make your own trials, I've turned Phoenix Wright into an economist and unleashed him upon the world.

I'm posting it here just in case it interests anyone. The LessWrong crowd is smart and well-educated, and so I'd appreciate any feedback I can get from you fine folk.

Play it here (works best in Firefox):

http://aceattorney.sparklin.org/jeu.php?id_proces=49235

Although I'm using Ace Attorney: Online as a medium of expression here, this is not a normal Phoenix Wright game. This trial is actually intended to explain in a more fun and friendly format the ideas contained in an academic paper I wrote about economics (a paper which has been read by the professional economist and top econ blogger Tyler Cowen, among other people). So while there's testimonies and cross-examinations, you're not really solving a crime here so much as reading a Socratic dialogue of sorts about economics. It's been playtested for bugs, but let me know if you catch anything I missed.

Let it load. The first few frames are supposed to be just black with dialogue, but if they're still that way after the green text with the date and time, just wait till it loads. Parts of the game might look weird because the background will be partially loaded.

Gameplay is simple. You click on the arrow to make the dialogue progress. Don't press too fast or every now and then you'll miss a piece of dialogue. A few times you'll be asked questions. Pick the right answer. Sometimes you'll have to pick the right evidence to present. Pick the correct evidence and click present.

During cross examinations (when the big arrow splits into a two smaller forward and backwards arrows), you can move backwards and forwards between the pieces of the testimony. Press "press" to ask a question. This is always a good idea. Occasionally you'll need to present the right evidence at the right part of the testimony to advance, but be careful: present the wrong evidence at the wrong part of the testimony and you'll incur a penalty. Too many penalties and you lose.

It's still missing music (in my defense, my aged computer is not able to play sound right now, so I couldn't select any music), but I hope that doesn't prevent you from playing the game and learning something from the dialogue.

I don't expect this to interest all of you, but if you find economics interesting give it a go. The worst that happens is that you waste half an hour of your life playing a game on the internet--like you've never done that before, amirite?

I would appreciate any comments you have about improving the trial, gameplay, and writing, as well as what you think about the subject of the Socratic dialogue - both your own thoughts and your comments on the arguments presented in the trial. I particularly need help steelmanning the prosecution.

This is part one of three. The other two parts are in progress. They are similar to the first part but advance and draw out the implications of the argument presented in part one.

Enjoy.

Replies from: ygert
comment by ygert · 2013-07-19T14:21:38.132Z · LW(p) · GW(p)

Cool. I haven't played the Ace Attorney games in a while, but I'll check this out.

comment by folkTheory · 2013-07-18T07:02:22.909Z · LW(p) · GW(p)

I'm trying to decide whether to marry someone, but I'm having a lot of trouble deciding. Anyone have any advice?

Replies from: drethelin, Lumifer, shminux, Dorikka, Eliezer_Yudkowsky, bogdanb
comment by drethelin · 2013-07-18T18:13:40.859Z · LW(p) · GW(p)

1) do you plan on spending a long period of time in a relationship with someone?

2) do you have a job where they will get benefits from being married to you, or vice versa?

3) do you expect to have children or buy property soon?

4) do you hang out with people who care whether or not you're married rather than just a long-term couple?

5) do you expect the other person to ever leave you and take half your stuff?

6) do you want to have a giant ceremony?

7) do you live in a country where you get tax credits or something for being married?

8) do you expect yourself or them to act differently if "married" or not?

9) do you have the money to blow on a wedding?

10) is there any benefit to getting married soon over later? If you expect to be together in several years as a married couple, can you just stay together a year and THEN get married?

These are some useful questions off the top of my head for this situation.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-19T13:57:09.933Z · LW(p) · GW(p)

Don't forget to include the probability of a divorce (use outside view) and likely consequences.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-21T16:14:34.627Z · LW(p) · GW(p)

Ain't that the 5 in drethelin's list?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-21T16:23:24.982Z · LW(p) · GW(p)

Oops, I somehow skipped that one.

comment by Lumifer · 2013-07-18T17:48:04.219Z · LW(p) · GW(p)

Other than in special circumstances, I think marriage is one of those occasions where "having trouble deciding" pretty clearly means "NO".

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2013-07-18T22:14:56.854Z · LW(p) · GW(p)

It could also mean "Not now".

comment by shminux · 2013-07-21T01:08:32.275Z · LW(p) · GW(p)

If in doubt, don't. There is rarely a good reason to formalize a relationship these days until you are absolutely sure that he/she is the one.

comment by Dorikka · 2013-07-20T18:35:25.497Z · LW(p) · GW(p)

You might be interested in the textbook that I recommended here, which includes some general information about patterns in relationships that predict how-long-people-that-are-married-stay-married.

I am aware that I am recommending a 500 page textbook in response to your request for advice, and that this is kinda absurd. I am not familiar enough with the material to be able to (given the amount of effort that I am willing to dedicate to the task) summarize the relevant information for you, but figured that the link would be literally better than nothing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-21T00:50:47.656Z · LW(p) · GW(p)

Are you already married? What do your current spouses say?

Replies from: shminux, folkTheory
comment by shminux · 2013-07-21T01:10:26.221Z · LW(p) · GW(p)

While funny as jests go, your reply sounds rather condescending in the "transhumanists are better than muggles" sort of way. Unless I misunderstand your point.

comment by folkTheory · 2013-07-22T01:07:28.540Z · LW(p) · GW(p)

I am not currently married.

comment by bogdanb · 2013-07-18T17:40:57.871Z · LW(p) · GW(p)

Start with a list :-)

First figure out why you're trying to decide that (the pros) and write it down. Then figure out why you haven't decided yet (the cons) and write those down.

If writing them down isn’t enough, try to figure out a way to put numbers on each item. (Exactly what kind of numbers depends on you, and figuring that is part of the solution.)

If that doesn’t work, then ask for help, with the list.

comment by wedrifid · 2013-07-18T05:47:42.013Z · LW(p) · GW(p)

How credible is the research that inspired this popularisation? The subject is the effect of status on antisocial behaviour and so forth. Nothing seemed particularly surprising to me, but that may be confirmation bias with respect to my general philosophy and way of thinking.

comment by CAE_Jones · 2013-07-18T12:37:47.498Z · LW(p) · GW(p)

So, there's this multiplayer zombie FPS for the blind called Swamp, and the developer recently (as in the past few months) added an AI to help with the massive work of banning troublemakers who use predictable methods to subvert bans. Naturally, a lot of people distrust the AI (which became known as Swampnet), and it makes a convenient scapegoat for all the people demanding to be unbanned (when it turns out that they did indeed violate the user agreement).

In the past 24 hours, several high-status, obviously innocent players started getting banned. I predicted that someone was using their passwords, while everyone else went on about how Swampnet is clearly unreliable. I was tempted to throw around terms like dictionary attack, but decided against making such a specific prediction, especially without fully understanding dictionary attacks myself.

The developer confirmed that someone had been grabbing people's passwords to link them to his (banned) account, which Swampnet uses to treat them as the same person. He also confirmed that the number of tries involved meant the villain was not brute-forcing it, but also that he hadn't hacked the server or intercepted data packets, making him wonder if there isn't some obvious list of passwords being shared or something.

Meta: I probably shouldn't feel as good about outpredicting everyone and wisely avoiding getting too specific as I do. If I'd outpredicted the majority of, say, LWers, then it would feel way more justified, but that community's selection pressures are not directed toward prediction power.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-07-18T12:48:55.047Z · LW(p) · GW(p)

Addendum: I reread the discussion, and I treated the first one as a possible bug, but after the second clearly innocent banning, I decided it must be a hacker, and even jumped online a few times to see if I'd been hit. Posting only because I was afraid I'd used the word immediately in referring to my prediction (I did, and edited it out accordingly), when it took two datapoints for me to update to the successful prediction.

comment by Stabilizer · 2013-07-17T01:05:47.979Z · LW(p) · GW(p)

Has anyone read Dennett's Intuition Pumps? I'm thinking of reading it next. The main thing I want to know: does he offer new ways of thinking which one can actually apply while thinking about (a) everyday situations and (b) math and physics (which is my work).

Replies from: palladias, fubarobfusco
comment by palladias · 2013-07-18T00:58:27.972Z · LW(p) · GW(p)

Read and reviewed. I'd get it from a library and take a few notes, but not buy it. The book is a mix of practical habits for everyday situations, explanations of how computers and algorithms work, and high-level problems in philosophy of consciousness.

If you're simply looking for better ways to use thought experiments in everyday life, you can bail out after the first few sections.

Replies from: Stabilizer
comment by Stabilizer · 2013-07-18T04:46:31.610Z · LW(p) · GW(p)

Thanks! Your review was very helpful. Especially when you pointed out that the examples he uses to demonstrate his intuition pumps are in highly abstract and non-everyday scenarios. That was exactly what I was worried about: even if I pick up a more sophisticated vocabulary to handle ideas, I'd have to try to come up with many examples myself in order to internalize it (though, it'd probably be worth it).

comment by fubarobfusco · 2013-07-17T01:58:40.799Z · LW(p) · GW(p)

I'm only about one-quarter of the way into it. So I'm not so sure about your questions; but I expect that I'd suggest it as a more-philosophical, less-empirical companion to Kahneman's Thinking, Fast and Slow as an introduction to This Sort Of Thing. A lot of it does seem to have the summary nature, which is review for anyone not new to the subject; for instance, there's yet another intro to Conway's Life in (IIRC) one of the appendices. But it's intended as an introductory book.

I can imagine a pretty good undergraduate "philosophy, rationality, and cognition" course using this book and Kahneman (among others). A really interesting course might use those two, Drescher's Good and Real, and maybe Gary Cziko's Without Miracles to cover evolutionary thinking ....

comment by FiftyTwo · 2013-07-21T00:14:54.141Z · LW(p) · GW(p)

Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.

comment by [deleted] · 2013-07-16T14:02:50.229Z · LW(p) · GW(p)

Note: The following post is a cross of humor and seriousness.

After reading another reference to an AI failure, it seems to me that almost every "The AI is an unfriendly failure" story begins with "The Humans are wasting too many resources, which I can more efficiently use for something else."

I felt like I should also consider potential solutions that look at the next type of failure. My initial reasoning is: Assuming that a bunch of AI researchers are determined to avoid that particular failure mode and only that one, they're probably going to run into other failure modes as they attempt (and probably fail) to bypass that.

For instance: AI Researchers build an AI that gains utility roughly equivalent to the Square Root(Median Human Profligacy) times Human Population times Time, and is dumb about Metaphysics, and has a fixed utility function.

It's not happier if the top Human doubles his energy consumption. (Note: Median Human Profligacy)

It's happier, but not twice as happy, when Humans are using twice as many petawatt-hours per year. (Note: Square Root: this also helps prevent "one human kills all other humans from space and sets the earth on fire" from being a good use of energy. That skyrockets the Median, but it does not skyrocket the Square Root of the Median nearly as much.)

It's five times as happy if there are five times as many Humans, and ten times as happy when Humans are using the same amount of energy per year for 10 years as opposed to just 1.
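
For concreteness, here is a minimal sketch of that utility function in Python (the function name and toy numbers are mine, not part of the proposal), checking the three notes above numerically:

```python
import statistics

def ai_utility(energy_use, years=1):
    # sqrt(median per-person energy use) * population * time
    return statistics.median(energy_use) ** 0.5 * len(energy_use) * years

base = [10, 10, 10, 10]
# The top person doubling their consumption doesn't move the median, so no change:
print(ai_utility([80, 10, 10, 10]) == ai_utility([40, 10, 10, 10]))   # True
# Doubling everyone's energy only scales utility by sqrt(2), not 2:
print(ai_utility([e * 2 for e in base]) / ai_utility(base))           # ~1.41
# Five times the population is five times the utility:
print(ai_utility(base * 5) / ai_utility(base))                        # 5.0
# Ten years of the same consumption is ten times the utility:
print(ai_utility(base, years=10) / ai_utility(base))                  # 10.0
```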

Dumb about metaphysics is a reference to the following type of AI failure: "I'm not CERTAIN that there are actually billions of Humans, we might be in the matrix, and if I don't know that, I don't know if I'm getting utility, so let me computronium up earth really quick just to run some calculations to be sure of what's going on." Assume the AI just disregards those kinds of skeptical hypotheses, because it's dumb about metaphysics. Also assume it can't change its utility function, because that's just too easy to combust.

As I stated, this AI has bunches of failure modes. My question is not "Does it fail?" but "Does it even sound like it avoids having 'eat humans, make computronium' be the most plausible failure? If so, what sounds like a plausible failure?"

Example Hypothetical Plausible Failure: The AI starts murdering environmentalists because it fears that environmentalists will cause an overall degradation in Median human energy use that will lower overall AI utility, and environmentalists also encourage less population growth, which further degrades AI utility; and while the AI does value the environmentalists' own energy consumption, which boosts utility, they're environmentalists, so they have a small energy footprint, and it doesn't value not murdering people in and of itself.

After considering that kind of solution, I went up and changed 'my reasoning' to 'my initial reasoning', because at some point I realized I was just having fun considering this kind of AI failure analysis and had stopped actually trying to make a point. Also, as Failed Utopia 4-2 (http://lesswrong.com/lw/xu/failed_utopia_42/) shows, designing more interesting failures can be fun.

Edit for clarity: I AM NOT IMPLYING THE ABOVE AI IS OR WILL CAUSE A UTOPIA. I don't think it could be read that way, but just in case there are inferential gaps, I should close them.

Replies from: Martin-2, Armok_GoB
comment by Martin-2 · 2013-07-17T00:53:21.914Z · LW(p) · GW(p)

it seems to me that almost every "The AI is an unfriendly failure" story begins with "The Humans are wasting too many resources, which I can more efficiently use for something else."

Really? I think the one I see most is "I am supposed to make humans happy, but they fight with each other and make themselves unhappy, so I must kill/enslave all of them". At least in Hollywood. You may be looking in more interesting places.

Per your AI, does it have an obvious incentive to help people below the median energy level?

Replies from: None
comment by [deleted] · 2013-07-17T14:03:48.041Z · LW(p) · GW(p)

Really? I think the one I see most is "I am supposed to make humans happy, but they fight with each other and make themselves unhappy, so I must kill/enslave all of them". At least in Hollywood. You may be looking in more interesting places.

To me, that seems like a very similar story; it's just that they're wasting their energy on fighting/unhappiness. I just thought I'd attempt to make an AI that thinks "Humans wasting energy? Under some caveats, I approve!"

Per your AI, does it have an obvious incentive to help people below the median energy level?

I made a quick sample population (8 people, using 100, 50, 25, 13, 6, 3, 2, 1 energy, assuming only one unit of time) and ran some numbers to consider incentives.

The AI got around 5.8 utility from taking 50 energy from the top person, giving 10 energy each to the bottom 4, and just assuming that the remaining 10 energy either went unused or was used as a transaction cost. However, the AI did also get about 0.58 more utility from killing any of the four bottom people (even assuming their energy vanished).

Of note, roughly doubling the size of everyone's energy pie does get a greater amount of utility than either of those two things (roughly 10.2), except that they aren't exclusive: you can double the pie and also redistribute the pie (and also kill people who would eat the pie in such a way as to drag down the Median).

Here's an even more bizarre note: when I quadrupled the population (giving the same distribution of energy, so four people each at 100, 50, 25, 13, 6, 3, 2, and 1), the algorithm gained plenty of additional utility. However, the amount of utility the algorithm gained by murdering the bottom person skyrocketed (to around 13.1), because while it would still move the Median from 9.5 to 13, the square root of that Median is now multiplied by a much greater population. So, if for some reason the energy gap between the person right below the Median and the person right above the Median is large, the AI has a significant incentive to murder 1 person.
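
A quick sketch that reproduces the numbers above from the sample population (my code, not the original calculation):

```python
import statistics

def ai_utility(energy, years=1):
    # sqrt(median energy use) * population * time, as described in the parent comment
    return statistics.median(energy) ** 0.5 * len(energy) * years

pop8 = [100, 50, 25, 13, 6, 3, 2, 1]
base = ai_utility(pop8)                                      # ~24.7

# Take 50 from the top person, give 10 each to the bottom four, 10 lost in transit:
print(ai_utility([50, 50, 25, 16, 13, 13, 12, 11]) - base)   # ~5.8

# Kill the bottom person, their energy vanishing:
print(ai_utility(pop8[:-1]) - base)                          # ~0.58

# Double everyone's energy pie:
print(ai_utility([e * 2 for e in pop8]) - base)              # ~10.2

# Quadruple the population, then kill one bottom person:
pop32 = pop8 * 4
print(ai_utility(pop32[:-1]) - ai_utility(pop32))            # ~13.1
```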

In fact, the way I set it up, the AI even has an incentive to murder the bottom 9 people to get the Median up to 25... but not very much, and each person it murders before the Median shifts is a substantial disutility. The AI would have gained more utility by just implementing the "Tax the 100's" plan I gave earlier than by instituting either of those two plans, but again, they aren't exclusive.

I somehow got: Murder can be justified, but only of people below the median, and only in those cases where it Jukes the median sufficiently, and in general helping them by taking from people above the median is more effective, but you can do both.

A smoother distribution of energy expenditures in the population of 32 appeared to keep this problem from happening. Given a smoother energy distribution, the median does not jump by so much when a bottom person dies, and murdering bottom people goes back to causing disutility.

However, I have to admit that in terms of Novel ways an algorithm could fail, I did not see the above coming: I knew it was going to fail, but I didn't realize it might also fail in such an oddly esoteric manner in addition to the obvious failure I already mentioned.

Thank you for encouraging me to look at this in more detail!

Replies from: bogdanb
comment by bogdanb · 2013-07-18T18:14:28.817Z · LW(p) · GW(p)

Note that killing people is not the only way to raise the median. Another technique is taking resources and redistributing them. The optimal first-level strategy is to only allow minimum-necessary-for-survival to those below the median (which, depending on what it thinks “survival” means, might include just freezing them, or cutting off all unnecessary body parts and feeding them barely nutritious glop while storing them in the dark), and distribute everything else equally between the rest.

Also, given this strategy, the median of human consumption is 2×R/(N-1), where R is the total amount of resources and N is the total number of humans. The utility function then becomes sqrt(2×R/(N-1)) × N × T. Which means that for the same resources, its utility is maximized if the maximum number of people use them. Thus, the AI will spend its time finding the smallest possible increment above “minimum necessary for survival”, and maximize the number of people it can sustain, keeping (N-1)/2 people at the minimum and (N-1)/2+1 just a tiny bit above it, and making sure it does this for the longest possible time.
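
A quick numerical check of that last claim, using the median approximation from this comment (a sketch; the function name and example numbers are mine):

```python
def utility(R, N, T=1):
    # median consumption ~ 2*R/(N-1) under the redistribution strategy above
    return (2 * R / (N - 1)) ** 0.5 * N * T

R = 1000  # fixed pool of resources
for N in (10, 100, 1000, 10000):
    print(N, round(utility(R, N), 1))
# prints roughly 149, 449, 1415, 4472: utility grows roughly as sqrt(2*R*N),
# so more people on the same resources wins
```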

comment by Armok_GoB · 2013-07-16T16:37:18.876Z · LW(p) · GW(p)

Well, even if it turned out to do exactly what its designers were thinking (which it won't), it'd still be unfriendly, for the simple reason that no remotely optimal future is likely to involve humans with big energy consumption. The FAI almost certainly should eat all humans for computronium; the only difference is that the friendly one will scan their brains first and make emulations.

Replies from: None
comment by [deleted] · 2013-07-17T14:08:21.161Z · LW(p) · GW(p)

You get an accurate prediction point for guessing that it wouldn't do what its designers were thinking: even if the designers assumed it would kill environmentalists (and so assumed it was flawed), a more detailed look, as Martin-2 encouraged me to do, found that it also treats murder as a utility benefit in at least some other circumstances.

comment by Larks · 2013-07-20T19:51:42.621Z · LW(p) · GW(p)

The Good Judgement Project is using the Brier score to rate participants' forecasts. This is not LW's usual preferred scoring system (negative log odds); Brier is much more forgiving of incorrect assignments of 0 probability. I checked the maths, and your expected score is still minimised by honestly reporting your subjective probabilities, but are there any more subtle ways to game the system?
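
For a rough sense of the difference between the two rules, here is a small sketch comparing how each penalises a confident miss (not the GJP's exact scoring code, just the textbook definitions):

```python
import math

def brier(p, outcome):
    # squared error between the forecast probability and the 0/1 outcome
    return (p - outcome) ** 2

def neg_log(p, outcome):
    # negative log of the probability assigned to what actually happened
    return -math.log(p if outcome == 1 else 1 - p)

# A 50% forecast versus a 1% forecast, when the event happens anyway:
print(brier(0.5, 1), brier(0.01, 1))      # 0.25 vs ~0.98 -- bounded at 1
print(neg_log(0.5, 1), neg_log(0.01, 1))  # ~0.69 vs ~4.6 -- and unbounded as p -> 0
```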

Replies from: gwern
comment by gwern · 2013-07-20T20:53:29.048Z · LW(p) · GW(p)

Perhaps it encourages one to make long-shot bets? If you aren't penalized too badly for P=0 events happening, this suggests that short-selling contracts at ~1% may be better than it looks.

comment by NancyLebovitz · 2013-07-20T15:15:46.241Z · LW(p) · GW(p)

Is there a name for the bias of assuming that information can just happen, rather than having to be derived by someone using some means?

Replies from: None, Vaniver, shminux, Eliezer_Yudkowsky
comment by [deleted] · 2013-07-20T18:13:56.929Z · LW(p) · GW(p)

You might be after the 'myth of the given', which is Wilfrid Sellars' coinage in Empiricism and the Philosophy of Mind. 'Given' is just the English translation of 'datum', and so the claim is something like 'It is a myth that there is any such thing as pure data.'

The slightly more complicated point is that foundationalist theories of empiricism (for example) involve the claim that while most knowledge is justified by inferences of some kind, there is a foundation of knowledge that is justified simply by the way we get it (e.g. through the senses, intellectual intuition, etc.). Sellars argues that no such foundation is possible, and so far as I can tell his argument is more or less accepted today, for whatever that's worth.

comment by Vaniver · 2013-07-20T16:09:07.626Z · LW(p) · GW(p)

Hm. One interpretation sounds like the philosophical position of a priori knowledge,* but you might mean knowledge existing independent of a mind, which I don't know of a shorter phrase to describe.

*I think this is actually somewhat well validated, under the name of "instinct," and humans appear to have lots of instincts.

Replies from: NancyLebovitz, None
comment by NancyLebovitz · 2013-07-20T16:33:19.931Z · LW(p) · GW(p)

One example would be that people tend to think that their senses automatically give them information, while in fact the senses and their interpretation are a very complex process.

Another would be (from what Root-Bernstein says) that very good scientists are fascinated by their tools-- they're the ones who know that the tool might not be measuring what they think it's measuring.

Replies from: None
comment by [deleted] · 2013-07-20T17:49:51.689Z · LW(p) · GW(p)

One example would be that people tend to think that their senses automatically give them information, while in fact the senses and their interpretation are a very complex process.

And indeed, to capture this notion is why Kant made the distinction between analytic and synthetic a priori knowledge in the first place.

comment by [deleted] · 2013-07-20T17:57:11.572Z · LW(p) · GW(p)

*I think this is actually somewhat well validated, under the name of "instinct," and humans appear to have lots of instincts.

Instincts wouldn't be a case of a priori knowledge, I think just because they couldn't be considered a case of knowledge. But at any rate, 'a priori' doesn't mean 'innate', or even 'entirely independent of experience'. A priori knowledge is knowledge the truth of which does not refer to any particular experience or set of experiences. This doesn't imply anything about whether or not it's underived or anything like that: most people who take a priori knowledge to be a thing would consider a mathematical proof a case of a priori justification, and those are undoubtedly derived by some particular person at some particular time using some particular means. (I'm not endorsing the possibility of a priori knowledge, just trying to clarify the idea).

comment by shminux · 2013-07-20T19:48:14.370Z · LW(p) · GW(p)

Seems like a version of the Illusion of transparency (possibly in reverse):

The illusion of transparency is a tendency for people to overestimate the degree to which their personal mental state is known by others. Another manifestation of the illusion of transparency (sometimes called the observer's illusion of transparency) is a tendency for people to overestimate how well they understand others' personal mental states.

What you describe is more like "a tendency for people to overestimate the degree to which" their senses are accurate and assume that they are a true representation of external reality.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-20T15:18:54.094Z · LW(p) · GW(p)

Second the question.

comment by erratio · 2013-07-18T23:58:14.577Z · LW(p) · GW(p)

Anyone have a good recommendation for an app/timer that goes off at pseudo-random (not too short - maybe every 15 min to an hour?) intervals? Someone suggested to me today that I would benefit from a luminosity-style exercise of noting my emotions at intervals throughout the day, and it seems like something I ought to automate as much as possible.
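
If nothing off the shelf fits, a minimal sketch of such a prompt-and-log loop (Python; the file name and interval are just my guesses at what you described) might look like:

```python
import random
import time
from datetime import datetime

# Prompt at a pseudo-random interval between 15 and 60 minutes,
# then append the response to a plain text log.
while True:
    time.sleep(random.uniform(15 * 60, 60 * 60))
    note = input(f"[{datetime.now():%H:%M}] How are you feeling right now? ")
    with open("emotion_log.txt", "a") as log:
        log.write(f"{datetime.now().isoformat()}\t{note}\n")
```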

Replies from: Watercressed
comment by Watercressed · 2013-07-19T00:28:17.350Z · LW(p) · GW(p)

It takes a bit of work to set up, but Tagtime does both the notifications and the logging

Replies from: erratio
comment by erratio · 2013-07-19T11:51:11.610Z · LW(p) · GW(p)

Thanks! Downloaded it, will report back after trying for a bit

comment by Oscar_Cunningham · 2013-07-16T11:21:20.287Z · LW(p) · GW(p)

Why do the maps of meetups on the front page and on the meetups page differ? Why does neither of them show the regular meetups?

comment by gothgirl420666 · 2013-07-15T23:12:42.265Z · LW(p) · GW(p)

Does anyone know anything about yoga as a spiritual practice (as opposed to exercise or whatever)? I get the sense that it's in the same "probably works" category as meditation and I'd be interested in learning more about it, but I don't know where to start, and I feel like there's probably "real" yoga and "pop" yoga that I need to be able to differentiate between.

Also, I can't sit in any of the standard meditation positions - I can only do maybe five minutes indian-style before I get intense pain. When I ask people how to remedy this, they tell me "do yoga", but aren't any more specific than that.

If someone knowledgeable could point me towards a good starting point or a resource, that would be great.

Replies from: ChristianKl, NancyLebovitz, Metus
comment by ChristianKl · 2013-07-16T14:19:17.803Z · LW(p) · GW(p)

If someone knowledgeable could point me towards a good starting point or a resource, that would be great.

A local yoga course. Having a teacher that can tell you what you are doing wrong is very valuable.

When it comes to meditation the same applies. Go to a local Buddhist temple and let them guide you in learning meditation.

comment by NancyLebovitz · 2013-07-16T01:10:09.410Z · LW(p) · GW(p)

Taoist meditation is done either standing or sitting in a chair.

Source: I've read a moderate amount about this, so there may be exceptions.

I did standing meditation from Lam Kam Chuen's The Way of Energy for a while, and cleared up a case of RSI.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-07-16T01:35:39.582Z · LW(p) · GW(p)

I know that meditation is possible while sitting in a chair, and I do it about half the time (the other half I sit on the ground sort of like this, just because I like it). I kind of want to be able to do it the standard way so I can fulfill an irrational urge to "feel like a real Buddhist", which I think would motivate me.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-16T14:16:30.215Z · LW(p) · GW(p)

I kind of want to be able to do it the standard way so I can fulfill an irrational urge to "feel like a real Buddhist", which I think would motivate me.

This is deeply funny. Buddhism is about getting rid of urges.

Secondly, seiza is also a position in which a lot of Buddhists meditate, and sitting that way is usually easier.

Thirdly, it seems like you are somehow trying to do Buddhism on your own without a teacher, when having an in-person teacher is a core element of Buddhism.

comment by Metus · 2013-07-15T23:27:21.399Z · LW(p) · GW(p)

Also, I can't sit in any of the standard meditation positions - I can only do maybe five minutes indian-style before I get intense pain. When I ask people how to remedy this, they tell me "do yoga", but aren't any more specific than that.

Go see a doctor and don't leave until you get a specific diagnosis or treatment.

Replies from: Jayson_Virissimo, gothgirl420666, NancyLebovitz
comment by Jayson_Virissimo · 2013-07-17T02:18:55.105Z · LW(p) · GW(p)

Go see a doctor and don't leave until you get a specific diagnosis or treatment.

Careful. Sometimes the treatment can be worse than the disease.

comment by gothgirl420666 · 2013-07-16T00:05:47.016Z · LW(p) · GW(p)

Are you implying that something is very wrong with me if I can't sit Indian style and that I should see a doctor right away, or are you just saying that this would be an effective way to solve my problem?

Replies from: Metus, NancyLebovitz
comment by Metus · 2013-07-16T00:12:27.057Z · LW(p) · GW(p)

Effective way. You obviously have some kind of problem that other people don't have, one that gives you discomfort without any obvious way to solve it. Seeing a doctor helps to rule out some underlying organic problem. I don't know about very wrong, but being able to sit Indian style for only five minutes seems very low.

Replies from: gothgirl420666, NancyLebovitz
comment by gothgirl420666 · 2013-07-16T00:22:41.477Z · LW(p) · GW(p)

Oh okay, for some reason when I first read your comment I got a sense of urgency from it. Thanks for clarifying.

comment by NancyLebovitz · 2013-07-16T01:11:05.399Z · LW(p) · GW(p)

I've seen yoga books which explain how to ease into sitting in full lotus.

Replies from: Metus
comment by Metus · 2013-07-16T01:57:01.204Z · LW(p) · GW(p)

I was going on the description of "intense pain". I know from personal experience that you need to ease into the lotus, but I never experienced anything I would describe as "intense pain", at most "mild to moderate discomfort" after five minutes. Anyway, gothgirl420666 was having a problem without any obvious solutions, as evidenced by the lack of solutions proposed by his peers, so I suggested paying a visit to a professional with extensive domain knowledge.

comment by NancyLebovitz · 2013-07-16T16:35:52.972Z · LW(p) · GW(p)

When you say "Indian style" do you mean with your feet under your thighs or on top of them?

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-07-16T20:35:10.972Z · LW(p) · GW(p)

Under.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-17T00:03:47.483Z · LW(p) · GW(p)

That's more of a physical limitation than I first interpreted you as meaning. Still, I'm not going to put it in the "OMG, must be solved" category.

Feldenkrais Method (an approach of gentle, repeated movements to increase physical awareness and coordination) might be a good idea. Somatics by Thomas Hanna has a daily cat stretch which takes about ten minutes to do and, as I recall, about two hours to learn.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-19T22:20:02.619Z · LW(p) · GW(p)

A little extra explanation: I've found that knee and hip problems can actually be a result of a tight lower back, and Feldenkrais can help.

comment by NancyLebovitz · 2013-07-16T16:35:15.387Z · LW(p) · GW(p)

What kind of a doctor?

comment by [deleted] · 2013-07-22T08:07:55.069Z · LW(p) · GW(p)

What does the 'add a friend' feature on this site actually do?

Replies from: arundelo, Richard_Kennaway, Kawoomba
comment by arundelo · 2013-07-22T09:08:11.070Z · LW(p) · GW(p)

Controls whose posts appear at http://lesswrong.com/r/friends/ . (Only posts are shown, not comments.)

comment by Richard_Kennaway · 2013-07-22T12:23:02.830Z · LW(p) · GW(p)

I never noticed it until now. I'm curious to know how many people use it.

comment by Kawoomba · 2013-07-22T08:47:32.527Z · LW(p) · GW(p)

Adds a friend.

comment by linkhyrule5 · 2013-07-20T20:42:22.113Z · LW(p) · GW(p)

Running an interest check for an "Auto-Bayes."

Something I've noticed when reading articles on the web is that I occasionally run across the same beliefs, but have completely forgotten my last assigned probability - my current prior. In order to avoid this, I'm writing a program that keeps track of a database of beliefs and current priors, with automated Bayes updating. If nothing else, it'll also make it easier to get statistics on how accurate my predictions are, and keep me honest.

Anyway, I got halfway started and realized that this might be something other people might be interested in, so: interest check!
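
For anyone wondering what the core of such a program might involve, here is a minimal sketch (my own illustration, not linkhyrule5's actual code). The BeliefTracker class, the JSON file name, and the likelihood-ratio interface are all hypothetical choices; the update rule is just posterior odds = prior odds × likelihood ratio.

    import json

    class BeliefTracker:
        """Toy belief database with odds-ratio Bayesian updating (illustrative sketch only)."""

        def __init__(self, path="beliefs.json"):
            self.path = path
            try:
                with open(path) as f:
                    self.beliefs = json.load(f)  # {statement: probability}
            except FileNotFoundError:
                self.beliefs = {}

        def update(self, statement, likelihood_ratio):
            """Update P(statement), where likelihood_ratio is
            P(evidence | statement) / P(evidence | not statement)."""
            prior = self.beliefs.get(statement, 0.5)   # default to maximum ignorance
            prior_odds = prior / (1 - prior)
            posterior_odds = prior_odds * likelihood_ratio
            posterior = posterior_odds / (1 + posterior_odds)
            self.beliefs[statement] = posterior
            return posterior

        def save(self):
            with open(self.path, "w") as f:
                json.dump(self.beliefs, f, indent=2)

    # Usage: look up the stored prior, multiply the odds by the likelihood ratio.
    tracker = BeliefTracker()
    tracker.update("it will rain tomorrow", likelihood_ratio=3.0)
    tracker.save()

Accuracy statistics would then just be a matter of logging each stated probability alongside the eventual outcome and computing, say, a calibration curve or Brier score over the log.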

comment by Qiaochu_Yuan · 2013-07-18T18:06:20.769Z · LW(p) · GW(p)

Do animal altruists regard zoos as a major contributor to animal suffering? Or do the numbers not compare when matched up against factory farming and the like?

Replies from: tim
comment by tim · 2013-07-18T19:07:05.852Z · LW(p) · GW(p)

While I don't know what animal altruists think, these statistics might give an (extremely) rough idea of the numbers.

(The second one is only cattle and doesn't distinguish between humane and inhumane conditions, though 80-90% of cattle are in feedlots with >1,000 head, so you could draw some order-of-magnitude comparisons.)

comment by CAE_Jones · 2013-07-16T06:50:09.211Z · LW(p) · GW(p)

(Longpost warning; I find myself wondering if I shouldn't post it to my livejournal and just link it here.)

A few hours shy of a week ago, I got a major update to my commercial game up to releasable standards. When I got to the final scene, I was extremely happy--on a scale of 1=omnicidally depressed to 10=wireheading, possibly pushing 9 (I've tried keeping data on happiness levels in April/May and determined that I'm not well calibrated for determining the value of a single point).

That high dwindled, of course, but for about 24 hours it kept up pretty well.

Since then, I've been thoroughly unable to find anything I feel motivated enough to actually work on. I've come close on a couple projects, but nothing ever comes of them. So for the most part, the past week has been right back into the pits of despair. If I'm not noticeably accomplishing something, I'm averaging 3-4 or so on the above scale (I haven't been recording hourly data in the past week). Mostly, the times when I manage to get up around 5-6 are when I'm able to go off and think about something; when I actually try to do anything on the computer, it all drops rapidly.

So far, my method for finding something to work on has been pretty feeble. "Seek out something among the projects we've already identified as worth pursuing; if failed, let mind wander and hope something sticks." The major update that I managed to work on for the previous two or so weeks arose from an idea not among any of the projects I had in mind (in a round-about way, it came from someone's Facebook status); more ideas grew from it, until I decided to just add them to the existing game, since they fit there about as well as in something new, and would force me to make some long-needed improvements.

That game itself had its origins in a similar situation; I was trying to work on a different but related project, and complained about the impenetrable Akrasiatic barrier to the very same person whose status spawned the recent updates. He made a vague suggestion, I was able to start on it, and the project grew out of that, and was easy enough to edit that it continued expanding.

This does seem to apply primarily to game development; music/fiction don't seem to follow this trend that I've noticed. At most, I wind up defining a few classes for what I want to work on, and in the best cases make some menus but don't really do much if any testing of the game's engine. The things that do get done are usually just tiny, non-serious things done on a whim that can evolve into something more serious if the earliest results are pleasant enough.

This sucks and I want to change it and have no idea how to do so. Accomplishment = superhappy and unpredictable, non-accomplishment = depressedly coasting until something happens. Success spirals only seem to work over a very brief interval mid-awesome, assuming I can be distracted from said awesome long enough to do something else worthwhile (as happened the first time I marathoned HPMoR and The Motivation Hacker; it's much harder to get a success spiral out of awesome spawned from work, since I'm much less willing to take the risk of turning away from the work for any longer than it takes to remain functional).

Replies from: maia
comment by maia · 2013-07-16T12:04:17.620Z · LW(p) · GW(p)

Just trying to think of some possible ideas...

Since then, I've been thoroughly unable to find anything I feel motivated enough to actually work on.

How much time, by the clock, have you spent trying to think of different things you could be doing? If you haven't, it could be helpful to just sit down and brainstorm as much stuff as you can.

Also, maybe doing something fairly easy but that seems "productive" could be helpful in starting a success spiral getting you back up to your previous speeds; possibly online code challenges or something like that.

Or maybe you should be trying to draw on other things that could make you happier, like hanging out with friends.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-07-17T04:19:37.337Z · LW(p) · GW(p)

How much time, by the clock, have you spent trying to think of different things you could be doing?

I haven't committed any numbers to memory, but my time is mostly divided between trying to think my way to doing something and trying to avoid drowning in frustration by wasting time on the internet. Just judging by how today has gone so far, it seems to be roughly 1:2 or 1:3 in favor of wasting time. I did briefly turn off the internet at one point, and that seemed to help some, although I still didn't manage to make good use of that time.

Or maybe you should be trying to draw on other things that could make you happier, like hanging out with friends.

I have no such opportunities of which I am aware.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-17T16:50:45.895Z · LW(p) · GW(p)

I recommend poking around in your mind to find out what's actually in your mind, especially when you're considering taking action. I've found it helpful to find out what's going on before trying to make changes.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-07-18T11:05:22.110Z · LW(p) · GW(p)

I tried to follow this, though I'm not sure I did it in quite the way you meant, and I realized something potentially useful, then immediately--after staying focused on the introspection task for quite some time--wound up wandering off to think about Harry Potter and other things not at all useful to solving the problem. I can only assume my brain decided that the epiphany was sufficient and we were free to cool down.

Anyway, this does seem like a useful direction for now, so thanks!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-19T13:46:57.204Z · LW(p) · GW(p)

I'm glad my suggestion helped.

I'm not sure what you thought I meant, but there might be an interesting difference between finding out what's going on at the moment vs. finding out what one's habits are-- I've had exploration work out both ways.

comment by shminux · 2013-07-20T19:39:03.101Z · LW(p) · GW(p)

What's the difference between a simulation and a fiction?

comment by Rukifellth · 2013-07-16T10:03:03.782Z · LW(p) · GW(p)

I posted this in the previous open thread, and would like to carry on the discussion into this thread. As before, I regard this entire subject as a memetic hazard, and will rot13 accordingly. Also, if you're going to downvote it, at least tell me why; karma means nothing to me, even in increments of 5, but it makes others less likely to respond.

Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:

... gung gurer vf bayl bar crefba va gur havirefr, lbh, naq rirelbar lbh frr nebhaq lbh vf ernyyl whfg lbh.

Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.

V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhnyvfz, be whfg nethzragf ntnvafg Pybfrq naq Rzcgl Vaqvivqhnyvfz gung yrnir BV nf gur bayl nygreangvir. Vpbcb Irggbev rkcynvarq vg yvxr guvf:

PV pnaabg znantr fngvfsnpgbevyl gur "pbagvahvgl ceboyrz" (jung znxrf lbh gb pbagvahr gb erznva lbh va gvzr). Guvf vf jul va "Ernfba naq Crefbaf", Qrerx Cnesvg cebcbfrq RV nf n fbyhgvba. Va "V Nz Lbh", Qnavry Xbynx cebcbfrq BV, fubjvat gung grpuavpnyyl gurl ner rdhvinyrag. Fb pubbfvat orgjrra RV naq BV frrzf gb or n znggre bs crefbany gnfgr. Znlor gurve qvssreraprf zvtug or erqhprq gb n grezvabybtl ceboyrz. Bgurejvfr, V pbafvqre BV zber fgebat orpnhfr vg pna rkcynva jung V pnyyrq "gur vaqvivqhny rkvfgragvny ceboyrz" [Jung jr zrna jura jr nfx bhefryirf "Pbhyq V unir arire rkvfgrq?"]

Gur rrevrfg cneg nobhg gur Snprobbx tebhc "V Nz Lbh: Qvfphffvbaf va Bcra Vaqvivqhnyvfz" vf gung gur crbcyr va gung tebhc gerng gur pbaprcg bs gurer orvat bayl bar crefba gur fnzr jnl gung Puevfgvnaf gerng gur pbaprcg bs n Tbq gung jvyy qnza gurve ybirq barf gb Uryy sbe abg oryvrivat va Uvz. Vg'f nf vs ab bar va gur tebhc ernyvmrf gur frpbaq yriry vzcyvpngvbaf bs gurer abg orvat nalbar ryfr, be znlor gurl qba'g rira pner.

comment by Thomas · 2013-07-18T10:44:27.854Z · LW(p) · GW(p)

Another day, another (controversial) opinion!

http://protokol2020.wordpress.com/2013/07/17/is-p-np/

Replies from: asr, David_Gerard
comment by asr · 2013-07-18T16:02:24.981Z · LW(p) · GW(p)

I think this misunderstands the state of modern complexity theory.

There are lots of NP-complete problems that are well known to have highly accurate approximations that can be computed efficiently. The knapsack problem and traveling-salesperson in 2D Euclidean space are both examples of this. Unfortunately, having an epsilon-close approximation for one NP-complete problem doesn't necessarily help you on other NP-complete problems.

There's nothing particularly magic about evolutionary algorithms here. Any sensible local search will often work well on instances of NP-complete problems.
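
To make that concrete, here is a minimal sketch (mine, not asr's, and not tuned for anything) of plain 2-opt local search on a random Euclidean TSP instance. It knows nothing about the problem beyond "shorter tours are better", yet on small instances it usually lands quite close to optimal.

    import random, math

    def tour_length(points, tour):
        return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(points, tour):
        """Keep reversing segments of the tour while doing so makes it shorter."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(points, candidate) < tour_length(points, tour):
                        tour, improved = candidate, True
        return tour

    random.seed(0)
    points = [(random.random(), random.random()) for _ in range(30)]
    tour = two_opt(points, list(range(len(points))))
    print(tour_length(points, tour))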

Replies from: Thomas
comment by Thomas · 2013-07-18T16:36:19.012Z · LW(p) · GW(p)

Oh, yes! The evolutionary algorithm is not the only way, and certainly not the magic way. It's just an example of how to sometimes cheat and get a good result for an NP problem. It's the best cheater I know, but likely not the only one.

Sometimes we can guess the answer. Sometimes we can roll the Monte Carlo. Simulated annealing is another way to "steal the NP gods' fire". And sometimes the error of our approximation might even be zero!

But the main point of my article is the innovating aspect of the evolutionary algorithm: it delivers unforeseen solutions, of which this 249-circles-in-a-square packing is one of many. Humans are then the ones who do the fine tuning on the basis of these EA solutions. They do the routine job of refining, after the EA has made the fundamental innovation.

Normally I wouldn't mind. There is no thin red line between improvement and innovation; people just imagine one. But since they do, here they can see how humans and computers are both on the wrong sides of that discrimination line, as if they have swapped their "natural places".
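
Since simulated annealing keeps coming up: it really is only a handful of lines. The sketch below is my own illustration, with number partitioning as an arbitrary NP-hard stand-in and an arbitrary linear cooling schedule; it shows the whole trick, namely sometimes accepting a worse move so the search can escape local optima.

    import random, math

    def partition_cost(nums, assignment):
        """Absolute difference between the two subset sums."""
        return abs(sum(n if a else -n for n, a in zip(nums, assignment)))

    def anneal(nums, steps=20000, t0=100.0):
        """Simulated annealing for number partitioning (NP-hard)."""
        assignment = [random.random() < 0.5 for _ in nums]
        cost = partition_cost(nums, assignment)
        for step in range(steps):
            temperature = t0 * (1 - step / steps) + 1e-9
            i = random.randrange(len(nums))
            assignment[i] = not assignment[i]        # propose flipping one item
            new_cost = partition_cost(nums, assignment)
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temperature):
                cost = new_cost                      # accept (sometimes even if worse)
            else:
                assignment[i] = not assignment[i]    # reject: flip it back
        return cost

    nums = [random.randint(1, 10**6) for _ in range(50)]
    print(anneal(nums))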

Replies from: bogdanb
comment by bogdanb · 2013-07-18T17:31:26.419Z · LW(p) · GW(p)

The evolutionary algorithm is not the only way [...] It's the best cheater I know, but likely not the only one.

Unless you meant to imply a specific problem (and very probably even then), evolutionary algorithms are actually pretty stupid. I’ll even go out on a limb and claim that the evolutionary algorithm is the smartest of the stupid algorithms, where “stupid” means approximately “I understand nothing about the problem except I can tell if some solutions are better than others, if I’m given examples”.

Of course, if the problem is complicated enough that might be the best we can do.

But the main point of my article is this innovating aspect of the Evolutionary Algorithm. Unforeseen solutions delivered,

I’m not sure what you mean by “innovating”. A solution I receive from any algorithm that searches (rather than verifies) solutions will usually be unforeseen. (If I foresaw it, I wouldn’t need to search for it, I’d just test it.)

Replies from: Thomas
comment by Thomas · 2013-07-18T17:34:57.775Z · LW(p) · GW(p)

evolutionary algorithms are actually pretty stupid

They hold some world records for density of packing.

If you held just one, would you call yourself stupid?

I guess not.

Replies from: bogdanb, bogdanb
comment by bogdanb · 2013-07-18T18:40:14.085Z · LW(p) · GW(p)

Why not? If I used an EA to get it, that basically means “I don’t know how to solve the problem, so I’ll just use the best method I know of for trying random solutions”.

Also, I’m the world record holder at looking like myself, that doesn’t mean that I’m smarter than anyone else, particularly in the sense of knowing how to build a person that looks like myself.

Replies from: Thomas
comment by Thomas · 2013-07-18T20:06:29.970Z · LW(p) · GW(p)

“I don’t know how to solve the problem, so I’ll just use the best method I know of for trying random solutions”

If your random guessing provides a (previously unknown) solution, very well. But it probably won't.

I am not talking about "maybe it would"; I am talking about "it did, indeed".

I’m the world record holder at looking like myself

Everybody has a few such records, but those are worthless. Some people, however, solved a difficult puzzle. On the particular site I linked, such a competition is going on. Maybe it reminds someone of LW's PD agents competition?

Anyway, I don't hold any record there. An algorithm I designed, called Pack'n'tile, holds some. Follow the links, download the program and try it, if you want.

Replies from: bogdanb
comment by bogdanb · 2013-07-18T22:36:10.714Z · LW(p) · GW(p)

Sorry, I was unclear. By “best method I know for trying random solutions” I meant evolutionary algorithms. (Which I think of as “guess randomly, then mix guesses randomly, then mutate guesses randomly, then select randomly biased towards the best you found, then repeat from step two”. Of course, there’s a bit of smartness needed when applying the randomness, but still.)
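
For concreteness, that recipe might look roughly like the toy sketch below (my own illustration: the bit-string encoding, two-way tournament selection, single-point crossover, and mutation rate are all arbitrary choices, and the objective is the trivial "count the ones" function).

    import random

    def evolve(fitness, length=50, pop_size=100, generations=200, mutation_rate=0.02):
        """Minimal evolutionary algorithm: random guesses, random mixing,
        random mutation, fitness-biased selection, repeat."""
        population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

        def pick():
            a, b = random.sample(population, 2)          # tournament of two:
            return a if fitness(a) >= fitness(b) else b  # randomly biased toward the fitter

        for _ in range(generations):
            children = []
            for _ in range(pop_size):
                a, b = pick(), pick()
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]                # "mix guesses randomly"
                child = [bit ^ (random.random() < mutation_rate) for bit in child]  # mutate
                children.append(child)
            population = children
        return max(population, key=fitness)

    # Toy objective: maximize the number of 1s in the bit string.
    print(sum(evolve(fitness=sum)))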

Some people however, solved a difficult puzzle.

I think we’re having mostly a terminology disagreement. I tend to think of EA as “finding a solution” rather than “solving the problem”, which I agree is not the most logical and precise use of language.

On another subject, I fear I may have offended you. If so, I apologize, and kudos for keeping calm enough to make it hard for me to be sure :)

I specifically said that the algorithms are stupid. That wasn’t meant to disparage anyone who uses them. I well know that it’s not at all trivial to write such an algorithm, that there are good and bad ways of doing it, and that one can put a lot of cleverness into one. The authors of an algorithm that “won” a record on an important problem are very probably very smart people. But the algorithm itself may still be stupid, in the sense that it’s closer to brute force than to finding the solution with a minimum of (computing) effort.

Replies from: Thomas
comment by Thomas · 2013-07-19T05:59:44.734Z · LW(p) · GW(p)

Technically speaking, EA is stupid in the sense that it's very brief. The actual implementation is another matter.

But what is important in this context is the following: the algorithm's results here are quite sloppy, in the sense that the squares don't even touch each other to gain some more space. Still, the whole circle arrangement is so clever that it can afford this generosity and still wins! Afterwards, humans often polish the evolved solution and claim the victory. Which is perfectly fine; the whole log exists.

comment by bogdanb · 2013-07-18T18:44:36.033Z · LW(p) · GW(p)

Just curious, as I’m not familiar with that particular problem: are any of those records on “density of packing per FLOP”, or just “density of packing”?

Replies from: Thomas
comment by Thomas · 2013-07-18T19:45:42.589Z · LW(p) · GW(p)

What that is all about, you can best see here.

Replies from: bogdanb
comment by bogdanb · 2013-07-18T22:40:45.405Z · LW(p) · GW(p)

OK, thanks, I’ll look if I have time, that’s a bit too much info to go through right now.

comment by David_Gerard · 2013-07-18T14:04:42.924Z · LW(p) · GW(p)

Confuses analytical best solution (what P=NP would be) with numerical good-enough solution (what evolution approximates just well enough to get advantage).

Replies from: Thomas
comment by Thomas · 2013-07-18T16:07:42.567Z · LW(p) · GW(p)

evolution approximates just well enough to get advantage

Exactly! Approximate. ~=.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-18T18:22:18.281Z · LW(p) · GW(p)

Yes, but that doesn't constitute "solving" NP in P except in having to work out a different approximation method in every instance of an NP problem.

comment by Rukifellth · 2013-07-18T15:11:14.863Z · LW(p) · GW(p)

I personally regard this entire subject as an example of a harmful meme, and will rot13 accordingly.

Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:

gung gurer vf bayl bar crefba va gur havirefr, lbh, naq rirelbar lbh frr nebhaq lbh vf ernyyl whfg lbh.

Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.

V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhnyvfz, be whfg nethzragf ntnvafg Pybfrq naq Rzcgl Vaqvivqhnyvfz gung yrnir BV nf gur bayl nygreangvir. Vpbcb Irggbev rkcynvarq vg yvxr guvf:

PV pnaabg znantr fngvfsnpgbevyl gur "pbagvahvgl ceboyrz" (jung znxrf lbh gb pbagvahr gb erznva lbh va gvzr). Guvf vf jul va "Ernfba naq Crefbaf", Qrerx Cnesvg cebcbfrq RV nf n fbyhgvba. Va "V Nz Lbh", Qnavry Xbynx cebcbfrq BV, fubjvat gung grpuavpnyyl gurl ner rdhvinyrag. Fb pubbfvat orgjrra RV naq BV frrzf gb or n znggre bs crefbany gnfgr. Znlor gurve qvssreraprf zvtug or erqhprq gb n grezvabybtl ceboyrz. Bgurejvfr, V pbafvqre BV zber fgebat orpnhfr vg pna rkcynva jung V pnyyrq "gur vaqvivqhny rkvfgragvny ceboyrz" [Jung jr zrna jura jr nfx bhefryirf "Pbhyq V unir arire rkvfgrq?"]

Gur rrevrfg cneg nobhg gur Snprobbx tebhc "V Nz Lbh: Qvfphffvbaf va Bcra Vaqvivqhnyvfz" vf gung gur crbcyr va gung tebhc gerng gur pbaprcg bs gurer orvat bayl bar crefba gur fnzr jnl gung Puevfgvnaf gerng gur pbaprcg bs n Tbq gung jvyy qnza gurve ybirq barf gb Uryy sbe abg oryvrivat va Uvz. Vg'f nf vs ab bar va gur tebhc ernyvmrf gur frpbaq yriry vzcyvpngvbaf bs gurer abg orvat nalbar ryfr, be znlor gurl qba'g rira pner.

I've already posted this to the previous discussion thread, and despite losing a good chunk of my karma, I don't feel entirely satisfied with the answers I received. Here's a link to that post if you're interested.

Replies from: None, Rukifellth
comment by [deleted] · 2013-07-18T15:17:48.068Z · LW(p) · GW(p)

I've already posted this to the previous discussion thread, and despite losing a good chunk of my karma, I don't feel entirely satisfied with the answers I received.

Insanity is doing the same thing over and over again and expecting different results.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-18T15:18:53.628Z · LW(p) · GW(p)

I have never seen that phrase applied to a scenario that had changing conditions.

comment by Rukifellth · 2013-07-18T18:20:19.795Z · LW(p) · GW(p)

Where are these downvotes coming from?

Replies from: drethelin
comment by drethelin · 2013-07-18T21:30:40.121Z · LW(p) · GW(p)

Me for one.

You're being ridiculous when you rot13 this; it's not actually important or interesting. It's just a restatement of solipsism in bigger words. Reposting it when people originally thought it was worth downvoting makes it even more worthy of downvotes.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-18T22:46:59.668Z · LW(p) · GW(p)

Brilliant, fantastic. I'll be incredibly happy if anyone can link me to a counter argument, because this has been weighing rather heavily on me. Why else would I rot13 this?

Replies from: drethelin
comment by drethelin · 2013-07-18T22:57:09.986Z · LW(p) · GW(p)

counter-argument to WHAT? As with solipsism, this doesn't seem to anticipate any different experiences. This means that literally every piece of evidence is neutral between the two world-views. Either the universe is the universe and not in your head, or it's in your head and seems like a universe. Either way you can reliably get results from interacting with it in certain ways. If the entire universe is a simulation in your brain, it's exactly as complicated and you have as much control over it as if it wasn't. Why do you care?

To put it another way, there's no use considering the theory that a malevolent demon is in control of all the information you receive. Everything you know is because it wants you to know it. You can never prove one way or the other if it exists, so it may as well NOT exist.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-18T23:01:56.161Z · LW(p) · GW(p)

counter-argument to WHAT?

Open Individualism.

As with solipsism, this doesn't seem to anticipate any different experiences. This means that literally every piece of evidence is neutral between the two world-views. Either the universe is the universe and not in your head or it's in your head and seems like a universe.

Someone else already went that route, and I explained why Open Individualism wasn't like solipsism. The discussion ended after my response.

Replies from: drethelin
comment by drethelin · 2013-07-18T23:08:03.531Z · LW(p) · GW(p)

That difference has no effect on your anticipated experiences.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-18T23:13:07.052Z · LW(p) · GW(p)

Not that I'm advocating the existence of zombies, but technically neither does having a zombie for a boyfriend. Eliezer Yudkowsky didn't knock down the zombie possibility by talking about anticipated experiences; he knocked it down by explaining the logical impossibility.

I don't understand how saying that will make the concept go away.

Replies from: drethelin
comment by drethelin · 2013-07-18T23:16:39.363Z · LW(p) · GW(p)

The logical impossibility depended on how you couldn't have conversations about consciousness without it. If you're the only thing that exists, how can I disagree with you about it? How did you learn about it from a philosopher?

Replies from: Rukifellth
comment by Rukifellth · 2013-07-18T23:26:29.386Z · LW(p) · GW(p)

Had I read the argument from someone else at an earlier date, I'd probably use an argument-from-difference like you are. Such a scenario is more than logically possible - I might have actually considered the problem in the past. I have no doubt that the person who had once disagreed with OI is me.

Do you want to take this to PM, if only to save on your karma?

Replies from: drethelin
comment by drethelin · 2013-07-18T23:49:58.791Z · LW(p) · GW(p)

Nah I've pretty much lost interest. Sorry. I don't particularly care about my karma except as evidence on average. The troll toll doesn't bother me. This issue is a lot less important to me because I'm not coming from a position of believing I'm the most important thing.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-18T23:59:38.559Z · LW(p) · GW(p)

Neither do I.

In that case, can you direct me to any relevant resources?