Open Thread, May 18 - May 24, 2015

post by Gondolinian · 2015-05-18T00:01:52.881Z · LW · GW · Legacy · 176 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

176 comments

Comments sorted by top scores.

comment by sixes_and_sevens · 2015-05-18T10:38:05.896Z · LW(p) · GW(p)

I'm looking for some "next book" recommendations on typography and graphically displaying quantitative data.

I want to present quantitative arguments and technical concepts in an attractive manner via the web. I'm an experienced web developer about to embark on a Master's in computational statistics, so the "technical" side is covered. I'm solid enough on this to be able to direct my own development and pick what to study next.

I'm less hot on the graphical/design side. As part of my stats-heavy undergrad degree, I've had what I presume to be a fairly standard "don't use 3D pie charts" intro to quantitative data visualisation. I'm also reasonably well-introduced to web design fundamentals (colour spaces, visual composition, page layouts, etc.). That's where I'm starting out from.

I've read Butterick's Practical Typography, which I found quite informative and interesting. I'd now like a second resource on typography, ideally geared towards web usage.

I've also read Edward Tufte's The Visual Display of Quantitative Information, which was also quite informative, but felt a bit dated. I can see why it's considered a classic, but I'd like to read something on a similar topic, only written this century, and maybe with a more technological focus.

Please offer me specific recommendations addressing the two above areas (typography and data visualisation), or if you're sufficiently advanced, please coherently extrapolate my volition and suggest how I can more broadly level up in this cluster of skills.

Replies from: IlyaShpitser, MSwaffer, palladias, Douglas_Knight, adamzerner
comment by IlyaShpitser · 2015-05-18T10:54:31.374Z · LW(p) · GW(p)

Please post here if you learn a good answer elsewhere.

comment by MSwaffer · 2015-05-19T19:32:41.540Z · LW(p) · GW(p)

With your background in web development, have you read things like Krug's Don't Make Me Think and Williams's The Non-Designer's Design Book? These are focused more on the design aspect of the web; however, they contain some good underlying principles for data visualization as well.

Tufte's books are all great for underlying principles even though, as you noted, they aren't focused on modern technologies. Beautiful Evidence from 2006 has some updated thoughts, but he still borrows heavily from his earlier books.

For general multimedia concepts, Mayer's Multimedia Learning is good from a human learning perspective (my background).

I found Data Points: Visualization That Means Something to be a good modern guide.

From my perspective, I am glad you are looking down the road and recognizing that after the data are analyzed, the analysis must be communicated.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2015-05-19T22:52:30.841Z · LW(p) · GW(p)

This is all kinds of useful. Thanks!

You can learn an astonishing amount about web development without ever having to think about how it'll look to another human being. In a professional context, I know enough to realise when I should hand it over to a specialist, but I won't always have that luxury.

Replies from: MSwaffer
comment by MSwaffer · 2015-05-20T17:08:26.602Z · LW(p) · GW(p)

You are definitely right in that we need to think about how it will look to another human being.

If you are interested in pursuing this idea further, Don Norman has written a number of books about design in general. These are not about graphic design but just design thinking. The Psychology of Everyday Things is a classic and Emotional Design builds on the work of people like Antonio Damasio with regard to the role of emotion in cognition. Norman has another book called The Design of Everyday Things which I have not read but I imagine is a great read as well.

All of these works emphasize the role of design in helping humans accomplish their goals. Some practitioners of data analytics view the output of prose, charts, tables and graphs as the final product. In most cases, however, the final product of a data analytics effort is a decision. That decision might be to do more research, to buy one company versus another, or to propose a new policy to Congress. Regardless of the nature of the decision, how well you design the output will have an impact on the quality of the decision made.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2015-05-20T18:31:22.875Z · LW(p) · GW(p)

I've read The Design of Everyday Things. You don't need to read The Psychology of..., as it's the same book, renamed for marketing reasons.

comment by palladias · 2015-05-18T21:33:58.189Z · LW(p) · GW(p)

My job (not at the WSJ!) gave me The Wall Street Journal Guide to Information Graphics: The Dos and Don'ts of Presenting Data, Facts, and Figures in my new hire bundle, and I love it!

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2015-05-18T21:50:58.091Z · LW(p) · GW(p)

Do you love it to the tune of $20?

Replies from: palladias
comment by palladias · 2015-05-19T13:54:21.607Z · LW(p) · GW(p)

Yeah, I'd say so.

comment by Douglas_Knight · 2015-05-18T17:32:50.107Z · LW(p) · GW(p)

Learn the library ggplot2. It is worth learning the language R just to use this library (though there is a port in progress for python/pandas). Even if you cannot incorporate the library into your workflow, its very good defaults show you what you should be doing with more work in other libraries.

It is named after a book, The Grammar of Graphics, which I have not read.
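To give a concrete sense of the grammar-of-graphics style, here is a minimal sketch in Python using plotnine, one port of the ggplot2 API (not necessarily the in-progress port mentioned above); the dataset and column names come from plotnine's bundled example data:

```python
# Minimal grammar-of-graphics sketch via plotnine (pip install plotnine),
# a Python port of ggplot2's API. Uses plotnine's bundled mpg dataset.
from plotnine import ggplot, aes, geom_point, facet_wrap, labs
from plotnine.data import mpg  # example car fuel-economy data

plot = (
    ggplot(mpg, aes(x="displ", y="hwy", color="factor(cyl)"))  # map columns to aesthetics
    + geom_point()                # add a scatter layer
    + facet_wrap("~class")        # small multiples, one panel per vehicle class
    + labs(x="Engine displacement (L)", y="Highway MPG", color="Cylinders")
)
plot.save("mpg.png", width=8, height=5, dpi=150)
```

The point of the grammar: a plot is data plus aesthetic mappings plus layered geometry, so swapping geom_point for another geom changes the chart type without touching the rest.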

Replies from: Lumifer, sixes_and_sevens
comment by Lumifer · 2015-05-18T18:24:06.644Z · LW(p) · GW(p)

I don't know if I'm that enthusiastic about ggplot2. It is certainly a competent library and it produces pretty plots. However, it has a pronounced "my way or the highway" streak which sometimes gets in the way. I like nice defaults, but I don't like it when a library enforces its opinions on me (see e.g. this, noting that Hadley is the ggplot2 author).

comment by sixes_and_sevens · 2015-05-18T18:13:42.308Z · LW(p) · GW(p)

I've dabbled with ggplot, but I've put it on hold for the immediate future in lieu of getting to grips with D3. I'll be getting all the R I can handle next year.

I did not know about the book, but it's available to view from various sources. If I get time I'll give it a look-in and report back.

comment by Adam Zerner (adamzerner) · 2015-05-23T23:48:36.850Z · LW(p) · GW(p)

You may be interested in some of Bret Victor's stuff.

I too am a web developer looking to learn more about design. And I too have read Butterick's Practical Typography, Don't Make Me Think, and Visual Display of Quantitative Information, as well as a few other classics. But I don't think it's made me much better at design. I sense that there are a few "roadblocks", i.e. things I don't know that are preventing me from actually applying what I learned in reading those books. Any thoughts on this?

comment by John_Maxwell (John_Maxwell_IV) · 2015-05-18T12:05:31.181Z · LW(p) · GW(p)

Every so often in the EA community, someone will ask what EA volunteer activities one can do in one's spare time in lieu of earning to give. Brian Tomasik makes an interesting case for reading social science papers and contributing what you learn to Wikipedia.

Replies from: Ishaan, ChristianKl
comment by Ishaan · 2015-05-18T16:50:13.138Z · LW(p) · GW(p)

On the topic of popularization, I think the proportion of idealistic people interested in alleviating global poverty who are aware of the concept of meta-charities that determine the optimal way to do so is shockingly low.

That seems like one of those "low-hanging fruits": dropping it into casual conversations, mentioning it in high-visibility comment threads, and so on. There's really no excuse for Kony to be better known than GiveWell.

Replies from: Lumifer, John_Maxwell_IV
comment by Lumifer · 2015-05-18T17:45:53.124Z · LW(p) · GW(p)

People actually interested in alleviating global poverty, or people who are interested in signaling to themselves and their social circle that they are caring and have appropriate attitudes?

By the way, saving lives (which GiveWell focuses on) and "alleviating global poverty" are two very different goals.

Replies from: ChristianKl, Ishaan
comment by ChristianKl · 2015-05-18T18:18:59.853Z · LW(p) · GW(p)

By the way, saving lives (which GiveWell focuses on) and "alleviating global poverty" are two very different goals.

I don't think it's fair to say that GiveWell only focuses on lives saved. Their reports about charities are long. It's just that they focus on the number of lives saved when they boil the justification down to short paragraphs.

comment by Ishaan · 2015-05-19T04:36:49.564Z · LW(p) · GW(p)

Frankly, who cares? If someone wants to signal, then fine, we can work with that. Life-saving is an archetypal signal of heroism. Start a trend of wearing necklaces with one bead for each life you saved, to remind everyone of the significance of each life and to remind you that you've given back to this world. That would be pretty badass; I'd wear it. Imagine you feel sad, then look down and remember you've added more QALYs to this world than your entire natural lifespan, that you've added centuries of smiles. Perhaps too blatant a boast for most people's tastes?

Point is, even if it was all signalling, you could boast more if you knew how to get QALYs efficiently. ("I saved 2 lives" sounds much better than "I spent 10,000 dollars".)

Replies from: Lumifer
comment by Lumifer · 2015-05-19T05:43:36.468Z · LW(p) · GW(p)

Frankly, who cares?

If people are actually interested in signaling to their social circle, they will ignore geeky GiveWell and do a charity walk for a local (for-profit) hospital instead.

Start a trend of wearing necklaces with one bead for each life you saved

I would consider anyone who would do this (based on the dollar amount of donation) to be terribly pretentious and, frankly, silly.

Replies from: Ishaan, None
comment by Ishaan · 2015-05-19T14:25:29.238Z · LW(p) · GW(p)

I do have a parallel thought process which finds it pretentious, but I ignore it because it also said that the ice bucket challenge was pretentious. And the ice bucket challenge was extremely effective. I think the dislike is just contrarian signalling, and is why our kind can't cooperate. That, or some kind of egalitarian instinct against boasting.

Isn't "pretentious" just a negative way to say "signalling"? Of course that idea might not be effective signalling but abstractly, the idea is that EA is well suited for signalling so why isn't it?

I'd see value in doing one for a local hospital. Strengthening the local community and generating good feelings is its own thing with its own benefits, and there's a special value in aid coming from local people who know what's what - as a natural extension of the idea that aid is better coming from parents to children than from a distant government to children. I'm talking about the global poverty crowd here.

Replies from: Lumifer, ChristianKl, NancyLebovitz
comment by Lumifer · 2015-05-19T14:47:06.518Z · LW(p) · GW(p)

That I find something pretentious is my moral/aesthetic judgement. Evaluating the effectiveness of dark arts techniques is an entirely different question.

Speaking of signaling, pretentiousness means you tried to signal and failed.

Replies from: Ishaan
comment by Ishaan · 2015-05-19T15:13:34.304Z · LW(p) · GW(p)

Why is it dark? Doesn't it have to be a drawback in order to be dark? (agreed about pretentiousness=signal failure)

Replies from: Lumifer, OrphanWilde
comment by Lumifer · 2015-05-19T15:43:14.338Z · LW(p) · GW(p)

It's dark because it's manipulation. You are pushing buttons in other people's minds to achieve a certain outcome.

Replies from: Ishaan
comment by Ishaan · 2015-05-19T20:17:06.994Z · LW(p) · GW(p)

All interactions involving people involve pushing buttons for outcomes.

Manipulation, in the negative-connotation sense, is when you do it in ways that people would not approve of if they realized exactly what you were doing. The ice bucket challenge, for example, does exactly what it says on the tin: raise awareness, raise money, have a social activity.

Replies from: Lumifer
comment by Lumifer · 2015-05-19T20:50:43.447Z · LW(p) · GW(p)

All interactions involving people involve pushing buttons for outcomes.

I disagree.

comment by OrphanWilde · 2015-05-19T15:24:54.898Z · LW(p) · GW(p)

All actions have a drawback, in at least the form of opportunity costs.

comment by ChristianKl · 2015-05-19T19:10:27.649Z · LW(p) · GW(p)

Isn't "pretentious" just a negative way to say "signalling"?

It's signaling more status than the people around you want to give you.

comment by NancyLebovitz · 2015-05-19T15:41:06.088Z · LW(p) · GW(p)

"Pretentious" might be signalling of high status [1]that's irritating to receive, which leads to a large new topic. When is signalling fun vs. not fun? Is it just a matter of what's a positive signal in the recipient's group?

[1] Signalling about sports teams isn't pretentious, even when it's annoying. I don't think there's a word for the annoyingness of middle-to-low status signalling. "Vulgar" covers some cases, but not most of them.

comment by [deleted] · 2015-05-19T09:16:25.367Z · LW(p) · GW(p)

I would consider anyone who would do this (based on the dollar amount of donation) to be terribly pretentious and, frankly, silly.

Why?

Replies from: Lumifer
comment by Lumifer · 2015-05-19T14:22:23.635Z · LW(p) · GW(p)

I do not accept that a dollar is a unit of caring.

I do not think that contributing money to an organization which runs programs that statistically save lives can be legitimately called "I saved X lives". Compare: "I bought some war bonds so I can say I personally killed X enemy soldiers".

I think that strutting one's charitable activities is in very poor taste.

Replies from: jkaufman, Ishaan, None
comment by jefftk (jkaufman) · 2015-05-20T18:07:30.631Z · LW(p) · GW(p)

What would you use "I saved X lives" to mean if not "compared to what I would have done otherwise, X more people are alive today"?

(I don't at all like the implied precision in giving a specific number, though.)

Replies from: Lumifer
comment by Lumifer · 2015-05-20T19:30:17.335Z · LW(p) · GW(p)

There are two issues here.

One is tracking of individual contributions. When a charity says "A $5000 donation saves one life", they do not mean that your particular $5000 will save one specific life. Instead they divide their budget of $Z by their estimate of Y lives saved and produce a dollars/life number. This is an average and doesn't have much to do with you personally, other than that you were one data point in the set from which this average was calculated.

"I contributed to the common effort which resulted in preventing Y deaths from malaria" is a more precise formulation which, of course, doesn't sound as good as "I saved X lives".

Two is the length of the causal chain. If you, with your own hands, pull a drowning kid out of the water, that's one life saved with a causal chain of length 1. If you give money to an organization which finances another organization which provides certain goods for a third organization to distribute with the help of a bunch of other organizations, the causal chain is long and the longer it goes, the fuzzier it gets.

As always, look at incentives. Charity fundraising is effectively advertising with greater social latitude to use emotional manipulation. One strand in that manipulation is to make the donor feel a direct emotional connection, with "direct" being the key word. That's why you have "Your donation saves lives!" copy next to a photo of an undernourished black or brown kid (preferably a girl) looking at the camera with puppy eyes.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2015-05-27T14:41:53.640Z · LW(p) · GW(p)

When a charity says...

If someone is saying "I saved 10 lives" because they gave $500 to a charity that advertises a cost per life saved of $50, then yes, that's very different from actually saving lives. But the problem is that charities' reports of their cost effectiveness are ridiculously exaggerated, and you just shouldn't trust anything they say.

Instead they divide their budget of $Z by their estimate of Y lives saved and produce a dollars/life number.

What we want are marginal costs, not average costs, and these are what organizations like GiveWell try to estimate.

the causal chain is long and the longer it goes, the fuzzier it gets

Yes, this is real. But we're ok with assigning credit along longish causal chains in many domains; why exclude charity?

Replies from: Lumifer
comment by Lumifer · 2015-05-27T16:59:58.736Z · LW(p) · GW(p)

you just shouldn't trust anything they say.

Oh, trust me, I don't :-D

What we want are marginal costs, not average costs

The problem with marginal costs is that they are conditional. For example, the marginal benefit of your $1000 contribution depends on whether someone made a $1m contribution around the same time.
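A toy model of that conditionality (all numbers and the returns curve are invented for illustration, not taken from any real charity's data):

```python
# Toy illustration of why marginal cost-per-life is conditional on
# how much funding arrived before yours. Everything here is made up.

def lives_saved(total_funding: float) -> float:
    """Diminishing returns: each extra dollar saves less than the last."""
    return (total_funding / 2500.0) ** 0.5

def marginal_lives(prior_funding: float, gift: float) -> float:
    return lives_saved(prior_funding + gift) - lives_saved(prior_funding)

print(marginal_lives(10_000, 1_000))     # ~0.098 lives: early $1000 gift
print(marginal_lives(1_010_000, 1_000))  # ~0.010 lives: same gift after a $1m donor
# An average dollars/life figure (budget divided by lives saved)
# reports one flat number and hides this difference entirely.
```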

But we're ok with assigning credit along longish causal chains in many domains; why exclude charity?

I don't know about that -- I'm wary of assigning credit "along longish causal chains"; charity is not an exception for me.

comment by Ishaan · 2015-05-19T20:41:58.937Z · LW(p) · GW(p)

It's not intended as a unit of caring - it's a unit of achievement, a display of power, focused on outcomes. Consequentialism over virtue ethics, utils over fuzzies.

Don't get me wrong, I do see the ugliness in it. I too have deeply held prejudices against materialism and vanity, and the whole thing cuts against the egalitarian instinct by giving even more status to the wealthy. But helping people is something worthy of pride, unlike the Mercedes or thousand-dollar suits or flashy diamonds and similar trifles people use for the same purpose.

My point is, you said they were signalling. I'm not approving of signalling so much as saying, why not signal productively, in a manner that actually does what you've signalled to do?

Replies from: Lumifer
comment by Lumifer · 2015-05-19T20:53:59.804Z · LW(p) · GW(p)

It's not intended as a unit of caring

Some people think otherwise.

But helping people is something worthy of pride

How about buying status signals with the minor side-effect of helping people?

No one ever seems to say "eww what expensive clothing that man has, such poor taste, yuck what an unnecessarily large house".

Of course they do. "So much money, so little taste" is a common attitude. "Unnecessarily large houses" are known as McMansions in the US.

comment by [deleted] · 2015-05-19T20:04:05.444Z · LW(p) · GW(p)

I think that strutting one's charitable activities is in very poor taste.

Beware, envy lives here. Cloaked in the robes of social decency, he whispers:

“Imposters, all of them. They don’t deserve praise…you do.”

Replies from: Lumifer
comment by Lumifer · 2015-05-19T20:49:14.316Z · LW(p) · GW(p)

Huh?

Replies from: None
comment by [deleted] · 2015-05-19T22:08:55.931Z · LW(p) · GW(p)

If I were you, I would consider the possibility that I am envious of those who signal and receive praise, and that I am rationalizing my feelings by claiming to uphold the social standard of "good taste".

Replies from: Lumifer
comment by Lumifer · 2015-05-19T23:28:07.794Z · LW(p) · GW(p)

That seems unlikely.

First, even after introspection I don't have envious feelings towards such people, which is probably because in my social circle ostentatious displays of kinda-virtue usually lead not to praise but to slight awkwardness.

Second, this is consistent with my general taste in other things and looks to be a pretty ancient attitude :-)

comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T03:07:49.052Z · LW(p) · GW(p)

Agree. (The EA community is already very well aware of "spreading EA" as a valuable volunteer activity, but I'd seen less discussion of Tomasik's proposal.)

comment by ChristianKl · 2015-05-18T14:19:27.545Z · LW(p) · GW(p)

I agree that adding content to Wikipedia is worthwhile.

In addition to Wikipedia I think that StackExchange pages can often be very worthwhile.

Often when I come across an interesting claim on the internet where I don't know whether it's true, I post it on Skeptics.StackExchange or a subject-specific site in the StackExchange network.

comment by shminux · 2015-05-18T01:44:22.711Z · LW(p) · GW(p)

What changes would LW require to make itself attractive again to the major contributors who left and now have their own blogs?

Replies from: Gram_Stone, John_Maxwell_IV, philh, Halfwitz
comment by Gram_Stone · 2015-05-18T21:46:03.772Z · LW(p) · GW(p)

As I often say, I haven't been here long, but I notice a sort of political-esque conflict between empirical clusters of people that I privately refer to as the Nice People and the Forthright People. The Nice People think that being nice is pragmatic. The Forthright People think that too much niceness decreases the signal-to-noise ratio and also that there's a slippery slope towards vacuous niceness that no longer serves its former pragmatic functions. A lot of it has to do with personality. Not everyone fits neatly, and there are Moderate People, but many fit pretty well.

I also notice policy preferences among these groups. The Nice don't mind discussion of object-level things that people have been drawn towards as the result of purportedly rational thinking and deciding. The Forthright often prefer technical topics and more meta-level discussion of how to be rational, and many harken back to the Golden Age when LW was, as far as I can tell, basically a way to crowdsource hyperintelligent nerds (in the non-disparaging sense) to work past inadequate mainstream decision theories, and also to do cognitive-scientific philosophizing as opposed to the ceiling-gazing sort. The Nice think that new LW members should be welcomed with open arms and that this helps advance the Cause. The Forthright often profess that the Eternal September is long past and that new members that cannot tolerate their Forthrightness are only reducing the discussion quality further.

The current LW is a not-so-useful (certainly not useless, as far as I'm concerned) compromise between the two extremes. The Nice think that the Forthright are often rude and pedantic (often being from academia, as the Forthright are), and prefer not to post here. The Forthright think that the discussion quality has fallen too far, such that the content stream is too difficult to follow time-efficiently and of too little value to be worth the effort, and they likewise prefer not to post here.

I know that you specifically spoke out against subreddits, but I think subreddits would help. Last time I checked, the post was called Hold Off On Proposing Solutions, not Hold Off On Implementing Solutions Indefinitely. (Excuse my Forthrightness!) Tags are good for getting fed the right content, but subreddits encourage subcultures, and subcultures already exist on LW. If you posted in a more technical subreddit, you could expect more Forthright behavior, but also super-high discussion quality. Forthrightness really isn't so bad in a semi-academic context; it's the outside-LW norm. If you posted in a subreddit for object-level lifestyle stuff, or miscellaneous stuff, you could expect more Nice behavior; that's also the outside-LW norm. This might actually be a case of LW collectively overestimating how atypical it is, which is, ironically, very typical.

Replies from: NancyLebovitz, shminux
comment by NancyLebovitz · 2015-05-19T02:52:10.056Z · LW(p) · GW(p)

That's an interesting distinction, but I think the worst problem at LW is just that people rarely think of interesting things to write about. I don't know whether all the low-hanging fruit has been gathered, or if we should be thinking about ways to find good topics. Scott Alexander seems to manage.

Replies from: None, Gram_Stone
comment by [deleted] · 2015-05-19T07:24:22.186Z · LW(p) · GW(p)

whether all the low-hanging fruit has been gathered

Still, there is the issue that it is a publishing format sorted by publication date. It is not like a library, where it is just as easy to find a book published 5 years ago as one published yesterday, because books are sorted by topic or the author's name or something. The Sequences and the wiki help with this; still, a timeless view of the whole thing would IMHO be highly useful. A good post should not be "buried" just because it is 4 years old.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-05-19T11:09:22.034Z · LW(p) · GW(p)

There's a tremendous amount of material on LW. Do you have ideas about how to identify good posts and make them easier to find?

I can think of some solutions, but they might just converge on a few posts. Have a regular favorite-posts thread. Alternatively, encourage people to look at high-karma older posts.

Replies from: Vaniver
comment by Vaniver · 2015-05-20T15:39:04.452Z · LW(p) · GW(p)

There's a tremendous amount of material on LW. Do you have ideas about how to identify good posts and make them easier to find?

Actually, we could probably use off-the-shelf (literally) product recommendation software. The DB knows what posts people have upvoted and downvoted, and which posts they haven't looked at yet (in order to get the "new since last visit" colored comment border).
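To sketch what that could look like over vote data (the schema, matrix layout, and numbers here are hypothetical; I don't know the actual LW database layout):

```python
# Item-based collaborative filtering over a (hypothetical) vote matrix.
# +1 = upvote, -1 = downvote, 0 = post not seen / no vote.
import numpy as np

votes = np.array([          # rows: users, columns: posts
    [ 1,  1,  0, -1],
    [ 1,  0,  1, -1],
    [ 0,  1,  1,  0],
], dtype=float)

# Cosine similarity between posts, based on who voted how on them.
norms = np.linalg.norm(votes, axis=0)
norms[norms == 0] = 1.0                      # avoid division by zero
sim = (votes.T @ votes) / np.outer(norms, norms)

def recommend(user: int, top_k: int = 2) -> np.ndarray:
    """Score unseen posts by their similarity to the user's past votes."""
    scores = sim @ votes[user]
    scores[votes[user] != 0] = -np.inf       # never re-recommend seen posts
    return np.argsort(scores)[::-1][:top_k]

print(recommend(2))  # e.g. post 0 ranks first for user 2
```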

comment by Gram_Stone · 2015-05-19T09:18:19.039Z · LW(p) · GW(p)

That's the thing though. My hypothesis is that the 'people who seem to manage' have left because the site is a lukewarm compromise between the two extremes, either of which they might prefer it to be. Thus, subreddits.

Like, what would a Class Project to make good contributors on LW look like? Does that sound feasible to you?

Oh man, I'm arguing that blogging ability is innate.

Replies from: Vaniver
comment by Vaniver · 2015-05-20T15:37:21.941Z · LW(p) · GW(p)

Oh man, I'm arguing that blogging ability is innate.

Obviously there's an innate portion to blogging ability. We can still manipulate the environmental portion.

Replies from: Gram_Stone
comment by Gram_Stone · 2015-05-20T20:33:53.108Z · LW(p) · GW(p)

I hope I didn't come off like I'm going to automatically shoot all suggestions to reinvigorate LW out of the sky. That's most of the problem with the userbase! I genuinely wonder what such a Class Project would look like, and would also be willing to participate if I am able.

Since my comment was written in the context of NancyLebovitz's comment, I'm specifically curious about how one would go about molding current members into high-quality contributors. I see a lot of stuff above about finding ways to make the user experience more palatable, but that in itself doesn't seem to ensure the sort of change that I think most people want to see.

comment by shminux · 2015-05-18T22:22:48.445Z · LW(p) · GW(p)

I don't believe I was against subreddits, just against the two virtually useless ones we have currently. Certainly subreddits work OK on, well, Reddit. Maybe a bit of segmentation with different topics and different moderation rules is a good idea, but there is no budget for this, as far as I know, and there is little interest from those still nominally in charge. In fact, I am not sure why Trike doesn't just pull the plug. It costs them money, and there are no ads or any other revenue, I am guessing.

comment by John_Maxwell (John_Maxwell_IV) · 2015-05-18T13:03:37.973Z · LW(p) · GW(p)

In my view, you're asking the wrong question. The major contributors are doing great; they have attracted their own audiences. A better question might be: how can LW grow promising new posters into future major contributors (who may later migrate off the platform)?

I had some ideas that don't require changing the LW source that I'll now create polls for:

Should Less Wrong encourage readers to write appreciative private messages for posts that they like?

[pollid:976]

Should we add something to the FAQ about how having people tear your ideas apart is normal and expected behavior and not necessarily a sign that you're doing anything wrong?

[pollid:977]

Should we add something to the FAQ encouraging people to use smiley faces when they write critical comments? (Smiley faces take up very little space, so don't affect the signal-to-noise-ratio much, and help reinforce the idea that criticism is normal and expected. The FAQ could explain this.)

[pollid:978]

We could start testing these ideas informally ASAP, make a FAQ change if polls are bullish on the ideas, and then announce them more broadly in a Discussion post if they seem to be working well. To keep track of how the ideas seem to be working out, people could post their experiences with them in this subthread.

Replies from: Lumifer, Sarunas, None, NancyLebovitz, Vaniver, estimator
comment by Lumifer · 2015-05-18T16:15:16.968Z · LW(p) · GW(p)

Should we add something to the FAQ

Does anyone read the FAQ? Specifically, do the newbies look at the FAQ while being in the state of newbiedom?

Replies from: Vaniver, None, John_Maxwell_IV
comment by Vaniver · 2015-05-18T19:23:07.751Z · LW(p) · GW(p)

Does anyone read the FAQ? Specifically, do the newbies look at the FAQ while being in the state of newbiedom?

At least some do. In general, we could improve the onboarding experience of LW.

Replies from: Lumifer, John_Maxwell_IV
comment by Lumifer · 2015-05-18T19:35:43.326Z · LW(p) · GW(p)

In general, we could improve the onboarding experience of LW.

"Hello, I see you found LW. Here is your welcome package which consists of a first-aid trauma kit, a consent form for amputations, and a coupon for a PTSD therapy session..."

X-)

Replies from: Error
comment by Error · 2015-05-18T19:55:29.119Z · LW(p) · GW(p)

...and a box of paperclips.

Replies from: Lumifer
comment by Lumifer · 2015-05-18T20:36:14.378Z · LW(p) · GW(p)

...and a box of paperclips

...please don't use it to tease resident AIs, it's likely to end very very badly...

comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T03:22:41.669Z · LW(p) · GW(p)

What concrete actions could we take to improve the onboarding experience?

Replies from: Vaniver
comment by Vaniver · 2015-05-19T14:23:32.240Z · LW(p) · GW(p)

I imagine there are UI design best practices, like watching new users try out the site, that could be followed. A similarly serious approach I've seen is having a designated "help out the newbie" role, either as someone people are encouraged to approach or specifically pairing mentees with mentors.

Both of those probably cost more than they deliver. A more reasonable approach would be having two home pages: one for logged-in users that probably links to /r/all/new (or the list version), and one for new users that explains more about LW, and maybe has a flowchart about where to start reading based on interests.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T14:40:01.697Z · LW(p) · GW(p)

So the homepage already explains some stuff about LW. What do you think is missing?

I'd guess we can get 80% of the value of a flowchart with some kind of bulleted question/recommendation list like the one at http://lesswrong.com/about/. Maybe each bullet should link to more posts, though? Or recommend an entire sequence/tag/wiki page/something else? And the bullets could be better chosen?

comment by [deleted] · 2015-05-18T19:10:20.911Z · LW(p) · GW(p)

...yes.

comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T03:21:02.021Z · LW(p) · GW(p)

It's linked to from the About page. Scroll to the bottom and you can see it has over 40,000 views: http://wiki.lesswrong.com/wiki/FAQ But it's not among the top 10 most viewed pages on the LW wiki: http://wiki.lesswrong.com/wiki/Special:Statistics So it seems as though the FAQ is not super discoverable.

It looks like the About page has been in approximately its current form since September 2012, including the placement of the FAQ link. For users who have discovered LW since September 2012, how have you interacted with the FAQ?

[pollid:981]

If you spent time reading it, did you find it useful?

[pollid:982]

Should we increase its prominence by linking to it from the home page too?

[pollid:983]

Replies from: None
comment by [deleted] · 2015-05-19T11:37:35.399Z · LW(p) · GW(p)

I went directly to the sequences, not sure why. Probably the sheer size of the list of contents was kind of intimidating.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T13:07:47.598Z · LW(p) · GW(p)

"the sheer size of the list of contents" - hm? What are you referring to?

Replies from: None
comment by [deleted] · 2015-05-19T13:17:12.389Z · LW(p) · GW(p)

The FAQ

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T13:29:44.708Z · LW(p) · GW(p)

I figure an exhaustive FAQ isn't that bad, since it's indexed by question... you don't have to read all the questions, just the ones you're interested in.

Replies from: None
comment by [deleted] · 2015-05-19T13:32:52.207Z · LW(p) · GW(p)

No, it is not bad at all. But it does what it says on the tin: answers questions. When starting with LW from zero, there are no questions yet, or not many; it's more like exploration.

comment by Sarunas · 2015-05-19T19:30:15.117Z · LW(p) · GW(p)

While appreciative messages are (I imagine) pleasant to get, I don't think they are the highest form of praise that a poster can receive. I imagine that if I wrote a LW post, the highest form of praise to me would be comments that take the ideas expressed in the post (provided they are actually interesting) and develop them further, perhaps creating new ideas that build upon them. I imagine that seeing other people synthesizing their ideas with your ideas would be perhaps the best praise a poster could get.

While comments that nitpick the edge cases of the ideas expressed in a post obviously have their value, often they barely touch the main thesis of the post. An author might find it annoying having to respond to people who mostly nitpick his/her offhand remarks, instead of engaging with the main ideas of the post, which the author finds the most interesting (that's why he/she wrote it). The situation where you write a comment and somehow your offhand remark becomes the main target of responses (whereas nobody comments on the main idea you've tried to say) is quite common.

I am not saying that we should discourage people from commenting on remarks that are not central to the post or comment. I am trying to say that arguing about the main thesis is probably much more pleasant than arguing about offhand remarks, and, as I have said before, seeing other people take your ideas and develop them further is even more pleasant. Of course, only if those ideas are actually any good. That said, even if the idea is flawed, perhaps there is a grain of truth that can be salvaged? For example, maybe the idea works under some kind of very specific conditions? I think that most people would be more likely to post if they knew that even if commenters discovered flaws in their ideas, the same commenters would be willing to help identify whether something could be done to fix those flaws.

(This comment only covers LW posts (and comments) where posters present their own ideas. Not all posts are like that; e.g., many summarize arguments, articles, and books by others.)

comment by [deleted] · 2015-05-18T13:56:47.495Z · LW(p) · GW(p)

Maybe it would be a good thing for the site if people were encouraged to write critical reviews of something in their fields, the way SSC does? It has been mentioned that criticizing is easier than creating.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-18T13:58:23.697Z · LW(p) · GW(p)

Sounds like a good idea. Do it!

Replies from: None
comment by [deleted] · 2015-05-18T14:10:11.377Z · LW(p) · GW(p)

I do have something specific in mind (about how plant physiology is often divorced from population research), but I am in a minority here, so it might be more interesting for most people to read about other stuff.

Replies from: John_Maxwell_IV, faul_sname
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T03:29:38.387Z · LW(p) · GW(p)

I am in a minority here, so it might be more interesting for most people to read about other stuff

You mean you are studying a field most LWers are unfamiliar with? Well that means we can learn more from your post, right? ;)

If people don't find it interesting they won't read it. Little harm done. Polls indicate that LWers want to see more content, and I think you're displaying the exact sort of self-effacing attitude that is holding us back :)

I'm not guaranteeing that people will vote up your post or anything, but the entire point of the voting system is to help people find good content and ignore bad content. So upvoted posts are more valuable than downvoted posts are harmful.

comment by faul_sname · 2015-05-22T06:09:43.456Z · LW(p) · GW(p)

I, for one, would be interested in such a post.

Replies from: None
comment by [deleted] · 2015-05-22T06:50:59.256Z · LW(p) · GW(p)

Thank you, I will do it ASAP; I'm just a bit rushed by my PhD schedule and some other work that can be done only in summer. Do you have similar observations? It would be great to compile them into a post, because my own experience is based more on literature and less on personal communication, for personal reasons.

Replies from: faul_sname
comment by faul_sname · 2015-05-22T10:45:51.609Z · LW(p) · GW(p)

I really don't have any similar observations, since I mostly focused on biochem and computational bio in school.

I'm actually not entirely sure what details you're thinking of -- I'm imagining something like the influence of selective pressure from other members of the same species, which could cover things like how redwoods are so tall because other redwoods block out light below the canopy. On the other hand, insight into the dynamics of population biologists and those studying plant physiology would also be interesting.

According to the 2014 survey we have about 30 biologists on here, and there are considerably more people here who take an interest in such things. Go ahead and post -- the community might say they want less of it, but I'd bet at 4:1 odds that the community will be receptive.

Replies from: None, None
comment by [deleted] · 2015-06-08T19:23:15.997Z · LW(p) · GW(p)

...you know, this is actually odd. I would expect ten biologists to take over a free discussion board. Where are those people?

comment by [deleted] · 2015-05-22T11:12:17.311Z · LW(p) · GW(p)

No, I rather meant what between-different-fields-of-biology observations you might have. It doesn't matter what you study, specifically. It's more like a 'but why did those biochemists study the impact of bile on probiotics for a whole fortnight of cultivation, if every physiologist knows that the probiotic pill cannot possibly be stuck in the GI tract for so long?' thing. Have you encountered this before?

Replies from: faul_sname
comment by faul_sname · 2015-05-22T11:17:16.724Z · LW(p) · GW(p)

I can come up with a few examples where it seems obvious in retrospect that they wouldn't work, mostly having to do with gene insertion using A. tumefaciens, but none that I honestly predicted before I learned that they didn't work. Generally, the biological research at my institution seemed to be pretty practical, if boring. On the other hand, I was an undergrad, so there may have been obvious mistakes I missed -- that's part of what I'd be interested in learning.

Replies from: None
comment by [deleted] · 2015-05-22T11:25:52.347Z · LW(p) · GW(p)

Oh, I really can't tell you much about that :) In my field, it's much more basic. Somehow, even though everyone knows that young ferns exist because adult ferns reproduce, there are very few studies that incorporate adult ferns into young ferns' most crucial life choices (like what to produce - sperm or eggs). I have no idea why this is so, beyond simple laboratory convenience. It is not even a mistake, it's a complete orthogonality of study approaches.

comment by NancyLebovitz · 2015-05-19T02:47:41.367Z · LW(p) · GW(p)

I don't recommend smiley faces-- I don't think they add much.

I do recommend that people be explicit if they like something about a post or comment.

comment by Vaniver · 2015-05-18T19:21:59.583Z · LW(p) · GW(p)

Should we add something to the FAQ encouraging people to use smiley faces when they write critical comments?

Hmm. I typically see emoticons as tied to emotion, and am unsurprised to see that women use them more than men. While a LW that used emoticons well might be a warmer and more pleasant place, I'm worried about an uncanny valley.

Replies from: Jiro
comment by Jiro · 2015-05-18T21:36:29.611Z · LW(p) · GW(p)

Putting smiley faces on critical comments is likely to encourage putting smiley faces on anything that may be perceived as negative, which in turn will lead people to put smiley faces on actual hostility. Putting a smiley face on hostility just turns it into slightly more passive aggressive hostility (how dare you react to this as if it's hostile, see, I put a smiley face on) and should be discouraged.

I also worry that if we start putting smiley faces on critical comments, we'll get to the point where it's expected and someone whose comments are perceived as hostile will be told "it's your own fault--you should have put a smiley face on".

comment by estimator · 2015-05-18T19:54:50.640Z · LW(p) · GW(p)

Should we add something to the FAQ about how having people tear your ideas apart is normal and expected behavior and not necessarily a sign that you're doing anything wrong?

Should we add something to the FAQ encouraging people to use smiley faces when they write critical comments?

I believe that most LWers have some STEM background, so they are already familiar with this level of criticism; therefore criticism-is-normal disclaimers aren't necessary. Am I wrong? :)

Should Less Wrong encourage readers to write appreciative private messages for posts that they like?

Positive reinforcement is a thing. But how are you going to efficiently encourage readers to do that? :) Also, we have a karma system, which (partially?) solves the feedback problem.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T03:02:53.736Z · LW(p) · GW(p)

I believe that most LWers have some STEM background, so they are already familiar with this level of criticism; therefore criticism-is-normal disclaimers aren't necessary. Am I wrong? :)

Possibly, given that lukeprog, Eliezer, and Yvain have all complained that writing LW posts is not very rewarding. Reframing criticism might do a bit to mitigate this effect on the margin :)

Positive reinforcement is a thing. But how are you going to efficiently encourage readers to do that? :) Also, we have a karma system, which (partially?) solves the feedback problem.

One of the things that strikes me as interesting when reading Eliezer's old Sequence posts is the positive comments that were heaped on him in the absence of a karma system. I imagine these were important in motivating him to write one post a day for several years straight. Nowadays we consider such comments low-signal and tell people to upvote instead. But getting upvotes is not as rewarding as getting appreciative comments, in my view. I imagine that 10 verbal compliments would do much more for me than 10 upvotes. In terms of encouraging readers... like I said, put it in the FAQ and announce it in a discussion post. Every time someone sends me an encouraging PM, I get reminded to send others encouraging PMs when I like their work.

comment by philh · 2015-05-18T12:24:30.433Z · LW(p) · GW(p)

I recently wrote this, which would probably have been of interest to LW. But when I considered submitting it, my brain objected that someone would make a comment like "you shouldn't have picked a name that already redirects to something else on Wikipedia", and... I just didn't feel like bothering with that kind of trivia. (I know I'm allowed to ignore comments like that, but I still didn't feel like bothering.)

I don't know if that was fair or accurate of my brain, but Scott has also said that the comments on LW discourage him from posting, so it seems relevant to bring up.

The HN comments, and the comments on the post itself, weren't all interesting, but they weren't that particular kind of boring.

Replies from: 9eB1
comment by 9eB1 · 2015-05-18T21:32:27.353Z · LW(p) · GW(p)

One of those HN comments made me realize that you'd perfectly described a business situation that I'd just been in (a B2B integration where the counterparty defected, scuttling the deal), so they were interesting to me. Maybe this argues that you should have included more examples, but the post was unlikely to spark that thought except via that one perfect example.

comment by Halfwitz · 2015-05-18T02:10:54.376Z · LW(p) · GW(p)

I doubt there's much to be done. I wouldn't be surprised if MIRI shut down LessWrong soon. It's something of a status drain because of the whole Roko thing, and no one seems to use it anymore. Even the open threads seem to be losing steam.

We still get most of the former value from Slate Star Codex, Gwern.net, and the Tumblr scene. Even for rationality, I'm not sure LessWrong is needed now that we have CFAR.

Replies from: D_Malik, John_Maxwell_IV, raydora, None
comment by D_Malik · 2015-05-18T05:16:11.144Z · LW(p) · GW(p)

I don't think a shutdown is even remotely likely. LW is still the Schelling point for rationalist discussion; Roko-gate will follow us regardless; SSC/Gwern.net are personal blogs with discussion sections that are respectively unusable and nonexistent. CFAR is still an IRL thing, and almost all of MIRI/CFAR's fans have come from the internet.

Agreed that LW is slowly losing steam, though. Not sure what should be done about it.

Replies from: Viliam, Cariyaga, SanguineEmpiricist
comment by Viliam · 2015-05-18T14:23:51.313Z · LW(p) · GW(p)

Agreed that LW is slowly losing steam, though. Not sure what should be done about it.

To have a website with content like the original Sequences, we need someone who (a) can produce enough great content, and (b) believes that producing content for a website is the best use of their time.

It already sounds like a paradox: the more rational and awesome a person is, the more likely it is that they can use their time much better than writing a blog.

Well, unless they use the blog to sell something...

I think Eliezer wrote the original Sequences pretty much to find people to cooperate with him at MIRI, and to make people more sympathetic and willing to send money to MIRI. Mission accomplished.

What would be the next mission (for someone else) which could be accomplished by writing interesting articles to LW?

comment by Cariyaga · 2015-05-18T10:11:59.347Z · LW(p) · GW(p)

If Less Wrong is, indeed, losing steam as a community (I wouldn't have considered myself part of it until recently, and hadn't kept up with it before then), there are options to deal with it.

First, we could create enjoyable media to be enjoyed by large quantities of people, with rationalistic principles, and link back to Less Wrong in it. HPMOR is already a thing, and certainly does well for its purpose of introducing people to and giving some basic instruction in applied rationality. However, as it's over, the flow of people from the readership it generated has ceased.

Other media is a possibility. If people are interested in supporting Less Wrong and CFAR specifically, there could perhaps be a YouTube channel made for it; maybe streaming live discussions and taking questions from the audience. Non-video means are also, obviously, possible. Webcomics are somewhat niche, but could drive readership if a high-quality one were made. I'm loath to suggest getting already-established content creators to read and support Less Wrong, partially because of my own reticence in such, and partially because of a host of problems that would come with that, as our community is somewhat insular, and though welcoming in our own way, Less Wrong often comes off to people as arrogant or elitist.

On that note, while I would not suggest lowering our standards for discourse, I think that in appealing to a larger community it's necessary to realize that newer members may not have the background necessary to take the criticisms given constructively. I'm not sure how to resolve this problem. Being told to "go and read such and such, then you'll understand" comes off rudely. Perhaps some form of community primer link on the front page, regarding customs here? The about page is a little cluttered and not entirely helpful. That, in addition to a marker next to someone's name indicating they're new to Less Wrong, could do a lot to help. Furthermore, a section for the "younger" (in terms of account age) posters, with encouragement for the older ones to come in and help out, may be of help.

Well, I could go on for a while longer, but I think that's enough of a thought dump for now.

Replies from: None, John_Maxwell_IV
comment by [deleted] · 2015-05-18T16:11:12.652Z · LW(p) · GW(p)

Your attitude to informational videos is: [pollid:979]

Replies from: ChristianKl, Cariyaga
comment by ChristianKl · 2015-05-18T17:30:52.173Z · LW(p) · GW(p)

There's some research that suggests that videos that actually help people to learn aren't pleasant to watch. http://chronicle.com/article/Confuse-Students-to-Help-Them/148385/

"It seems that, if you just present the correct information, five things happen," he said. "One, students think they know it. Two, they don’t pay their utmost attention. Three, they don’t recognize that what was presented differs from what they were already thinking. Four, they don’t learn a thing. And five, perhaps most troublingly, they get more confident in the ideas they were thinking before."

If the student feels confused by the video, they are more likely to actually update.

The kind of informational videos that are popular aren't useful for learning and vice versa.

comment by Cariyaga · 2015-05-18T16:46:14.337Z · LW(p) · GW(p)

I voted other. The reason I suggested nontextual formats is that I don't believe rationality can be taught solely through text, even if I personally prefer to learn that way. I have multiple friends who do not learn well at all in such a manner but who, I believe, would learn much more effectively from a video; I suspect this extends to others, for whom the text-dump nature of this site might be intimidating.

comment by John_Maxwell (John_Maxwell_IV) · 2015-05-18T12:36:48.056Z · LW(p) · GW(p)

I'm not sure about webcomics or Youtube videos. LW is full of essays on abstract philosophical topics; if you don't like reading, you're probably not going to get much out of it. I think the biggest ways for people to help LW are:

  • Write quality posts. There are a bunch of suggestions in this FAQ question.

  • Share Less Wrong posts with thoughtful people who will find them interesting. Think Facebook friends, your favorite subreddit, etc. Ideally people who are even smarter than you are.

Improving the about page is also high-leverage. I encourage you to suggest concrete changes or simply ignore the existing one and write an alternative about page from scratch so we can take the best ideas from each.

Replies from: Cariyaga
comment by Cariyaga · 2015-05-18T17:00:01.846Z · LW(p) · GW(p)

Certainly, writing high quality posts is essential for improving on what we already do well, but as I mentioned in a reply above, not everyone learns best -- or at all effectively -- that way. To be clear, I'm not suggesting we do any less of that, but I think that we may be limiting ourselves somewhat by producing only that style of content. I think that we would be able to get more people interested in Less Wrong by producing non-textual content as well.

I will note, however, that when I suggested webcomics, I wasn't specifically intending a webcomic about Less Wrong (although one about biases in general could work quite well!) so much as one written by someone from Less Wrong, with a rationalist bent, to get people interested in it. Although, admittedly, going at it with that goal in mind may produce less effective content.

Regarding improving the about page, the main thing that jumped out to me is that there seem to be far too many hyperlinks. My view of the About page is that it should be for someone just coming into Less Wrong, from some link out there on the net, with no clue what it is. Therefore, there should be less example in the form of a list of links, and more explanation as to what Less Wrong's function is, and what its community is like.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-19T03:06:17.615Z · LW(p) · GW(p)

If someone wants to create a rationalist webcomic, Youtube channel, etc. I'm all for that.

I did the current About page. I put in a lot of links because I remembered someone saying that people tend to get into Less Wrong when they read a particular article that really resonates with them, so I figured I would put in lots of links so that people might find one that resonates. Also, when I come across a new blog that seems interesting, I often look over a bunch of posts trying to find the gems, and providing lots of links seems like it would facilitate this behavior.

What important info about LW's function/community would you like to see on the about page?

comment by SanguineEmpiricist · 2015-05-18T23:58:19.391Z · LW(p) · GW(p)

Part of the reason it is losing steam is that a small number of posters post wayyyy too much, using up everyone's time, while hardly contributing anything. Too many contrarians.

We have a lot of regular haters that could use some toning down.

comment by John_Maxwell (John_Maxwell_IV) · 2015-05-18T12:22:20.945Z · LW(p) · GW(p)

It's true that Less Wrong has a reputation for crazy ideas. But as long as it has that reputation, we might as well continue posting crazy ideas here, since crazy ideas can be quite valuable. If LW was "rebooted" in some other form, and crazy ideas were discussed there, the new forum would probably acquire its own reputation for crazy ideas soon enough.

The great thing about LW is that it allows a smart, dedicated, unknown person to share their ideas with a bunch of smart people who will either explain why they're wrong or change their actions based on them relatively quickly. Many of LW's former major contributors have now independently acquired large audiences that pay attention to their ideas, so they don't need LW anymore. But it's very valuable to leave LW open in order to net new contributors like Nate Soares (who started out writing book reviews for LW and was recently promoted to be MIRI's executive director). (Come to think of it, lukeprog was "discovered" through Less Wrong as well... he went from atheist blogger to LW contributor to MIRI visiting fellow to MIRI director.)

Consider also infrequent bloggers. Kaj Sotala's LW posts seem to get substantially more comments than the posts on his personal blog. Building and retaining an audience on an independent blog requires frequent posting, self-promotion, etc... we shouldn't require this of people who have something important to say.

comment by raydora · 2015-05-18T03:38:44.582Z · LW(p) · GW(p)

I recently joined this site after lurking for a while. Are blog contributions of that sort the primary purpose of Less Wrong?

It seems like it fulfills a niche that the avenues you listed do not: specifically, in the capacity of a community rather than an individual, academic, or professional endeavor.

There are applications of rational thought present in these threads that I don't see gathered anywhere else. I'm sure I'm missing something here, but could viewing Less Wrong as a potential breeding ground for contributors of that kind be useful?

I realize it's a difficult line to follow without facing the problems inherent to any community, especially one that preaches a Way.

I haven't encountered the rationalist tumblr scene. Is such a community there?

comment by [deleted] · 2015-05-18T11:07:00.919Z · LW(p) · GW(p)

Eh, it is just useful to have a generic discussion forum on the Internet with a high average IQ and a certain culture of epistemic sanity / trying to avoid at least the worst fallacies and biases. If, out of the many ideas in the sequences, at least "tabooing" got out into the wild, so that people on other forums would get more used to discussing actual things instead of labels and categories, it could become bearable out there. For example, you can hardly have a sane discussion on economics.reddit.com because labels like capitalism and socialism are used as rallying flags.

comment by lululu · 2015-05-18T16:04:14.140Z · LW(p) · GW(p)

When should a draft be posted in discussion and when should it be posted in LessWrong?

I just wrote a 3000+ word post on science-supported/rational strategies for getting over a break-up, and I'm not sure where to put it!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-05-18T17:47:22.726Z · LW(p) · GW(p)

Do you mean whether it should be posted to Discussion or Main?

You can post it to Discussion. It might get promoted to Main. I'm not sure who makes those decisions.

You can post it to Main, and take your chances on it being downvoted.

You can post a link to it, and see if you get advice on where you should post it.

Replies from: lululu
comment by lululu · 2015-05-18T20:17:06.266Z · LW(p) · GW(p)

OK, thank you. This is my first LessWrong post. I posted to discussion, hopefully it will find its place.

comment by gwern · 2015-05-22T22:49:45.337Z · LW(p) · GW(p)

A comment about some more deep learning feats:

Interestingly, they initialise the visual learning model using the ImageNet images. Just 3 years ago that was considered a pretty much intractable problem, and now the fact that a CNN works on it well enough to be useful isn't even worth a complete sentence.

(Background on ImageNet recent progress: http://lesswrong.com/lw/lj1/open_thread_jan_12_jan_18_2015/bvc9 )

comment by D_Malik · 2015-05-19T18:08:37.057Z · LW(p) · GW(p)

Clicking on the tag "open thread" on this post only shows open threads from 2011 and earlier, at "http://lesswrong.com/tag/open_thread/". If I manually enter "http://lesswrong.com/r/discussion/tag/open_thread/", then I get the missing open threads. The problem appears to be that "http://lesswrong.com/tag/whatever/" only shows things posted to Main. "http://lesswrong.com/r/all/tag/open_thread/" seems to behave the same as "http://lesswrong.com/tag/open_thread/", i.e. it only shows things posted to Main, despite the "/r/all". Going to "article navigation → by tag" also goes to an open thread from 2011, so it seems to also ignore things posted to Discussion.

comment by gjm · 2015-05-19T08:55:26.743Z · LW(p) · GW(p)

It looks like someone downvoted about 5 of my old comments in the last ~10 hours. (Not recent ones that are still under any kind of discussion, I think. I can't tell which old ones.)

I mention this just in case others are seeing the same; I suspect Eugine_Nier/Azathoth123 has another account and is up to his old mass-downvoting tricks again. (I actually have a suspicion which account, too, but nowhere near enough evidence to be making accusations.)

Replies from: Gram_Stone, Dahlen, skeptical_lurker, Good_Burning_Plastic, Dorikka
comment by Gram_Stone · 2015-05-19T10:39:53.027Z · LW(p) · GW(p)

Another data point: someone would downvote every comment I made up until April 1st. Not sure if I successfully signalled my 'rationality' or if I successfully signalled that I'm not going away.

comment by Dahlen · 2015-05-22T10:53:47.918Z · LW(p) · GW(p)

Same here; in fact I've been keeping an eye on that account for a while, and noticed when you expressed your complaints about downvoting in a discussion with him recently. There's no apparent sign of the sheer downvote rampages of old so far; if we're right, he's been a little more careful this time around about obvious giveaways (or maybe it's just the limited karma)... Alas, old habits die hard.

I'm not even sure anyone can do anything about it; LessWrong is among those communities that are vulnerable to such abuses. Without forum rules relating to member conduct, without a large number of active moderators, without a culture of holding new members under close scrutiny until they prove themselves to bring value to the forum, but with a built-in mechanism for anyone to disrupt the forum activity of anyone else...

Replies from: gjm
comment by gjm · 2015-05-22T11:54:11.451Z · LW(p) · GW(p)

It's interesting that you're confident of which account it is; I didn't say. I had another PM from another user, naming the same account (and giving reasons for suspicion which were completely different from mine). So: yeah, putting this all together, either it's him again or there are a whole bunch of similarities sufficient to ring alarm bells independently for multiple different users.

I don't see any need for anyone to swing the banhammer again unless he starts being large-scale abusive again, in which case no doubt he'll get re-clobbered. Perhaps by then someone will have figured out how to get his votes undone. (In cases where someone's been banned for voting abuses, come back, and done the same things again, I would be in favour of simply zeroing out all votes by the revenant account.)

comment by skeptical_lurker · 2015-05-19T18:45:29.386Z · LW(p) · GW(p)

I think Azathoth is back too, and I think I know which account, but I don't get the impression that the mass-upvote sockpuppets that were suspected of helping his previous incarnations are active.

I think there should be simple ways to combat this sort of problem anyway. For a start, people's accounts could list the percentage of upvotes they give in the same way they currently list the percentage of upvotes they receive. Limits could be put on the number of downvotes you can issue by saying that they cannot exceed your karma (or a multiple thereof).

This problem has been encountered before in places like reddit - how did they deal with it there?

Replies from: Gurkenglas, Lumifer
comment by Gurkenglas · 2015-05-20T00:33:02.450Z · LW(p) · GW(p)

Wouldn't they just mass-upvote random posts not from that person?

comment by Lumifer · 2015-05-19T18:56:03.435Z · LW(p) · GW(p)

people's accounts could list the percentage of upvotes you give

And what exactly would you infer from this metric?

As far as I know solely downvoting the posts you don't like and never upvoting anything is fully within the rules.

Limits could be put on the amount of downvotes you can issue by saying that they cannot exceed your karma

Such limits exist and are in place, I think.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-05-19T19:18:08.536Z · LW(p) · GW(p)

And what exactly would you infer from this metric?

As far as I know solely downvoting the posts you don't like and never upvoting anything is fully within the rules.

You would infer that they are a very critical person, I suppose.

Replies from: Lumifer
comment by Lumifer · 2015-05-19T19:42:16.619Z · LW(p) · GW(p)

You would infer that they are a very critical person, I suppose.

Actually, would you? This is an interesting inference/rationality question. If someone's voting history has 900 downvotes and 100 upvotes, then yes, it looks reasonable to conclude that this is a very critical person with high standards. But what if a voting history contains 1000 downvotes and no upvotes at all?

I would probably decide that this person has some rules (maybe set by herself for herself) which prevent her from upvoting. And in such a case you can't tell whether she's highly critical or not.

Replies from: Viliam
comment by Viliam · 2015-05-20T08:39:31.911Z · LW(p) · GW(p)

If someone's voting history has 900 downvotes and 100 upvotes...

The important thing would be who received those 900 downvotes. I am not sure about the exact formula, but the first approximation is whether the set of 900 comments downvoted by user X would correlate more with "what other people downvoted" or with "who wrote those comments". That is, how much the user has high standards vs how much is a personal grudge.

To some degree "what other people downvoted" and "who wrote those comments" correlate with each other, because some people are more likely to write good comments, and some people are more likely to write bad comments. The question would be whether the downvoting patterns of user X correlate with "who wrote that" significantly more strongly than the downvoting patterns of an average user.

(Of course, any algorithm, when made public, can be gamed. For example, detection by the algorithm as described above could be avoided by a bot who would (a) upvote every comment that already has karma 3 or more, unless the comment author is in the "target" list; (b) downvote every comment that already has karma -3 or less, and (c) downvote every comment whose author is in the "target" list. The first two parts would make the bot profile seem similar to the average user, if the detection algorithm ignores the order of votes for each comment.)
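
A minimal sketch of the first-approximation test above, assuming we had access to raw vote records (the record format here is hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical vote records: (voter, comment_id, comment_author, direction),
# with direction +1 for an upvote and -1 for a downvote.
def grudge_vs_standards(votes, user):
    """First approximation: does `user`'s downvote set track comment
    authorship (a grudge) more than it tracks community judgment
    (high standards)?"""
    my_downvotes = [(cid, author) for v, cid, author, d in votes
                    if v == user and d == -1]
    if not my_downvotes:
        return None
    # How concentrated are the downvotes on a single author?
    by_author = Counter(author for _, author in my_downvotes)
    author_concentration = max(by_author.values()) / len(my_downvotes)
    # How often does the rest of the forum agree? (Net score of each
    # downvoted comment, excluding `user`'s own vote.)
    score = defaultdict(int)
    for v, cid, _, d in votes:
        if v != user:
            score[cid] += d
    agreement = (sum(1 for cid, _ in my_downvotes if score[cid] < 0)
                 / len(my_downvotes))
    # Both numbers would have to be compared against forum-wide averages.
    return author_concentration, agreement
```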

Replies from: Lumifer
comment by Lumifer · 2015-05-20T14:52:31.513Z · LW(p) · GW(p)

the first approximation is whether the set of 900 comments downvoted by user X would correlate more with "what other people downvoted" or with "who wrote those comments". That is, how much the user has high standards vs how much is a personal grudge.

That doesn't look like a good approach to me. Correlating with "what other people downvoted" doesn't mean "high standards" to me, it means "follows the hivemind".

Imagine a forum which is populated by representatives of two tribes, Blue and Green, and moreover 90% of the forum participants are Green and only 10% are Blue. Let's take Alice who's Blue -- her votes will not be positively correlated with other people's votes for obvious reasons. You're thinking about a normative situation where people should vote based on ill-defined "quality" of the post, but from a descriptive point of view people vote affectively, even on LW.

I think what you want is fairly easy to define without correlations (a rough sketch follows the list below). You are looking for a voting pattern that:

  • Stems from a single account (or a small number of them)
  • Is targeted at a single account (or a small number of them)
  • Has a large number of negative votes in a short period of time
  • Targets old posts, often in a particular sequence that matches the way software displays comments
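
A rough sketch of a detector along those lines (the thresholds and the vote-record schema are invented for illustration):

```python
from collections import defaultdict

WINDOW = 3600               # "short period of time": one hour (arbitrary)
MIN_STREAK = 10             # downvotes on one target within that window
OLD_POST = 30 * 24 * 3600   # "old posts": older than ~a month (arbitrary)

def flag_mass_downvoters(votes):
    """votes: dicts with keys 'voter', 'target_author', 'time', 'post_age'
    and 'direction'. Returns (voter, target_author) pairs showing many
    downvotes from one account onto one account, in a short time, mostly
    on old posts."""
    streaks = defaultdict(list)
    for v in votes:
        if v["direction"] == -1:
            streaks[(v["voter"], v["target_author"])].append(v)
    flagged = []
    for pair, vs in streaks.items():
        vs.sort(key=lambda v: v["time"])
        for i in range(len(vs) - MIN_STREAK + 1):
            window = vs[i:i + MIN_STREAK]
            burst = window[-1]["time"] - window[0]["time"] <= WINDOW
            mostly_old = (sum(1 for v in window if v["post_age"] > OLD_POST)
                          > MIN_STREAK // 2)
            if burst and mostly_old:
                flagged.append(pair)
                break
    return flagged
```
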
comment by Good_Burning_Plastic · 2015-05-24T19:14:58.052Z · LW(p) · GW(p)

I actually have a suspicion which account, too

Me too. Should I PM you to tell you which one?

Replies from: gjm
comment by gjm · 2015-05-24T19:54:43.005Z · LW(p) · GW(p)

By all means. At this point I'll be quite surprised if you don't suspect the same account as I do! It would be interesting to know your reasons for suspicion, too.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-05-25T09:19:54.027Z · LW(p) · GW(p)

PM sent.

comment by Dorikka · 2015-05-24T18:15:18.535Z · LW(p) · GW(p)

If I remember correctly, NancyLebovitz is the forum moderator; she might have the means and willingness to look into this kind of thing, and take action if needed.

comment by Adam Zerner (adamzerner) · 2015-05-24T00:55:34.727Z · LW(p) · GW(p)

Some unrefined thoughts on why rationalists don't win + a good story.

Why don't rationalists win?

1) As far as being happy goes, the determinants of that are things like optimism, genetics, good relationships, sense of fulfillment etc. All things you could easily get without being rational, and that rationality doesn't seem too correlated with (there's probably even a weak-moderate negative correlation).

2) As far as being right goes (epistemic rationality), well people usually are wrong a lot. But people have an incredible ability to compartmentalize, and people often exhibit a surprising degree of rationality in their domain of expertise. And also, you could often do a solid job of being right without much rationality - heuristics go a long way.

Story:

  • There's a pool of water around the base of my toilet, and I'm sitting there like an idiot trying to use the scientific method to deduce the cause.
  • I figured out that it only shows up when I turn the shower on. Not when I flush the toilet, and not when it's idle.
  • I closed my shower curtains as best I could, and didn't observe any water coming from the shower head and landing near the toilet. Additionally, the pool of water around the toilet was only around the toilet base. The area between the shower and the pool of water was dry. So it didn't seem that it was dripping down the bath tub and drifting to the shower base (especially because the floor is flat).
  • So, I was pretty confident that there was some sort of damage to the pipes that caused water to come out from under the toilet when I turned the shower on.
  • I called the repair guy. He did his thing, and concluded that the pipes were fine, and that the water must have been coming from the shower head. I told him my theories, and he smiled and didn't change his conclusion. It turns out he was right. There's this really thin stream of water that is coming out from the shower head, splashing, and causing the problem. I had previously considered this (briefly), but thought that my shower curtains were sealed enough to prevent this. But it turns out that there's this little crease that it's getting through.
  • Final score: Repair Guy - 1. Rationalist - 0.

So then - why rationality?

  • Simple: it drastically raises the ceiling of how much we could accomplish in each of these fields (instrumental and epistemic).
  • Also, in theory, it shouldn't have any costs associated with it. You should still be able to benefit from the same heuristics and the same happiness indicators as an irrational person (actually, there's probably some that you have to sacrifice by being a rationalist).
Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-05-24T01:09:24.165Z · LW(p) · GW(p)

All things you could easily get without being rational, and that rationality doesn't seem too correlated with (there's probably even a weak-moderate negative correlation).

  • I sense that a common rationalist perspective is to pay a lot more attention to the bad things, and not to be satisfied with the good. More generally, this seems to be the perspective of ambitious people.
  • Rationalists don't seem to be able to derive as much joy from interaction with normal people, and thus probably struggle to find strong relationships.
Normal people seem to derive a sense of fulfillment from things that they probably shouldn't. For example, my Uber driver was telling me how much fulfillment she gets from her job, and how she loves being able to help people get to where they're going. She didn't seem to be aware of how replaceable she is. She wasn't asking the question "what would happen if I wasn't available as an Uber driver?" Or "what if there was one less Uber driver available?"

I should note that none of this is desirable, and that someone who's a perfect rationalist would probably do quite well in all of these areas. But I think that Reason as a memetic immune disorder applies here. It seems that the amount of rationality that is commonly attained often acts as an immune disorder in these situations.

comment by Adam Zerner (adamzerner) · 2015-05-20T04:01:33.309Z · LW(p) · GW(p)

In thinking/talking to people, it's too hard to be comprehensive, so I usually simplify things. The problem is that I feel pressure to be consistent with what I said, even though I know it's a simplification.

This sorta seems like an obvious thing to say, but I get the sense that making it explicit is useful. I notice this to be a moderate-big problem in myself, so I vow to be much much much better at this from now on (I'm annoyed that I fell victim to it at all).

Replies from: MrMind
comment by MrMind · 2015-05-20T07:55:01.625Z · LW(p) · GW(p)

This sorta seems like an obvious thing to say, but I get the sense that making it explicit is useful.

It might not be. Any further explanation is costly, and would be welcomed only if the subject is really interested in the topic. If not, you would come across as pedantic and boring.

The problem is that I feel pressure to be consistent with what I said, even though I know it's a simplification.

I think that you should learn to resist the pressure. It's very rare that someone will call you out for some inconsistency, even blatant. It's quite amazing, actually. In the rare cases where someone does call you out, you can just offer further explanation, if you care to.

comment by [deleted] · 2015-05-19T11:35:02.074Z · LW(p) · GW(p)

If using multiple screens at work has made you more productive, care to give an example or two of what you put on one and the other, and how they interact? Perhaps also negatives: in what situations does it not help?

Hypothesis: they only work for transformation-type work, e.g. translation, where you read a document in one and translate in another; or read a spec in one and write code to implement it in another; or at any rate where the output you generate is strongly dependent on an input that you need to keep referring to.

I actually borrowed a TV as a second screen because I need to re-create the layouts of document reports from an old accounting package in a new one. So it is handy to have the example on the TV while I work on the new one. Of course, a printout on a music stand would work just as well...

Replies from: gjm, MSwaffer, OrphanWilde, Vaniver, wadavis, Unknowns, shminux, None, sixes_and_sevens
comment by gjm · 2015-05-19T13:22:44.230Z · LW(p) · GW(p)

At work:

Software development: text editors (or IDE) on one screen, terminal/command-prompt window(s) for building, running tests, etc., on another.

Exploratory work in MATLAB: editor(s) and MATLAB figure windows (plots, images, ...) on one screen, MATLAB command window on another.

I use virtual desktops as well as multiple monitors, so things like email and web browser are over in another universe and less distracting. (This does have the downside that when I'm, say, replying to something on Less Wrong, my work is over in another universe and less distracting.) So are other things (e.g., documents being written, to-do lists, etc.).

Of course things may get moved around; e.g., if I'm writing a document based on some technical exploration then I may want a word processor coexisting with MATLAB and a web browser.

At home: email on one monitor, web browser on another. (And all kinds of other things on other virtual desktops.)

Replies from: None
comment by [deleted] · 2015-05-19T13:30:38.658Z · LW(p) · GW(p)

Hm, so we have two cases now, thanks:

  • Read on S1 -> think -> write on S2
  • Write on S1, execute / do other things with what is written on S2

A third case, such as web browser and email, does not sound that useful to me, but it at least forces you to move your neck, which is actually good - lower chance of getting stiff and painful from staring ahead unmoving for hours. Actually, I wonder if from this angle, encouraging motion, we should put another one on the floor and one on the ceiling :) If neither money nor work productivity were a huge issue, the most healthy setup would be robotic arms rearranging screens around you every few minutes in 3D, encouraging regular movement along all axes.

Replies from: gjm
comment by gjm · 2015-05-19T13:53:38.396Z · LW(p) · GW(p)

web browser and email

Sometimes useful: e.g. get email saying "hey, look at this interesting thing on the web", or "could you please do X" where X requires buying something online. Or see something interesting on the web and send an email to a friend about it. But yeah, it's not hugely valuable. (I have two monitors on my home machine because sometimes I do more serious things on it and they're useful then. And because there was a spare monitor going cheap so I thought I might as well.)

robotic arms rearranging screens around you

If money and productivity were that little an issue, why would you be sat at this contraption in the first place?

Replies from: None
comment by [deleted] · 2015-05-19T14:35:02.612Z · LW(p) · GW(p)

Good question. Actually - it might not even reduce productivity. Suppose you put a terminal where you run commands on average every ten minutes on one such screen, positioned on a fully 3D-positionable robotic arm. You lose maybe 2 seconds finding out whether this time it is over your left shoulder or up on the ceiling. But the improved blood flow from the movement could improve your cognitive skills, and maybe being forced into a 3D all-around situational awareness "awakens the ancestral hunter", i.e. improves awareness, focus and concentration. A good example is driving a car. It tends to put me in a focused mode.

But, lacking that, at least having some neck movement between screens must be a good thing.

Replies from: Lumifer
comment by Lumifer · 2015-05-19T14:48:13.358Z · LW(p) · GW(p)

Have you read Stephenson's REAMDE? It describes in detail an interesting working setup... :-)

comment by MSwaffer · 2015-05-19T19:49:26.101Z · LW(p) · GW(p)

I have 2 desks in my office, both with multiple screen layouts. Your question made me think about how I use them and it comes down to the task I am performing.

Like others, when I am programming I typically have an IDE where I am doing work on one and a reference open on another. When doing web development my third monitor usually has a browser where I can immediately refresh my work to see results, for other development it may be a virtual machine or remote desktop that I am logged into.

When I am doing academic work, I often have EndNote (reference manager) on one monitor, the document I am writing on another and the documents I am finding / reading on the third.

Since my two desks are next to each other, I often "borrow" a monitor from the other setup to keep communication windows open (Skype, Lync, Hangouts, Slack etc.). This allows me to keep in touch with coworkers and colleagues without having to flip windows every time I get a message.

So I would say there are three purposes identified:

  • Active Work
  • Reference Material
  • Communication
comment by OrphanWilde · 2015-05-19T17:49:30.673Z · LW(p) · GW(p)

I put source code/IDE/logging output in one, and the program I'm running in the other, particularly when debugging a program; running in debug mode or watching logs is simpler.

I also put remote desktops in a separate screen, often copying the contents of configuration files over as text, as I don't generally get the ability to drag files directly into environments (people who prevent direct copying of files or dragging and dropping, your security is getting in the way without providing any security - Base64 encoding exists).

Otherwise I will have social applications open in one (e-mail application, chats with clients, etc), and my actual work in the other.

comment by Vaniver · 2015-05-19T17:34:51.457Z · LW(p) · GW(p)

I of course do much of the "work on A, reference on B" that others have talked about--the IDE open on one screen and the documentation open on the other--but it's also worth pointing out the cases where there are multiple pieces of reference material that I'm trying to collide somehow, and having both of them open simultaneously is obviously incredibly useful.

comment by wadavis · 2015-05-19T14:11:40.000Z · LW(p) · GW(p)

The typical theme is reference material on one screen, and working material on the other screen. The equivalent of having all your reference material open on your desk so you are not flipping back and forth through notes.

Edit: Read The Intelligent Use of Space by David Kirsh as recommended by this LessWrong post.

comment by Unknowns · 2015-05-19T12:41:19.860Z · LW(p) · GW(p)

I work with multiple screens and I estimate that I save between 20 minutes and one hour per day in comparison to using only one. I do financial work and examples would be: Quickbooks open on one screen and an internet bank account open on the other; or the account open on one page and some financial pdf open on the other; or similar things.

Replies from: None
comment by [deleted] · 2015-05-19T13:05:25.081Z · LW(p) · GW(p)

So: read on screen 1 -> thought and transformational work -> write on screen 2?

comment by shminux · 2015-05-21T06:20:55.443Z · LW(p) · GW(p)

3 monitors, 1 for a browser, 1 for IDE, 1 for misc stuff, like watching syslog messages, file manager, etc.

comment by [deleted] · 2015-05-20T01:24:51.467Z · LW(p) · GW(p)

One screen (small square monitor I found for free) is often filled up with my matlab data files and matlab command window. The other (large) contains some combination of figures generated by my matlab scripts from my yeast data (constantly popping in and out), analysis I am writing, and scripts I am editing.

(I should really map out the dependencies of all my scripts sometime...)

When things are slower the small monitor often contains the live feed from the space station.

comment by sixes_and_sevens · 2015-05-19T23:31:49.788Z · LW(p) · GW(p)

I don't know how common this is, but with a dual-monitor setup I tend to have one in landscape and one in portrait. The portrait monitor is good for things like documents, or other "long" windows like log files and barfy terminal output. The landscape monitor is good for everything that's designed to operate in that aspect ratio (like web stuff).

More generally, there's usually something I'm reading and something I'm working on, and I'll read from one monitor, while working on whatever is in the other.

At work I make use of four Gnome workspaces: one which has distracting things like email and project management gubbins; one active work-dev workspace; one self-development-dev workspace; and one where I stick all the applications and terminals that I don't actively need to look at, but won't run minimised/headlessly for one reason or another.

comment by [deleted] · 2015-05-22T15:06:52.202Z · LW(p) · GW(p)

How do other people study? I'm constantly vacillating between the ideas of taking notes and making flashcards, or just making flashcards. I'd like to study everything the same way, but it seems like for less technical subjects like philosophy making flashcards wouldn't suffice and I'd need to take notes. For some reason the idea of taking notes for some subjects but not others is uncomfortable to me. And I'm also stuck between taking notes on the literature I read or just keeping a list. It's getting to the point where I don't even study or read anymore because I feel like I need to figure out the best way first.

Ideally I want to take no notes whatsoever and just make flashcards in Anki, since it's quicker and I never look back at notes anyway, but I'm paranoid that I'll be doing things sub-optimally. Does anyone have any suggestions for what to do? I mostly study math and science.

Replies from: estimator, Dorikka, OrphanWilde
comment by estimator · 2015-05-22T18:09:41.677Z · LW(p) · GW(p)

I believe that both making notes and making flashcards are suboptimal; the best (read: fastest) method I know is to read and understand what you want to learn, then close your eyes and recall everything in full detail (that is hard, and somewhat painful; you should try to remember something for at least a few minutes before giving up). Re-read whatever you haven't remembered. Repeat until convergence.

In math, it helps to solve problems and find counterexamples to theorem conditions, because it leads to deeper understanding, which makes remembering significantly easier. Also try to make as many connections as possible to already-known facts and possible applications: our memory is associative.

comment by Dorikka · 2015-05-22T17:57:23.249Z · LW(p) · GW(p)

If possible, I like to allocate full attention to listening to the lecturer instead of dividing it between listening and taking notes. However, this isn't always feasible. It helps if there is a slidepack or something similar that will be available afterwards. Most of the time, I'm trying to build a mental construct for how all of the things that I'm learning fit together. Depending on the difficulty of the material, I may need to begin creating this construct pretty soon so I can understand the material, or it may be able to wait until pretty close to the exam. (If I'm not having to take notes, I can start doing it in class, which is more efficient and effective.)

I try to fill in the gaps in my mental model with a list of questions to ask in office hours. In the process, the structure of the material becomes a bit more evident. Is it very interconnected, either in a logical or physical sense? Is it something that seems to be made of arbitrary facts? If the latter, and the material is neither interesting nor useful nor required, I will be tempted to drop the class. If it is interesting or useful, facts stick much better, as I can think about how I can use them, how they help me understand things in such a manner that I can more easily affect them. Not sure that I personally have found many classes interesting but not useful if they lack a structure. If neither, but required, I prefer creating a structure that helps me link things together in a way that will help me remember them. A memorable example was committing a picture of the amino acid table to memory and then stepping through it vertically, horizontally, diagonally to make it stick. A structure that can be useful here is to repeat all past memorized items when memorizing a list. So A, then AB, then ABC, and so on.

I like pictures, lists, and cheat sheets (often worth making for me to help mental organization, even if I can't take them into a test) for the facts that don't fit in my mental model, or just as redundancy. Otherwise, I tend to mainly work by trying to get an understanding of the relationships between, and use cases of, the concepts and methods (sometimes by outlining the class on paper), and then using practice tests to highlight gaps.

comment by OrphanWilde · 2015-05-22T15:21:22.353Z · LW(p) · GW(p)

Focus on grokking, on understanding, rather than remembering.

comment by Tenoke · 2015-05-21T14:36:23.067Z · LW(p) · GW(p)

Apparently the new episode of Morgan Freeman's Through the Wormhole is on the Simulation Hypothesis.

comment by Fluttershy · 2015-05-19T06:10:07.962Z · LW(p) · GW(p)

Epistemic status: unlikely that my proposal works, though I am confident that my calculations are correct. I'm only posting this now because I need to go to bed soon, and will likely not get around to posting it later if I put it off until another day.

Does anyone know of any biological degradation processes with a very low energy of activation that occur in humans?

I was reading over the "How Cold Is Cold Enough" article on Alcor's website, in which it is asserted that the temperature of dry ice (-78.5 C, though they use -79.5 C) isn't a cold enough temperature to store cryonicists at long-term. The article is generally well written, and the calculations are correct, with one partial exception that I'll point out in a minute.

Specifically, the article says that:

I am going to be pessimistic, and choose the fastest known biological reaction, catalase. I'm not going to get into detail, but the function of the enzyme catalase is protective. Some of the chemical reactions that your body must use have extraordinarily poisonous by-products, and the function of catalase is to destroy one of the worst of them. The value for its E is 7,000 calories per mole-degree Kelvin.

However, for computing k(T1)/k(T2), i.e. the ratio of rate constants at different temperatures, the pessimism behind the assumption that E = 7,000 cal/mol may be causing Alcor to incorrectly conclude that dry ice can't be used for cryopreservation. We can perform a manipulation of the Arrhenius equation, similarly to what is done in Alcor's post:

k(T1)/k(T2) = exp(-(E/R)(1/T1 - 1/T2))

Where T1 is 310.16 K (37 C), T2 is the temperature of corpse storage (such as 194.66 K), E is the activation energy, R is the ideal gas constant, and k(T1)/k(T2) is the rate of a chemical reaction at T1 divided by the rate of the same reaction at T2.

One can see that if an activation energy other than 7,000 cal/mol is used for E, the new ratio of rate constants equals the old ratio raised to the power (new activation energy / old activation energy). So, if the fastest biological degradation process which humans experience at the temperature of dry ice has an activation energy of, say, 21,000 cal/mol, then, since the k(T1)/k(T2) calculated with E = 7,000 cal/mol is equal to 844.4 at the temperature of dry ice, the k(T1)/k(T2) calculated with E = 21,000 cal/mol at the same temperature would be 844.4^(21,000/7,000) = 844.4^3 = 6.02 * 10^8.

This means that the amount of degradation the body of a person stored on dry ice would experience over 6.02 * 10^8 seconds, i.e. 19 years, is equivalent to the amount of degradation it would experience in 1 second at body temperature (37 C), given the completely hypothetical change in activation energy stated above.

Of course, the number I got above was only as large as it was because I was seeing what happened if the activation energy tripled. Hence my reason for asking if anyone knew of any biological degradation processes with a very low energy of activation.
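
A quick numerical check of the above, using the gas constant R = 1.987 cal/(mol*K):

```python
import math

R = 1.987  # gas constant, cal/(mol*K)

def rate_ratio(E, T1=310.16, T2=194.66):
    """k(T1)/k(T2) from the Arrhenius equation, with activation energy E
    in cal/mol, body temperature T1 and dry-ice temperature T2 in kelvin."""
    return math.exp(-(E / R) * (1 / T1 - 1 / T2))

print(rate_ratio(7000))    # ~844, Alcor's pessimistic catalase figure
print(rate_ratio(21000))   # ~6.0e8, the hypothetical tripled value above
```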


There may be assumptions which the Arrhenius equation makes that I'm not considering here, e.g. the assumption that fast mixing is occurring, and the assumption that the activation energy is constant across a wide range of temperatures.

comment by ChristianKl · 2015-05-18T17:17:18.407Z · LW(p) · GW(p)

According to the official story Pakistan didn't know about Osama Bin Ladin's location at the time of his death.

What is your credence that the official story is true about that claim? (Answer as a probability between 0 and 1.) [pollid:980]

Replies from: James_Miller, ike, bogus
comment by James_Miller · 2015-05-18T17:27:41.309Z · LW(p) · GW(p)

Define Pakistan.

Replies from: ChristianKl
comment by ChristianKl · 2015-05-18T17:39:51.185Z · LW(p) · GW(p)

At least one of Ashfaq Parvez Kayani (chief of the military), Ahmad Shuja Pasha (director of the ISI) and Asif Ali Zardari (Pakistani president) knew about it.

comment by ike · 2015-05-18T18:34:36.599Z · LW(p) · GW(p)

According to the official story Pakistan didn't know about Osama Bin Ladin's location at the time of his death.

Isn't the official story that the US didn't know that Pakistan knew? As in, it's both possible that Pakistan knew/ didn't know, but the US didn't know one way or another.

I'm assuming you're talking about the US's official story.

Replies from: ChristianKl
comment by ChristianKl · 2015-05-18T18:47:56.436Z · LW(p) · GW(p)

Googling a bit, it seems that various people say different things about the likelihood of Pakistan knowing. If I were to formulate the question again, I might drop the word "official" and ask directly whether Pakistan knew.

I think this question is still okay in this form because it asks directly for whether the respondent of the poll believes Pakistan to have known.

comment by bogus · 2015-05-18T19:43:56.411Z · LW(p) · GW(p)

Pakistan didn't know about Osama Bin Ladin's location at the time of his death.

Given the location where OBL was eventually found, this "official story" is not plausible in the least, and everyone knows that. The only reason for its existence is that nobody wants to 'officially' admit that Pakistan was running a scam on the U.S. by asking for $$$ "to help search for Bin Ladin", and that the U.S. government fell for it for quite a while.

comment by [deleted] · 2015-05-18T15:25:21.778Z · LW(p) · GW(p)

Yesterday, I stumbled upon this reddit comment by the author of the open textbook AI Security, Dustin Juliano. If I understood it correctly, the claim is basically that an intelligence explosion is unlikely to happen, and thus the development of strong AI should be an open, democratic process so that no single person or small circle can gain a considerable amount of power. What is Bostrom's/MIRI's take on this issue?

Replies from: Gram_Stone
comment by Gram_Stone · 2015-05-18T20:16:21.072Z · LW(p) · GW(p)

They're not exactly patrolling Reddit for critics, but I'll bite.

From what I understand, Bostrom's only premise is that intelligent machines can in principle perform any intellectual task that a human can, and this includes the design of intelligent machines. Juliano says that Bostrom takes hard-takeoff as a premise:

The premise of Bostrom's book is based on the assumption that the moment an advanced AI is created that it will overcome the human race within minutes to hours.

He doesn't do that. Chapter 4 of Superintelligence addresses both hard- and soft-takeoff scenarios. However, Bostrom does consider medium- to hard-takeoff scenarios more likely than soft-takeoff scenarios.

Another thing, when he says:

This extraordinary claim is neither explained nor substantiated with any evidence. It is just expected that the reader will take it at face value that this will happen. It's an incoherent premise from a scientific standpoint, but the idea is so sensational that people who don't understand the technical issues behind why that is not going to happen don't care.

There can't be evidence of an intelligence explosion because one hasn't happened yet. But we predict an intelligence explosion because it's based on an extrapolation of our current scientific generalizations. This sort of criticism can be made against anything that is possible in principle but that has not yet happened. If he wanted to argue against the possibility of an intelligence explosion, he would need to explain how it isn't in line with our current generalizations. You have to have a more complex algorithm for evaluating claims than "evidence = good & no-evidence = bad" to get around mistakes like this. He actually sort-of seems to imply that he doesn't think it's in line with our generalizations, when he says "people [...] don't understand the technical issues behind why that is not going to happen", which would be a step in the right direction, but he doesn't actually say anything about where he disagrees.

Also, Bostrom has a whole section in Chapter 14 on whether or not AGI should be a collaborative effort, and he's strongly in favor of collaboration. Race dynamics penalize safety-conscious AGI projects, and collaboration mitigates the risk of a race dynamic. Also, most people's preferences are resource-satiable; in other words, there's not much more that someone could do with a billion galaxies' worth of resources as opposed to one galaxy's worth, so it's better for everyone to collaborate and maximize their chances of getting something (which in this scenario is necessarily a lot) as opposed to taking on a large risk of getting nothing and a small chance of getting a lot more than they would ever probably want.

But this is a very different conception from Juliano's, because I guess Juliano doesn't think that machines could become far more intelligent than any human. His recommendations make sense if you think that strong AI is sort of like really smart computer viruses, and all we need to do is have an open community that collaborates to enact countermeasures like we do with modern computer viruses. But if you think that superintelligent machines are in line with our current generalizations, then his suggestions are wholly inadequate.

Replies from: None
comment by [deleted] · 2015-05-19T07:32:17.947Z · LW(p) · GW(p)

Can you recommend an article about the inner view on intelligence? The outer view seems to be an optimization ability, which I am not sure I buy but won't challenge either; let's say it's accepted as a working hypothesis. But what is it on the inside? Can we say that it is like a machine shop? Where ideas are first disassembled - and this is called understanding them, taking them apart and seeing their connections (Latin: intelligo = to understand) - and then reassembled, e.g. to generate a prediction. Is IQ the size of the door on the shop that determines how big a machine can be brought in for breaking down?

For example, randomly generating hypotheses and testing them, while it may be very efficient for optimization, does not really sound like textbook intelligence. Textbook intelligence must have a feature of understanding, and understanding is IMHO idea-disassembly, model-disassembly. Intelligence-as-understanding (intelligo), interpreted as the ability to understand ideas proposed by other minds and hence as conversational ability, has this disassembly feature.

From this angle one could build an efficient hypothesis-generator-and-tester type optimizer which is not intelligent in the textbook sense, is not too good at "intelligo", and could not discuss Kant's philosophy. I am not sure I would call that AI, and it is not simply a question of terminology: most popular AI fiction is about conversation-machines, not "silent optimizers", so it is important how we visualize it.

Replies from: Gram_Stone
comment by Gram_Stone · 2015-05-19T12:24:04.780Z · LW(p) · GW(p)

I'm having a really hard time modeling your thought process. Like, I don't know what is generating the things that you are saying; I am confused.

I'm not sure what you mean by inner vs. outer view.

Well, IQ tests test lots of things.

Is IQ the size of the door on the shop that determines how big a machine can be brought in for breaking down?

This seems like a good metaphor for working memory, and even though WM correlates with IQ, it's also just one component.

I don't really get what you mean when you say that it's important how we visualize it.

Well, if you take, say, AIXI, which sounds like this sort of hypothesis-testing-optimizer-type AI that you're talking about, AIXI takes an action at every timestep, so if you consider a hypothetical where AIXI can exist and still be unbounded, or maybe a computable approximation in which it has a whole hell of a lot of resources and a realistic world model, one of those actions could be human natural language if it happened to be the action that maximized expected reward. So I'd say that you're anthropomorphizing a bit too much. But AIXI is just the provably-best jack-of-all-trades; from what I understand there could be algorithms that are worse than AIXI in other domains but better in particular domains.
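
For reference, AIXI's action rule is usually written as the following expectimax expression (Hutter's formulation, from memory, so treat the details with some suspicion; m is the horizon, U a universal Turing machine, and l(q) the length of program q):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```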

Replies from: None
comment by [deleted] · 2015-05-19T13:12:34.228Z · LW(p) · GW(p)

I think the keyword to my thought process is anthropomorphizing. The intuitive approach to intelligence is that it is a human characteristic, almost like handsomeness or richness. Hence the pop culture AI is always an anthropomorphic conversation machine, from Space Odyssey to Matrix 3 to Knight Rider. For example, it should probably have a sense of humor.

The approach EY/MIRI seems to take is to de-anthropomorphize even human intelligence as an optimization engine. A pretty machine-like thing. This is what I mentioned that I am not sure I can buy, but am willing to accept as a working hypothesis. So the starting position is that intelligence is anthropomorphic; MIRI has a model that de-anthropomorphizes it, which is strange and weird, but probably useful; yet at the end we probably need something re-anthropomorphized. Because if not, then we don't have AI in the human sense, a conversation machine; we just have a machine that does weird alien stuff pretty efficiently with a rather inscrutable logic.

Looking at humans, besides optimization, the human traits that are considered part of intelligence, such as a sense of humor or easily understanding difficult ideas in a conversation, are parts of it too, and they lie outside the optimization domain. The outer view is that we can observe intelligent humans optimizing things, this being one of their characteristics, although not an exhaustive one. However, it does not lead to a full understanding of intelligence, just one facet of it, the optimization facet. It is merely an output, an outcome of intelligence - not the process but its result.

So when a human with a high IQ tells you to do something in a different way, this is not intelligence; intelligence was the process that resulted in this optimization. To understand the process, you need to look at something other than optimization, the same way you cannot understand software by looking only at its output.

What I was asking is how to look at it from the inner view. What is the software on the inside, not just what its outputs are? What does intelligence FEEL like? That may give a clue about what an intelligent software could actually be like, as opposed to merely what its outputs (optimization) are. To me a sufficiently challenging task on Raven's Progressive Matrices feels like disassembling a drawing, and then reassembling it as a model that predicts what should be on the missing puzzle piece. Is that a good approach?

What is AIXI?

Replies from: ChristianKl, Gram_Stone
comment by ChristianKl · 2015-05-19T19:07:24.247Z · LW(p) · GW(p)

"IQ" is just a terms for something on the map. It's what we measure. It's not a platonic idea. It's a mistake to treat it as such. On the other hand it's useful measurement. It correlates with a lot of quantities that we care about. We know that because people did scientific studies. That allows us to see things that we wouldn't see if we just reason on an armchair with concepts that we developed as we go along in our daily lives.

Scientific thinking needs well defined concepts like IQ, that have a precise meaning and that don't just mean what we feel they mean.

Those concepts have value when you move in areas where the naive map breaks down and doesn't describe the territory well anymore.

comment by Gram_Stone · 2015-05-19T18:24:59.814Z · LW(p) · GW(p)

The approach EY/MIRI seems to take is to de-anthropomorphize even human intelligence as an optimization engine. A pretty machine-like thing. This is what I mentioned that I am not sure I can buy, but am willing to accept as a working hypothesis. So the starting position is that intelligence is anthropomorphic; MIRI has a model that de-anthropomorphizes it, which is strange and weird, but probably useful; yet at the end we probably need something re-anthropomorphized. Because if not, then we don't have AI in the human sense, a conversation machine; we just have a machine that does weird alien stuff pretty efficiently with a rather inscrutable logic.

Why reanthropomorphize? You have support for modeling other humans because that was selected for, but there's no reason to expect that that ability to model humans would be useful for thinking about intelligence abstractly. There's no reason to think about things in human terms; there's only a reason to think about it in terms that allow you to understand it precisely and likewise make it do what you value.

Also, neural nets are inscrutable. Logic just feels inscrutable because you have native support for navigating human social situations and no native support for logic.

What I was asking is how to look at it from the inner view. What is the software on the inside, not just what its outputs are? What does intelligence FEEL like? That may give a clue about what an intelligent software could actually be like, as opposed to merely what its outputs (optimization) are. To me a sufficiently challenging task on Raven's Progressive Matrices feels like disassembling a drawing, and then reassembling it as a model that predicts what should be on the missing puzzle piece. Is that a good approach?

If we knew precisely everything there was to know about intelligence, there would be AGI. As for what is now known, you would need to do some studying. I guess I signal more knowledge than I have.

This is AIXI.

comment by passive_fist · 2015-05-18T22:10:59.238Z · LW(p) · GW(p)

I have an extremely crazy idea - framing political and economic arguments in the form of a 'massively multiplayer' computer-verifiable model.

Human brains are really terrible at keeping track of a lot of information at once and sussing out how subtle interactions between parts of a system lead to the large-scale behavior of the whole system. This is why economists frequently build economic models in the form of computer simulations to try to figure out how various economic policies could affect the real world.

That's all well and good, but economic models built in this way usually have two major downsides:

  • They have very limited scope. They basically consist of all the different factors that the authors know about, which, even for the most dedicated model-builders, is a pretty small fraction of reality.

  • They carry the author's own biases.

Now the 'standard' resolution of this in economics is: "Reproduce the model and modify it if you think it's flawed." But reproducing models is an extremely time-wasting effort. It doesn't make sense to reproduce a huge model if you just want to make a few modifications.

What I'm proposing is to instead have a monolithic large-scale economic model residing on the internet, written and displayed in a graphical format (nodes and interconnections between nodes) that "anyone can edit" - anyone can add interactions between nodes, and others can review these modifications until a 'consensus' emerges over time (and if it doesn't, some level of agreed-upon 'uncertainty' can be introduced into the model as well).

So basically, what I'm proposing is a combination of wiki-style editing freedom and economic models. Imagine being able to insert and play around with factors like the cost of healthcare and the probability that the average person will develop some kind of rare disease, or factors like the influence of tax rate or cost of labor on the decision-making processes of a company. Imagine if instead of endless political debates in various public forums, various sides could just stick in their numbers in the system (numbers that hopefully come from publicly-verifiable research) and let the computer 'battle it out' and give a concrete answer.
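
To make this concrete, here is a toy sketch of the kind of data structure I have in mind (node names, weights, and the propagation rule are all made up for illustration):

```python
class Node:
    """One quantity in the model, e.g. 'cost of healthcare'."""
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value
        self.inputs = []   # (source_node, weight, editor) edges

def add_edge(src, dst, weight, editor):
    """A wiki-style edit: anyone can propose an interaction between two
    nodes; recording `editor` lets others review or revert it."""
    dst.inputs.append((src, weight, editor))

def step(nodes):
    """One naive propagation step: each node with inputs becomes a weighted
    sum of them. A real model would need units, nonlinearities, time lags,
    and the agreed-upon 'uncertainty' mentioned above."""
    new = {n: sum(s.value * w for s, w, _ in n.inputs)
           for n in nodes if n.inputs}
    for n, v in new.items():
        n.value = v

labor_cost = Node("cost of labor", 1.0)
hiring = Node("hiring rate")
add_edge(labor_cost, hiring, weight=-0.5, editor="alice")
step([labor_cost, hiring])
print(hiring.value)  # -0.5
```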

Replies from: skeptical_lurker, raydora, Lumifer
comment by skeptical_lurker · 2015-05-21T09:24:38.581Z · LW(p) · GW(p)

No-one would agree on what models to use.

I think this is an interesting idea in theory. And if you connect it to prediction markets, then this could be some sort of computational collaborative futarchy.

Replies from: passive_fist
comment by passive_fist · 2015-05-22T02:52:25.875Z · LW(p) · GW(p)

The biggest issue (aside from computational cost) is definitely how to reconcile conflicting models, although no one would ever be editing the entire model, only small parts of it. I hope (and I could be wrong) that once the system reaches a certain critical mass, predicting the emergent behaviour from the microscopic details becomes so hard that someone with a political agenda couldn't easily come up with ways to manipulate the system just by making a few local changes (you can think of this as similar to a cryptographic hashing problem). Other large-scale systems (like cryptocurrencies) derive security from similar 'strength in numbers' principles.

One option is to limit input to the system to only peer-reviewed statistical studies. But this isn't a perfect solution, for various reasons.

Using a connection to prediction markets (so that people have some skin in the game) is a nice idea, but I'm not sure how you're thinking of implementing that?

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-05-25T18:47:17.128Z · LW(p) · GW(p)

Well, models generally rely on parameter values, which can be determined empirically, reasoned about more theoretically, or inferred by fitting the model to data with some form of optimisation algorithm such as Markov chain Monte Carlo.

Anyway, suppose two people disagree on the value of a parameter. Running the model with different parameter values would produce different predictions, which they could then bet on.
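
A toy illustration of that protocol (the model, parameter values and observation are all made up):

```python
def model(a, x):
    """A stand-in model with one disputed parameter `a`."""
    return a * x

def settle_bet(pred_alice, pred_bob, observed):
    """Whoever's prediction is closer to the observed outcome wins."""
    if abs(pred_alice - observed) < abs(pred_bob - observed):
        return "Alice"
    return "Bob"

# Alice and Bob disagree about the parameter's value.
pred_alice = model(a=1.2, x=10)  # 12.0
pred_bob = model(a=0.8, x=10)    # 8.0
print(settle_bet(pred_alice, pred_bob, observed=11.0))  # Alice
```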

comment by raydora · 2015-05-22T00:19:47.777Z · LW(p) · GW(p)

This sounds like a larger implementation of the models epidemiologists use to try to predict the infection rate of a disease. Considering the amount of computing power needed for that, such a service might be prohibitively expensive - at least in the near future.

I'm wondering if there would be a way for participants to place some skin in the game, besides a connection to prediction markets.

comment by Lumifer · 2015-05-18T23:46:01.609Z · LW(p) · GW(p)

So what happens when 4chan discovers it?

Replies from: passive_fist
comment by passive_fist · 2015-05-19T00:10:01.326Z · LW(p) · GW(p)

Same as what happened when 4chan discovered Wikipedia. I suspect there will be vandalism but also self-correction. Ideally you'd want to build in mechanisms to make vandalism harder.

comment by Lumifer · 2015-05-18T15:02:00.687Z · LW(p) · GW(p)

Some software already tries to read and affect human emotions: link

Sample:

EmoSPARK, say its creators, is dedicated to your happiness. To fulfil that, it tries to take your emotional pulse, adapting its personality to suit yours, seeking always to understand what makes you happy and unhappy.

comment by Adam Zerner (adamzerner) · 2015-05-24T03:45:38.559Z · LW(p) · GW(p)

I find that I learn better when I am eating. I sense that the pleasure coming from the food helps me pay attention and/or remember things. It seems similar to the phenomenon of people learning better after/during exercise (think: walking meetings).

Does anyone know of any research that supports this? Any anecdotal evidence?

Replies from: None
comment by [deleted] · 2015-05-24T08:14:06.860Z · LW(p) · GW(p)

I think I learn better if I stop to eat whenever I feel like eating, rather than getting distracted by thoughts of food. (I am also 10 kg below normal weight, so I can afford it.)

comment by the-citizen · 2015-05-19T07:42:28.023Z · LW(p) · GW(p)

Suffering and AIs

Disclaimer - For the sake of argument, this post will treat utilitarianism as true, although I do not necessarily think that it is.

One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to feel pain, fear, hope and pleasure of some kind. It might be reasonable to expect that in such cases the unpleasant tasks would result in some form of suffering. Added to this problem is the fact that a finite lifespan and an approaching termination/shutdown might cause fear, another form of suffering. Taking steps to shut down an AI would then become morally unacceptable, even if it performs an activity that is useless or harmful. Because of this, we might face a situation where we cannot shut down AIs even when there is good reason to.

Basically, if suffering AIs were some day extremely common, we would be introducing a massive amount of suffering into the world, which under utilitarianism is unacceptable. Even assuming some pleasure is created, we might search for ways to create that pleasure without creating the pain.

If so, would it make sense to adopt a principle of AI design that says AIs should be designed so that they (1) do not suffer or feel pain and (2) do not fear death/shutdown (e.g. view their own finite lives as acceptable)? This would minimise suffering (potentially you could also attempt to maximise happiness).

Potential issues with this: (1) Suffering might be in some way relative, so that a neutral lack of pleasure/happiness might become "suffering". (2) Pain/suffering might be useful for creating a robot with high utility, and thus some people may reject this principle. (3) I am troubled by the utilitarian approach I have used here, as it seems to justify tiling the universe with machines whose only purpose and activity is to be permanently happy for no reason. (4) Also... killer robots with no pain or fear of death :-P

Replies from: DanielLC
comment by DanielLC · 2015-05-24T09:23:26.185Z · LW(p) · GW(p)

Killer robots with no pain or fear of death would be much easier to fight off than ones who have pain and fear of death. The point isn't that they won't get distracted and lose focus on fighting when they're injured or in danger; the point is that they won't avoid getting injured or killed. It's a lot easier to kill someone if they don't mind it when you succeed.

Replies from: the-citizen
comment by the-citizen · 2015-06-08T05:45:10.569Z · LW(p) · GW(p)

True! I was actually trying to be funny in (4), though apparently it needs more work.

comment by Ilverin the Stupid and Offensive (Ilverin) · 2015-05-18T18:33:40.376Z · LW(p) · GW(p)

Disclaimer: I may not be the first person to come up with this idea

What if, for dangerous medications (such as 2,4-dinitrophenol (DNP), possibly?), the medication were stored in a device that would only dispense a dose when it received a time-dependent cryptographic key generated by a trusted source at a supervised location (the pharmaceutical company / some government agency / an independent security company)?

Could this be useful to prevent overdoses?
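
One existing pattern that fits this is TOTP, the time-based one-time passwords used for two-factor authentication. A minimal sketch, with an arbitrary dose window (the server and the dispenser share a secret; nothing here is a vetted design):

```python
import hashlib, hmac, struct, time

STEP = 8 * 3600  # authorize at most one dose per 8-hour window (arbitrary)

def dose_key(secret, t=None):
    """Time-dependent key, TOTP-style: an HMAC over the current time window.
    The supervised server computes and sends this; the dispenser, sharing
    `secret`, recomputes and compares it before releasing a dose."""
    counter = int((time.time() if t is None else t) // STEP)
    return hmac.new(secret, struct.pack(">Q", counter),
                    hashlib.sha256).hexdigest()

def device_unlock(secret, presented_key):
    return hmac.compare_digest(dose_key(secret), presented_key)
```

The device would also need to remember that it has already dispensed within the current window, since the same key stays valid until the window rolls over.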

Replies from: Lumifer, 9eB1
comment by Lumifer · 2015-05-18T18:42:48.764Z · LW(p) · GW(p)

If the dispensing device is "locked" against the user and you want to enforce dosing you don't need any crypto keys. Just make the device have an internal clock and dispense a dose every X hours.

In the general case, the device is externally controlled and then people who have control can do whatever they want with it. I'm still not seeing a particular need for a crypto key.

Replies from: DanielLC
comment by DanielLC · 2015-05-24T09:25:40.650Z · LW(p) · GW(p)

Just make the device have an internal clock and dispense a dose every X hours.

Forever? What if you want to change the dosage?

I'm still not seeing a particular need for a crypto key.

So that only the person who's supposed to control it can control it. You don't want someone altering it with their laptop just because they have Bluetooth.

Edit:

Somehow I was thinking of implanting something that dispensed drugs. Just dispensing pills would make most of that pointless. Why worry about someone breaking it with a laptop if they can break it with a hammer? I suppose it might work if you somehow build the thing like a safe.

comment by 9eB1 · 2015-05-18T21:39:47.049Z · LW(p) · GW(p)

There are already dispensing machines that dispense doses on a timer. They are mostly targeted at people who need reminding (e.g. Alzheimer's patients), though, rather than people who may want to take too much. I don't think the cryptographic security would be the problem in that scenario, but rather the physical security of the device. You would need some trusted way to reload it, and it would have to be very difficult to open even though it would presumably just be sitting on your table at home, which is a very high bar. It could possibly be combined with always-on tamper reporting and legal threats to make the idea of tampering with it less appealing, though.