Open thread, Nov. 16 - Nov. 22, 2015

post by MrMind · 2015-11-16T08:03:11.515Z · LW · GW · Legacy · 189 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Comments sorted by top scores.

comment by NancyLebovitz · 2015-11-16T19:38:53.224Z · LW(p) · GW(p)

Use "The Boy Who Cried Wolf" as a mnemonic. His first error is type 1 (claiming a wolf is present when there wasn't one). His second error is type 2 (people fail to notice an actual wolf).

Replies from: gjm, D_Malik, IlyaShpitser, MrMind
comment by gjm · 2015-11-17T16:34:02.961Z · LW(p) · GW(p)


To fight back against terrible terminology from the other side (i.e., producing rather than consuming) I suggest a commitment to refuse to say "Type I error" or "Type II error" and always say "false positive" or "false negative" instead.

Replies from: twanvl, philh
comment by twanvl · 2015-11-18T10:33:27.995Z · LW(p) · GW(p)

I find "false positive" and "false negative" also a bit confusing, albeit less so than "type I" and "type II" errors. Perhaps because of a programming background, I usually interpret 'false' and 'negative' (and '0') as the same thing. So is a 'false positive' something that is false but is mistaken as positive, or something that is positive (true), but that is mistaken as false (negative)? In other words, does 'false' apply to the positiveness (it is actually negative, but classified as positive), or to being classified as positive (it is actually positive, but classified as negative)?

Perhaps we should call false positives "spurious" and false negatives "missed".

Replies from: gjm
comment by gjm · 2015-11-18T11:24:29.040Z · LW(p) · GW(p)

Huh. That never occurred to me (even though I spend a lot of my days writing code too).

In case you're expressing actual uncertainty rather than merely what your brain gets confused about, the answer is that a false positive is something that falsely looks positive. Perhaps the best way to put it is different, though: a false positive is a positive result of your test (so it actually is a positive) that doesn't match the underlying reality. Like a "false alarm".
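For concreteness, the convention can be sketched in a few lines of Python (the helper name `tally` is made up for illustration):

```python
def tally(truth, prediction):
    """Name one test outcome by comparing the prediction to reality."""
    if prediction and not truth:
        return "false positive"   # Type I: the boy cries wolf, no wolf
    if not prediction and truth:
        return "false negative"   # Type II: a real wolf goes unnoticed
    return "true positive" if truth else "true negative"

# The test said "positive", reality disagreed: a false alarm.
print(tally(truth=False, prediction=True))   # false positive
```

So "false" always modifies the test's verdict, never the underlying reality.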

comment by philh · 2015-11-18T10:35:29.607Z · LW(p) · GW(p)

Now that I know which is which, this will be very slightly harder for me than it used to be.

comment by D_Malik · 2015-11-18T20:53:44.376Z · LW(p) · GW(p)

Introspecting, the way I remember this is that 1 is a simple number, and type 1 errors are errors that you make by being stupid in a simple way, namely by being gullible. 2 is a more sophisticated number, and type 2 errors are ones you make by being too skeptical, which is a more sophisticated type of stupidity. I do most simple memorization (e.g. memorizing differentiation rules) with this strategy of "rationalizing why the answer makes sense". I think your method is probably better for most people, though.

comment by IlyaShpitser · 2015-11-16T22:44:27.815Z · LW(p) · GW(p)


comment by MrMind · 2015-11-17T07:55:04.792Z · LW(p) · GW(p)

Ha, I wasn't even aware of this. Really nice, thanks.

comment by Viliam · 2015-11-19T08:20:09.793Z · LW(p) · GW(p)

LW displays notifications about replies and private messages in the same place, mixed together, looking the same. Note that top-level comments on your articles are also considered replies to you (this is a default behavior, you could turn it off, but it makes sense so you will probably leave it turned on).

This has the disadvantage that when you post an article which receives about 20 comments and someone sends you a private message, it is very easy to miss the message. Because in your inbox you just see 21 entries that look almost the same.

Suggestion: The easiest fix would probably be to change the appearance of private messages in your inbox. Make the difference obvious, so you can't miss it. For example, add a big icon above each private message.

Replies from: Elo, Tem42
comment by Elo · 2015-11-19T10:22:18.605Z · LW(p) · GW(p)

support this change; have no idea how easy it is to do.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2015-11-28T02:33:52.743Z · LW(p) · GW(p)

yes please

comment by Tem42 · 2015-11-19T22:17:03.724Z · LW(p) · GW(p)

We do have an inbox that sorts messages onto a separate page from replies to posts. I think the easiest change would be to simply make two separate icons for these two separate pages.

Replies from: Vaniver, Viliam
comment by Vaniver · 2015-11-20T19:39:18.693Z · LW(p) · GW(p)

I don't think this is correct. I'm only familiar with , which mixes the two together. You may be thinking of , which shows all the private messages you've sent.

comment by Viliam · 2015-11-20T08:50:37.669Z · LW(p) · GW(p)

Thanks, I didn't know that. What's the URL? (Or how else can I get there?)

Replies from: Tem42
comment by Tem42 · 2015-11-20T21:52:06.210Z · LW(p) · GW(p)

No, I'm sorry, I was thinking of the Outbox. You can see what you've sent, but not what you've received. Currently not useful, but at least it suggests that the coding might not be so hard to do.

comment by g_pepper · 2015-11-18T02:32:49.666Z · LW(p) · GW(p)

The latest New Yorker has a lengthy article about Nick Bostrom and Superintelligence. It contains a good profile of Bostrom going back to his graduate school days, his interest in existential threats in general, and how that interest became more focused on the risk of AGI specifically. Many concepts frequently discussed at LW are mentioned, e.g. the Fermi paradox and the Great Filter, the concept of an intelligence explosion, uploading, cryonics, etc. Also discussed is the progress that Bostrom and others have made in getting the word out regarding the threat posed by AGI, as well as some opposing viewpoints. Various other AI researchers, entrepreneurs and pundits are mentioned as well (although neither EY nor LW is mentioned, unfortunately).

The article is aimed at a general audience and so it doesn't contain much that will be new to the typical LWer, but it is an interesting and well-done overview, IMO.

Replies from: gwern, hg00, Soothsilver
comment by gwern · 2015-11-18T03:23:39.722Z · LW(p) · GW(p)

I was amused to see both modafinil and nicotine pop up. I guess I should feel proud?

Replies from: signal
comment by signal · 2015-11-18T16:08:41.005Z · LW(p) · GW(p)

You should. Just started playing with those gums.

comment by hg00 · 2015-11-19T04:35:09.939Z · LW(p) · GW(p)

although neither EY nor LW is mentioned

"There's no limit to the amount of good you can do if you don't care who gets the credit."

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-20T19:33:11.790Z · LW(p) · GW(p)

I don't think that's the right explanation in this case.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-11-21T12:50:05.193Z · LW(p) · GW(p)

I understood the comment differently: the OP wrote the post because the New Yorker article itself was spot on - not because the 'right' people got the credit.

Replies from: g_pepper
comment by g_pepper · 2015-11-22T02:02:39.176Z · LW(p) · GW(p)

I did not mean to suggest that anyone had been slighted or denied any due credit when I stated that neither EY nor LW was mentioned. As I read the article, I had just been looking for mentions of EY or LW, and I figured that others might as well, so that is why I mentioned it.

No article can cover everything. As Gunnar stated, I thought it was a great article!

comment by Soothsilver · 2015-11-20T15:18:04.347Z · LW(p) · GW(p)

I was surprised to see how health-conscious Bostrom is: making his own food in order to maximize health, and not shaking hands. I thought that was limited to Kurzweil.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2015-11-22T08:48:57.794Z · LW(p) · GW(p)

"Bostrom had little interest in the cocktail party. He shook a few hands, then headed for St. James’s Park, a public garden that extends from the gates of Buckingham Palace through central London. " - Article

Replies from: Soothsilver
comment by Soothsilver · 2015-11-22T14:38:12.621Z · LW(p) · GW(p)

And yet "His intensity is too untidily contained, evident in his harried gait on the streets outside his office (he does not drive), in his voracious consumption of audiobooks (played at two or three times the normal speed, to maximize efficiency), and his fastidious guarding against illnesses (he avoids handshakes and wipes down silverware beneath a tablecloth)."

Replies from: Tem42
comment by Tem42 · 2015-11-22T16:58:07.462Z · LW(p) · GW(p)

If he is a rationalist, I would expect that he has a good grasp of when it is socially pragmatic to shake hands, and when he can operate under Crocker's rules and request not to shake hands. I also expect that he is smart enough to have an antibacterial wipe in his pocket to use after shaking hands (but not use it until he is out of sight in the gardens).

Replies from: Soothsilver
comment by Soothsilver · 2015-11-22T19:25:05.212Z · LW(p) · GW(p)

What do Crocker's rules have to do with this? Also, carrying an antibacterial wipe to use after shaking hands seems excessive. The chance that he'll suffer serious health problems from infection by handshake is so small that I doubt even the time taken for all these efforts is worth it.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2015-11-28T02:25:25.140Z · LW(p) · GW(p)

You quoted him saying he did not shake hands; to a lot of us that seems a bit excessive. Tem42 tells us that it is more plausible to carry an antibacterial wipe for hygiene concerns as opposed to a blanket ban on shaking hands, which to us is rather strange.

If the cost/benefit is vs . It seems like the latter is more plausible, especially because the article also said he shook hands and left.

Replies from: Soothsilver
comment by Soothsilver · 2015-11-28T13:50:27.755Z · LW(p) · GW(p)

I think the most plausible is that he does shake hands and he does not use anti-bacterial wipe, merely that he mentioned to the reporter "I prefer not to shake hands to keep myself safe" and that the reporter exaggerated.

comment by signal · 2015-11-18T20:29:22.976Z · LW(p) · GW(p)

As soon as I have two Karma points, I will post a 2000 word article on bias in most LW posts (which I would love to have your feedback on) with probably more to follow. However, I don't want to search for some more random rationality quotes to meet that requirement. Note to the administrators: Either you are doing a fabulous job at preventing multiple accounts or registration is currently not working (tried multiple devices, email addresses, and other measures).

Replies from: signal, wizard
comment by signal · 2015-11-18T22:19:59.198Z · LW(p) · GW(p)

Thanks. It is now online in the discussion section: "The Market for Lemons."

comment by wizard · 2015-11-21T22:40:19.948Z · LW(p) · GW(p)

Same here.

I was going to make an incredible article about wizards and the same thing happened plus all the negative karma I got from trolling. :(

comment by cousin_it · 2015-11-16T14:23:04.486Z · LW(p) · GW(p)

I've been hearing about all this amazing stuff done with recurrent neural networks, convolutional neural networks, random forests, etc. The problem is that it feels like voodoo to me. "I've trained my program to generate convincing looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?" I'm not sure, it sounds like you don't understand what your program is doing. That's pretty much why I'm not studying machine learning right now. What do you think?

Replies from: IlyaShpitser, passive_fist, Vaniver, Lumifer, Douglas_Knight, bogus, solipsist, solipsist, Daniel_Burfoot, Dagon, V_V, SanguineEmpiricist
comment by IlyaShpitser · 2015-11-16T18:18:26.359Z · LW(p) · GW(p)

ML is search. If you have more parameters, you can do more, but the search problem is harder. Deep NN is a way to parallelize the search problem with # of grad students (by tweaks, etc.), also a general template to guide local-search-via-gradient (e.g. make it look for "interesting" features in the data).

I don't mean to be disparaging, btw. I think it is an important innovation to use human AND computer time intelligently to solve bigger problems.

In some sense it is voodoo (not very interpretable) but so what? Lots of other solutions to problems are, too. Do you really understand how your computer hardware or your OS work? So what if you don't?
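(For concreteness, the "search" framing in its simplest form is local search via the gradient; a toy one-parameter sketch with made-up names, nothing like a real deep net:)

```python
def grad_descent(loss_grad, theta, lr=0.1, steps=100):
    """Local search: repeatedly step against the gradient of the loss."""
    for _ in range(steps):
        theta -= lr * loss_grad(theta)
    return theta

# Loss (theta - 3)^2 has gradient 2*(theta - 3); the search should find theta = 3.
best = grad_descent(lambda t: 2 * (t - 3), theta=0.0)
print(round(best, 3))   # 3.0
```

A deep net runs the same loop over millions of parameters, which is exactly where the per-parameter interpretability disappears.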

Replies from: ZankerH, cousin_it
comment by ZankerH · 2015-11-17T08:57:55.664Z · LW(p) · GW(p)

In some sense it is voodoo (not very interpretable)

There is research in that direction, particularly in the field of visual object recognising convolutional networks. It is possible to interpret what a neural net is looking for.

comment by cousin_it · 2015-11-16T18:24:52.928Z · LW(p) · GW(p)

I guess the difference is that an RNN might not be understandable even by the person who created and trained it.

Replies from: Lumifer, IlyaShpitser
comment by Lumifer · 2015-11-17T18:10:32.827Z · LW(p) · GW(p)

There is an interesting angle to this -- I think it maps to the difference between (traditional) statistics and data science.

In traditional stats you are used to small, parsimonious models. In these small models each coefficient, each part of the model is separable in a way, it is meaningful and interpretable by itself. The big thing to avoid is overfitting.

In data science (and/or ML) a lot of models are of the sprawling black-box kind where coefficients are not separable and make no sense outside of the context of the whole model. These models aren't traditionally parsimonious either. Also, because many usual metrics scale badly to large datasets, overfitting has to be managed differently.

Replies from: bogus
comment by bogus · 2015-11-17T18:46:32.551Z · LW(p) · GW(p)

In traditional stats you are used to small, parsimonious models. In these small models each coefficient, each part of the model is separable in a way, it is meaningful and interpretable by itself. The big thing to avoid is overfitting.

Keep in mind that traditional stats also includes semi-parametric and non-parametric methods. These give you models which basically manage overfitting by making complexity scale with the amount of data, i.e. they're by no means "small" or "parsimonious" in the general case. And yes, they're more similar to the ML stuff but you still get a lot more guarantees.

Also, because many usual metrics scale badly to large datasets, overfitting has to be managed differently.

I get the impression that ML folks have to be way more careful about overfitting because their methods are not going to find the 'best' fit - they're heavily non-deterministic. This means that an overfitted model has basically no real chance of successfully extrapolating from the training set. This is a problem that traditional stats doesn't have - in that case, your model will still be optimal in some appropriate sense, no matter how low your measures of fit are.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-17T21:53:54.166Z · LW(p) · GW(p)

I think I am giving up on correcting "google/wikipedia experts," it's just a waste of time, and a losing battle anyways. (I mean the GP here).

I get the impression that ML folks have to be way more careful about overfitting because their methods are not going to find the 'best' fit - they're heavily non-deterministic. This means that an overfitted model has basically no real chance of successfully extrapolating from the training set. This is a problem that traditional stats doesn't have - in that case, your model will still be optimal in some appropriate sense, no matter how low your measures of fit are.

That said, this does not make sense to me. Bias variance tradeoffs are fundamental everywhere.

comment by IlyaShpitser · 2015-11-16T18:33:57.408Z · LW(p) · GW(p)

I don't think any one person understands the Linux kernel anymore. It's just too big. Same with modern CPUs.

Replies from: cousin_it, solipsist
comment by cousin_it · 2015-11-17T15:17:31.265Z · LW(p) · GW(p)

An RNN is something that one person can create and then fail to understand. That's not like the Linux kernel at all.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-17T17:32:19.032Z · LW(p) · GW(p)

Correction: An RNN is something that a person working with a powerful general optimizer can create and then fail to understand.

A human without the optimizer can create RNNs by hand - but only of the small and simple variety.

comment by solipsist · 2015-11-17T13:13:31.170Z · LW(p) · GW(p)

Although the Linux kernel and modern CPUs are piecewise-understandable, whereas neural networks are not.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-17T14:56:41.385Z · LW(p) · GW(p)

Lots of neural networks are, at the individual vertex level, a logistic regression model or something similar -- I think I understand those pretty well. Similarly: "I think I understand 16-bit adders pretty well."

comment by passive_fist · 2015-11-16T20:55:46.316Z · LW(p) · GW(p)

I did my PhD thesis on a machine learning problem. I initially used deep learning but after a while I became frustrated with how opaque it was so I switched to using a graphical model where I had explicitly defined the variables and their statistical relationships. My new model worked but it required several months of trying out different models and tweaking parameters, not to mention a whole lot of programming things from scratch. Deep learning is opaque but it has the advantage that you can get good results rapidly without thinking a lot about the problem. That's probably the main reason that it's used.

comment by Vaniver · 2015-11-16T15:02:18.897Z · LW(p) · GW(p)

RNNs and CNNs are both pretty simple conceptually, and to me they fall into the class of "things I would have invented if I had been working on that problem," so I suspect that the original inventors knew what they were doing. (Random forests were not as intuitive to me, but then I saw a good explanation and realized what was going on, and again suspect that the inventor knew what they were doing.)

There is a lot of "we threw X at the problem, and maybe it worked?" throughout all of science, especially when it comes to ML (and statistics more broadly), because people don't really see why the algorithms work.

I remember once learning that someone had discretized a continuous variable so that they could fit a Hidden Markov Model to it. "Why not use a Kalman filter?" I asked, and got back "well, why not use A, B, or C?". At that point I realized that they didn't know that a Kalman filter is basically the continuous equivalent of a HMM (and thus obviously more appropriate, especially since they didn't have any strong reason to suspect non-Gaussianity), and so ended the conversation.
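(For readers unfamiliar with the analogy: both models alternate a predict step and an update step over a hidden state; the HMM does it with discrete distributions, the Kalman filter with Gaussians. A minimal scalar sketch, with illustrative noise constants:)

```python
def kalman_1d(observations, q=1.0, r=1.0):
    """Scalar Kalman filter: latent state x, process noise q, observation noise r."""
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in observations:
        p = p + q            # predict: variance grows by the process noise
        k = p / (p + r)      # Kalman gain: how much to trust this observation
        x = x + k * (z - x)  # update: move the estimate toward the observation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy readings of a roughly constant signal near 5 get smoothed toward it.
print(kalman_1d([4.9, 5.2, 5.0, 4.8, 5.1])[-1])
```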

Replies from: cousin_it, RaelwayScot
comment by cousin_it · 2015-11-16T16:41:31.345Z · LW(p) · GW(p)

Can you give a link to that explanation of random forests?

Replies from: Vaniver, IlyaShpitser, V_V
comment by Vaniver · 2015-11-16T22:45:02.276Z · LW(p) · GW(p)

Unfortunately I can't easily find a link to the presentation: it was a talk on Mondrian random forests by Yee Whye Teh back in 2014. I don't think it was necessarily anything special about the presentation, since I hadn't put much thought into them before then.

The very short version is it would be nice if classifiers had fuzzy boundaries--if you look at the optimization underlying things like logistic regression, it turns out that if the underlying data is linearly separable it'll make the boundary as sharp as possible, and put it in a basically arbitrary spot. Random forests will, by averaging many weak classifiers, create one 'fuzzy' classifier that gets the probabilities mostly right in a computationally cheap fashion.

(This comment is way more opaque than I'd like, but most of the ways I'd want to elaborate on it require a chalkboard.)
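(A toy numerical illustration of the averaging point, not from the talk: jitter the threshold of a hard 0/1 classifier and average the votes, and the sharp boundary becomes a graded probability. The jitter scheme here is made up.)

```python
import random

random.seed(0)

def averaged_stumps(x, n=1000, noise=0.5):
    """Average n hard classifiers whose decision threshold is jittered around 0."""
    votes = sum(1 for _ in range(n) if x > random.gauss(0.0, noise))
    return votes / n  # fraction voting "positive" behaves like a probability

# Far from the boundary the vote is near-unanimous; near it, it's graded.
print(averaged_stumps(2.0), averaged_stumps(0.0))
```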

comment by IlyaShpitser · 2015-11-18T08:00:49.065Z · LW(p) · GW(p)

This is related to making a strong learner (really accurate) out of weak learners (barely better than majority). It is actually somewhat non-obvious this should even be possible.

The famous example here is boosting, and in particular "AdaBoost." The reason boosting et al. work well is actually kind of interesting and I think still not entirely understood.

I didn't really get Vaniver's explanation below, there are margin methods that draw the line in a sensible way that have nothing to do with weak learners at all.
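(For readers who haven't seen it, the AdaBoost loop is short enough to sketch in full: each round, fit the best weak learner under the current example weights, then upweight the examples it got wrong. A toy version over 1-D decision stumps; all names and data are illustrative.)

```python
import math

def adaboost(xs, ys, rounds=3):
    """Toy AdaBoost on 1-D data with decision stumps, labels in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n                            # example weights, start uniform
    cuts = [min(xs) - 0.5] + [t + 0.5 for t in sorted(set(xs))]
    ensemble = []                                # list of (alpha, threshold, sign)
    for _ in range(rounds):
        best = None
        for t in cuts:
            for s in (1, -1):
                # stump predicts s above the cut, -s below it
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (s if x > t else -s) != y)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = min(max(err, 1e-9), 1 - 1e-9)      # guard the log below
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this stump
        ensemble.append((alpha, t, s))
        # upweight mistakes, downweight correct answers, then renormalize
        w = [wi * math.exp(-alpha * y * (s if x > t else -s))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    def predict(x):
        score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
        return 1 if score > 0 else -1
    return predict

# A label pattern no single stump can fit; three boosted stumps get it exactly.
clf = adaboost([0, 1, 2, 3, 4, 5], [1, 1, -1, -1, 1, 1])
print([clf(x) for x in range(6)])   # recovers [1, 1, -1, -1, 1, 1]
```

The surprising part is exactly what Ilya notes: the ensemble keeps improving on test data in ways the weak-learner story alone doesn't fully explain.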

comment by V_V · 2015-12-05T00:55:44.236Z · LW(p) · GW(p)

Start with the base model, the decision tree. It's simple and provides representations that may be actually understandable, which is rare in ML, but it has a problem: it sucks. Well, not always, but for many tasks it sucks. Its main limitation is that it can't efficiently represent linear relations unless the underlying hyperplane is parallel to one of the input feature axes. And most practical tasks involve linear relations + a bit of non-linearity. Training a decision tree on these tasks tends to yield very large trees that overfit (essentially, you end up storing the training set in the tree which then acts like a lookup table).

Fortunately, it was discovered that if you take a linear combination of the outputs of a sizeable-but-not-exceptionally-large number of appropriately trained decision trees, then you can get good performances on real-world tasks. In fact it turns out that the coefficients of the linear combination aren't terribly important, a simple averaging will do.

So the issue is how to appropriately train these decision trees.
You want these trees to be as independent from each other as possible, conditioned on the true relation. This means that ideally you would train each of them on a different training set sampled from the underlying true distribution, that is, you would have to have enough training data for each tree. But training data is expensive (ok, used to be expensive in the pre-big-data era) and we want to learn an effective model from as little data as possible.
The second requirement is that each decision tree must not overfit. In the tradeoff between overfitting and underfitting, you prefer underfitting the individual models, since model averaging at the end can take care of it.

Random forests use two tricks to fulfill these requirements:

The first one is Bootstrap aggregating, aka "bagging": instead of gathering from the true distribution m training sets of n examples each for each of your m decision trees, you generate m-1 alternate sets by resampling with replacement your original training set of n examples. It turns out that, for reasons not entirely well understood, these m datasets behave in many ways as if they were independently sampled from the true distribution.
This is an application of a technique known in statistics as bootstrapping, which has some asymptotic theoretical guarantees under certain conditions that probably don't apply here, but nevertheless empirically it often works well.
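(The "behave as if independently sampled" point can be checked numerically: each bootstrap replicate contains on average 1 - 1/e, about 63%, of the distinct original examples, so the replicates genuinely differ from one another. A quick sketch:)

```python
import random

random.seed(42)

def bootstrap_sample(data):
    """Resample the training set with replacement (the 'bagging' trick)."""
    return [random.choice(data) for _ in data]

data = list(range(1000))
fractions = [len(set(bootstrap_sample(data))) / len(data) for _ in range(100)]
avg_unique = sum(fractions) / len(fractions)

# Each replicate holds about 1 - 1/e ~ 63% distinct examples, leaving
# enough variation between the m training sets to decorrelate the trees.
print(round(avg_unique, 2))
```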

The second one is the Random subspace method, which is just a fancy term for throwing features away at random, in a different way for each decision tree.
This makes it more difficult for each decision tree to overfit, since it has a reduced number of degrees of freedom, and specifically it makes it more difficult to get high training accuracy by relying on recognizing some spurious pattern that appears in the training set but is disrupted by throwing some features away. Note that you are not throwing away some features from the whole model. It's only the internal decision trees that each train on limited information, but the model overall still trains, with high probability, on all the information contained in all features. The individual trees underfit compared to trees trained on all features, but the averaging at the end compensates for this.
Yes, there are some tasks where throwing features away is guaranteed to hurt the accuracy of the individual decision trees to the point of making the task impossible, e.g. the parity problem, but for practical tasks, again for reasons not entirely well understood, it works reasonably well.

With these tricks, random forests manage to be the state-of-the-art technique for a large class of ML tasks: any supervised task (in particular classification) that is difficult enough that simple linear methods won't suffice, but not so difficult that you need a very big dataset, where neural networks would dominate (or that difficult, but with no big dataset available), and that doesn't have logical depth greater than three, like the aforementioned parity problem, although it's not clear how common such tasks are in practice.

A full understanding of why random forests work would require a Bayesian argument with an assumption about the prior on the data distribution (Solomonoff? Levin? something else?). This is not currently known for random forests and in fact AFAIK has been done only for very simple ML algorithms using simplifying assumptions on the data distribution such as Gaussianity. If that's the level of rigor you are looking for then I'm afraid that you are not going to find it in the discussion of any practical ML algorithm, at least so far. If you enjoy the discussion of math/statistics based methods even if there are some points that are only justified by empirical evidence rather than proof, then you may find the field interesting.

comment by RaelwayScot · 2015-11-16T18:00:04.711Z · LW(p) · GW(p)

I find CNNs a lot less intuitive than RNNs. In which context was training many filters and successively applying pooling, and then filters again to smaller versions of the output, an intuitive idea?

Replies from: Manfred
comment by Manfred · 2015-11-16T19:02:10.461Z · LW(p) · GW(p)

In the context of vision. Pooling is not strictly necessary but makes things go a bit faster - the real trick of CNNs is to lock the weights of different parts of the network together so that you go through the exact same process to recognize objects if they're moved around (rather than having different processes for recognition for different parts of the image).

Replies from: RaelwayScot
comment by RaelwayScot · 2015-11-16T22:22:17.839Z · LW(p) · GW(p)

Ok, so the motivation is to learn templates to correlate against at each image location. But where would you get the idea from to do the same with the correlation map again? That seems non-obvious to me. Or do you mean biological vision?

Replies from: Manfred
comment by Manfred · 2015-11-17T09:32:24.292Z · LW(p) · GW(p)

Nope, didn't mean biological vision. Not totally sure I understand your comment, so let me know if I'm rambling.

You can think of lower layers (the ones closer to the input pixels) as "smaller" or "more local," and higher layers as "bigger," or "more global," or "composed of nonlinear combinations of lower-level features." (EDIT: In fact, this restricted connectivity of neurons is an important insight of CNNs, compared to full NNs.)

So if you want to recognize horizontal lines, the lowest layer of a CNN might have a "short horizontal line" feature that is big when it sees a small, local horizontal line. And of course there is a copy of this feature for every place you could put it in the image, so you can think of its activation as a map of where there are short horizontal lines in your image.

But if you wanted to recognize longer horizontal lines, you'd need to combine several short-horizontal-line detectors together, with a specific spatial orientation (horizontal!). To do this you'd use a feature detector that looked at the map of where there were short horizontal lines, and found short horizontal lines of short horizontal lines, i.e. longer horizontal lines. And of course you'd need to have a copy of this higher-level feature detector for every place you could put it in the map of where there are short lines, so that if you moved the longer horizontal line around, a different copy of this feature detector would light up - the activation of these copies would form a map of where there were longer horizontal lines in your image.

If you think about the logistics of this, you'll find that I've been lying to you a little bit, and you might also see where pooling comes from. In order for "short horizontal lines of short horizontal lines" to actually correspond to longer horizontal lines, you need to zoom out in spatial dimensions as you go up layers, i.e. pooling or something similar. You can zoom out without pooling by connecting higher-level feature detectors to complete (in terms of the patch of pixels) sets of separated lower-level feature detectors, but this is both conceptually and computationally more complicated.
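(A bare-bones sketch of the weight-sharing idea in plain Python: one 1x3 "short horizontal line" kernel is slid over every position, so the same detector produces a whole map of activations. No pooling, no learning; all values are illustrative.)

```python
def detect(image, kernel):
    """Slide one shared kernel over every position (weight sharing):
    the same feature detector is applied at each location."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(w - kw + 1)]
            for r in range(h - kh + 1)]

kernel = [[1, 1, 1]]        # a 1x3 "short horizontal line" detector
image = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],   # a short horizontal line
         [0, 0, 0, 0, 0]]
fmap = detect(image, kernel)
# The activation map peaks where the line sits; move the line and the
# peak moves with it, while the detector's weights stay the same.
print(fmap[1])   # [2, 3, 2]
```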

comment by Lumifer · 2015-11-16T16:08:36.489Z · LW(p) · GW(p)

The problem is that it feels like voodoo to me.

Clarke's Third Law :-)

Anyway, you gain understanding of complicated techniques by studying them and practicing them. You won't understand them unless you study them -- so I'm not sure why you are complaining about lack of understanding before even trying.

comment by Douglas_Knight · 2015-11-16T19:46:56.841Z · LW(p) · GW(p)

it sounds like you don't understand what your program is doing

That is ambiguous. Do you mean the final output program or the ML program?

Most ML programs seem pretty straight-forward to me (search, as Ilya said); the black magic is the choice of hyperparameters. How do people know how many layers they need? Also, I think time to learn is a bit opaque, but probably easy to measure. In particular, by mentioning both CNN and RNN, you imply that the C and R are mysterious, while they seem to me the most comprehensible part of the choices.

But your further comments suggest that you mean the program generated by the ML algorithms. This isn't new. Genetic algorithms and neural nets have been producing incomprehensible results for decades. What has changed is that new learning algorithms have pushed neural nets further and judicious choice of hyperparameters have allowed them to exploit more data and more computer power, while genetic algorithms seem to have run out of steam. The bigger the network or algorithm that is the output, the more room for it to be incomprehensible.

comment by bogus · 2015-11-17T13:28:10.324Z · LW(p) · GW(p)

"I've trained my program to generate convincing looking C code! It gets the indentation right, but the variable use is a bit off. Isn't that cool?"

What this is really saying is: "Hey, convincing-looking C code can be modeled by a RNN, i.e. a state-transition version ("recurrent") of a complex non-linear model which is ultimately a generalization of logistic regression ("neural network")! And the model can be practically 'learned', i.e. fitted empirically, albeit with no optimality or accuracy guarantees of any kind. The variable use is a bit off, though. Isn't this cool/Does this tell us anything important?"
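(The "state-transition version of logistic regression" reading can be made literal with a scalar sketch; the weights here are illustrative, not learned:)

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rnn_states(inputs, w_x=1.0, w_h=0.5, b=0.0):
    """Scalar RNN cell: a logistic unit whose input includes its own
    previous output - the 'recurrent' state transition."""
    h = 0.0
    states = []
    for x in inputs:
        h = sigmoid(w_x * x + w_h * h + b)  # same weights at every step
        states.append(h)
    return states

# The state carries history: identical inputs produce different outputs
# depending on what came before.
print(rnn_states([1.0, 1.0, 1.0]))
```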

comment by solipsist · 2015-11-16T15:58:59.349Z · LW(p) · GW(p)

Is it for reasons similar to the Strawman Chomsky view in this essay by Peter Norvig?

Replies from: cousin_it
comment by cousin_it · 2015-11-16T16:35:21.524Z · LW(p) · GW(p)

Yeah. Maybe Norvig is right and it's much easier to implement Google Translate with what I call "voodoo" than without it. That's a good point, I need to think some more.

comment by solipsist · 2015-11-16T15:27:13.534Z · LW(p) · GW(p)

It sounds like you do not understand what your experiments are doing. That's pretty much why I'm not studying electromagnetism.

-- Letter from James Clerk Maxwell to Michael Faraday, in the setup of a Steam Punk universe I just now invented

Replies from: solipsist
comment by solipsist · 2015-11-16T15:45:11.855Z · LW(p) · GW(p)

Here's how I read your question.

  1. Many machine learning techniques work, but in ways we don't really understand.
  2. If (1), I shouldn't study machine learning

I agree with (1). Could you explain (2)? Is it that you would want to use neural networks etc. to gain insight about other concrete problems, and question their usefulness as a tool in that regard? Is it that you would not like to use a magical black box as part of a production system?

EDIT: I'm using "machine learning" here to mean the sort of fuzzy black-box techniques that don't have easy interpretations, not techniques like logistic regression where it is clearer what they do.

comment by Daniel_Burfoot · 2015-11-16T14:38:13.791Z · LW(p) · GW(p)

I agree that this is a huge problem, but RNNs and CNNs aren't the whole of ML (random forests are a different category of algorithm). You should study the ML that has the prettiest math. Try VC theory, Pearl's work on graphical models, AIT, and MaxEnt as developed by Jaynes and applied by della Pietra to statistical machine translation. Hinton's early work on topics like Boltzmann machines and the wake-sleep algorithm is also quite "deep".

Replies from: cousin_it
comment by cousin_it · 2015-11-16T16:25:47.530Z · LW(p) · GW(p)

Yeah, I suppose our instincts agree, because I've already studied all these things except the last two :-)

Replies from: V_V
comment by V_V · 2015-12-04T23:37:54.555Z · LW(p) · GW(p)

Have fun with generative models such as variational Bayesian neural networks, generative adversarial networks, applications of Fokker–Planck/Langevin/Hamiltonian dynamics to ML and NNs in particular, and so on. There are certainly lots of open problems for the mathematically inclined which are much more interesting than "Look ma, my neural networks made psychedelic artwork and C-looking code with more or less matched parentheses".

For instance, this paper provides pointers to some of these methods and describes a class of failure modes that are still difficult to address.

comment by Dagon · 2015-11-16T18:51:57.551Z · LW(p) · GW(p)

There are some pretty amazing actually useful applications for larger and larger feasible ML spaces. Everyone studying CS or seriously undertaking any computer engineering should at least learn the fundamentals (I'd recommend the Coursera ML class).

And most should not spend a huge fraction of their study time on it unless it really catches their fancy. But rather than saying "that's why I'm not studying ML right now", I'd like to hear the X in "that's why I'm focusing on X over ML right now".

comment by V_V · 2015-12-05T01:21:27.277Z · LW(p) · GW(p)

The trippy pictures and the vaguely C-looking code are just cool stunts, not serious experiments. People may be tempted to fall for the hype; sometimes a reality check is helpful.

This said, neural networks really do well in difficult tasks such as visual object recognition and machine translation, indeed for reasons that are not fully understood.

Sounds like a good reason to study the field in order to understand why they can do what they do, and why they can't do what they can't do, doesn't it?

comment by SanguineEmpiricist · 2015-11-16T20:24:26.363Z · LW(p) · GW(p)

Might want to take a look at the library Google just open-sourced.

comment by CellBioGuy · 2015-11-17T16:17:17.502Z · LW(p) · GW(p)

Kim Stanley Robinson, author of the new sci-fi novel Aurora and, back in the day, the Mars trilogy, on how the notion of interstellar colonization and terraforming is really fantasy, how we shouldn't let it color our perceptions of the actual reality we have, and on the notion of diminishing returns on technology.

He doesn't condemn the genre but tries to provide a reality check for those who take their science fiction literally.

Replies from: EGI, passive_fist, Daniel_Burfoot
comment by EGI · 2015-11-17T17:20:21.391Z · LW(p) · GW(p)

Um, no, we cannot colonise the stars with current tech. What a surprise! We cannot even colonise Mars, Antarctica, or the ocean floor.

Of course you need to solve bottom-up manufacturing (nanotech or some functional equivalent) first, making you independent of ecosystem services, agricultural food production, long supply chains and the like. This also vastly reduces radiation problems and probably solves ageing. Then you have a fair chance.

So yes, if we wreck Earth, the stars are not plan B; we need to get our shit together first.

Whether at this point there is still a reason to send canned monkeys is a completely different question.

Replies from: WalterL, DanielLC
comment by WalterL · 2015-11-17T19:12:49.806Z · LW(p) · GW(p)

I've never thought colonizing worlds outside of the solar system with human beings was reasonable. If we are somehow digitized, and continue to exist as computer programs, then sure.

Replies from: Stingray
comment by Stingray · 2015-11-17T22:19:42.535Z · LW(p) · GW(p)

Are there any science fiction novels that take this approach?

Replies from: NancyLebovitz, philh
comment by NancyLebovitz · 2015-11-18T06:01:58.760Z · LW(p) · GW(p)

Charles Stross's Saturn's Children and Neptune's Brood have robots with minds based on humans as humanity's successors.

Donald Moffitt's Genesis Quest and Second Genesis have specs for humans sent out by radio and recreated by aliens.

James Hogan's Voyage from Yesteryear has a probe that recreates humans on another planet, where they are raised by robots.

comment by philh · 2015-11-18T10:22:01.741Z · LW(p) · GW(p)

The characters in Greg Egan's Diaspora are mostly sentient software, who send out several probes containing copies of themselves.

comment by DanielLC · 2015-11-18T07:49:36.560Z · LW(p) · GW(p)

Alternately, learn to upload people. Which is still probably going to require nanotech. This way, you're not dependent on ecosystems because you don't need anything organic. You can also modify computers to be resistant to radiation more easily than you can people.

If we can't thrive on a wrecked Earth, the stars aren't for us.

comment by passive_fist · 2015-11-17T22:54:56.057Z · LW(p) · GW(p)

The thing that is somewhat frustrating to me is that I've been saying this for years. In our current form, it is quite pointless to attempt interstellar colonization. But once we start uploading, it becomes straightforward, even easy.

comment by Daniel_Burfoot · 2015-11-17T21:19:26.024Z · LW(p) · GW(p)

It's strange that he doesn't talk about radical life extension. To me, the game plan is pretty clear:

  1. Discover life extension technology to enable humans to live for one million years.
  2. Colonize the galaxy
  3. ???
  4. Profit
comment by Lumifer · 2015-11-17T18:19:57.590Z · LW(p) · GW(p)

The great advantage of Robin Hanson's posts is that you can never tell when he's trolling :-D


...maybe low status men avoiding women via male-oriented video games isn’t such a bad thing?

Replies from: polymathwannabe
comment by polymathwannabe · 2015-11-17T18:30:29.773Z · LW(p) · GW(p)

I'm getting tired of the Overcoming Bias blog in general. It feels like for Hanson everything is translatable into status terminology.

Replies from: Viliam
comment by Viliam · 2015-11-18T08:35:19.402Z · LW(p) · GW(p)

Is he wrong though? Sometimes I feel I'm getting tired of humanity, because it makes everything about status.

Replies from: IlyaShpitser, NancyLebovitz, entirelyuseless, hg00, polymathwannabe, MrMind
comment by IlyaShpitser · 2015-11-18T17:01:58.365Z · LW(p) · GW(p)

Outside view: scientists often think their models apply to everything. Hanson is very insightful, but not immune to this, I think.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-18T19:43:24.480Z · LW(p) · GW(p)

Outside view: scientists often think their models apply to everything. Hanson is very insightful, but not immune to this, I think.

I think Hanson considers it his role to try to argue that his models fit everywhere and make the best possible case that the models apply.

I think you would sometimes get different answers from him if you would bet with him.

comment by NancyLebovitz · 2015-11-18T13:21:43.541Z · LW(p) · GW(p)

Not everything people do is about status, or we couldn't survive.

I don't have a handle on how status and useful actions interact with each other. If I had some idea of how to approach the subject (and I do think it's important), maybe I'd have an article for Main.

Replies from: Viliam
comment by Viliam · 2015-11-18T21:01:35.103Z · LW(p) · GW(p)

Not everything people do is about status, or we couldn't survive.

I agree.

Yet almost every human interaction has this... uhm... parallel communication channel where status is communicated and transferred. If you ignore it for a while, unless you make a big blunder, nothing serious happens. But in the long term the changes accumulate, and at some moment it will bite you. (Happened to me a few times, then I started paying more attention. Probably still less attention than would be optimal.)

Also, some people care about status less (this probably correlates with the autistic spectrum), but some people care more. Sometimes you have to interact with the latter, and the result of the interaction may depend on your status.

I prefer environments where I don't have to care about status fights, but they are merely "bubbles" in the large social context.

comment by entirelyuseless · 2015-11-18T13:53:41.223Z · LW(p) · GW(p)

Exactly. And I really like Hanson's blog, even though he's sometimes wrong, because he's very often right, and because even when he isn't, he says what he thinks no matter how weird it sounds.

comment by hg00 · 2015-11-19T07:10:50.738Z · LW(p) · GW(p)

It is a bit unfortunate, though, that talking about status can turn what would have been a productive fact-based discussion into a status competition.

comment by polymathwannabe · 2015-11-18T12:54:29.334Z · LW(p) · GW(p)

Well, you know the old saying: if your only tool is game theory, everything will look like signaling.

comment by MrMind · 2015-11-19T08:21:04.997Z · LW(p) · GW(p)

Well, as social animals, we have status evaluation deeply embedded in our biological firmware.
I suppose it's only because our psychological unity of consciousness is so far removed from the basic processes of the brain that we can find status irritating.

comment by Evan_Gaensbauer · 2015-11-16T09:11:35.510Z · LW(p) · GW(p)

I found out about Omnilibrium a couple of months ago, and I was thinking of joining eventually. I was also thinking of telling some friends of mine who might want to get in on it even more than I do. However, I've been thinking that if I told lots of people, or they themselves told lots of people, then suddenly Omnilibrium might get flooded with dozens of new users at once. I don't know how big that is compared to the whole community, but I was thinking Omnilibrium would be averse to growing too big, as well-kept gardens die by pacifism and all that. But then, Slate Star Codex linked to it a few weeks ago. So, that's plausibly hundreds of new users flooding it.

I'm wondering, how do the admins of Omnilibrium feel about this? Are you happy to have many new users? Are you upset SSC linked to Omnilibrium, bringing it to the attention of so many people who may not necessarily maintain the quality of discourse current users of Omnilibrium have gotten accustomed to?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-11-16T19:15:33.913Z · LW(p) · GW(p)

The whole point of the site is to do automated moderation and curation. At the moment, it is so small that it is serving no purpose better than a human dictator would. The whole point is that algorithms can scale. Maybe the algorithms aren't yet ready for prime time and maybe it's better if it grows slowly so that they have time to understand how to modify the algorithms. In particular, I believe that it currently grades users on a single political axis, while with more users it would probably be better to have a more complicated clustering scheme. But you probably won't cause it to grow rapidly, anyhow.
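For illustration only, here is what a step beyond a single axis might look like: represent each user by their whole vector of votes and cluster those vectors. Everything below — the users, the votes, the method — is a hypothetical sketch, not how Omnilibrium actually works:

```python
def kmeans(points, k, iters=20):
    """Tiny k-means sketch: cluster users by their whole vote vectors
    instead of scoring them on a single axis. Illustrative only."""
    centers = [list(p) for p in points[:k]]  # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each user to the nearest cluster center.
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned users.
        centers = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Hypothetical users described by up/down votes (+1/-1) on four posts.
users = [(1, 1, -1, -1), (-1, -1, 1, 1), (1, 1, -1, 1), (-1, -1, 1, -1)]
clusters = kmeans(users, k=2)
```

With more users and more posts, the number of clusters (and whether hard clusters are even the right model, versus a low-dimensional embedding) becomes the interesting question.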

comment by Lumifer · 2015-11-18T17:15:29.981Z · LW(p) · GW(p)

Recommended: a conversation between Tyler Cowen and Cliff Asness about financial markets. Especially recommended for people who insist that markets are fully efficient.


A momentum investing strategy is the rather insane proposition that you can buy a portfolio of what’s been going up for the last 6 to 12 months, sell a portfolio of what’s been going down for the last 6 to 12 months, and you beat the market. Unfortunately for sanity, that seems to be true.


One thing I should really be careful about. I throw out the word “works.” I say “This strategy works.” I mean “in the cowardly statistician fashion.” It works two out of three years for a hundred years. We get small p-values, large t-statistics, if anyone likes those kind of numbers out there. We’re reasonably sure the average return is positive. It has horrible streaks within that of not working.


...what’s the actual vision of human nature? What’s the underlying human imperfection that allows it to be the case, that trading on momentum across say a 3 to 12 month time window, sorry, investing on momentum, will work? What’s with us as people? What’s the core human imperfection?
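The rule in the first quote is simple enough to sketch. A toy cross-sectional version in Python, with made-up monthly prices — no claim that this resembles the actual strategy being discussed, which involves ranking a large universe, weighting, and transaction costs:

```python
def momentum_signal(prices, lookback=6):
    # Trailing return over `lookback` periods. (Skipping the most recent
    # month is a common refinement, omitted here for simplicity.)
    return prices[-1] / prices[-1 - lookback] - 1.0

# Toy monthly prices for three hypothetical assets (entirely made up).
history = {
    "A": [100, 102, 105, 107, 110, 114, 118],  # steady uptrend
    "B": [100, 99, 97, 96, 94, 92, 90],        # steady downtrend
    "C": [100, 101, 100, 102, 101, 103, 102],  # flat-ish
}

signals = {name: momentum_signal(p) for name, p in history.items()}
ranked = sorted(signals, key=signals.get, reverse=True)
long_leg, short_leg = ranked[0], ranked[-1]  # buy the winner, short the loser
```

The puzzle in the interview is not the mechanics, which are this trivial, but why such a rule should keep earning a positive average return once everyone knows about it.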

Replies from: ChristianKl
comment by ChristianKl · 2015-11-18T19:24:21.328Z · LW(p) · GW(p)

What’s the core human imperfection?

Maybe because everybody thinks that you should buy a stock when it's low and sell it when it's high?

comment by Panorama · 2015-11-19T22:58:05.117Z · LW(p) · GW(p)

Disinformation Review, a weekly publication that collects examples of Russian disinformation attacks.

The main aim of this product is to raise the awareness about Russian disinformation campaign. And the way to achieve this goal is by providing the experts in this field, journalists, academics, officials, politicians, and anyone interested in disinformation with some real time data about the number of disinformation attacks, the number of countries targeted, the latest disinformation trends in different countries, the daily basis of this campaign, and about the coordination of the disinformation spread among many countries.

Our global network of journalists, government officials, academics, NGOs, think tanks (and other people / initiatives dealing with this issue) provides us with the examples of current disinformation appearing in their countries. East StratCom Task Force compiles their reports and publishes a weekly review of them. The document with the data collected is public and free for further use – journalists may use it as a source for their products, decision makers and government officials may find relevant information about the latest state of events, experts may find data for their analysis, NGOs and think tanks may share the knowledge about this issue with the rest of the world.

Replies from: Viliam, knb
comment by Viliam · 2015-11-20T09:04:29.954Z · LW(p) · GW(p)

This is extremely important.

Truths are entangled, and if you once tell a lie, the truth is ever after your enemy. In politics, it is often an advantage to sell a specific lie. Sometimes the most efficient way to do that repeatedly is to allocate a huge budget for "lowering the sanity waterline".

(Here is an example of what it looks like when someone uses a political crisis in your country to launch an insanity attack.)

comment by knb · 2015-11-22T04:11:19.033Z · LW(p) · GW(p)

How do you know this isn't a disinformation attack against Russia?

comment by SanguineEmpiricist · 2015-11-16T19:43:43.420Z · LW(p) · GW(p)

How are you all doing today? I'm having a pretty good start to my day (it's 11:42 am) here :P

I have found Kruschke's Bayesian data analysis text and Gelman's to be pretty good companions to each other, and I'm glad I bought both. Personally, I also found that building a physical personal library was much better for my personal development than probably any other choice I made over the last year and a half. Libraries are definitely antifragile.

Also, Feller vol. 2 in paperback is 8 dollars used.

comment by [deleted] · 2015-11-19T11:26:35.378Z · LW(p) · GW(p)

Is anybody interested in Moscow postrationality meetup?

comment by TylerJay · 2015-11-18T02:29:27.509Z · LW(p) · GW(p)

I'm curious about how others here process study results, specifically in psychology and the social sciences.

The (p < 0.05) threshold for statistical significance is, of course, completely arbitrary. So when I get to the end of a paper and a result that came in at, for example, (p < 0.1) is described as "a non-significant trend favoring A over B," part of me wants to just go ahead and update a little bit, treating it as weak evidence, but I obviously don't want to do even that if there isn't a real effect and the evidence is unreliable.

I've found that study authors are often inconsistent with this—they'll "follow the rules" and report no "main effect" detected when walking you through the results, but turn around and argue for the presence of a real effect in the discussion/analysis based on non-individually-significant trends in the data.

The question of how to update is further compounded by (1) the general irreproducibility of these kinds of studies, which may indicate the need to apply some kind of global discount factor to the weight of any such study, and (2) the general difficulty of properly making micro-adjustments to belief models as a human.

This is exactly the situation where heuristics are useful, but I don't have a good one. What heuristics do you all use for interpreting results of studies in the social sciences? Do you have a cutoff p-value (or a method of generating one for a situation) above which you just ignore a result outright? Do you have some other way of updating your beliefs about the subject matter? If so, what is it?

Replies from: gjm
comment by gjm · 2015-11-18T12:06:58.978Z · LW(p) · GW(p)

I don't spend enough of my time reading the results of studies that you should necessarily pay much attention to what I think. But: you want to know what information it gives you that the study found (say) a trend with p=0.1, given that the authors may have been looking for such a trend and (deliberately or not) data-mining/p-hacking and that publication filters out most studies that don't find interesting results.

So here's a crude heuristic:

  • There's assorted evidence suggesting that (in softish-science fields like psychology) somewhere on the order of half of published results hold up on closer inspection. Maybe it's really 25%, maybe 75%, but that's the order of magnitude.
  • How likely is a typical study result ahead of time? Maybe p=1/4 might be typical.
  • In that case, getting a result significant at p=0.05 should be giving you about 4.5 bits of evidence but is actually giving you more like 1 bit.
  • So just discount every result you see in such a study by 3 bits or so. Crudely, multiply all the p-values by 10.
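That heuristic can be written down directly (a crude sketch of the discount described above; subtracting 3 bits multiplies a p-value by 2³ = 8, i.e. roughly "by 10"):

```python
import math

def discounted_p(p, discount_bits=3.0):
    # Treat -log2(p) as the nominal bits of evidence, knock off
    # `discount_bits` for p-hacking/publication bias, convert back.
    nominal_bits = -math.log2(p)
    effective_bits = max(nominal_bits - discount_bits, 0.0)
    return 2.0 ** -effective_bits

# A nominally significant p = 0.05 (~4.3 bits) becomes ~0.4 after the
# discount; a p = 0.5 result carries essentially no evidence at all.
```

Like the prose version, this is a back-of-the-envelope correction, not a principled Bayesian update; the 3-bit figure is only order-of-magnitude.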

You might (might!) want to apply a bit less discounting in cases where the result doesn't seem like one the researchers would have been expecting or wanting, and/or doesn't substantially enhance the publishability of the paper, because such results are less likely to be produced by the usual biases. E.g., if that p=0.1 trend is an incidental thing they happen to have found while looking at something else, you maybe don't need to treat it as zero evidence.

This is likely to leave you with lots of little updates. How do you handle that given your limited human brain? What I do is to file them away as "there's some reason to suspect that X might be true" and otherwise ignore it until other evidence comes along. At some point there may be enough evidence that it's worth looking properly, so then go back and find the individual bits of evidence and make an explicit attempt to combine them. Until then, you don't have enough evidence to affect your behaviour much so you should try to ignore it. (In practice it will probably have some influence, and that's probably OK. Unless it's making you treat other people badly, in which case I suggest that the benefits of niceness probably outweigh those of correctness until the evidence gets really quite strong.)

Replies from: TylerJay
comment by TylerJay · 2015-11-18T18:30:26.174Z · LW(p) · GW(p)

Thank you! This is exactly what I was looking for. Thinking in terms of bits of information is still not quite intuitive to me, but it seems the right way to go. I've been away from LW for quite a while and I forgot how nice it is to get answers like this to questions.

comment by Bound_up · 2015-11-21T12:48:43.461Z · LW(p) · GW(p)

I've never understood the appeal of uploading.

Just once, I've seen someone talk about an idea, which I strongly doubt is mainstream: that there's this question about which hardware "you" will "wake up" in. Surely not. Both would be conscious, right?

If I upload myself, there are two of me.

But this doesn't make me feel like I don't mind dying. What do I care if the world will continue with another of me? I want to live. It's not that I want someone who is me to keep existing, I want to keep living myself.

Am I confused about why people think of this as a life extension possibility?

Replies from: knb, Tem42
comment by knb · 2015-11-21T21:29:54.305Z · LW(p) · GW(p)

It's not that I want someone who is me to keep existing, I want to keep living myself.

I want both, personally. If my organic body was going to die but I could create an upload version of myself I definitely would. I would take solace in the fact that some version of me was going to continue on.

comment by Tem42 · 2015-11-21T21:49:24.741Z · LW(p) · GW(p)

There are a number of different reasons different people give as to why uploading is good.

For example, I do see making copies of myself as a good and positive goal. If there are two of me, all other things being equal, I am twice as well off -- regardless of whether or not I have any interaction with myselves. I am a really good thing and there should be more of me.

Some people, on the other hand, either subconsciously assume or actively desire a destructive upload -- they have one copy of themselves, the software copy, and that's all they want. The meat body is either trash to be disposed of or simply not considered.

Closely related, some people conceive of a unitary selfhood as an inherently valuable thing, but also want access to a meat body in some form. In this case, duplication/destruction is a problem to be solved -- the meat body might be disposed of, might be disposed of but DNA kept for potential reanimation, might be kept in cold storage... If we go by published science fiction, this seems to be the most common model. This is an interesting case in which the meat body (and perhaps the brain specifically) is often seen as a good and desirable thing, and in some cases the point of uploading is only that it is a useful way of insuring immortality (John Varley style transhumanism).

With so many mental models to choose from, it is not surprising that anyone who does not want to think about a lonely meatbody wasting away on a dying Earth just doesn't bother to consider the problem. It's an easy issue to ignore, when most people are still doubtful that uploading will be a possibility in their lifetime.

However, I think in most cases, people who think about uploading see it as "better than dying", while at the same time acknowledging your concern that really, someone called you is dying. Whether they see this as a personal death (as you do) or statistical death ("half of me's just died!") probably has no more ontological fact behind it than whether or not you have a soul... but of course, there are plenty of people who are willing to argue for hours on either of those points :-)

Replies from: Bound_up
comment by Bound_up · 2015-11-22T01:09:40.506Z · LW(p) · GW(p)

But it's precisely this reference to "my meatbody" and "my computer body" or whatever that confuses me. When you upload, a new consciousness is created, right? You don't have two bodies, you just have a super-doppelganger. He can suffer while I don't, and vice versa. And he can die, and I'll go on living. And I can still die just as much as before, while the other goes on living. I don't understand what about this situation would make me okay with dying.

So I could understand it as valuable to someone for other reasons, but I don't understand its presentation as a life extension technology.

Replies from: wizard, Tem42
comment by wizard · 2015-11-22T02:00:10.946Z · LW(p) · GW(p)

My understanding is that LWers do not believe in a permanent consciousness.

  • A teleporter makes a clone of you with identical brain patterns: did it get a new consciousness, how do you tell your consciousness didn't go to the clone, where does the consciousness lie, is it real, etc.
  • It's not real, therefore the clone is literally you.

Either that or we're dying every second.

comment by Tem42 · 2015-11-22T02:54:00.583Z · LW(p) · GW(p)

I understand what you are saying, and I think that most people would agree with your analysis (at least, once it is explained to them). But I also think that it is not entirely coherent. For example, imagine that we had the technology to replace neurons with nanocircuits. We inject you with nanobots and slowly, over the course of years, each of your brain cells is replaced with an electronic equivalent. This happens so slowly that you do not even notice -- consciousness is maintained unbroken. Then, one at a time, the circuits and feedback loops are optimized; this you do notice, as you get a better memory and you can think faster and more clearly; throughout this, however, your consciousness is maintained unbroken. Then your memory is transcribed onto a more efficient storage medium (still connected to your brain, and with no downtime). You can see where this is going. There is no point where it is clear that one you ceases and another begins, but at the end of the process you are a 'computer body'. Moreover, while I set this up to happen over years, there's no obvious reason that you couldn't speed the example up to take seconds.

Wizard has given another example; most of us accept Star Trek style transporters as a perfectly reasonable concept (albeit maybe impossible in practice), but when you look at them closely they present exactly the sort of moral/ontological dilemma you are worried about. This suggests that we do not fully grok even our own concept of personal identity.

One solution is to conclude that, after much thought, if you cannot define a consistent concept of persistence of personal identity over time, perhaps this is because it is not an intellectual concept, but a lizard-brain panic caused by the mention of death.

In my mind this is exactly the same sort of debate people have over free will. The concept makes no real sense as an ontological concept, but it is one so deeply ingrained in our culture that it takes a lot of thought to accept that.

Replies from: Bound_up, entirelyuseless
comment by Bound_up · 2015-11-22T15:20:11.693Z · LW(p) · GW(p)

So if uploading was followed by guillotining the "meatbody," would you sign up?

I have no problem with the brain just being one kind of hardware you can run a consciousness on. I have no problem with transporting the mind from one hardware to another, instantaneously, if you can do it in between the neural impulses.

But it seems like people mean you get scanned, a second, fully "real," person comes into existence, and this is supposed to extend your life.

Are we to believe that the new consciousness would be fine with being killed, just because you would still be around afterwards? Would their life be extended in you even if they were deleted after being created? Are they going to stick around feeling and experiencing life because you exist?

My confusion is that these seem like obvious points. Why are people even taking this seriously, why is it on the list?

I can fully understand why the rest of us might like to upload the great people of the world, or maybe everybody if we value having them around. But I don't think this should make them feel indifferent to their deaths, because it's not extending anyone's life.

I put this in the open thread because I assumed I was just ignorant of some key part of the process. If this is really it, maybe these points should be their own post and we can kick uploading off the life extension possibility list.

Replies from: Tem42
comment by Tem42 · 2015-11-22T16:25:43.902Z · LW(p) · GW(p)

I would not sign up for a destructive upload unless I was about to die. But if I was convinced that I was about to die, then I absolutely would.

I don't think that you are missing anything, really. If I uploaded the average transhumanist, and then asked the meatbody (with mind intact) what I should do with the meatbody, they'd say either to go away and leave them alone or to upload them a few more times, please. If I asked them if they were happy to have a copy uploaded, they would say yes. If I asked them if they were disappointed that they were the meatbody version of themselves, they'd say yes. If I asked if the meatbody would now like an immortality treatment, they would say yes. If I asked the uploaded copy if they wanted the meatbody to get the immortality treatment, they would say yes.... I think.

I think that uploading is on the list primarily because there is a lot of skepticism that the original human brain can last much more than ~150 years. Whether or not this skepticism is justified is still an open question.

Uploading may also get a spot on the list because if you can accept a destructive upload, then your surviving self does get (at least theoretically) a much much better life than is likely to be possible on meatEarth.

comment by entirelyuseless · 2015-11-22T14:42:35.599Z · LW(p) · GW(p)

If you accept this solution, however, you might also say that neither uploading nor life extension technology in general is actually necessary, because many other things, such as having children, are just as good objectively, even if your lizard-brain panic caused by the mention of death doesn't agree.

Replies from: Tem42
comment by Tem42 · 2015-11-22T16:12:39.924Z · LW(p) · GW(p)

I like children and want children that are as cool as I am. But no child of mine has a statistically significant chance of being me.

"Just as good objectively" misses the point on two counts:

  1. Lots of things are as good as other things. But just because tiramisu is just as good as chocolate mousse, this does not mean that it is okay to get rid of chocolate mousse. What might make it okay to get rid of chocolate mousse is if you had another dish that tasted exactly like chocolate mousse, to the point that the only way you could tell which is which was by looking at the dish it was in.

  2. This is not a question of objectivity -- this is a question of managing your own subjective feelings. I may well find that I am best off if I keep my highly subjective view that I am one of the most important people in my world, but also that I am better off if I reject my subjective view that meatbody death is the same as the death of me.

Etid: tpos.

Replies from: entirelyuseless
comment by entirelyuseless · 2015-11-22T16:20:29.148Z · LW(p) · GW(p)

The point is that "so and so is me" is never an objective fact at all. So if the child has no chance of being you, neither does the upload. If you are saying that you can identify with the upload, that is not in any objective sense different from identifying with your child, or identifying with some random future human and calling that a reincarnated version of yourself.

And I don't object to any of that; I think it may well be true that it is objectively just as good to have a child and then to die, as to continue to live yourself, or to upload yourself and die bodily. As you say, the real issue is managing your feelings, and it is just a question of what works for you. There is no reason to argue that having children shouldn't be a reasonable strategy for other people, even if it is not for you.

Replies from: Tem42
comment by Tem42 · 2015-11-22T16:54:17.618Z · LW(p) · GW(p)

Granted, and particularly true, I'd like to think, for rationalists.

It is reasonable to argue that any social/practical aspect of yourself also exists in others, and that the most rational thing to do is to a) confirm that this is an objectively good thing and b) work to spread it throughout the population. This is a good reason to view works of art, scientific work, and children as valid forms of immortality. This is particularly useful to focus on if you expect to die before immortality breakthroughs happen, but as a general outlook on life it might be more socially (and economically) productive than any other. As some authors have pointed out, immortality of the individual might equal the stagnation of society.

Accepting death of the self might be the best way forward for society, but it is a hard goal to achieve.

comment by bokov · 2015-11-17T12:12:45.857Z · LW(p) · GW(p)

Finally, someone with a clue about biology tells it like it is about brain uploading

In reading this, I suggest being on guard against your own impulse to find excuses to dismiss the arguments presented, because they call into question some beliefs that seem to be deeply held by many in this community.

Replies from: username2
comment by username2 · 2015-11-17T13:12:09.268Z · LW(p) · GW(p)

If they never studied those things, they would never figure out the answers to those objections. If they already knew about all these things, new studies wouldn't be needed. What else is there to study if not things we don't understand?

Replies from: CellBioGuy, bokov, ChristianKl
comment by CellBioGuy · 2015-11-17T16:02:24.022Z · LW(p) · GW(p)

The Human Brain Project is largely not study; it's a huge, pointless database and 'simulation' project (without knowing what you are simulating) for its own sake. Which is why so many scientists hate it: for its pointlessness, and for taking research money that could actually be productive elsewhere rather than becoming buzzword salad.

comment by bokov · 2015-11-17T14:06:44.272Z · LW(p) · GW(p)

I agree. My reason for posting the link here is as a reality check -- LW seems to be full of people firmly convinced that brain uploading is the only viable path to preserving consciousness, as if the implementation "details" were an almost-solved problem.

Replies from: Baughn
comment by Baughn · 2015-11-19T21:57:27.416Z · LW(p) · GW(p)

Ah, no. I do agree that uploading is probably the best path, but B doesn't follow from A.

Just because I think it's the best option, doesn't mean I think it'll be easy.

comment by ChristianKl · 2015-11-18T00:25:38.667Z · LW(p) · GW(p)

The Human Brain Project exists because "we want to simulate the human brain" is a project that can be sold to politicians. A lot of more sensible projects, such as funding replication of already existing research, aren't as easily sold.

comment by ScottL · 2015-11-17T00:36:32.836Z · LW(p) · GW(p)

What do people think of the “When should I post in Discussion, and when should I post in Main?” section in the FAQ?

I find myself looking less and less in Main because I don't see much content in there besides the meetup posts. I have a suggestion which might improve this, and that is to update the FAQ so that it encourages reposting highly voted content from Discussion into Main. This would have a couple of benefits:

  • It would allow the potential main articles to go through a process of vetting. It would be suggested that only highly voted (15 karma or more, maybe) posts should be reposted in Main.
  • The post in discussion can be improved based on the provided comments before it is reposted to main
  • The comments from the draft version of the article in discussion can be discarded
  • It would allow double visibility for better posts, where better is decided by their karma level.
  • This would just be a suggestion in the FAQ. Anyone can still post in Main straight away if that is desired.

The problems I see with moving the post from discussion to main once it is highly voted is that:

  • I think there might be a bug where the extra karma from the difference in discussion vs Main doesn’t go towards your total karma, but does go towards your monthly karma.
  • If there is revision in the post, the comments are still there from the old version of the post.
Replies from: None
comment by [deleted] · 2015-11-17T02:50:31.440Z · LW(p) · GW(p)

I think we should get rid of "main" and "promoted" .

Right now there's four tiers: open thread, discussion, main, and main promoted.

at least once a week I see a comment that says "this should be in main," "this shouldn't be in main," "this should be in the open thread," or "this shouldn't be in the open thread, it should be its own post".

I think the two tier system of open thread/discussion would suffice, and the upvote downvote mechanism could take care of the rest.

Replies from: Viliam, ScottL, hg00
comment by Viliam · 2015-11-18T08:39:11.946Z · LW(p) · GW(p)

Right now there's four tiers: open thread, discussion, main, and main promoted.

And the "main" tier is actually worse than the "discussion" tier. :(

So I'd recommend removing only the dysfunctional part, and have: open thread, discussion, discussion promoted.

Replies from: hg00
comment by hg00 · 2015-11-19T07:20:22.648Z · LW(p) · GW(p)

As far as I can tell "promoted" is meaningful because (a) there's a promoted RSS feed (b) it goes to the LW twitter feed. Posts get "promoted" by people who have admin powers, but most of the users with admin powers left a while ago. I think I would kill "promoted" and maybe make it so that if your discussion post gets at least +4 or something, it goes to the old promoted RSS feed and the twitter feed.

comment by ScottL · 2015-11-17T23:15:58.966Z · LW(p) · GW(p)

I think we should get rid of "main" and "promoted" .

Do you think that “main” is a bad idea or that we should get rid of “main” because it hasn’t had much content for a while?

I personally like the concept of “main” because from a site mechanics point of view with its (10x) karma it indicates that less wrong promotes and prioritizes longer, multi-post and referenced material, which is the type of material I am more interested in.

Replies from: None
comment by [deleted] · 2015-11-18T12:38:19.929Z · LW(p) · GW(p)

I like the concept of "main" for exactly the same reasons. However, it seems like most people who would post longer, more-referenced material are no longer contributing here. Indeed, even detailed discussion posts are now rare; most content now seems to be in open threads.

This dwindling content can be seen most clearly in the "Top Contributors, 30 Days" display. At the time I write this there are only seven posters with > 100 karma in the past 30 days, and it takes only 58 karma to appear on the list of 15. Perhaps the question should not be whether the content of LW should be reorganised, but whether LW is fulfilling its desired purpose any longer.

As nearly all the core people who worked the hardest to use this site to promote rationality are no longer contributing here, I wonder if this goal is still being achieved by LW itself. Is it still worth reading? Still worth commenting here?

Replies from: signal
comment by signal · 2015-11-18T16:40:18.733Z · LW(p) · GW(p)

LW does seem dying and mainly useful for its old content. Any suggestions for a LW 2.0?

Replies from: hg00, Vaniver
comment by hg00 · 2015-11-19T07:24:54.039Z · LW(p) · GW(p)

Yvain, #2 in all-time LW karma, has his own blog which is pretty great. The community has basically moved there and actually grown substantially... Yvain's posts regularly get over 1000 comments. (There's also Eliezer Yudkowsky's facebook feed and the tumblr community.) Turns out online communities are hard, and without a dedicated community leader to tweak site mechanics and provide direction, you are best off just taking a single top contributor and telling them to write whatever they want. Most subreddits fail through Eternal September; Less Wrong is the only community I know of that managed to fail from the opposite effect of setting an excessively high bar for itself. Good online communities are an unsolved and nontrivial problem (but probably worth solving since the internet is where discussions are happening nowadays--a good solution could be great for our collective sanity waterline).

I haven't visited Hacker News for a while, but it seemed like the leadership there was determined to create a quality community by whatever means possible, including solving Eternal September without oversolving it. I'll bet there is a lot to learn from them.

Replies from: Viliam
comment by Viliam · 2015-11-19T14:10:18.000Z · LW(p) · GW(p)

Writing high-quality content is one problem, selecting high-quality content is another. This is the advantage of one-person blogs, where if the author consistently writes high-quality content, both problems are solved at the same time.

The role of author is difficult and requires some level of talent, but it can also be emotionally rewarding. The author gets fans, maybe even money: from context advertising, asking for donations, selling their own product or services.

The role of censor (the person who filters what other people wrote) is emotionally punishing. Whatever you do, some people will hate you. If you remove an article, the author of the article, plus everyone who liked the article, will hate you. If you don't remove an article, everyone who disliked the article will hate you. There are no exact rules; some cases are obvious, but some cases are borderline and require your personal choice; and however you choose, people who would choose otherwise will hate you. People will want mutually conflicting things: some of them prefer higher quality, some of them prefer more content, and both groups will suspect that if you did your job right, the website would have content both excellent and numerous. It is very difficult for the censor to learn from feedback, because the feedback will be negative either way, and thus does not work as evidence for doing the job correctly or not.

The author writes when he or she wishes. The censor works 24/7. Etc.

Give me a perfect (x-rational, unbiased, and tireless) censor, and we can have a great rationalist website. Here is how -- In version 1.0, the censor would create a subreddit. Then he would look at a few rationalist blogs (and facebook pages, and tumblr pages, etc.), and whatever passed his filter, he would post in the subreddit. Also, anyone would be allowed to post/link things on the subreddit, and the censor would delete them if they were not good enough. Also, the censor would delete comments, and possibly ban users, if they were not good enough.

This is all that is necessary to create a great rationalist debate forum. But it is very difficult. Not to do it once -- but to keep doing it every day, for months and years, despite getting only negative feedback.

Replies from: hg00, tut
comment by hg00 · 2015-11-19T23:09:46.171Z · LW(p) · GW(p)

There might be clever ways to distribute the job of censor, e.g. have an initial cadre of trusted users and ban any newcomer that gets voted down too much by your trusted users. Someone gets added to the trusted users if the existing trusted users vote them up sufficiently. Or something like that. But I expect you would need someone to experiment with the site full time for a while (years?) before the mechanics could be sufficiently well worked out.
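The scheme sketched above can be made concrete. The following is a toy illustration only: the thresholds, names, and the rule "only trusted users' votes count" are assumptions for the sake of the sketch, not a worked-out design.

```python
# Hypothetical sketch of the distributed-moderation idea: a newcomer is
# banned if trusted users downvote them past a threshold, and is promoted
# to trusted if trusted users upvote them enough. All numbers are invented.

BAN_THRESHOLD = -5
TRUST_THRESHOLD = 10

class Community:
    def __init__(self, trusted):
        self.trusted = set(trusted)   # initial cadre of trusted users
        self.banned = set()
        self.scores = {}              # newcomer -> net votes from trusted users

    def vote(self, voter, newcomer, delta):
        # Only votes cast by trusted users count toward trust/ban decisions.
        if voter not in self.trusted or newcomer in self.banned:
            return
        self.scores[newcomer] = self.scores.get(newcomer, 0) + delta
        if self.scores[newcomer] <= BAN_THRESHOLD:
            self.banned.add(newcomer)
        elif self.scores[newcomer] >= TRUST_THRESHOLD:
            self.trusted.add(newcomer)
```

Even this crude version shows where the experimentation would be needed: the thresholds control how fast the trusted set grows, and a badly chosen pair could either ossify the initial cadre or let it dilute.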

comment by tut · 2015-11-19T19:24:01.459Z · LW(p) · GW(p)

... Here is how ...

Is this similar to r/rationalistdiaspora?

Replies from: Viliam
comment by Viliam · 2015-11-19T20:08:10.035Z · LW(p) · GW(p)

Oh, I haven't seen r/rationalistdiaspora for a long time.

Looking there: The front page contains 25 posts, each of them is 2 days old, most of them don't have any upvotes, none of them has comments.

Nope. When people don't vote or comment, slow down. Choose only the best stuff; or, if you believe there really is that much great content, create one article containing multiple links.

Also, I guess you have to somehow create the initial community, to get the ball rolling. I don't know exactly how, but some startups solve this problem by having an invitation-only phase, where you can join only if an existing user invites you, which means that you have demonstrated your interest (artificial scarcity) and also that you know at least one person who is already there, thus you will keep coming to meet them, and they will keep coming to meet you.

Okay, I admit there is more than just having a good censor.

comment by Vaniver · 2015-11-19T16:16:44.806Z · LW(p) · GW(p)

I've been thinking about this for a few months. I'm pointing this out to commit to writing a main-level article by December 1st, hopefully earlier.

Replies from: Viliam, None
comment by Viliam · 2015-11-19T20:09:17.786Z · LW(p) · GW(p)

You have my upvote, which on December 1st will become a downvote unless you will have posted. (Just kidding.)

Replies from: Vaniver
comment by Vaniver · 2015-12-01T02:41:07.736Z · LW(p) · GW(p)

Article written, edited, slept on, and edited again. I could post it now but will wait until the 2nd for timing reasons.

comment by [deleted] · 2015-12-03T14:06:11.146Z · LW(p) · GW(p)

Upvoted today for following through (and raising this discussion in a constructive and thoughtful manner).

comment by hg00 · 2015-11-19T07:17:22.438Z · LW(p) · GW(p)

If this change is made, the karma multiplier for a discussion post should also be increased. Right now making a 10 point discussion post gets you 10 karma but making a 10 point post in Main gets you 100 karma. Which doesn't make much sense given the reality of how Main & Discussion are being used (virtually identically: basically you post to Discussion if you're a person with a humble disposition). I'm in favor of having a solid multiplier for discussion posts, 4x at the absolute least, to encourage more toplevel posts. I would also disable downvoting for users with less than 100 karma to encourage more contributions... Less Wrong is such a dinosaur at this point there's little reason not to try this kind of radical change.

Replies from: tut
comment by tut · 2015-11-19T19:29:45.738Z · LW(p) · GW(p)

I would like to combine your two suggestions like so: Posts in discussion still earn 1 karma per vote. But as soon as a post gets at least five or so points it transfers to promoted. And then you get 10 karma per vote the post receives after getting promoted.

Posts in promoted are visible to people reading discussion, but readers can choose to see only promoted posts.

That way you have a smaller downside risk (if your post is received poorly you only lose one karma per downvote), but you can still get more karma if you write a substantial post that people like.
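The proposed rule can be sketched in a few lines. The 5-point threshold and the 10x multiplier come from the thread; the exact crediting order (the vote that triggers promotion is still worth 1) is a guess, since the proposal doesn't pin it down.

```python
# Toy model of the proposed karma rule: votes on a Discussion post are
# worth 1 karma each until the post reaches the promotion threshold,
# after which each further vote is worth 10. Details are illustrative.

PROMOTE_AT = 5

def author_karma(votes):
    """votes: sequence of +1/-1 on one post, in order; returns author karma."""
    score = 0      # the post's displayed score
    karma = 0      # karma credited to the author
    promoted = False
    for v in votes:
        score += v
        karma += 10 * v if promoted else v
        if not promoted and score >= PROMOTE_AT:
            promoted = True
    return karma
```

Note that a post which is only ever downvoted never promotes, so each downvote costs just 1 karma, which is exactly the asymmetry the proposal is after.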

Replies from: hg00
comment by hg00 · 2015-11-19T23:14:45.995Z · LW(p) · GW(p)

I really like the suggestion of making it so downvotes only cost 1 karma on toplevel posts. But it seems weird to have the marginal karma from an upvote suddenly switch from 1 to 10 as soon as you get at least 5 points.

comment by [deleted] · 2015-11-16T11:50:12.727Z · LW(p) · GW(p)

Live in LA? On the autism spectrum? Got social anxiety or social phobia? You're eligible for legal MDMA therapy. Congrats. For the rest of you out there, take it from me: don't do ecstasy, it's unreliable. The tests are shitty.

Live in Canada and got an addiction? Ayahuasca for you!. Live in Australia with PTSD? Soon.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2015-11-16T21:03:00.444Z · LW(p) · GW(p)

Just buy high quality stuff from black markets. It's pretty simple. If you ask around you should be able to find a local hook who has some, just stay updated with the scene.

Replies from: None
comment by [deleted] · 2015-11-17T05:53:44.444Z · LW(p) · GW(p)

That is improper. I prefer lawful transactions sanctioned by expert opinion.

You can get 2 'doses' of crystal MDMA here in Melbourne for $50 from 'Alex'. But who knows how good it is. Dealers don't sell purity kits and they were banned as of a few months ago from the dodgy stores like Off Ya Tree and that place near Flinders station.

comment by Panorama · 2015-11-19T22:37:03.834Z · LW(p) · GW(p)

A Quasipolynomial Time Algorithm for Graph Isomorphism: The Details

Laszlo Babai has claimed an astounding theorem, that the Graph Isomorphism problem can be solved in quasipolynomial time. On Tuesday I was at Babai’s talk on this topic (he has yet to release a preprint), and I’ve compiled my notes here. As in Babai’s talk, familiarity with basic group theory and graph theory is assumed, and if you’re a casual (i.e., math-phobic) reader looking to understand what the fuss is all about, this is probably not the right post for you. This post is research level theoretical computer science. We’re here for the juicy, glorious details.

Video of Laszlo Babai's talk.

comment by T3t · 2015-11-16T23:39:13.951Z · LW(p) · GW(p)

Has anybody donated a car to charity before (in the US? CA in particular, but I imagine it'll generalize outside of location-specific charities).

The general advice online is useful but not very narrowly-tailored. Couple points I'm looking for information on:

1) Good charities (from an EA perspective)

2) Clarification on the tax details (when car's fair market value is between $500 and $5000)

Would appreciate any advice.

Replies from: Tripitaka, knb
comment by Tripitaka · 2015-11-18T12:43:18.794Z · LW(p) · GW(p)

Since you didn't receive a lot of feedback, my thoughts:

a) Take your highest ranking EA orgs and ask them if they would benefit from having a car available to them. Donate car to that NGO.

b) Sell car at market value and donate money.

No clue to taxes, not being US-based.

comment by knb · 2015-11-17T02:10:47.735Z · LW(p) · GW(p)

Kars for Kids is one that advertises heavily but they use the revenue primarily to support ultra-orthodox religious education, which doesn't seem very EA to me.

comment by CronoDAS · 2015-11-16T20:25:48.208Z · LW(p) · GW(p)

My girlfriend's cat poops on the carpet. The cat does poop in the litter boxes some of the time, and always urinates in them, but she also poops on the carpet several times a day in different places. (She also never buries her poop when she does use the boxes.) Any advice?

Replies from: James_Miller, drethelin, Dagon, raydora, ZankerH
comment by James_Miller · 2015-11-16T21:34:58.147Z · LW(p) · GW(p)

Get a new girlfriend. (Probably easier than getting your current girlfriend to get a new cat.)

Replies from: CronoDAS, username2
comment by CronoDAS · 2015-11-17T19:39:29.708Z · LW(p) · GW(p)

My girlfriend actually doesn't like the cat very much - I'm more of a cat person than she is, so her cat has sort of become my cat... I just wish the cat didn't leave "land mines" on the carpet for us to clean up.

comment by username2 · 2015-11-17T13:13:58.950Z · LW(p) · GW(p)

This is silly.

Replies from: James_Miller
comment by James_Miller · 2015-11-17T17:48:35.514Z · LW(p) · GW(p)

Not for men who have dated women who have cats.

Replies from: Lumifer
comment by Lumifer · 2015-11-17T17:58:43.700Z · LW(p) · GW(p)

Not for men who have dated women who have cats.

If you find yourself below the cat on the totem pole, maybe you do want a cat which poops less often...

Replies from: CronoDAS
comment by CronoDAS · 2015-11-17T19:28:32.160Z · LW(p) · GW(p)

She actually doesn't like the cat very much...

comment by drethelin · 2015-11-17T01:26:27.325Z · LW(p) · GW(p)

The cat is probably unhealthy, they don't normally poop several times a day

comment by Dagon · 2015-11-16T20:51:34.346Z · LW(p) · GW(p)

The top Google hits will give reasonable advice. Likely: the cat doesn't like the litterbox for some reason -- wrong kind of litter, too small, too far away, or not changed often enough.

comment by raydora · 2015-11-18T11:57:57.726Z · LW(p) · GW(p)

Have you talked to her about it? What does she say?

Replies from: Viliam
comment by Viliam · 2015-11-18T12:11:03.070Z · LW(p) · GW(p)


comment by ZankerH · 2015-11-17T09:05:44.523Z · LW(p) · GW(p)

Donate most of your disposable income to MIRI.

Replies from: Viliam, IlyaShpitser, CronoDAS
comment by Viliam · 2015-11-18T08:44:22.151Z · LW(p) · GW(p)

Selling the cat and donating the money to MIRI would kill two birds with one stone.

comment by IlyaShpitser · 2015-11-17T21:52:19.535Z · LW(p) · GW(p)

There's more to life than project mayhem.

Replies from: gjm
comment by gjm · 2015-11-18T12:27:26.041Z · LW(p) · GW(p)

I don't think ZankerH was making a serious suggestion.

(Perhaps I've misunderstood your comment; I haven't seen Fight Club. If you weren't taking ZankerH's comment as an actual attempt to get CronoDAS to give money to MIRI then I think I've missed your point.)

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-18T15:29:22.022Z · LW(p) · GW(p)

Poe's law: people say creepy things on LW all the time.

Replies from: philh, gjm
comment by philh · 2015-11-19T11:27:50.400Z · LW(p) · GW(p)

Sure, but - if most LW users could tell that that was a joke, and you couldn't, then that says more about your understanding of the LW zeitgeist than it does about the LW zeitgeist.

(Of course, I mostly only assume most LWers could tell it was a joke, because I thought it was obviously a joke.)

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-19T12:22:03.076Z · LW(p) · GW(p)

Also: some things the "LW zeitgeist" considers a joke are not that funny (EY facts, etc.)

comment by gjm · 2015-11-18T16:55:16.033Z · LW(p) · GW(p)

True enough. But not usually quite so irrelevantly as this one would be if ZankerH were seriously proposing it.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-11-18T17:00:06.583Z · LW(p) · GW(p)

Btw, did you end up reading anything of mine? Shoot me an email if so.

Replies from: gjm
comment by gjm · 2015-11-18T18:56:50.218Z · LW(p) · GW(p)

Oh, crap, the next step was meant to be me sending you my email address and I failed to do that because I was trying to find yours and found it surprisingly difficult. I don't know how because when I just tried again it took maybe ten seconds. I'm sending you an email right now.

comment by CronoDAS · 2015-11-17T19:54:12.609Z · LW(p) · GW(p)

Interesting advice, but not useful in context.

Replies from: gjm
comment by gjm · 2015-11-18T12:42:42.830Z · LW(p) · GW(p)

Isn't it a joke? Not a very funny one, I have to say, but a joke. Person A posts something to LW that doesn't seem to have much relevance to LW. So person B replies with something comically-stereotypically LW that doesn't have much relevance to what A posted. There's a certain symmetry about it, and its irrelevance is also a way of saying "this doesn't belong here and so shouldn't be taken seriously". So I guess it's both a joke and a criticism.

comment by [deleted] · 2015-11-16T14:15:25.032Z · LW(p) · GW(p)

There are profit (and income) premiums in vice industries from non-competitive behaviour by moralists. I wouldn't be surprised if moral entrepreneurs intersect with actual entrepreneurs.

Replies from: Dagon
comment by Dagon · 2015-11-16T18:57:24.579Z · LW(p) · GW(p)

Bigger than vice industries. See also the bootleggers and baptists model of regulation.

I'd be interested to hear more about "moral entrepreneurs". I'm guessing you don't mean people who take moral risks in order to maximize their morality.

comment by Panorama · 2015-11-19T22:45:12.924Z · LW(p) · GW(p)

Feeling like you're an expert can make you closed-minded

Victor Ottati at Loyola University and his colleagues manipulated their participants (US residents, average age in their 30s) to feel like relative experts or novices in a chosen field, through easy questions like “Who is the current President of the United States?” or tough ones like “Who was Nixon's initial Vice-President?” and through providing feedback to reinforce the participants’ feelings of knowledge or ignorance. Those participants manipulated to feel more expert subsequently acted less open-minded toward the same topic, as judged by their responses to items such as “I am open to considering other political viewpoints.”

People’s perceptions of their all-round expertise – provoked in the participants via an easy rather than a hard trivia quiz – also led them to display a close-mindedness in general, even though it was the participants who took the hard quiz who failed more, and reported feeling more insecure, irritable and negative – ingredients that are normally associated with close-mindedness. This isn’t to say that these emotional states didn’t have any effect, just that any effect was swamped by perceptions of expertise.

comment by Panorama · 2015-11-19T22:41:29.701Z · LW(p) · GW(p)

Stanford researchers uncover patterns in how scientists lie about their data

Even the best poker players have "tells" that give away when they're bluffing with a weak hand. Scientists who commit fraud have similar, but even more subtle, tells, and a pair of Stanford researchers have cracked the writing patterns of scientists who attempt to pass along falsified data.

The work, published in the Journal of Language and Social Psychology, could eventually help scientists identify falsified research before it is published.

There is a fair amount of research dedicated to understanding the ways liars lie. Studies have shown that liars generally tend to express more negative emotion terms and use fewer first-person pronouns. Fraudulent financial reports typically display higher levels of linguistic obfuscation – phrasing that is meant to distract from or conceal the fake data – than accurate reports.

To see if similar patterns exist in scientific academia, Jeff Hancock, a professor of communication at Stanford, and graduate student David Markowitz searched the archives of PubMed, a database of life sciences journals, from 1973 to 2013 for retracted papers. They identified 253, primarily from biomedical journals, that were retracted for documented fraud and compared the writing in these to unretracted papers from the same journals and publication years, and covering the same topics.

They then rated the level of fraud of each paper using a customized "obfuscation index," which rated the degree to which the authors attempted to mask their false results. This was achieved through a summary score of causal terms, abstract language, jargon, positive emotion terms and a standardized ease of reading score.
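A toy version of such a summary score is easy to sketch. Everything below is invented for illustration: the word lists, weights, and the crude readability proxy are not the ones from the paper, which uses a far more careful feature set.

```python
# Toy "obfuscation index" in the spirit of the one described above: a
# summary score over a few linguistic features. Word lists, weights, and
# the readability proxy are all made up; the real index is more elaborate.

CAUSAL = {"because", "therefore", "thus", "hence"}
JARGON = {"upregulation", "immunoblot", "phosphorylation"}
POSITIVE = {"novel", "robust", "remarkable", "striking"}

def obfuscation_index(text):
    words = [w.strip(".,") for w in text.lower().split()]
    n = max(len(words), 1)
    causal = sum(w in CAUSAL for w in words) / n
    jargon = sum(w in JARGON for w in words) / n
    positive = sum(w in POSITIVE for w in words) / n
    # crude readability proxy: longer average word length reads "harder"
    avg_len = sum(len(w) for w in words) / n
    return causal + jargon + positive + avg_len / 20
```

The point of the design is only that several weak signals are summed into one comparable score; the fraud detection claim rests on fraudulent papers scoring systematically higher than matched honest ones.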

comment by [deleted] · 2015-11-16T22:58:42.149Z · LW(p) · GW(p)

Focus on a new frame of reference, not on technique. Clients need to shift away from content—“it’s about my heart/ my debt/ the safety of the plane/ germs”—and toward the very best strategies to recover from their anxiety disorder. These strategies will always address the intentions that currently motivate their actions. Most decisions by anxious clients have two functions:

1) to only take actions that have a highly predictable, positive outcome

2) to stay comfortable

And that makes sense. Everyone seeks comfort. And everyone wants to feel confident about certain outcomes. Most people who experience traumatic events—a near drowning, a panic that resembles a heart attack, blanking out in the middle of a conference presentation—initially react by seeking comfort, safety, and reassurance. So persuading clients to change must include a convincing explanation that their solution to the problem—avoiding and resisting, and seeking comfort and certainty—perpetuates their problem. Anything that is resisted will persist; therefore, the best perspective is a paradoxical one: When facing a problem, one must purposely and voluntarily choose to go toward uncertainty and distress.

-Reid Wilson

comment by [deleted] · 2015-11-16T22:40:44.464Z · LW(p) · GW(p)

I bought a visa prepaid debit card that's expiring in a month. I have a bank account. How do I get the money from the debit card (anonymous, not attached to my name and has no online account associated with it) into my bank account? There's no payment gate in my online bank account.

Replies from: T3t
comment by T3t · 2015-11-17T01:03:31.877Z · LW(p) · GW(p)

I was able to use Square to transfer money from a pre-paid gift card (not sure if it was Visa though) to my bank account. Transaction fee is ~2.75% iirc.

comment by MrMind · 2015-11-16T10:35:37.243Z · LW(p) · GW(p)

A meta-ethics reflection about the three chimps.
We know that chimp societies are in a meta-stable Molochian equilibrium of violence, but you can tip them, with more resources, into a more peaceful state.
There is supposedly a "universal" progress of society towards a more moral baseline -- less slavery, less torture, more freedom -- but there have also been notable exceptions. I was thinking of seventeenth-century Venice, which was freer than present-day Venice. But at the time Venice was one of the most powerful city-states in the Mediterranean, and was enjoying considerable wealth.
So my thinking went: there are at least two modalities in our ethics, one resembling chimp societies, the other closer to the bonobo way of life, and we oscillate between the two based on the wealth available. This would mean that moral progress is actually progress in wealth, which tips us into the bonobo region of our ethical system.
Thoughts? Counter-examples?

Replies from: Lalartu
comment by Lalartu · 2015-11-16T13:53:46.207Z · LW(p) · GW(p)

Whether there is "universal progress" in the described sense depends on which start and end points we choose. If we take, say, the period from the Middle Ages to today, then there is. If from the Paleolithic to the height of the Roman Empire, then the trend is exactly opposite: a march from freedom to slavery. So growth of per capita wealth can coexist with different directions of moral change.

Replies from: OrphanWilde, Lumifer
comment by OrphanWilde · 2015-11-16T16:27:06.265Z · LW(p) · GW(p)

Not to espouse moral directionality, but from the Paleolithic to the height of the Roman Empire, we didn't go from freedom to slavery, we went from informal to formal modes of dominance. Informal modes of dominance -look- more like freedom than formal modes of dominance, because there are more rules on the slave - but there are more rules on the master, as well, which is, in the end, what that thing we call freedom is.

comment by Lumifer · 2015-11-16T16:10:25.612Z · LW(p) · GW(p)

If from Paleolithic to the height of Roman Empire, then trends would be exactly opposite, a march from freedom to slavery.

Um... You believe that between the Paleolithic and the height of the Roman Empire, progress went in reverse?

Replies from: Lalartu, RolfAndreassen
comment by Lalartu · 2015-11-17T08:51:33.706Z · LW(p) · GW(p)

If we define "progress" as "less slavery, less torture, more freedom" as in top comment, then yes it went in reverse.

Replies from: Lumifer
comment by Lumifer · 2015-11-17T16:20:36.429Z · LW(p) · GW(p)

The top post actually talked about 'a "universal" progress of society towards a more moral baseline', but let's see.

A fair-warning preamble: no one really knows much about cultural practices in the Paleolithic, so the credence of statements about what Paleos (sorry, diet people) did is low.

Slavery -- sure, there was less slavery in the Paleolithic. So, what did they do instead? The usual source of slaves in Antiquity was wars: losers were enslaved. And during the Paleolithic? Well, I would guess that the losers had all the males killed and the fertile women dragged off to be breeding stock.

Maybe it's just me, but I don't see how the Paleolithic way is morally better or closer to the "more moral baseline", whatever it might be.

As to torture, it is not at all obvious to me that the Paleos had less torture than the Roman Empire. Primitive tribes tend to be very cruel to enemies (see e.g. this).

And freedom... it depends on how you define it, but the Paleo tribes were NOT a happy collection of anarchists. In contemporary political terminology, I expect them to have been dictatorships where order was maintained by ample application of force, and most penalties for serious infractions involved death. That doesn't look like a particularly free society.

I have a feeling you are thinking about noble savages. That's fiction.

Replies from: Lalartu
comment by Lalartu · 2015-11-18T12:33:32.975Z · LW(p) · GW(p)

I don't think it is reasonable to portray a Paleolithic tribe as a dictatorship. When the best weapon is a pointed stick, and every man has the skill to use it, a minority simply can't rule by force.

Replies from: Lumifer
comment by Lumifer · 2015-11-18T15:39:54.168Z · LW(p) · GW(p)

When the best weapon is a pointed stick, and every man has the skill to use it, a minority simply can't rule by force.

That's obviously wrong, as there is a large set of social animals which don't even have pointy sticks, and yet alpha males manage to rule the troop with an iron hand (or paw, or beak, etc.).

comment by RolfAndreassen · 2015-11-17T06:23:18.806Z · LW(p) · GW(p)

How many slaves were there in the Paleolithic?

Replies from: Lumifer
comment by Lumifer · 2015-11-17T16:22:09.954Z · LW(p) · GW(p)

See my other comment in this subthread.

comment by [deleted] · 2015-11-21T13:14:45.975Z · LW(p) · GW(p)

After watching the show 'Locked Up Abroad', I don't want to travel, and I want to be culturally imperialistic and create liberal democracies in foreign, corrupt, lawless countries.

It also makes me feel I need a contingency plan for smuggling wanted persons out of Australia to other countries and onward to safety.

comment by [deleted] · 2015-11-16T22:51:07.011Z · LW(p) · GW(p)

Research suggests that either positive or negative self-talk may improve performance, which implies that the effectiveness of a self-talk phrase depends on how the individual interprets it.

Replies from: Stingray
comment by Stingray · 2015-11-17T22:51:39.825Z · LW(p) · GW(p)

What research are you referring to? Please cite it.

comment by [deleted] · 2015-11-17T07:44:29.180Z · LW(p) · GW(p)

Do any enterprising rationalists know a thing or two about the tequila industry?

Replies from: hg00
comment by hg00 · 2015-11-19T07:31:34.891Z · LW(p) · GW(p)

I'm assuming you're the author of this reddit post? Why is Clarity being downvoted for looking for a startup cofounder on LW?

Good luck with your startup. Screw the haters.

comment by [deleted] · 2015-11-17T01:51:36.258Z · LW(p) · GW(p)

The defense attorney must, at a minimum, cross-examine the expert on error rates, comparison to similar groups, validity, and reliability of the research.