Posts

Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them 2017-05-22T18:31:44.750Z · score: 5 (5 votes)
Algorithmic tacit collusion 2017-05-07T14:57:46.639Z · score: 1 (2 votes)
Stuart Ritchie reviews Keith Stanovich's book "The rationality quotient: Toward a test of rational thinking" 2017-01-11T11:51:53.972Z · score: 4 (5 votes)
Social effects of algorithms that accurately identify human behaviour and traits 2016-05-14T10:48:27.159Z · score: 3 (3 votes)
Hedge drift and advanced motte-and-bailey 2016-05-01T14:45:08.023Z · score: 24 (24 votes)
Sleepwalk bias, self-defeating predictions and existential risk 2016-04-22T18:31:32.480Z · score: 8 (8 votes)
Identifying bias. A Bayesian analysis of suspicious agreement between beliefs and values. 2016-01-31T11:29:05.276Z · score: 7 (8 votes)
Does the Internet lead to good ideas spreading quicker? 2015-10-28T22:30:50.026Z · score: 7 (7 votes)
ClearerThinking's Fact-Checking 2.0 2015-10-22T21:16:58.544Z · score: 27 (32 votes)
[Link] Tetlock on the power of precise predictions to counter political polarization 2015-10-04T15:19:32.558Z · score: 6 (7 votes)
Matching donation funds and the problem of illusory matching 2015-09-18T20:05:46.098Z · score: 7 (11 votes)
Political Debiasing and the Political Bias Test 2015-09-11T19:04:49.452Z · score: 10 (14 votes)
Pro-Con-lists of arguments and onesidedness points 2015-08-21T14:15:36.306Z · score: 3 (4 votes)
Opinion piece on the Swedish Network for Evidence-Based Policy 2015-06-09T21:13:09.327Z · score: 9 (9 votes)
Could auto-generated troll scores reduce Twitter and Facebook harassments? 2015-04-30T14:05:45.848Z · score: 5 (11 votes)
[Link] Algorithm aversion 2015-02-27T19:26:43.647Z · score: 17 (18 votes)
The Argument from Crisis and Pessimism Bias 2014-11-11T20:25:44.734Z · score: 15 (17 votes)
Reverse engineering of belief structures 2014-08-26T18:00:31.094Z · score: 7 (11 votes)
Three methods of attaining change 2014-08-16T15:38:45.743Z · score: 7 (8 votes)
Multiple Factor Explanations Should Not Appear One-Sided 2014-08-07T14:10:00.504Z · score: 33 (34 votes)
Separating university education from grading 2014-07-03T17:23:57.027Z · score: 11 (13 votes)
The End of Bullshit at the hands of Critical Rationalism 2014-06-04T18:44:29.801Z · score: 9 (19 votes)
Book review: The Reputation Society. Part II 2014-05-14T10:16:34.380Z · score: 5 (10 votes)
Book Review: The Reputation Society. Part I 2014-05-14T10:13:19.826Z · score: 10 (13 votes)
The Rationality Wars 2014-02-27T17:08:45.470Z · score: 21 (22 votes)
Private currency to generate funds for effective altruism 2014-02-14T00:00:05.931Z · score: 3 (11 votes)
Productivity as a function of ability in theoretical fields 2014-01-26T13:16:15.873Z · score: 23 (27 votes)
Do we underuse the genetic heuristic? 2014-01-22T17:37:26.608Z · score: 4 (9 votes)
Division of cognitive labour in accordance with researchers' ability 2014-01-16T09:28:02.920Z · score: 10 (11 votes)

Comments

Comment by stefan_schubert on Robin Hanson on the futurist focus on AI · 2019-11-14T20:11:05.435Z · score: 2 (1 votes) · LW · GW

Associate professor, not assistant professor.

Comment by stefan_schubert on Is there a definitive intro to punishing non-punishers? · 2019-11-01T00:01:18.850Z · score: 14 (4 votes) · LW · GW
One of those concepts is the idea that we evolved to "punish the non-punishers", in order to ensure the costs of social punishment are shared by everyone.

Before thinking of how to present this idea, I would study carefully whether it's true. I understand there is some disagreement regarding the origins of third-party punishment. There is a big literature on this. I won't discuss it in detail, but here are some examples of perspectives which deviate from that taken in the quoted passage.

Joe Henrich writes:

This only makes sense as cultural evolution. Not much third party punishment in many small-scale societies.

So in Henrich's view, we didn't even (biologically) evolve to punish wrong-doers (as third parties), let alone non-punishers. Third-party punishment is a result of cultural, not biological, evolution, in his view.

Another paper of potential relevance, by Tooby, Cosmides and others:

A common explanation is that third-party punishment exists to maintain a cooperative society. We tested a different explanation: Third-party punishment results from a deterrence psychology for defending personal interests. Because humans evolved in small-scale, face-to-face social worlds, the mind infers that mistreatment of a third party predicts later mistreatment of oneself.

Another paper by Pedersen, Kurzban and McCullough argues that the case for altruistic punishment is overstated.

Here, we searched for evidence of altruistic punishment in an experiment that precluded these artefacts. In so doing, we found that victims of unfairness punished transgressors, whereas witnesses of unfairness did not. Furthermore, witnesses’ emotional reactions to unfairness were characterized by envy of the unfair individual's selfish gains rather than by moralistic anger towards the unfair behaviour. In a second experiment run independently in two separate samples, we found that previous evidence for altruistic punishment plausibly resulted from affective forecasting error—that is, limitations on humans’ abilities to accurately simulate how they would feel in hypothetical situations. Together, these findings suggest that the case for altruistic punishment in humans—a view that has gained increasing attention in the biological and social sciences—has been overstated.

Comment by stefan_schubert on How do you assess the quality / reliability of a scientific study? · 2019-10-30T10:17:31.680Z · score: 10 (3 votes) · LW · GW

A recent paper developed a statistical model for predicting whether papers would replicate.

We have derived an automated, data-driven method for predicting replicability of experiments. The method uses machine learning to discover which features of studies predict the strength of actual replications. Even with our fairly small data set, the model can forecast replication results with substantial accuracy — around 70%. Predictive accuracy is sensitive to the variables that are used, in interesting ways. The statistical features (p-value and effect size) of the original experiment are the most predictive. However, the accuracy of the model is also increased by variables such as the nature of the finding (an interaction, compared to a main effect), number of authors, paper length and the lack of performance incentives. All those variables are associated with a reduction in the predicted chance of replicability.
...
The first result is that one variable that is predictive of poor replicability is whether central tests describe interactions between variables or (single-variable) main effects. Only eight of 41 interaction effect studies replicated, while 48 of the 90 other studies did.
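
For intuition, here is a minimal sketch of the kind of model the paper describes (a classifier trained on study-level features), using synthetic data and illustrative feature names rather than the authors' actual model or data:

```python
# A minimal sketch, not the paper's model: predict replication from
# study-level features with a logistic regression. All data below are
# synthetic and the feature choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 131  # 41 interaction-effect studies + 90 others, as in the quote above

X = np.column_stack([
    rng.uniform(0.0, 0.05, n),  # original p-value
    rng.uniform(0.1, 0.8, n),   # original effect size
    rng.integers(0, 2, n),      # 1 if the central test is an interaction
    rng.integers(1, 10, n),     # number of authors
    rng.integers(5, 40, n),     # paper length in pages
])
y = rng.integers(0, 2, n)       # 1 if the study replicated (synthetic labels)

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy
```

On real data rather than these synthetic labels, the coefficients on features like the interaction flag would capture the associations the authors report.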

Another, unrelated, thing is that authors often make inflated interpretations of their studies (in the abstract, the general discussion section, etc). Whereas there is a lot of criticism of p-hacking and other related practices pertaining to the studies themselves, there is less scrutiny of how authors interpret their results (in part that's understandable, since what counts as a dodgy interpretation is more subjective). Hence when you read the methods and results sections it's good to think about whether you'd make the same high-level interpretation of the results as the authors.

Comment by stefan_schubert on Two explanations for variation in human abilities · 2019-10-26T00:52:25.215Z · score: 2 (1 votes) · LW · GW

One aspect may be that the issues we discuss and try to solve are often at the limit of human capabilities. Some people are way better at solving them than others, and since those issues are so often in the spotlight, it looks like the less able are totally incompetent. But actually, they're not; it's just that the issues they are able to solve aren't discussed.

Cf. https://www.lesswrong.com/posts/e84qrSoooAHfHXhbi/productivity-as-a-function-of-ability-in-theoretical-fields

Comment by stefan_schubert on What Comes After Epistemic Spot Checks? · 2019-10-23T16:55:42.887Z · score: 11 (6 votes) · LW · GW
On first blush this looks like a success story, but it’s not. I was only able to catch the mistake because I had a bunch of background knowledge about the state of the world. If I didn’t already know mid-millennium China was better than Europe at almost everything (and I remember a time when I didn’t), I could easily have drawn the wrong conclusion about that claim. And following a procedure that would catch issues like this every time would take much more time than ESCs currently get.

Re this particular point, I guess one thing you might be able to do is to check arguments, as opposed to statements of fact. Sometimes one can evaluate whether arguments are valid even when one isn't too knowledgeable about the particular topic. I previously did some work on argument-checking of political debates. (Though the rationale for that wasn't that argument-checking can require less knowledge than fact-checking, but rather that fact-checking of political debates already exists, whereas argument-checking does not.)

I never did any systematic epistemic spot checks, but if a book contains lots of arguments that appear fallacious or sketchy, I usually stop reading it. I guess that's related.

Comment by stefan_schubert on Replace judges with Keynesian beauty contests? · 2019-10-08T12:02:56.782Z · score: 4 (2 votes) · LW · GW

Thanks for this. In principle, you could use KBCs for any kind of evaluation, including evaluation of products, texts (essay grading, application letters, life plans, etc.), pictures (which of my pictures is the best?), etc. The judicial system is very high-stakes and probably highly resistant to reform, whereas some of the contexts I list are much lower stakes. It might be better to try out KBCs in such a low-stakes context (I'm not sure which one would be best). I don't know to what extent KBCs have been tested for these kinds of purposes (it's been some time since I looked into these issues, and I've forgotten a bit). That would be good to look into.

One possible issue that one would have to overcome is explicit collusion among subsets of raters. Another is, as you say, that people might converge on some salient characteristics that are easily observable but don't track what you're interested in (this could at least in some cases be seen as a form of "tacit collusion").

My impression is that collusion is a serious problem for ratings or recommender systems (of which KBCs can be seen as a type) in general. As a rule of thumb, people might be more inclined to engage in collusion when the stakes are higher.

To prevent that, one option would be to have a small number of known trustworthy experts, who also make evaluations which function as a sort of spot checks. Disagreement with those experts could be heavily penalised, especially if there are signs that the disagreement is due to (either tacit or explicit) collusion. But in the end, any anti-collusion measure needs to be tested empirically.

Relatedly, once people have a history of ratings, you may want to give disproportionate weights to those with a strong track record. Such epistocratic systems can be more efficient than democratic systems. See Thirteen Theorems in Search of the Truth.
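
For illustration, here is a minimal sketch of such track-record weighting, assuming independent binary judgements and known rater accuracies; the log-odds weights below are the ones that literature shows to be optimal under those (strong) assumptions:

```python
# A minimal sketch, assuming independent raters with known accuracies p_i.
# For binary judgements, weighting each vote by log(p_i / (1 - p_i)) is
# optimal under these assumptions.
import math

def weighted_verdict(votes, accuracies):
    """votes: list of +1/-1 judgements; accuracies: each rater's hit rate."""
    score = sum(v * math.log(p / (1 - p)) for v, p in zip(votes, accuracies))
    return 1 if score > 0 else -1

# One rater with a strong track record outweighs two mediocre ones:
print(weighted_verdict([+1, -1, -1], [0.9, 0.55, 0.55]))  # -> 1
```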

KBCs can also be seen as a kind of prediction contest, where you're trying to predict other people's judgements. Hence there might be synergies with other forms of work on predictions.

Comment by stefan_schubert on Occam's Razor: In need of sharpening? · 2019-08-05T00:34:11.074Z · score: 12 (3 votes) · LW · GW

There is a substantial philosophical literature on Occam's Razor and related issues:

https://plato.stanford.edu/entries/simplicity/

Comment by stefan_schubert on Hedge drift and advanced motte-and-bailey · 2019-06-14T22:57:52.639Z · score: 2 (1 votes) · LW · GW

Yes, a new paper confirms this.

The association between quality measures of medical university press releases and their corresponding news stories—Important information missing

Comment by stefan_schubert on Say Wrong Things · 2019-05-30T11:31:28.652Z · score: 2 (1 votes) · LW · GW

Agreed; those are important considerations. In general, I think a risk for rationalists is changing their behaviour on complex and important matters based on individual arguments which, while they appear plausible, don't give the full picture. Cf. Chesterton's fence, naive rationalism, etc.


Comment by stefan_schubert on Reality has a surprising amount of detail · 2017-05-16T07:56:05.326Z · score: 1 (1 votes) · LW · GW

This was already posted a few links down.

Comment by stefan_schubert on OpenAI makes humanity less safe · 2017-04-06T15:57:33.424Z · score: 0 (0 votes) · LW · GW

One interesting aspect of posts like this is that they can, to some extent, be (felicitously) self-defeating.

Comment by stefan_schubert on Open thread, Oct. 03 - Oct. 09, 2016 · 2016-10-05T17:01:23.637Z · score: 1 (1 votes) · LW · GW

As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements - usually by interpreting what is merely intended as a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.

Comment by stefan_schubert on Open Thread May 9 - May 15 2016 · 2016-05-11T13:39:09.175Z · score: 0 (0 votes) · LW · GW

Thanks Ryan, that's helpful. Yes, I'm not sure one would be able to do something that has the right combination of accuracy, interestingness and low-cost at present.

Comment by stefan_schubert on Open Thread May 9 - May 15 2016 · 2016-05-10T17:08:53.193Z · score: 0 (0 votes) · LW · GW

Sure, I guess my question was whether you'd think that it'd be possible to do this in a way that would resonate with readers. Would they find the estimates of quality, or level of postmodernism, intuitively plausible?

My hunch was that the classification would primarily be based on patterns of word use, but you're right that it would probably be fruitful to look at patterns of citations too.

Comment by stefan_schubert on Hedge drift and advanced motte-and-bailey · 2016-05-02T09:55:06.803Z · score: 0 (0 votes) · LW · GW

Good points. I agree that what you write within parentheses is a potential problem. Indeed, it is a problem for many kinds of far-reaching norms on altruistic behaviour, compliance with which is hard to observe: they might handicap conscientious people relative to less conscientious people to such an extent that the norms do more harm than good.

I also agree that individualistic solutions to collective problems have a chequered record. The point of 1)-3) was rather to indicate how you potentially could reduce hedge drift, given that you want to do that. To get scientists and others to want to reduce hedge drift is probably a harder problem.

In conversation, Ben Levinstein suggested that it is partly the editors' role to frame articles in a way such that hedge drift doesn't occur. There is something to that, though it is of course also true that editors often have incentives to encourage hedge drift as well.

Comment by stefan_schubert on Sleepwalk bias, self-defeating predictions and existential risk · 2016-04-24T12:07:04.920Z · score: 2 (2 votes) · LW · GW

Thanks. My claim is somewhat different, though. Adams says that "whenever humanity can see a slow-moving disaster coming, we find a way to avoid it". This is an all-things-considered claim. My claim is rather that sleepwalk bias is a pro tanto consideration indicating that we're too pessimistic about future disasters (perhaps especially slow-moving ones). I'm not claiming that we never sleepwalk into a disaster. Indeed, there might be stronger countervailing considerations, which if true would mean that, all things considered, we are too optimistic about existential risk.

Comment by stefan_schubert on Sleepwalk bias, self-defeating predictions and existential risk · 2016-04-24T11:59:47.835Z · score: 0 (0 votes) · LW · GW

It is not quite clear to me whether you are here just talking about instances of sleepwalking, or whether you are also talking about a predictive error indicating anti-sleepwalking bias: i.e. that they wrongly predicted that the relevant actors would act, yet they sleepwalked into a disaster.

Also, my claim is not that sleepwalking never occurs, but that people on average seem to think that it happens more often than it actually does.

Comment by stefan_schubert on Open Thread April 11 - April 17, 2016 · 2016-04-17T10:19:41.730Z · score: 5 (5 votes) · LW · GW

Open Phil gives $500,000 to Tetlock's research.

Comment by stefan_schubert on The Sally-Anne fallacy · 2016-04-15T08:55:58.564Z · score: 1 (1 votes) · LW · GW

Great post. Another issue is why B doesn't believe Y in spite of believing X and in spite of A believing that X implies Y. Some mechanisms:

a) B rejects that X implies Y, for reasons that are good or bad, or somewhere in between. (Last case: reasonable disagreement.)

b) B hasn't even considered whether X implies Y. (Is not logically omniscient.)

c) Y only follows from X given some additional premises Z, which B either rejects (for reasons that are good or bad or somewhere in between) or hasn't entertained. (What Tyrrell McAllister wrote.)

d) B is confused over the meaning of X, and hence is confused over what X implies. (The dialect case.)

Comment by stefan_schubert on Open Thread March 21 - March 27, 2016 · 2016-03-22T20:45:20.908Z · score: 3 (3 votes) · LW · GW

Thanks a lot! Yes, super-useful.

Comment by stefan_schubert on Open Thread March 21 - March 27, 2016 · 2016-03-22T14:14:30.956Z · score: 2 (2 votes) · LW · GW

I have a maths question. Suppose that we are scoring n individuals on their performance in an area where there is significant uncertainty. We are categorizing them into a low number of categories, say 4. Effectively we're thereby saying that, for the purposes of our scoring, everyone with the same score performs equally well. Suppose that we say that this means that all individuals with a given score get assigned the mean actual performance of the individuals with that score. For instance, if there were three people who got the highest score, and their performance equals 8, 12 and 13 units, the assigned performance is 11 units.

Now suppose that we want our scoring system to minimise information loss, so that the assigned performance is on average as close as possible to the actual performance. The question is: how do we achieve this? Specifically, how large a proportion of all individuals should fall into each category, and how does that depend on the performance distribution?

It would seem that if performance is linearly increasing as we go from low to high performers, then all categories should have the same number of individuals, whereas if the increase is exponential, then the higher categories should have a smaller number of individuals. Is there a theorem that proves this, and which exactly specifies how large the categories should be for a given shape of the curve? Thanks.
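
For concreteness, here is a minimal sketch of the optimisation I have in mind: one-dimensional k-means (Lloyd's algorithm) minimises exactly this mean squared difference between actual and assigned performance. The exponential performance distribution below is just an illustrative assumption:

```python
# A minimal sketch: choose 4 categories so as to minimise the mean squared
# difference between actual performance and the assigned category mean.
# One-dimensional k-means optimises exactly this objective; the exponential
# distribution is an illustrative assumption.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
performance = rng.exponential(scale=10.0, size=10_000).reshape(-1, 1)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(performance)
labels, centres = km.labels_, km.cluster_centers_.ravel()

for k in np.argsort(centres):
    share = (labels == k).mean()
    print(f"category mean {centres[k]:6.2f}: {share:.1%} of individuals")
# With an exponential distribution, the higher categories come out smaller,
# in line with the conjecture above.
```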

Comment by stefan_schubert on Does the Internet lead to good ideas spreading quicker? · 2015-10-29T11:09:29.113Z · score: 4 (4 votes) · LW · GW

Great comment. Thanks!

Basically, rapid communication gives people too much choice. They choose things comfortably similar to what they know. Isolation is needed to allow new things to gain an audience before they're stomped out by the dominant things.

This is an interesting idea, reminiscent of, e.g. Lakatos's view of the philosophy of science. He argued that we shouldn't let new theories be discarded too quickly, just because they seem to have some things going against them. Only if their main tenets prove to be unfeasible should we discard them.

I think premature convergence does occur regarding the spread of ideas (memes), too (though it obviously varies). I do think, for instance, that what you describe in music has to a certain extent happened in analytic philosophy. In the early 20th century, several "scientific" approaches to philosophy developed in, e.g., Cambridge, Vienna and Uppsala. Today, the higher pace of communication leads to more convergence.

Comment by stefan_schubert on Does the Internet lead to good ideas spreading quicker? · 2015-10-29T10:55:46.768Z · score: 1 (1 votes) · LW · GW

I agree with all of this. The upshot seems to be that it's important that those who actually have good ideas achieve high status.

Comment by stefan_schubert on ClearerThinking's Fact-Checking 2.0 · 2015-10-27T18:12:31.141Z · score: 2 (2 votes) · LW · GW

Good to hear, Christian. We're currently subtitling a bit more of the CNN Democratic debate, which should be up soon. We haven't decided, though, to what extent we will subtitle future debates. This is extremely time-consuming. But you could subscribe to ClearerThinking, who are likely to announce any major new updates. (They also do lots of other rationality-related stuff; most notably rationality tests.)

Comment by stefan_schubert on ClearerThinking's Fact-Checking 2.0 · 2015-10-27T18:07:15.753Z · score: 1 (3 votes) · LW · GW

Your criticism would be much more interesting if you pointed to concrete problems in my fact-checking/argument-checking.

Comment by stefan_schubert on ClearerThinking's Fact-Checking 2.0 · 2015-10-22T21:40:51.174Z · score: 2 (2 votes) · LW · GW

Thanks! What device did you use? It is working poorly on phones, but we hoped it would work fine on computers. Thanks for pointing this out.

Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-17T15:45:56.720Z · score: 2 (2 votes) · LW · GW

Thanks!

I read that in a paper by Dan Kahan on bias, but have been unable to find it since. I hope I'm not misremembering, and that that was exactly what he said. In any case, I'll notify you in case I find it.

Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-12T11:36:35.996Z · score: 0 (0 votes) · LW · GW

Thanks! Yes, I'm actually working on fact-checking - or rather argument-checking - as well. Here are some posts on that. It's a related but different theme, both falling under the general concept of political rationality, which I talked about at the LW Community Weekend Berlin and EA Global Oxford.

Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-12T11:06:00.053Z · score: 1 (1 votes) · LW · GW

I agree that phrase of mine might be a bit too strong. But I think a lot of cynics underestimate the degree to which people want to be rational and unbiased.

I had one experience with an extremely smart person, with politically influential parents and maybe a future political career of her own, who once quite explicitly said that's what she was doing, after she read a room wrongly and apologised for it (it wasn't even a public event but party-internal).

I didn't get this anecdote, which sounded interesting.

Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-12T11:00:31.863Z · score: 3 (3 votes) · LW · GW

Yes, I admit some of the questions could have been better phrased. If I do another test, as I hope to, I'll try to crowdsource this. It would have been easier to come up with good questions if I had had social scientists and scientists in relevant fields on board. Also, I think that would minimize ambiguities, and so on (more eyes, etc.).

That said, we did a fair amount of pre-testing on Mechanical Turk and on friends.

Comment by stefan_schubert on Open Thread, Jul. 20 - Jul. 26, 2015 · 2015-07-23T22:34:14.407Z · score: 4 (4 votes) · LW · GW

Does anyone know if there is any data on the political views of the Effective Altruist community? Can't find it in the EA survey.

Comment by stefan_schubert on European Community Weekend 2015 Impressions Thread · 2015-06-15T18:49:56.924Z · score: 11 (11 votes) · LW · GW

Great weekend with lots of good talks. Many thanks for the flawless organization! Truly excellent. It would be great if Berlin could develop into a rationalist/EA hub, as was mentioned at the end.

Comment by stefan_schubert on Opinion piece on the Swedish Network for Evidence-Based Policy · 2015-06-10T13:42:25.531Z · score: 1 (1 votes) · LW · GW

Thanks! I'm sure Germany and Sweden aren't that different in this regard.

There are of course many expert groups inside and outside government working on what in a broad sense could be called "evidence-based policy" in most countries (whether or not that very term is used). Most of these groups have the problem that you mention - they have good advice, but the politicians don't listen to them. That's why we don't want to be just another group of that kind, but also a campaigning organization. We want to push the message of political rationality fairly aggressively in the media and on social media (though you have to be careful not to trigger the "Straw Soviet"). That way, we'll try to force politicians to listen to our and other expert organizations' advice.

I don't know if there is any similar organization in Germany, though I think there should be.

You're absolutely right about getting civil servants onboard. A few have joined since we wrote the article, and we are working on recruiting more. We lack a bit of practical know-how now, since most of us are academics or students, but I'm confident we will be able to get more people onboard.

Yes, younger politicians are probably more interested in this than older, for several reasons. We are not affiliated with any party, but are independent. We do talk to the parties, though - and the Pirate Party is among them.

I'll talk about this on Saturday in Berlin, by the way, at the LW meeting. Will you be there?

Comment by stefan_schubert on Opinion piece on the Swedish Network for Evidence-Based Policy · 2015-06-10T09:58:52.089Z · score: 0 (0 votes) · LW · GW

I'd guess it's the immigrants he refers to. It's not the case, though. Here's a link in Swedish, and one in English (less informative, I'd think). Children of immigrants do perform worse in school than native children, but that can explain only a fraction of Sweden's fall in the PISA ranking.

Comment by stefan_schubert on Guidelines for Upvoting and Downvoting? · 2015-05-07T11:00:04.078Z · score: 5 (5 votes) · LW · GW

Thanks. The instructions are quite vague, though. On the one hand, it says:

Please do not vote solely based on how much you agree or disagree with someone's conclusions.

On the other:

In some cases it's probably acceptable to vote in order to register agreement or disagreement. For example, you can vote someone's proposal up or down based on whether you think it should be implemented.

In my view, this gives too much leeway to people to vote purely on the basis of disagreement with the conclusions.

Comment by stefan_schubert on Guidelines for Upvoting and Downvoting? · 2015-05-06T17:59:08.089Z · score: 7 (9 votes) · LW · GW

Lots of people seem to downvote a comment or a post simply because they do not agree with its conclusions, as was discussed here. That's wrong, in my opinion. Instead, you should only downvote in case the reasoning is poor, there are personal attacks, and so forth.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-04T15:56:11.002Z · score: 0 (0 votes) · LW · GW

"The same way, the troll button would also devolve into a disagree button."

There's no suggestion that there should be a troll button.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-02T18:33:17.151Z · score: 2 (2 votes) · LW · GW

Most of the top comments are "this is a terrible idea and here are the reasons we should never do it", and his comment is "we can do it sooner than you think, here's how".

I get that. But in my book you don't downvote a comment simply because you don't agree with it. You downvote a comment because it is poorly argued, makes no sense, or something like that. Clearly, that doesn't apply to this comment.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-01T11:15:36.785Z · score: -1 (1 votes) · LW · GW

That's very interesting! I would obviously love it if such a browser addon could be constructed. And the trollface image is a great idea. :)

By the way, the fact that your very insightful comment is downvoted is really a shame. Why do people downvote interesting and informative comments like this? That makes no sense whatsoever.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-01T11:12:01.106Z · score: 1 (1 votes) · LW · GW

My suggestion was not to train the system on user ratings:

The first is to let a number of sensible people give their troll scores of different Facebook posts and tweets (using the general and vague definition of what is to count as trolling). You would feed this into your algorithms, which would learn which combinations of words are characteristic of trolls (as judged by these people), and which aren't. The second is to simply list a number of words or phrases which would count as characteristic of trolls, in the sense of the general and vague definition.
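
For illustration, here is a minimal sketch of that first, supervised set-up; the example posts and troll labels are made up:

```python
# A minimal sketch of the supervised set-up described above: learn which
# word combinations predict human-assigned troll labels. The training
# posts and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "you are an idiot and everyone here knows it",
    "thanks, that was a really helpful explanation",
    "nobody cares about your worthless opinion",
    "interesting point, could you share the source?",
]
troll = [1, 0, 1, 0]  # scores given by the sensible raters

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, troll)
print(clf.predict_proba(["what a worthless comment"])[0, 1])  # troll probability
```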

Comment by stefan_schubert on Rational discussion of politics · 2015-04-30T14:10:56.995Z · score: 1 (1 votes) · LW · GW

Excellent! Great initiative.

Comment by stefan_schubert on Status - is it what we think it is? · 2015-04-02T20:51:11.749Z · score: 1 (1 votes) · LW · GW

Thanks. Those are good points.

Comment by stefan_schubert on Status - is it what we think it is? · 2015-04-02T20:50:44.468Z · score: 1 (1 votes) · LW · GW

Interesting. I'm starting to believe some people might think that they want to be in charge but actually really don't. They have, so to speak, internalized society's expectations that people should want to be in charge. Because it is true that being in charge has serious drawbacks.

Comment by stefan_schubert on Status - is it what we think it is? · 2015-04-01T00:07:37.406Z · score: 1 (1 votes) · LW · GW

Very nice and illuminating conceptual analysis. Thanks!

These people who don't like to be in charge, what are they like, according to you and/or Johnstone? Less confident or just less ambitious? More commonly women, perhaps? I don't have a very clear model of their psychology.

Comment by stefan_schubert on March 2015 Media Thread · 2015-03-02T22:25:33.055Z · score: 2 (2 votes) · LW · GW

Google wants to rank websites based on facts not links

The trustworthiness of a web page might help it rise up Google's rankings if the search giant starts to measure quality by facts, not just links.

...

Instead of counting incoming links, [Google's system for measuring the trustworthiness of a page] – which is not yet live – counts the number of incorrect facts within a page. "A source that has few false facts is considered to be trustworthy," says the team (arxiv.org/abs/1502.03519v1). The score they compute for each page is its Knowledge-Based Trust score.

The software works by tapping into the Knowledge Vault, the vast store of facts that Google has pulled off the internet. Facts the web unanimously agrees on are considered a reasonable proxy for truth. Web pages that contain contradictory information are bumped down the rankings.
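
For intuition, here is a minimal sketch of a trust score of this kind (not Google's actual system): check a page's extracted facts against a store of agreed-on facts and score the page by the share that are correct:

```python
# A minimal sketch, not Google's system: score a page by the fraction of its
# checkable (subject, attribute, value) facts that match a fact store.
knowledge_vault = {("Barack Obama", "nationality"): "USA",
                   ("Eiffel Tower", "city"): "Paris"}

def trust_score(page_facts):
    checkable = [f for f in page_facts if (f[0], f[1]) in knowledge_vault]
    if not checkable:
        return None  # nothing on the page can be verified
    correct = sum(knowledge_vault[(s, a)] == v for s, a, v in checkable)
    return correct / len(checkable)

page = [("Barack Obama", "nationality", "USA"),
        ("Eiffel Tower", "city", "Rome")]
print(trust_score(page))  # 0.5: one of two checkable facts is correct
```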

Comment by stefan_schubert on [Link] Algorithm aversion · 2015-03-01T07:48:34.427Z · score: 3 (3 votes) · LW · GW

Indeed. That is precisely what the so-called "closet index funds" are doing. They are said to be actively managed funds, but are in reality index trackers, which just track the stock market index.

The reason the managers of the fund are using index-tracking algorithms rather than human experts is, however, not so much that the former are better (as I understand it, they are roughly on par) but that they are much cheaper. People think the extra costs that active management brings with it are worth it, since they erroneously believe that human experts can consistently beat the index.

Comment by stefan_schubert on [Link] Algorithm aversion · 2015-02-27T22:37:24.901Z · score: 2 (2 votes) · LW · GW

Haha, yes, that did strike me too. However, I suppose there could have been other explanations of people's unwillingness to trust algorithms than a cognitive bias of this sort. For instance, the explanation could have been that experts conspire to fool people into thinking that they are in fact better than the algorithms. The fact that people mistrust algorithms even in this case, where there clearly wasn't an expert conspiracy going on, suggests that that probably isn't the explanation.

Comment by stefan_schubert on [Link] Algorithm aversion · 2015-02-27T22:25:06.814Z · score: 5 (5 votes) · LW · GW

Here's an article in Harvard Business Review about algorithm aversion:

It’s not all egotism either. When the choice was between betting on the algorithm and betting on another person, participants were still more likely to avoid the algorithm if they’d seen how it performed and therefore, inevitably, had seen it err.

My emphasis.

The authors also have a forthcoming paper on this issue:

If showing results doesn’t help avoid algorithm aversion, allowing human input might. In a forthcoming paper, the same researchers found that people are significantly more willing to trust and use algorithms if they’re allowed to tweak the output a little bit. If, say, the algorithm predicted a student would perform in the top 10% of their MBA class, participants would have the chance to revise that prediction up or down by a few points. This made them more likely to bet on the algorithm, and less likely to lose confidence after seeing how it performed.

Of course, in many cases adding human input made the final forecast worse. We pride ourselves on our ability to learn, but the one thing we just can’t seem to grasp is that it’s typically best to just trust that the algorithm knows better.

Presumably another bias, the IKEA effect, which says that people prefer products they've partially created themselves, is at play here.

Comment by stefan_schubert on February 2015 Media Thread · 2015-02-03T20:05:09.292Z · score: 4 (4 votes) · LW · GW

Article on Philip Tetlock's new research on predictions in Harvard Business Review