Comment by stefan_schubert on Hedge drift and advanced motte-and-bailey · 2019-06-14T22:57:52.639Z · score: 2 (1 votes) · LW · GW

Yes, a new paper confirms this.

The association between quality measures of medical university press releases and their corresponding news stories—Important information missing
Comment by stefan_schubert on Say Wrong Things · 2019-05-30T11:31:28.652Z · score: 2 (1 votes) · LW · GW

Agreed; those are important considerations. In general, I think a risk for rationalists is changing one's behaviour on complex and important matters based on individual arguments which, while they appear plausible, don't give the full picture. Cf. Chesterton's fence, naive rationalism, etc.


Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them

2017-05-22T18:31:44.750Z · score: 3 (4 votes)
Comment by stefan_schubert on Reality has a surprising amount of detail · 2017-05-16T07:56:05.326Z · score: 1 (1 votes) · LW · GW

This was already posted a few links down.

Algorithmic tacit collusion

2017-05-07T14:57:46.639Z · score: 1 (2 votes)
Comment by stefan_schubert on OpenAI makes humanity less safe · 2017-04-06T15:57:33.424Z · score: 0 (0 votes) · LW · GW

One interesting aspect of posts like this is that they can, to some extent, be (felicitously) self-defeating.

Stuart Ritchie reviews Keith Stanovich's book "The Rationality Quotient: Toward a Test of Rational Thinking"

2017-01-11T11:51:53.972Z · score: 4 (5 votes)
Comment by stefan_schubert on Open thread, Oct. 03 - Oct. 09, 2016 · 2016-10-05T17:01:23.637Z · score: 1 (1 votes) · LW · GW

As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements - usually by interpreting what is merely intended as a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, and so should have a name.

Social effects of algorithms that accurately identify human behaviour and traits

2016-05-14T10:48:27.159Z · score: 3 (3 votes)
Comment by stefan_schubert on Open Thread May 9 - May 15 2016 · 2016-05-11T13:39:09.175Z · score: 0 (0 votes) · LW · GW

Thanks Ryan, that's helpful. Yes, I'm not sure one would be able to do something that has the right combination of accuracy, interestingness and low cost at present.

Comment by stefan_schubert on Open Thread May 9 - May 15 2016 · 2016-05-10T17:08:53.193Z · score: 0 (0 votes) · LW · GW

Sure, I guess my question was whether you'd think that it'd be possible to do this in a way that would resonate with readers. Would they find the estimates of quality, or level of postmodernism, intuitively plausible?

My hunch was that the classification would primarily be based on patterns of word use, but you're right that it would probably be fruitful to look at patterns of citations as well.

Comment by stefan_schubert on Open Thread May 9 - May 15 2016 · 2016-05-10T10:26:20.772Z · score: 3 (3 votes) · LW · GW

deleted

Comment by stefan_schubert on Hedge drift and advanced motte-and-bailey · 2016-05-02T09:55:06.803Z · score: 0 (0 votes) · LW · GW

Good points. I agree that what you write within parentheses is a potential problem. Indeed, it is a problem for many kinds of far-reaching norms on altruistic behaviour with which compliance is hard to observe: they might handicap conscientious people relative to less conscientious people to such an extent that the norms do more harm than good.

I also agree that individualistic solutions to collective problems have a chequered record. The point of 1)-3) was rather to indicate how you could potentially reduce hedge drift, given that you want to do that. Getting scientists and others to want to reduce hedge drift is probably a harder problem.

In conversation, Ben Levinstein suggested that it is partly the editors' role to frame articles in such a way that hedge drift doesn't occur. There is something to that, though it is of course also true that editors often have incentives to encourage hedge drift.

Hedge drift and advanced motte-and-bailey

2016-05-01T14:45:08.023Z · score: 22 (23 votes)
Comment by stefan_schubert on Sleepwalk bias, self-defeating predictions and existential risk · 2016-04-24T12:07:04.920Z · score: 2 (2 votes) · LW · GW

Thanks. My claim is somewhat different, though. Adams says that "whenever humanity can see a slow-moving disaster coming, we find a way to avoid it". This is an all-things-considered claim. My claim is rather that sleepwalk bias is a pro-tanto consideration indicating that we're too pessimistic about future disasters (perhaps especially slow-moving ones). I'm not claiming that we never sleepwalk into a disaster. Indeed, there might be stronger countervailing considerations, which if true would mean that all things considered we are too optimistic about existential risk.

Comment by stefan_schubert on Sleepwalk bias, self-defeating predictions and existential risk · 2016-04-24T11:59:47.835Z · score: 0 (0 votes) · LW · GW

It is not quite clear to me whether you are here just talking about instances of sleepwalking, or whether you are also talking about a predictive error indicating anti-sleepwalking bias: i.e. cases where people wrongly predicted that the relevant actors would act, yet those actors sleepwalked into a disaster.

Also, my claim is not that sleepwalking never occurs, but that people on average seem to think that it happens more often than it actually does.

Sleepwalk bias, self-defeating predictions and existential risk

2016-04-22T18:31:32.480Z · score: 8 (8 votes)
Comment by stefan_schubert on Open Thread April 11 - April 17, 2016 · 2016-04-17T10:19:41.730Z · score: 5 (5 votes) · LW · GW

Open Phil gives $500,000 to Tetlock's research.

Comment by stefan_schubert on The Sally-Anne fallacy · 2016-04-15T08:55:58.564Z · score: 1 (1 votes) · LW · GW

Great post. Another issue is why B doesn't believe Y in spite of believing X and in spite of A believing that X implies Y. Some mechanisms:

a) B rejects that X implies Y, for reasons that are good or bad, or somewhere in between. (Last case: reasonable disagreement.)

b) B hasn't even considered whether X implies Y. (Is not logically omniscient.)

c) Y only follows from X given some additional premises Z, which B either rejects (for reasons that are good or bad or somewhere in between) or hasn't entertained. (What Tyrrell McAllister wrote.)

d) B is confused over the meaning of X, and hence is confused over what X implies. (The dialect case.)

Comment by stefan_schubert on Open Thread March 21 - March 27, 2016 · 2016-03-22T20:45:20.908Z · score: 3 (3 votes) · LW · GW

Thanks a lot! Yes, super-useful.

Comment by stefan_schubert on Open Thread March 21 - March 27, 2016 · 2016-03-22T14:14:30.956Z · score: 2 (2 votes) · LW · GW

I have a maths question. Suppose that we are scoring n individuals on their performance in an area where there is significant uncertainty. We are categorizing them into a low number of categories, say 4. Effectively we're thereby saying that for the purposes of our scoring, everyone with the same score performs equally well. Suppose that we say that this means that all individuals with a given score get assigned the mean actual performance of the individuals with that score. For instance, if there were three people who got the highest score, and their performance equals 8, 12 and 13 units, the assigned performance is 11 units.

Now suppose that we want our scoring system to minimise information loss, so that the assigned performance is on average as close as possible to the actual performance. The question is: how do we achieve this? Specifically, how large a proportion of all individuals should fall into each category, and how does that depend on the performance distribution?

It would seem that if performance increases linearly as we go from low to high performers, then all categories should have the same number of individuals, whereas if the increase is exponential, then the higher categories should have a smaller number of individuals. Is there a theorem that proves this, and which exactly specifies how large the categories should be for a given shape of the curve? Thanks.
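For what it's worth, this setup (assign each category its mean, minimise the expected squared gap between assigned and actual performance) is the classical quantizer-design problem, and the standard answer is the Lloyd-Max conditions: each category boundary should sit midway between the means of the two adjacent categories, and each category's representative should be that category's mean. In the many-category limit, the optimal category widths scale like f(x)^(-1/3), where f is the performance density, which matches the intuition that heavier tails call for smaller top categories. A minimal sketch of Lloyd's alternating algorithm (function name hypothetical, plain Python, assuming at least k distinct values):

```python
def lloyd_quantize(values, k, iters=100):
    """Lloyd's algorithm for a 1-D quantizer: pick k categories so that
    assigning everyone the mean of their category minimises squared error."""
    values = sorted(values)
    n = len(values)
    # start from equal-sized categories, represented by their means
    slices = [values[i * n // k:(i + 1) * n // k] for i in range(k)]
    reps = [sum(s) / len(s) for s in slices]
    bounds = []
    for _ in range(iters):
        # optimal boundaries lie midway between adjacent representatives
        bounds = [(reps[i] + reps[i + 1]) / 2 for i in range(k - 1)]
        cells = [[] for _ in range(k)]
        for v in values:
            cells[sum(v > b for b in bounds)].append(v)
        # the optimal representative of each cell is its mean
        new_reps = [sum(c) / len(c) if c else reps[i] for i, c in enumerate(cells)]
        if new_reps == reps:
            break
        reps = new_reps
    return reps, bounds
```

On the example in the question, lloyd_quantize([8, 12, 13], 1) assigns all three people the mean performance 11.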

Identifying bias. A Bayesian analysis of suspicious agreement between beliefs and values.

2016-01-31T11:29:05.276Z · score: 7 (8 votes)
Comment by stefan_schubert on Does the Internet lead to good ideas spreading quicker? · 2015-10-29T11:09:29.113Z · score: 4 (4 votes) · LW · GW

Great comment. Thanks!

Basically, rapid communication gives people too much choice. They choose things comfortably similar to what they know. Isolation is needed to allow new things to gain an audience before they're stomped out by the dominant things.

This is an interesting idea, reminiscent of, e.g., Lakatos's philosophy of science. He argued that we shouldn't discard new theories too quickly just because they seem to have some things going against them. Only if their main tenets prove unfeasible should we discard them.

I think premature convergence does occur regarding the spread of ideas (memes), too (though it obviously varies). I do think, for instance, that what you describe in music has to a certain extent happened in analytic philosophy. In the early 20th century, several "scientific" approaches to philosophy developed in, e.g., Cambridge, Vienna and Uppsala. Today, the higher pace of communication leads to more convergence.

Comment by stefan_schubert on Does the Internet lead to good ideas spreading quicker? · 2015-10-29T10:55:46.768Z · score: 1 (1 votes) · LW · GW

I agree with all of this. The upshot seems to be that it's important that those who actually have good ideas achieve high status.

Does the Internet lead to good ideas spreading quicker?

2015-10-28T22:30:50.026Z · score: 7 (7 votes)
Comment by stefan_schubert on ClearerThinking's Fact-Checking 2.0 · 2015-10-27T18:12:31.141Z · score: 2 (2 votes) · LW · GW

Good to hear, Christian. We're currently subtitling a bit more of the CNN Democratic debate, which should be up soon. We haven't decided, though, to what extent we will subtitle future debates. This is extremely time-consuming. But you could subscribe to ClearerThinking, who are likely to announce any major new updates. (They also do lots of other rationality related stuff; most notably rationality tests.)

Comment by stefan_schubert on ClearerThinking's Fact-Checking 2.0 · 2015-10-27T18:07:15.753Z · score: 1 (3 votes) · LW · GW

Your criticism would be much more interesting if you pointed to concrete problems in my fact-checking/argument-checking.

Comment by stefan_schubert on ClearerThinking's Fact-Checking 2.0 · 2015-10-22T21:40:51.174Z · score: 2 (2 votes) · LW · GW

Thanks! What device did you use? It is working poorly on phones, but we hoped it would work fine on computers. Thanks for pointing this out.

ClearerThinking's Fact-Checking 2.0

2015-10-22T21:16:58.544Z · score: 25 (31 votes)

[Link] Tetlock on the power of precise predictions to counter political polarization

2015-10-04T15:19:32.558Z · score: 6 (7 votes)

Matching donation funds and the problem of illusory matching

2015-09-18T20:05:46.098Z · score: 7 (11 votes)
Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-17T15:45:56.720Z · score: 2 (2 votes) · LW · GW

Thanks!

I read that in a paper by Dan Kahan on bias, but have been unable to find it since. I hope I'm not misremembering, but that was exactly what he said. In any case, I'll notify you if I find it.

Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-12T11:36:35.996Z · score: 0 (0 votes) · LW · GW

Thanks! Yes, I'm actually working on fact-checking - or rather argument-checking - as well. Here are some posts on that. It's a related but different theme, both falling under the general concept of political rationality, which I talked about at the LW Community Weekend Berlin and EA Global Oxford.

Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-12T11:06:00.053Z · score: 1 (1 votes) · LW · GW

I agree that phrase of mine might be a bit too strong. But I think a lot of cynics underestimate the degree to which people want to be rational and unbiased.

I had one experience with an extremely smart person, with politically influential parents and maybe a future political career of her own, who once quite explicitly said that that's what she was doing, after she read a room wrongly and apologized for it (it wasn't even a public event but party-internal).

I didn't get this anecdote, which sounded interesting.

Comment by stefan_schubert on Political Debiasing and the Political Bias Test · 2015-09-12T11:00:31.863Z · score: 3 (3 votes) · LW · GW

Yes, I admit some of the questions could have been better phrased. If I do another test, as I hope to, I'll try to crowdsource this. It would have been easier to come up with good questions if I had had social scientists and scientists in relevant fields on board. Also, I think that would minimize unclarities, and so on (more eyes, etc).

That said, we did a fair amount of pre-testing on Mechanical Turk and on friends.

Political Debiasing and the Political Bias Test

2015-09-11T19:04:49.452Z · score: 10 (14 votes)

Pro-Con-lists of arguments and onesidedness points

2015-08-21T14:15:36.306Z · score: 3 (4 votes)
Comment by stefan_schubert on Open Thread, Jul. 20 - Jul. 26, 2015 · 2015-07-23T22:34:14.407Z · score: 4 (4 votes) · LW · GW

Does anyone know if there is any data on the political views of the Effective Altruist community? Can't find it in the EA survey.

Comment by stefan_schubert on European Community Weekend 2015 Impressions Thread · 2015-06-15T18:49:56.924Z · score: 11 (11 votes) · LW · GW

Great weekend with lots of good talks. Many thanks for the flawless organization! Truly excellent. It would be great if Berlin could develop into a rationalist/EA hub, as was mentioned at the end.

Comment by stefan_schubert on Opinion piece on the Swedish Network for Evidence-Based Policy · 2015-06-10T13:42:25.531Z · score: 1 (1 votes) · LW · GW

Thanks! I'm sure Germany and Sweden aren't that different in this regard.

There are of course many expert groups inside and outside government working on what in a broad sense could be called "evidence-based policy" in most countries (whether or not that very term is used). Most of these groups have the problem that you mention: they have good advice, but the politicians don't listen to them. That's why we don't want to be merely another group of that kind, but also a campaigning organization. We want to push the message of political rationality fairly aggressively in the media and on social media (though you have to be careful not to trigger the "Straw Soviet"). That way, we'll try to force politicians to listen to our and other expert organizations' advice.

I don't know if there is any similar organization in Germany, though I think there should be.

You're absolutely right about getting civil servants on board. A few have joined since we wrote the article, and we are working on recruiting more. We lack a bit of practical know-how right now, since most of us are academics or students, but I'm confident we will be able to get more people on board.

Yes, younger politicians are probably more interested in this than older ones, for several reasons. We are not affiliated with any party, but are independent. We do talk to the parties, though - and the Pirate Party is among them.

I'll talk about this on Saturday in Berlin, by the way, at the LW meeting. Will you be there?

Comment by stefan_schubert on Opinion piece on the Swedish Network for Evidence-Based Policy · 2015-06-10T09:58:52.089Z · score: 0 (0 votes) · LW · GW

I'd guess it's the immigrants he refers to. It's not the case, though. Here's a link in Swedish, and one in English (less informative, I'd think). Children of immigrants do perform worse in school than native children, but that can explain only a fraction of Sweden's fall in the PISA ranking.

Opinion piece on the Swedish Network for Evidence-Based Policy

2015-06-09T21:13:09.327Z · score: 9 (9 votes)
Comment by stefan_schubert on Guidelines for Upvoting and Downvoting? · 2015-05-07T11:00:04.078Z · score: 5 (5 votes) · LW · GW

Thanks. The instructions are quite vague, though. On the one hand, it says:

Please do not vote solely based on how much you agree or disagree with someone's conclusions.

On the other:

In some cases it's probably acceptable to vote in order to register agreement or disagreement. For example, you can vote someone's proposal up or down based on whether you think it should be implemented.

In my view, this gives people too much leeway to vote purely on the basis of disagreement with the conclusions.

Comment by stefan_schubert on Guidelines for Upvoting and Downvoting? · 2015-05-06T17:59:08.089Z · score: 7 (9 votes) · LW · GW

Lots of people seem to downvote a comment or a post simply because they do not agree with its conclusions, as was discussed here. That's wrong, in my opinion. Instead, you should only downvote in case the reasoning is poor, there are personal attacks, and so forth.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-04T15:56:11.002Z · score: 0 (0 votes) · LW · GW

"The same way, the troll button would also devolve into a disagree button."

There's no suggestion that there should be a troll button.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-02T18:33:17.151Z · score: 2 (2 votes) · LW · GW

Most of the top comments are "this is a terrible idea and here are the reasons we should never do it", and his comment is "we can do it sooner than you think, here's how".

I get that. But in my book you don't downvote a comment simply because you don't agree with it. You downvote a comment because it is poorly argued, makes no sense, or something like that. Clearly, that doesn't apply to this comment.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-01T11:15:36.785Z · score: -1 (1 votes) · LW · GW

That's very interesting! I would obviously love it if such a browser addon could be constructed. And the trollface image is a great idea. :)

By the way, the fact that your very insightful comment is downvoted is really a shame. Why do people downvote interesting and informative comments like this? That makes no sense whatsoever.

Comment by stefan_schubert on Could auto-generated troll scores reduce Twitter and Facebook harassments? · 2015-05-01T11:12:01.106Z · score: 1 (1 votes) · LW · GW

My suggestion was not to train the system on user ratings:

The first is to let a number of sensible people give their troll scores of different Facebook posts and tweets (using the general and vague definition of what is to count as trolling). You would feed this into your algorithms, which would learn which combinations of words are characteristic of trolls (as judged by these people), and which aren't. The second is to simply list a number of words or phrases which would count as characteristic of trolls, in the sense of the general and vague definition.
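The first approach described above, learning which word patterns are characteristic of trolling from hand-labelled examples, could in its most minimal form be a naive Bayes word model. Everything below (function names, toy training data) is hypothetical; a real system would need a large labelled corpus and far better features:

```python
from collections import Counter
import math

def train(posts, labels):
    """Learn word counts for troll (1) and non-troll (0) posts."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for post, label in zip(posts, labels):
        counts[label].update(post.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def troll_score(post, model):
    """Probability that a post is trolling, under a naive Bayes
    word model with add-one smoothing."""
    counts, priors, vocab = model
    logp = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        logp[label] = math.log(priors[label] / sum(priors.values()))
        for w in post.lower().split():
            logp[label] += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return 1 / (1 + math.exp(logp[0] - logp[1]))

# toy labelled data (hypothetical); the raters' scores would supply the labels
model = train(["you are an idiot", "nice argument thanks"], [1, 0])
```

The score could then be shown next to a user's posts, as suggested in the post; the second (word-list) approach corresponds to fixing the word weights by hand instead of learning them.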

Comment by stefan_schubert on Rational discussion of politics · 2015-04-30T14:10:56.995Z · score: 1 (1 votes) · LW · GW

Excellent! Great initiative.

Could auto-generated troll scores reduce Twitter and Facebook harassments?

2015-04-30T14:05:45.848Z · score: 5 (11 votes)
Comment by stefan_schubert on Status - is it what we think it is? · 2015-04-02T20:51:11.749Z · score: 1 (1 votes) · LW · GW

Thanks. Those are good points.

Comment by stefan_schubert on Status - is it what we think it is? · 2015-04-02T20:50:44.468Z · score: 1 (1 votes) · LW · GW

Interesting. I'm starting to believe some people might think that they want to be in charge but actually really don't. They have, so to speak, internalized society's expectation that people should want to be in charge. After all, it is true that being in charge has serious drawbacks.

Comment by stefan_schubert on Status - is it what we think it is? · 2015-04-01T00:07:37.406Z · score: 1 (1 votes) · LW · GW

Very nice and illuminating conceptual analysis. Thanks!

These people who don't like to be in charge, what are they like, according to you and/or Johnstone? Less confident or just less ambitious? More commonly women, perhaps? I don't have a very clear model of their psychology.

Comment by stefan_schubert on March 2015 Media Thread · 2015-03-02T22:25:33.055Z · score: 2 (2 votes) · LW · GW

Google wants to rank websites based on facts not links

The trustworthiness of a web page might help it rise up Google's rankings if the search giant starts to measure quality by facts, not just links.

...

Instead of counting incoming links, [Google's system for measuring the trustworthiness of a page] – which is not yet live – counts the number of incorrect facts within a page. "A source that has few false facts is considered to be trustworthy," says the team (arxiv.org/abs/1502.03519v1). The score they compute for each page is its Knowledge-Based Trust score.

The software works by tapping into the Knowledge Vault, the vast store of facts that Google has pulled off the internet. Facts the web unanimously agrees on are considered a reasonable proxy for truth. Web pages that contain contradictory information are bumped down the rankings.
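The quoted description boils down to scoring a page by the share of its extracted facts that agree with a reference knowledge base. (The actual system in the arXiv paper is a joint probabilistic model of extraction and source errors, so this is only a toy illustration; all names and data below are made up.)

```python
def trust_score(page_facts, knowledge_base):
    """Toy Knowledge-Based Trust: the share of a page's
    (subject, predicate, object) facts found in the knowledge base."""
    if not page_facts:
        return None  # no facts to judge
    correct = sum(fact in knowledge_base for fact in page_facts)
    return correct / len(page_facts)

# hypothetical knowledge base and facts extracted from one page
kb = {("obama", "born_in", "honolulu"), ("paris", "capital_of", "france")}
page = [("obama", "born_in", "honolulu"), ("obama", "born_in", "kenya")]
```

Here the page would score 0.5, since one of its two facts contradicts the knowledge base.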

Comment by stefan_schubert on [Link] Algorithm aversion · 2015-03-01T07:48:34.427Z · score: 3 (3 votes) · LW · GW

Indeed. That is precisely what so-called "closet index funds" do. They are presented as actively managed funds, but in reality are index trackers, which simply track the stock-market index.

The reason the fund managers use index-tracking algorithms rather than human experts is, however, not so much that the former are better (as I understand it, they are roughly on par) but that they are much cheaper. People think the extra costs that active management brings are worth it, since they erroneously believe that human experts can consistently beat the index.

Comment by stefan_schubert on [Link] Algorithm aversion · 2015-02-27T22:37:24.901Z · score: 2 (2 votes) · LW · GW

Haha yes that did strike me too. However, I suppose there could have been other explanations of people's unwillingness to trust algorithms than a cognitive bias of this sort. For instance, the explanation could have been that experts conspire to fool people that they are in fact better than the algorithms. The fact that people mistrust algorithms even in this case, where there clearly wasn't an expert conspiracy going on, suggests that that probably isn't the explanation.

Comment by stefan_schubert on [Link] Algorithm aversion · 2015-02-27T22:25:06.814Z · score: 5 (5 votes) · LW · GW

Here's an article in Harvard Business Review about algorithm aversion:

It’s not all egotism either. When the choice was between betting on the algorithm and betting on another person, participants were still more likely to avoid the algorithm if they’d seen how it performed and therefore, inevitably, had seen it err.

My emphasis.

The authors also have a forthcoming paper on this issue:

If showing results doesn’t help avoid algorithm aversion, allowing human input might. In a forthcoming paper, the same researchers found that people are significantly more willing to trust and use algorithms if they’re allowed to tweak the output a little bit. If, say, the algorithm predicted a student would perform in the top 10% of their MBA class, participants would have the chance to revise that prediction up or down by a few points. This made them more likely to bet on the algorithm, and less likely to lose confidence after seeing how it performed.

Of course, in many cases adding human input made the final forecast worse. We pride ourselves on our ability to learn, but the one thing we just can’t seem to grasp is that it’s typically best to just trust that the algorithm knows better.

Presumably another bias, the IKEA effect, which says that people prefer products they've partially created themselves, is at play here.

[Link] Algorithm aversion

2015-02-27T19:26:43.647Z · score: 17 (18 votes)
Comment by stefan_schubert on February 2015 Media Thread · 2015-02-03T20:05:09.292Z · score: 4 (4 votes) · LW · GW

Article on Philip Tetlock's new research on predictions in Harvard Business Review

Comment by stefan_schubert on February 2015 Media Thread · 2015-02-02T18:14:11.475Z · score: 2 (2 votes) · LW · GW

Sure. Sorry about that.

Comment by stefan_schubert on February 2015 Media Thread · 2015-02-01T19:09:34.388Z · score: -2 (2 votes) · LW · GW

Virtual Reality, The Empathy Machine

Virtual reality represents a giant leap forward in mankind’s propensity for compassion. You don’t just walk in someone’s shoes, but see the world through their eyes. In essence, a virtual reality headset is an empathy machine.

Comment by stefan_schubert on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-01-31T14:25:07.494Z · score: 2 (2 votes) · LW · GW

Excellent!!! Many thanks. :) Exactly what I was looking for.

Comment by stefan_schubert on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-01-28T16:17:38.334Z · score: 8 (8 votes) · LW · GW

I seem to recall that some Democratic and Republican donors have agreed not to give to their respective parties, but rather to charity, on the condition that their opponents do the same. Does anyone know about this? My and Google's combined efforts have been fruitless. It seems a very nice idea that could be used much more widely to redistribute resources away from zero-sum games to games with joint interests.

Comment by stefan_schubert on Training Reflective Attention · 2014-12-22T13:47:33.593Z · score: 1 (1 votes) · LW · GW

This series of posts on noticing, attention, metacognition, and so on, is really great and smart. I think it's profoundly important stuff. I hope you keep posting on this.

To what extent are you including this material in CFAR classes?

Comment by stefan_schubert on Open thread, Dec. 8 - Dec. 15, 2014 · 2014-12-10T12:17:13.468Z · score: 1 (1 votes) · LW · GW

No, though I understand my comment could be read in that way. I have thought and read a lot about these questions (and written some things) and sometimes get a bit frustrated with them. I have started to become more pessimistic about the possibility of convincing mainstream philosophers who like to work on these questions ("scholasticism with a dull knife", as a brilliant colleague of mine scribbled on his notepad during a talk on the Gettier problem).

Perhaps we should instead focus on showing what alternative things philosophers could do. Also we should make alliances with other subjects. People outside the discipline are much more likely to want to fund work on business ethics or medical ethics than yet another go at some concept or metaphysical question.

I think this view of Matti Eklund's has a lot to be said for it:

Without borrowing wholesale Kuhn’s picture of science, I think some ideas Kuhn introduced are important to keep in mind when considering the trajectory of philosophy. Research programs are adopted, consciously or not, by a certain part of the philosophical community: certain tenets are taken for granted, certain notions are regarded as the proper ones to use as tools, and certain puzzles are regarded as the ones to focus attention on. The research program isn’t abandoned simply on the ground that seemingly compelling arguments against its fundamental assumptions are presented. Rather, it is abandoned when research conducted within its confines is no longer seen as fruitful, and when a new alternative, with some promise of success, is available.

If we can't disprove the Gettier stuff, perhaps we can hope that people will get bored of it (if we provide them with a less boring alternative).

Comment by stefan_schubert on Open thread, Dec. 8 - Dec. 15, 2014 · 2014-12-08T11:59:04.749Z · score: 6 (6 votes) · LW · GW

I agree with this. However, there are philosophers who criticize this practice. For instance, Peter Unger recently published a vehement criticism of mainstream analytic philosophy, Empty Ideas.

One influential view is that we should not try to "analyze" pre-theoretical concepts, but rather construct fruitful, exact and simple "explications". If you have that view, definitions do not become interesting for their own sake. Rather, terms and concepts are a tool in the pursuit of knowledge, which can be more or less effective. See Carnap's dicussion in Logical Foundations of Probability, pp. 3-20 (esp. p. 7).

That said, it is true that many philosophers continue to write papers on the Gettier problem in a very classical essentialist fashion, along the lines you are describing. The same goes for many other philosophical discussions (e.g. on truth, reasons, etc).

I think that there is a selection effect at work here: those who think this is silly move on to other things while those who think that it isn't keep on doing it. This creates the illusion that more people think this is a good and interesting form of philosophy than is actually the case.

Of course now and again some outsiders get so fed up with this that they write a book attacking it. Another example of this (in addition to Unger) is Ladyman and Ross's attack on mainstream analytic metaphysics (which treats questions like "are the statue and the lump of clay it is made of distinct or identical objects?"). I suspect that many others feel, however, that although this kind of philosophy is a bit of a nuisance, there are other, more pressing problems worth focusing on. For instance, I suspect Nick Bostrom doesn't like this kind of philosophy, but as far as I know he hasn't spent much time criticizing it, thinking there are other problems which are more important to spend time on.

Also, it seems surprisingly hard to weed out. The kind of criticism that Carnap gave is at least a century old, but the Gettier problem and other similar problems are still treated seriously.

An interesting argument for why people who are critical of this kind of philosophy should do something about it is, though, that it presents a great opportunity cost:

Why should something as “quixotic”, “mostly harmless”, and null as academic philosophy rouse any strong feelings whatsoever?

Because of the opportunity cost. Harmless-and-null philosophy is crowding out something better, and has been doing so since 1950 or so. Philosophy did not have to be what it is today; it was made what it is by purposeful, destructive action.

The Argument from Crisis and Pessimism Bias

2014-11-11T20:25:44.734Z · score: 15 (17 votes)

Reverse engineering of belief structures

2014-08-26T18:00:31.094Z · score: 7 (11 votes)

Three methods of attaining change

2014-08-16T15:38:45.743Z · score: 7 (8 votes)

Multiple Factor Explanations Should Not Appear One-Sided

2014-08-07T14:10:00.504Z · score: 31 (33 votes)

Separating university education from grading

2014-07-03T17:23:57.027Z · score: 11 (13 votes)

The End of Bullshit at the hands of Critical Rationalism

2014-06-04T18:44:29.801Z · score: 9 (19 votes)

Book review: The Reputation Society. Part II

2014-05-14T10:16:34.380Z · score: 5 (10 votes)

Book Review: The Reputation Society. Part I

2014-05-14T10:13:19.826Z · score: 10 (13 votes)

The Rationality Wars

2014-02-27T17:08:45.470Z · score: 21 (22 votes)

Private currency to generate funds for effective altruism

2014-02-14T00:00:05.931Z · score: 3 (11 votes)

Productivity as a function of ability in theoretical fields

2014-01-26T13:16:15.873Z · score: 14 (25 votes)

Do we underuse the genetic heuristic?

2014-01-22T17:37:26.608Z · score: 4 (9 votes)

Division of cognitive labour in accordance with researchers' ability

2014-01-16T09:28:02.920Z · score: 10 (11 votes)