What do professional philosophers believe, and why?

post by Rob Bensinger (RobbBB) · 2013-05-01T14:40:47.028Z · LW · GW · Legacy · 249 comments

LessWrong has twice discussed the PhilPapers Survey of professional philosophers' views on thirty controversies in their fields — in early 2011 and, more intensively, in late 2012. We've also been having some lively debates, prompted by LukeProg, about the general value of contemporary philosophical assumptions and methods. It would be swell to test some of our intuitions about how philosophers go wrong (and right) by looking closely at the aggregate output and conduct of philosophers, but relevant data is hard to come by.

Fortunately, Davids Chalmers and Bourget have done a lot of the work for us. They released a paper summarizing the PhilPapers Survey results two days ago, identifying, by factor analysis, seven major components consolidating correlations between philosophical positions, influences, areas of expertise, etc.
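For a sense of the mechanics, here is a minimal sketch of a factor analysis in Python. This is not Chalmers and Bourget's actual pipeline; the respondents and loadings below are invented, loosely echoing the anti-naturalist numbers that follow.

    # Sketch only: synthetic survey data with one built-in latent trait,
    # recovered by factor analysis. Not the paper's actual data or code.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # 200 "philosophers" x 5 questions, coded roughly -1 (reject) to +1 (accept).
    latent = rng.normal(size=(200, 1))
    loadings = np.array([[0.66, 0.63, 0.47, -0.63, -0.57]])
    responses = latent @ loadings + 0.5 * rng.normal(size=(200, 5))

    fa = FactorAnalysis(n_components=1).fit(responses)
    questions = ["libertarian free will", "theism", "zombies possible",
                 "physicalism", "naturalism"]
    for q, w in zip(questions, fa.components_[0]):
        print(f"{q:25s} loading {w:+.2f}")

Note that the recovered factor is only defined up to sign: "anti-naturalism" and "naturalism" name the same axis, a point that matters later in this post.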

 

1. Anti-Naturalists: Philosophers of this stripe tend (more strongly than most) to assert libertarian free will (correlation with factor .66), theism (.63), the metaphysical possibility of zombies (.47), and A-theories of time (.28), and to reject physicalism (.63), naturalism (.57), personal identity reductionism (.48), and liberal egalitarianism (.32).

Anti-Naturalists tend to work in philosophy of religion (.3) or Greek philosophy (.11). They avoid philosophy of mind (-.17) and cognitive science (-.18) like the plague. They hate Hume (-.14), Lewis (-.13), Quine (-.12), analytic philosophy (-.14), and being from Australasia (-.11). They love Plato (.13), Aristotle (.12), and Leibniz (.1).

 

2. Objectivists: They tend to accept 'objective' moral values (.72), aesthetic values (.66), abstract objects (.38), laws of nature (.28), and scientific posits (.28). Note 'Objectivism' is being used here to pick out a tendency to treat value as objectively binding and metaphysical posits as objectively real; it isn't connected to Ayn Rand.

A disproportionate number of objectivists work in normative ethics (.12), Greek philosophy (.1), or philosophy of religion (.1). They don't work in philosophy of science (-.13) or biology (-.13), and aren't continentalists (-.12) or Europeans (-.14). Their favorite philosopher is Plato (.1); their least favorites are Hume (-.2) and Carnap (-.12).

 

3. Rationalists: They tend to self-identify as 'rationalists' (.57) and 'non-naturalists' (.33), to accept that some knowledge is a priori (.79), and to assert that some truths are analytic, i.e., 'true by definition' or 'true in virtue of meaning' (.72). They also tend to posit metaphysical laws of nature (.34) and abstracta (.28). 'Rationalist' here clearly isn't being used in the LW or freethought sense; philosophical rationalists as a whole in fact tend to be theists.

Rationalists are wont to work in metaphysics (.14), and to avoid thinking about the sciences of life (-.14) or cognition (-.1). They are extremely male (.15), inordinately British (.12), and prize Frege (.18) and Kant (.12). They absolutely despise Quine (-.28, the largest correlation for a philosopher), and aren't fond of Hume (-.12) or Mill (-.11) either.

 

4. Anti-Realists: They tend to define truth in terms of our cognitive and epistemic faculties (.65) and to reject scientific realism (.6), a mind-independent and knowable external world (.53), metaphysical laws of nature (.43), and the notion that proper names have no meaning beyond their referent (.35).

They are extremely female (.17) and young (.15 correlation coefficient for year of birth). They work in ethics (.16), social/political philosophy (.16), and 17th-19th century philosophy (.11), avoiding metaphysics (-.2) and the philosophies of mind (-.15) and language (-.14). Their heroes are Kant (.23), Rawls (.14), and, interestingly, Hume (.11). They avoid analytic philosophy even more than the anti-naturalists do (-.17), and aren't fond of Russell (-.11).

 

5. Externalists: Really, they just like everything that anyone calls 'externalism'. They think the content of our mental lives in general (.66) and perception in particular (.55), and the justification for our beliefs (.64), all depend significantly on the world outside our heads. They also think that you can fully understand a moral imperative without being at all motivated to obey it (.5).

Beyond externalism, they really have very little in common. They avoid 17th-18th century philosophy (-.13), and tend to be young (.1) and work in the UK (.1), but don't converge upon a common philosophical tradition or area of expertise, as far as the survey questions indicated.

 

6. Star Trek Haters: This group is less clearly defined than the above ones. The main thing uniting them is that they're thoroughly convinced that teleportation would mean death (.69). Beyond that, Trekophobes tend to be deontologists (.52) who don't switch on trolley dilemmas (.47) and like A-theories of time (.41).

Trekophobes are relatively old (-.1) and American (.13 affiliation). They are quite rare in Australia and Asia (-.18 affiliation). They're fairly evenly distributed across philosophical fields, and tend to avoid weirdo intuitions-violating naturalists — Lewis (-.13), Hume (-.12), analytic philosophers generally (-.11).

 

7. Logical Conventionalists: They two-box on Newcomb's Problem (.58), reject nonclassical logics (.48), and reject epistemic relativism and contextualism (.48). So they love causal decision theory, think all propositions/facts are generally well-behaved (always either true or false and never both or neither), and think there are always facts about which things you know, independent of who's evaluating you. Suspiciously normal.

They're also fond of a wide variety of relatively uncontroversial, middle-of-the-road views most philosophers agree about or treat as 'the default' — political egalitarianism (.33), abstract object realism (.3), and atheism (.27). They tend to think zombies are metaphysically possible (.26) and to reject personal identity reductionism (.26) — which aren't metaphysically innocent or uncontroversial positions, but, again, do seem to be remarkably straightforward and banal approaches to all these problems. Notice that a lot of these positions are intuitive and 'obvious' in isolation, but that they don't converge upon any coherent world-view or consistent methodology. They clearly aren't hard-nosed philosophical conservatives like the Anti-Naturalists, Objectivists, Rationalists, and Trekophobes, but they also clearly aren't upstart radicals like the Externalists (on the analytic side) or the Anti-Realists (on the continental side). They're just kind of, well... obvious.

Conventionalists are the only identified group that is strongly analytic in orientation (.19). They tend to work in epistemology (.16) or philosophy of language (.12), and are rarely found in 17th-19th century (-.12) or continental (-.11) philosophy. They're influenced by notorious two-boxer and modal realist David Lewis (.1), and show an aversion to Hegel (-.12), Aristotle (-.11), and Wittgenstein (-.1).

 

An observation: Different philosophers rely on — and fall victim to — substantially different groups of methods and intuitions. A few simple heuristics, like 'don't believe weird things until someone conclusively demonstrates them' and 'believe things that seem to be important metaphysical correlates for basic human institutions' and 'fall in love with any views starting with "ext"', explain a surprising amount of diversity. And there are clear common tendencies to either trust one's own rationality or to distrust it in partial (Externalism) or pathological (Anti-Realism, Anti-Naturalism) ways. But the heuristics don't hang together in a single Philosophical World-View or Way Of Doing Things, or even in two or three such world-views.

There is no large, coherent, consolidated group that's particularly attractive to LWers across the board, but philosophers seem to fall short of LW expectations for some quite distinct reasons. So attempting to criticize, persuade, shame, praise, or even speak of or address philosophers as a whole may be a bad idea. I'd expect it to be more productive to target specific 'load-bearing' doctrines on dimensions like the above than to treat the group as a monolith, for many of the same reasons we don't want to treat 'scientists' or 'mathematicians' as monoliths.

 

Another important result: Something is going seriously wrong with the high-level training and enculturation of professional philosophers. Or the fields are just attracting thinkers who are disproportionately bad at critically assessing the basic claims their field is predicated on or exists to assess.

Philosophers working in decision theory are drastically worse at Newcomb than are other philosophers, two-boxing 70.38% of the time where non-specialists two-box 59.07% of the time (normalized after getting rid of 'Other' answers). Philosophers of religion are the most likely to get questions about religion wrong — 79.13% are theists (compared to 13.22% of non-specialists), and they tend strongly toward the Anti-Naturalism dimension. Non-aestheticians think aesthetic value is objective 53.64% of the time; aestheticians think it's objective 73.88% of the time. Working in epistemology tends to make you an internalist, philosophy of science tends to make you a Humean, metaphysics a Platonist, ethics a deontologist. This isn't always the case; but it's genuinely troubling to see non-expertise emerge as a predictor of getting any important question in an academic field right.
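The "normalized" figure is simple arithmetic: discard the 'Other'-type answers and take two-boxing as a share of what remains. A quick check in Python, using the target-faculty decision-theory counts that CarlShulman quotes later in this thread:

    # Normalization sketch: two-boxing as a share of answers once 'Other',
    # 'Agnostic/undecided', etc. are discarded. Counts from the PhilPapers
    # breakdown quoted in CarlShulman's comment below.
    two_box = 13 + 6  # Accept + Lean toward: two boxes
    one_box = 7 + 1   # Accept + Lean toward: one box
    print(f"{two_box / (two_box + one_box):.2%}")
    # 70.37% -- essentially the 70.38% quoted above; the published figure
    # presumably comes from a slightly finer breakdown of the raw data.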

 

EDIT: I've replaced "cluster" talk above with "dimension" talk. I had in mind gjm's "clusters in philosophical idea-space", not distinct groups of philosophers. gjm makes this especially clear:

The claim about these positions being made by the authors of the paper is not, not even a little bit, "most philosophers fall into one of these seven categories". It is "you can generally tell most of what there is to know about a philosopher's opinions if you know how well they fit or don't fit each of these seven categories". Not "philosopher-space is mostly made up of these seven pieces" but "philosopher-space is approximately seven-dimensional".

I'm particularly guilty of promoting this misunderstanding (including in portions of my own brain) by not noting that the dimensions can be flipped to speak of (anti-anti-)naturalists, anti-rationalists, etc. My apologies. As Douglas_Knight notes below, "If there are clusters [of philosophers], PCA might find them, but PCA might tell you something interesting even if there are no clusters. But if there are clusters, the factors that PCA finds won't be the clusters, but the differences between them. [...] Actually, factor analysis pretty much assumes that there aren't clusters. If factor 1 put you in a cluster, that would tell pretty much all there is to say and would pin down your factor 2, but the idea in factor analysis is that your factor 2 is designed to be as free as possible, despite knowing factor 1."

249 comments

Comments sorted by top scores.

comment by gjm · 2013-05-01T23:25:23.671Z · LW(p) · GW(p)

Some of the comments here indicate that their authors have severely misunderstood the nature of those seven "major components", and actually I think the OP may have too.

They are not clusters in philosopher-space, particular positions that many philosophers share. They are directions in philosopher-space along which philosophers tend to vary. Each could equivalently have been replaced by its exact opposite. They are defined, kinda, by clusters in philosophical idea-space: groups of questions with the property that a philosopher's position on one tends to correlate strongly with his or her position on another.

The claim about these positions being made by the authors of the paper is not, not even a little bit, "most philosophers fall into one of these seven categories". It is "you can generally tell most of what there is to know about a philosopher's opinions if you know how well they fit or don't fit each of these seven categories". Not "philosopher-space is mostly made up of these seven pieces" but "philosopher-space is approximately seven-dimensional".

So, for instance, someone asked "Is there a cluster that has more than 1 position in common with LW norms?". The answer (leaving aside the fact that these things aren't clusters in the sense the question seems to assume) is yes: for instance, the first one, "anti-naturalism", is simply the reverse of "naturalism", which is not far from being The Standard LW Position on everything it covers. The fourth, "anti-realism", is more or less the reverse of The Standard LW Position on a different group of issues.

(So why did the authors of the paper choose to use "anti-naturalism" and "anti-realism" rather than "naturalism" and "realism"? I think in each case they chose the more distinctive and less usual of the two opposite poles. Way more philosophers are naturalists and realists than are anti-naturalists and anti-realists. I repeat: these things are not clusters in which a lot of philosophers are found; that isn't what they're for.)

Replies from: RobbBB, CasioTheSane
comment by Rob Bensinger (RobbBB) · 2013-05-02T02:24:28.263Z · LW(p) · GW(p)

This is very lucid; upvoting so more people see it. I worry even the images I added for fun may perpetuate this mistake by treating the dimensions as though they were discrete Tribes. I may get rid of those.

Replies from: ChristianKl
comment by ChristianKl · 2013-05-04T10:04:05.438Z · LW(p) · GW(p)

I think that it might help to use the label "Rationalism" instead of "Rationalists" if you talk about dimensions as opposed to clusters.

comment by CasioTheSane · 2013-05-03T16:51:31.037Z · LW(p) · GW(p)

This post should be quoted at the very top of the article; I didn't understand what I was reading about until I read this.

comment by paulfchristiano · 2013-05-01T18:53:05.447Z · LW(p) · GW(p)

I also disagree with philosophers, disproportionately regarding their own areas of expertise, but the pattern of reasoning here is pretty suspect. The observation is: experts are uniformly less likely to share LW views than non-experts. The conclusion is: experts are no good.

I think you should tread carefully. This is the sort of thing that gets people (and communities) in epistemic trouble.

Replies from: CarlShulman, pjeby, Thrasymachus
comment by CarlShulman · 2013-05-02T07:27:26.928Z · LW(p) · GW(p)

ETA: more analysis here, using the general undergrad vs target faculty comparison, instead of comparing grad students and faculty within an AOS.

This should be taken very seriously. In the case of philosophy of religion I think what's happening is a selection effect: people who believe in theist religion are disproportionately likely to think it worthwhile to study philosophy of religion, i.e. the theism predates their expertise in the philosophy of religion, and isn't a result of it. Similarly, moral anti-realists are going to be less interested in meta-ethics, and in general people who think a field is pointless or nonsense won't go into it.

Now, I am going to try to test that for religion, meta-ethics, and decision theory by comparing graduate students with a specialty in the field to target (elite) faculty with specialties in the field in the PhilPapers data, available at http://philpapers.org/surveys/results.pl. It looks like target faculty philosophers of religion and meta-ethicists are actually less theistic and less moral realist than graduate students specializing in those areas, suggesting that selection effects rather than learning explain the views of these specialists. There weren't enough data points for decision theory to draw conclusions. I haven't tried any other analyses or looked at other subjects yet, or otherwise applied a publication bias filter.

Graduate students with philosophy of religion as an Area of Specialization (AOS):

God: theism or atheism?

Accept: theism 29 / 43 (67.4%)
Lean toward: theism 4 / 43 (9.3%)
Lean toward: atheism 3 / 43 (7.0%)
Accept: atheism 2 / 43 (4.7%)
Agnostic/undecided 1 / 43 (2.3%)
There is no fact of the matter 1 / 43 (2.3%)
Accept another alternative 1 / 43 (2.3%)
Accept an intermediate view 1 / 43 (2.3%)
Reject both 1 / 43 (2.3%)

Target faculty with philosophy of religion as AOS:

God: theism or atheism?

Accept: theism 30 / 47 (63.8%)
Accept: atheism 9 / 47 (19.1%)
Lean toward: theism 4 / 47 (8.5%)
Reject both 2 / 47 (4.3%)
Agnostic/undecided 2 / 47 (4.3%)

Graduate students with a metaethics AOS:

Meta-ethics: moral realism or moral anti-realism?

Accept: moral realism 50 / 116 (43.1%)
Lean toward: moral realism 25 / 116 (21.6%)
Accept: moral anti-realism 19 / 116 (16.4%)
Lean toward: moral anti-realism 9 / 116 (7.8%)
Agnostic/undecided 4 / 116 (3.4%)
Accept an intermediate view 4 / 116 (3.4%)
Accept another alternative 3 / 116 (2.6%)
Reject both 2 / 116 (1.7%)

Target faculty with a meta-ethics AOS:

Meta-ethics: moral realism or moral anti-realism?

Accept: moral realism 42 / 102 (41.2%)
Accept: moral anti-realism 17 / 102 (16.7%)
Lean toward: moral realism 15 / 102 (14.7%)
Lean toward: moral anti-realism 10 / 102 (9.8%)
Accept an intermediate view 7 / 102 (6.9%)
The question is too unclear to answer 6 / 102 (5.9%)
Accept another alternative 3 / 102 (2.9%)
Agnostic/undecided 2 / 102 (2.0%)

Graduate students in decision theory:

Newcomb's problem: one box or two boxes?

Accept: two boxes 3 / 9 (33.3%)
Accept another alternative 1 / 9 (11.1%)
Accept an intermediate view 1 / 9 (11.1%)
Lean toward: one box 1 / 9 (11.1%)
Accept: one box 1 / 9 (11.1%)
Insufficiently familiar with the issue 1 / 9 (11.1%)
The question is too unclear to answer 1 / 9 (11.1%)

Target faculty in decision theory:

Newcomb's problem: one box or two boxes?

Accept: two boxes 13 / 31 (41.9%)
Accept: one box 7 / 31 (22.6%)
Lean toward: two boxes 6 / 31 (19.4%)
Other 2 / 31 (6.5%)
Agnostic/undecided 2 / 31 (6.5%)
Lean toward: one box 1 / 31 (3.2%)

Replies from: buybuydandavis
comment by buybuydandavis · 2013-05-02T08:03:25.614Z · LW(p) · GW(p)

In the case of philosophy of religion I think what's happening is a selection effect: people who believe in theist religion are disproportionately likely to think it worthwhile to study philosophy of religion, i.e. the theism predates their expertise in the philosophy of religion, and isn't a result of it.

I'll give you a slightly different spin on the bias. More evolutionary bias than selection bias.

People who assert that a field is worthwhile are more likely to be successful in that field.

comment by pjeby · 2013-05-01T20:21:54.629Z · LW(p) · GW(p)

The conclusion is: experts are no good

We actually see this across a lot of fields besides philosophy, and it's not LW-specific. For example, simply adding up a few simple scores does better than experts at predicting job performance.

It's been shown that expertise is only valuable in fields where there is a short enough and frequent enough feedback loop for a person to actually develop expertise -- and there is something coherent to develop the expertise in. Outside of such fields, experts are just blowhards with status.

Given the nature of the field, the prior expectation for philosophers having any genuine expertise at anything except impressing people, should be set quite low. (Much like we should expect expert short-term stock pickers to not be expert at anything besides being lucky.)

Replies from: Kaj_Sotala, Juno_Watt, brazil84, Pablo_Stafforini
comment by Kaj_Sotala · 2013-05-02T06:11:06.289Z · LW(p) · GW(p)

Of course, one could argue that LW regulars get even less rapid feedback on these issues than the professional philosophers do. The philosophers at least are frequently forced to debate their ideas with people who disagree, while LW posters mostly discuss these things with each other - that is, with a group that is self-selected for thinking in a similar way. We don't have the kind of diversity of opinion that is exemplified by these survey results.

Replies from: CarlShulman
comment by CarlShulman · 2013-05-02T07:05:19.858Z · LW(p) · GW(p)

This seems right to me.

However see my comment above for evidence suggesting that the views of the specialists are those they brought with them to the field (or shifting away from the plurality view), i.e. that the skew of views among specialists is NOT due to such feedback.

comment by Juno_Watt · 2013-05-01T21:11:24.605Z · LW(p) · GW(p)

It's been shown that expertise is only valuable in fields where there is a short enough and frequent enough feedback loop for a person to actually develop expertise -- and there is something coherent to develop the expertise in

What do you think philosophy is lacking? An (analytical) philosopher who makes a logic error is hauled up very quickly by their peers. That's your feedback loop. So is "something coherent" lacking? Phil. certainly doesn't have a set of established results like engineering, or the more settled areas of science. It does have a lot of necessary skill in formulating, expressing and criticising ideas and arguments. Musicians aren't non-experts just because there is barely such a thing as a musical fact. Philosophy isn't broken science.

Replies from: novalis
comment by novalis · 2013-05-02T01:32:42.735Z · LW(p) · GW(p)

OK, so philosophers manage to avoid logical errors. Good for them. However, they make more complicated errors (see A Human's Guide To Words for some examples), as well as sometimes errors of probability. The thing that philosophers develop expertise in is writing interesting arguments and counterarguments. But these arguments are castles built on air; there is no underlying truth to most of the questions they ask (or, if there is an underlying truth, there is no penalty for being wrong about it). And even some of the "settled" positions are only settled because of path-dependence -- that is, once they became popular, anyone with conflicting intuitions would simply never become a philosopher (see Buckwalter and Stich for more on this).

Scientists (at least in theory) have all of the same skills that philosophers should have -- formulating theories and arguments, catching logical errors, etc. It's just that in science, the arguments are (when done correctly) constrained to be about the real world.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-07T19:05:10.891Z · LW(p) · GW(p)

there is no underlying truth to most of the questions they ask

How do you know?

It's just that in science, the arguments are (when done correctly) constrained to be about the real world.

How do you know? Are you aware that much philosophy is about science?

Replies from: novalis
comment by novalis · 2013-05-08T18:10:21.544Z · LW(p) · GW(p)

there is no underlying truth to most of the questions they ask

How do you know?

To be fair, I have not done an exhaustive survey; "most" was hyperbole.

It's just that in science, the arguments are (when done correctly) constrained to be about the real world.

How do you know? Are you aware that much philosophy is about science?

Sure. But there is no such constraint on philosophy of science.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T11:56:05.650Z · LW(p) · GW(p)

How do you know? Are you aware that much philosophy is about science?

Sure. But there is no such constraint on philosophy of science.

Why is that a problem? Science deals with empirical reality, philosophy of science deals with meta-level issues. Each to their own.

Replies from: novalis
comment by novalis · 2013-05-11T01:20:19.312Z · LW(p) · GW(p)

Why is that a problem? Science deals with empirical reality, philosophy of science deals with meta-level issues. Each to their own.

Because if there is no fact of the matter on the "meta-level issues", then you're not actually dealing with "meta-level issues". You are dealing with words, and your success in dealing with words is what's being measured. Your argument is that expertise develops by feedback, but the feedback that philosophers get isn't the right kind of feedback.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-11T13:32:24.455Z · LW(p) · GW(p)

I don't know what you mean by "fact of the matter". It's not a problem that meta-level isn't object level, any more than it's a problem that cats aren't dogs. I also don't think that there is any problem in identifying the meta level. Philosophers "don't deal with words" in the sense that linguists do. They use words to do things, as do many other specialities. You seem to be making the complaint that success isn't well defined in philosophy, but that would require treating object-level science as much more algorithmic than it actually is. What makes a scientific theory a good theory? Most scientists agree on it?

Replies from: novalis
comment by novalis · 2013-05-11T23:33:06.772Z · LW(p) · GW(p)

I don't know what you mean by "fact of the matter".

An actual truth about the world.

What makes a scientific theory a good theory?

Have you read A Technical Explanation of Technical Explanation?

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-12T11:30:39.501Z · LW(p) · GW(p)

I don't know what you mean by "fact of the matter".

An actual truth about the world.

I don't know what you mean by that. Is Gresham's law such a truth?

What makes a scientific theory a good theory?

Have you read A Technical Explanation of Technical Explanation?

My question was rhetorical. Science does not deal entirely in directly observable empirical facts -- which might be what you meant by "actual truths about the world". Those who fly under the Bayesian flag by and large don't either: most of the material on this site is just as indirect/meta-level/higher-level as philosophy. I just don't see anything that justifies the "Boo!" rhetoric.

Replies from: novalis
comment by novalis · 2013-05-12T16:19:50.478Z · LW(p) · GW(p)

Actually, perhaps you should try The Simple Truth, because you seem totally confused.

Yes, a lot of the material on this site is philosophy; I would argue that it is correspondingly more likely to be wrong, precisely because is not subject to the same feedback loops as science. This is why EY keeps asking, "How do I use this to build an AI?"

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-12T18:33:29.906Z · LW(p) · GW(p)

you seem totally confused

So... is Gresham's Law an actual truth about the world?

perhaps you should try The Simple Truth

Now I'm confused. Is that likely to be wrong or not?

Replies from: novalis
comment by novalis · 2013-05-14T01:36:44.327Z · LW(p) · GW(p)

So... is Gresham's Law an actual truth about the world?

As far as I can tell, yes (in a limited form), but I'm prepared for an economist to tell me otherwise.

perhaps you should try The Simple Truth

Now I'm confused. Is that likely to be wrong or not?

If we consider it as a definition, then it is either useful or not useful.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-14T10:13:06.755Z · LW(p) · GW(p)

So... is Gresham's Law an actual truth about the world?

As far as I can tell, yes (in a limited form), but I'm prepared for an economist to tell me otherwise.

The focus of the question was "about the world". Gresham's law, if true, is not a direct empirical fact like the melting point of aluminium, nor is it built into the fabric of the universe, since it is indefinable without humans and their economic activity.

perhaps you should try The Simple Truth

Now I'm confused. Is that likely to be wrong or not?

If we consider it as a definition, then it is either useful or not useful.

So this is about the "true" part, not about the "actual world" part? In that case, you aren't complaining that philosophy isn't connected to reality, you're claiming that it is all false. In that case I will have to ask you when and how you became omniscient.

Replies from: novalis
comment by novalis · 2013-05-15T04:31:39.551Z · LW(p) · GW(p)

The focus of the question was "about the world". Gresham's law, if true, is not a direct empirical fact like the melting point of aluminium, nor is it built into the fabric of the universe, since it is indefinable without humans and their economic activity.

Humans are part of the world.

So this is about the "true" part, not about the "actual world" part? In that case, you aren't complaining that philosophy isn't connected to reality, you're claiming that it is all false. In that case I will have to ask you when and how you became omniscient.

I'm afraid I don't understand what you're saying here. Yes, if you are confused about what truth means, a definition would be useful; I think The Simple Truth is a pretty useful one (if rather long-winded, as is typical for Yudkowsky). It doesn't tell you much about the actual world (except that it hints at a reasonable justification for induction, which is developed more fully elsewhere).

But I'm not sure why you think I am claiming philosophy is all false.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-15T18:45:52.662Z · LW(p) · GW(p)

The focus of the question was "about the world". Gresham's law, if true, is not a direct empirical fact like the melting point of aluminium, nor is it built into the fabric of the universe, since it is indefinable without humans and their economic activity.

Humans are part of the world.

Then there is no reason why some philosophical claims about human nature could not count as Actual Truths About The World, refuting your original point.

Replies from: novalis
comment by novalis · 2013-05-16T01:11:02.870Z · LW(p) · GW(p)

That depends on what you mean by "human nature," but yes, some such claims could. However, they aren't judged based on this (outside of experimental philosophy, of course). So, there is no feedback loop.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-16T10:05:27.228Z · LW(p) · GW(p)

However, they aren't judged based on this

Based on what? Is Gresham's law based on "this"?

Replies from: novalis, shminux
comment by novalis · 2013-05-17T04:20:14.906Z · LW(p) · GW(p)

That comment could have been more clear. My apologies.

Philosophers are not judged based on whether their claims accurately describe the world. This was my original point, which I continue to stand by.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-29T19:04:42.810Z · LW(p) · GW(p)

OK, it has been established that you attach True to the sentence:

"Philosophers are not judged based on whether their claims accurately describe the world".

The question is what that means. We have established that philosophical claims can be about the world, and it seems uncontroversial that some of them make true claims some of the time, since they all disagree with each other and therefore can't all be wrong.

The problem is presumably the epistemology, the justification. Perhaps you mean that philosophy doesn't use enough empiricism. Although it does use empiricism sometimes, and it is not as though every scientific question can be settled empirically.

Replies from: novalis, OrphanWilde
comment by novalis · 2013-05-30T15:41:23.261Z · LW(p) · GW(p)

I'm going to leave this thread here, because I think I've made my position clear, and I don't think we'll get further if I re-explain it.

comment by OrphanWilde · 2013-05-29T19:22:13.056Z · LW(p) · GW(p)

since they all disagree with each other and therefore can't all be wrong.

Doesn't follow.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-29T22:43:53.519Z · LW(p) · GW(p)

You mean there are ideas no philosopher has contemplated?

comment by Shmi (shminux) · 2013-05-16T17:17:59.439Z · LW(p) · GW(p)

Just some friendly advice. Having looked through your comment history I have noticed that you have trouble interpreting the statements of others charitably. This is fine for debate-style arguments, but is not a great idea on this forum, where winning is defined by collectively constructing a more accurate map, not as an advantage in a zero-sum game. (Admittedly, this is the ideal case, the practice is unfortunately different.) Anyway, consider reading the comments you are replying to in the best possible way first.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-16T19:44:44.422Z · LW(p) · GW(p)

Speaking of which, I honestly had no idea what the "this" meant. Do you?

Replies from: shminux
comment by Shmi (shminux) · 2013-05-16T19:58:01.791Z · LW(p) · GW(p)

If you honestly do not understand the point the comment you are replying to is making, a better choice is asking the commenter to clarify, rather than continuing to argue based on this lack of understanding. TheOtherDave does it almost to a fault, feel free to read some of his threads. Asking me does not help, I did not write the comment you didn't understand.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-16T20:11:08.064Z · LW(p) · GW(p)

If you honestly do not understand the point the comment you are replying to is making, a better choice is asking the commenter to clarify

I believe I did:-

' Based on what? Is Gresham's law based on "this"?'

Asking me does not help, I did not write the comment you didn't understand.

The point is that if no one can understand the comment, then I am not uncharitably pretending not to understand the comment.

comment by brazil84 · 2013-09-01T14:06:30.934Z · LW(p) · GW(p)

It's been shown that expertise is only valuable in fields where there is a short enough and frequent enough feedback loop for a person to actually develop expertise -- and there is something coherent to develop the expertise in. Outside of such fields, experts are just blowhards with status.

I don't disagree with this, but do you happen to have a cite?

I would also point out that feedback which consists solely of the opinions of other experts probably shouldn't count as feedback. Too much danger of groupthink.

comment by Pablo (Pablo_Stafforini) · 2013-05-02T03:36:45.155Z · LW(p) · GW(p)

The finding that expertise is only valuable in fields where there is a sufficiently short and frequent feedback loop plausibly explains why professional philosophers are no better than the general population at answering philosophical questions. However, it doesn't explain the observation that philosophical expertise seems to be negatively correlated with true philosophical beliefs, as opposed to merely uncorrelated. Why are philosophers of religion less likely to believe the truth about religion, moral philosophers less likely to believe the truth about morality, and metaphysicians less likely to believe the truth about reality, than their colleagues with different areas of expertise?

Replies from: loup-vaillant
comment by loup-vaillant · 2013-05-02T11:35:32.073Z · LW(p) · GW(p)

Edit: this post is mostly a duplicate of this one

I would guess that those particular fields look more interesting when you make the wrong assumptions to begin with. I mean, it's much less interesting to talk about God when you accept there is none. Or to talk about metaphysics, when you accept that the answer will most likely come from physics. (I don't know about morality.)

comment by Thrasymachus · 2013-05-03T21:31:33.344Z · LW(p) · GW(p)

I'm pretty sure an outside view would say it is LWers rather than domain experts who are more likely to be wrong, even when accounting for the selection-confounding Carl Shulman notes: I don't think many people have prior convictions about decision theory before they study it.

I've noted it previously, but when the LW consensus is that certain views are not just correct but settled questions (obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism etc.), despite the balance of domain experts disagreeing with said consensus, this screams Dunning-Kruger effect.

Replies from: Qiaochu_Yuan, CarlShulman, gwern, Randaly, wedrifid
comment by Qiaochu_Yuan · 2013-05-03T21:38:26.405Z · LW(p) · GW(p)

I don't think this is true in every domain. If the domain is bridge building, for example, I have some confidence that the domain experts have built a bridge or two and know what it takes to keep them up and running; if they didn't, they wouldn't have a job. That is, bridge building is a domain in which you are forced to repeatedly make contact with reality, and that keeps your thoughts about bridge building honest. Many domains have this property, but not all of them do. Philosophy is a domain that I suspect may not have this making-contact-with-reality property (philosophers are not paid to resolve philosophical problems, they are paid to write philosophy papers, which means they're actually incentivized not to settle questions); some parts of martial arts might be another, and some parts of psychotherapy might be a third, just so it doesn't sound like I'm picking on philosophy uniquely.

Replies from: Thrasymachus
comment by Thrasymachus · 2013-05-04T10:58:48.640Z · LW(p) · GW(p)

I agree with the signs of the effects you suggest re. philosophers being incentivized to disagree, but that shouldn't explain (taking the strongest example of my case, two-boxing) why the majority of philosophers take the objectively less plausible view.

But plausibly LWers have the same sort of effects explaining their contra-philosophy-experts consensus. Also I don't see how the LWers are more likely to be put in touch with reality re. these questions than philosophers.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-05-04T18:49:40.299Z · LW(p) · GW(p)

Also I don't see how the LWers are more likely to be put in touch with reality re. these questions than philosophers.

Fair point.

comment by CarlShulman · 2013-05-06T20:31:10.515Z · LW(p) · GW(p)

I don't think many people have prior convictions about decision theory before they study it.

You picked literally the most extreme case, where 52.5% of undergraduates answered "insufficiently familiar," followed by 46.1% for A- vs B-theory of time. The average for all other questions was just under 12%, 8.8% for moral realism, 0.9% for free will, 0% for atheism.

For Newcomb most undergrads are not familiar enough with the problem to have an opinion, but people do have differing strong intuitions on first encountering the problem. However, the swing in favor of two-boxing for Newcomb from those undergrads with an opinion to target faculty is a relatively large change in the ratio of support, from 16:18 to 31:21. Learning about dominance arguments and so forth really does sway people.

I just looked through all the PhilPapers survey questions, comparing undergrads vs target faculty with the coarse breakdown. For each question I selected the plurality non-"Other" (which included insufficient knowledge, not sure, etc) option, and recorded the swing in opinion from philosophy undergraduates to philosophy professors, to within a point.

Now, there is a lot of selection filter between undergraduates and target faculty; the faculty will tend to be people who think philosophy is more worthwhile, keen on graduate education, and will be smarter with associated views (e.g. atheism is higher at more elite schools and among those with higher GRE scores, which correlate with becoming faculty). This is not a direct measure of the effect of philosophy training and study on particular people, but it's still interesting as suggestive evidence about the degree to which philosophical study and careers inform (or otherwise influence) philosophical opinion.

In my Google Doc I recorded an average swing from undergraduates to target faculty of ~10% in the direction of the target faculty plurality, which is respectable but not huge. Compatibilism rises 18 points, atheism 10 points, moral realism 12 points, physicalism 4 points, two-boxing by 15, deontology by 10, egalitarianism by 10. Zombies and personal identity/teletransporter barely move. The biggest swing is ~30 points in favor of non-skeptical realism about the external world.
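A small sanity check on that average, using only the swings named above (treating zombies and teletransporter as roughly zero). The true ~10% average was taken over all survey questions, so this subset merely happens to land in the same ballpark:

    # Averaging the undergrad-to-faculty swings listed in the comment above.
    swings = {"compatibilism": 18, "atheism": 10, "moral realism": 12,
              "physicalism": 4, "two-boxing": 15, "deontology": 10,
              "egalitarianism": 10, "non-skeptical realism": 30,
              "zombies": 0, "teletransporter": 0}
    print(sum(swings.values()) / len(swings))  # 10.9 points, near the ~10% reported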

That said, I agree the LWers who answered the survey questions in a LW thread were overconfident, that the average level of philosophical thinking here is lower quality than you would find in elite philosophy students and faculty (although not uniformly, if for no other reason because some such people read and comment at LW), and that some prominent posters are pretty overconfident (although note that philosophers themselves tend to be very confident in their views despite the similarly confident disagreement of their epistemic peers with rival views, far more than your account would suggest is reasonable, or than I would).

comment by gwern · 2013-05-04T03:25:35.498Z · LW(p) · GW(p)

this screams Dunning-Kruger effect.

Please cite the specific part of the original Dunning-Kruger paper which would apply here. I don't think you've read it or understand what the effect actually is.

Replies from: Thrasymachus
comment by Thrasymachus · 2013-05-04T11:14:02.213Z · LW(p) · GW(p)

From the abstract:

People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.

The paper's results obviously are not directly applicable, but the general effect they report (people who are not good at X tend to overestimate their ability at X relative to others), which most label Dunning-Kruger, is applicable.

To spell it out (in case I've misunderstood what Dunning-Kruger is supposed to connote), the explanation I was suggesting was:

LWers generally hold views at variance with the balance of domain experts on issues like decision theory, and when they agree with the consensus view of experts, they tend to be much more confident of these views than implied by the split of opinion (e.g. free will being a 'fully and completely dissolved problem' on the wiki via compatibilism despite 30% or whatever of specialists disagreeing with it). When confronted with the evidence of expert disagreement, LWers generally assume the experts are getting it wrong, and think something is going wrong with philosophy training.

Yet objectively, outside-view-wise, the philosophers who specialize in (for example) free will are by far epistemically superior to LWers on questions of free will: they've spent much more time thinking about it, read much more relevant literature, have much stronger credentials in philosophy, etc. Furthermore, the reasons offered by LWers as to why (for example) compatibilism is obviously true are pretty primitive (and responded to) compared to the discussion had in academia.

So the explanation that best fits the facts is that LWers are not that great at philosophy, and overestimate their ability relative to actual philosophers. Hence the response to expert disagreement with them is to assert the experts must be systematically irrational/biased etc.

Replies from: gwern
comment by gwern · 2013-05-13T21:20:58.989Z · LW(p) · GW(p)

From the abstract:

So, as I thought: you had not read it before, or you would not be quoting the abstract at me, or rather, would be quoting more relevant parts from the paper.

The paper's results obviously are not directly applicable, but the general effect they report (people who are not good at X tend to overestimate their ability at X relative to others), which most label Dunning-Kruger, is applicable.

No, it is not. If you actually read the paper, you would have learned that this is not directly applicable and there's no reason to expect that there would even be an indirect applicability. From the full abstract which you chose not to quote, we immediately find at least two areas where DK should break:

Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability.

The average LWer - never mind the people doing most of the commenting and posting - is easily in the 95th+ percentile on logic and grammar.

Besides that, LW is obsessed with 'meta' issues, which knocks out the 'lack of metacognitive ability' which is the other scissor of DK.

Thirdly, DK is generally thought to apply when there is no feedback which can compensate for the imperfect self-assessment; however, LW is notorious for being highly critical and fractious and agreeing on very little (the surveys reveal that we can't even agree on atheism!).

Fourth, the part of DK you don't focus on is how the top quartile reliably underestimates its own performance (see the graphs on pg1124-1126). Unless you have an objective indicator that LWers are very bad at philosophy - and I would note here that LWers routinely exceed the performance I observed of my philosophy classmates and even published philosophy papers I've read, like the dreck that gets published in JET, where I spent more than a few posts here going through and dissecting individual papers - it is at least as plausible that LWers are actually underestimating their performance. The top quartile, by the way, in the third experiment actually increased its self-assessed performance by observing the performance of others, and in the fourth experiment this was due to overestimating the performance of others before observing their actual performance. Application of this to LW is left as an exercise for the reader...

LWers generally hold views at variance with the balance of domain experts on issues like decision theory, and when they agree with the consensus view of experts, they tend to be much more confident of these views than implied by the split of opinion (e.g. free will being 'fully and completely dissolved problem' on the wiki via compatibilism despite 30% or whatever of specialists disagreeing with it).

A wiki page is a wiki page. If you were informed about LW views, you would be citing the surveys, which are designed for that purpose.

(And are you sure that 30% is right there? Because if 30% disagree, then 70% agree...)

When confronted with the evidence of expert disagreement, LWers generally assume the experts getting it wrong, and think something is going wrong with philosophy training.

Experts think much the same thing: philosophers have always been the harshest critics of philosophers. This does not distinguish LWers from philosophers.

So the explanation that best fits the facts is that LWers are not that great at philosophy, and overestimate their ability relative to actual philosophers.

As I've shown above, none of that holds, and you have distorted badly the DK research to fit your claims. You have not read the paper, you do not understand why it applies, you have no evidence for your meta thesis aside from disagreeing with an unknown and uncited fraction of experts, and you are apparently unaware of your ignorance in these points.

comment by Randaly · 2013-05-04T02:41:45.725Z · LW(p) · GW(p)

Compatibilism doesn't belong on that list; a majority of philosophers surveyed agree, and it seems like most opposition is concentrated within Philosophy of Religion, which I don't think is the most relevant subfield. (The correlation between philosophers of religion and libertarianism was the second highest found.)

Replies from: Thrasymachus
comment by Thrasymachus · 2013-05-04T11:15:45.369Z · LW(p) · GW(p)

True, but LW seems to be overconfident in compatibilism compared to the spread of expert opinion. It doesn't seem it should be considered 'settled' or 'obvious' when >10% of domain experts disagree.

comment by wedrifid · 2013-05-04T02:26:23.359Z · LW(p) · GW(p)

I'm pretty sure an outside view would say it is LWers rather than domain experts who are more likely to be wrong, even when accounting for the selection-confounding Carl Shulman notes: I don't think many people have prior convictions about decision theory before they study it.

I observe that in some cases this can be both a rational thing to believe and simultaneously wrong. (In fact this is the case whenever either a high status belief is incorrect or someone is mistaken about the relevance of a domain of authority to a particular question.)

I've noted it previously, but when the LW consensus is that certain views are not just correct but settled questions (obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism etc.), despite the balance of domain experts disagreeing with said consensus, this screams Dunning-Kruger effect.

It does scream that. Indeed, to anyone who has literally no other information than that a subculture has a belief along those lines that contradicts an authority the observer has reason to trust more, Dunning-Kruger is prompted as a likely hypothesis.

Nevertheless: Obviously compatibilism re. free will, obviously atheism, obviously one-box, obviously not moral realism!

The 'outside view' is useful sometimes, but it is inherently, by design, about what one would believe if one were ignorant. It is reasoning as though one does not have access to most kinds of evidence but is completely confident in beliefs about reference-class applicability. In particular, in this case it would require being ignorant not merely of LessWrong beliefs but also of the philosophy, philosophy of science, and sociology literature.

Replies from: Thrasymachus
comment by Thrasymachus · 2013-05-04T11:23:13.878Z · LW(p) · GW(p)

Not sure how helpful this is, but my knowledge of these fields tends to confirm that LW arguments on these topics tend to recapitulate work already done in the relevant academic circles, but with far inferior quality.

If LWers look at a smattering of academic literature and think the opposite, then fair enough. Yet I think LWers generally form their views on these topics based on LW work, without looking at even some of the academic work on these topics. If so, I think they should take the outside view argument seriously, as their confidence in LW work doesn't favor the 'we're really right about this because we've got the better reasons' explanation over the Dunning-Kruger explanation.

comment by Douglas_Knight · 2013-05-01T22:11:13.474Z · LW(p) · GW(p)

Principal component analysis is not the same as clustering. Some of the post seems to make that distinction, while other parts appear to blur it.

If there are clusters, PCA might find them, but PCA might tell you something interesting even if there are no clusters. But if there are clusters, the factors that PCA finds won't be the clusters, but the differences between them. In the simplest case, if there are two clusters, a left-wing party and a right-wing party, PCA will say that there is one interesting factor, the factor that distinguishes the two parties. But PCA will also say that is an interesting factor if there is no cluster, but a clump of people in the middle, thinning towards extremists.

Actually, factor analysis pretty much assumes that there aren't clusters. If factor 1 put you in a cluster, that would tell pretty much all there is to say and would pin down your factor 2, but the idea in factor analysis is that your factor 2 is designed to be as free as possible, despite knowing factor 1.
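A minimal numerical illustration of that point, with invented two-party data: PCA recovers the axis between the clusters, not the clusters themselves.

    # Two well-separated clusters along the first coordinate; the top principal
    # component comes out as ~[1, 0, 0] (up to sign) -- the between-cluster axis.
    import numpy as np

    rng = np.random.default_rng(1)
    left = rng.normal(loc=[-2.0, 0.0, 0.0], scale=0.5, size=(100, 3))
    right = rng.normal(loc=[2.0, 0.0, 0.0], scale=0.5, size=(100, 3))
    X = np.vstack([left, right])
    X -= X.mean(axis=0)

    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))  # PCA via the covariance matrix
    print(np.round(eigvecs[:, np.argmax(eigvals)], 2))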

Replies from: badger, RobbBB
comment by badger · 2013-05-01T23:21:42.347Z · LW(p) · GW(p)

Just about to say this. 'Cluster' is very much the wrong word to use to describe the components. A reasonable word would be 'dimension'. Someone can be more or less realist/anti-realist, rationalist/anti-rationalist, externalist/anti-externalist, and each of those dimensions is relatively independent of the others.

The main point of conducting a principal component / factor analysis is dimension reduction. A philosopher can be fairly well described by how strongly they endorse each component rather than keeping track of each individual answer. This is the same math behind the Big 5 personality model.

This confusion seems to be why the post claims the LW-ish position isn't represented. It's not that there is a well-defined group of anti-naturalists that LWers don't fit into; instead, the dimension just happens to be defined with anti-naturalist on the high end. Then, LW roughly endorses being low on each dimension, except maybe externalism and objectivism.

comment by Rob Bensinger (RobbBB) · 2013-05-02T01:59:52.629Z · LW(p) · GW(p)

Thanks, Douglas! I was worried about this too, but I rushed the post a bit too much and didn't think of obvious better terms. I'll edit the post to speak of 'dimensions' so I don't perpetuate any misunderstandings. If there are any other improvements you'd make, let me know.

comment by Jack · 2013-05-01T15:51:13.544Z · LW(p) · GW(p)

Philosophers working in decision theory are drastically worse at Newcomb than are other philosophers, two-boxing 70.38% of the time where non-specialists two-box 59.07% of the time (normalized after getting rid of 'Other' answers). Philosophers of religion are the most likely to get questions about religion wrong — 79.13% are theists (compared to 13.22% of non-specialists), and they tend strongly toward the Anti-Naturalism cluster. Non-aestheticians think aesthetic value is objective 53.64% of the time; aestheticians think it's objective 73.88% of the time. Working in epistemology tends to make you an internalist, philosophy of science tends to make you a Humean, metaphysics a Platonist, ethics a deontologist.

If you don't believe something exists it is unlikely that you are going to dedicate your life to studying it. This explains the theism, aesthetic objectivism and the Platonism. Similarly, if you believe a question has a very simple answer that does not need to be fleshed out you are unlikely to dedicate your life to answering it. This explains the deontology and the internalism. And Humeanism is still a minority view among philosophers of science (I also wonder if Humeans about laws exactly overlap with Humeans about causality-- I suspect some of the former might not hold the latter view).

I would also be hesitant to assume LW is more likely to be right about these matters when they aren't things LW has thought much about. E.g. I'm pretty sure modern Platonism is actually true.

Replies from: RobbBB, IlyaShpitser, RobbBB, buybuydandavis
comment by Rob Bensinger (RobbBB) · 2013-05-01T16:12:55.906Z · LW(p) · GW(p)

It probably explains theism -- if you don't take the arguments seriously, you'll more likely want to study religion anthropologically rather than argue it out philosophically -- but I don't see why one couldn't study aesthetics as 'subjective' (whatever precisely that means), or metaphysics as a skeptic. (In fact, many do each of those things. Just not most.) I guess I can see how devoting your whole life's work to destroying illusions could be a downer for some, though.

I agree LW hasn't thought enough about most of these issues to reach a solid, vetted assessment. I'm mostly interested in what these doctrines say about underlying methodology, as a canary in a coalmine. I'm rather less interested in seeing LW and Academic Philosophy duke it out to see who happens to be right on specialized, arcane, mostly not-very-important debates. How many philosophers are epistemic externalists only really matters inasmuch as it's symptomatic of general professional standards and methodology.

Replies from: Jack
comment by Jack · 2013-05-01T16:17:08.494Z · LW(p) · GW(p)

but I don't see why one couldn't study aesthetics as 'subjective' (whatever precisely that means), or metaphysics as a skeptic. (In fact, many do each of those things. Just not most.) I guess I can see how devoting your whole life's work to destroying illusions could be a downer, though.

Subjective aesthetics is probably more the realm of psychology (unless it is so subjective that you can't study it). But I'm obviously not saying only Platonists would want to study metaphysics. I'm just saying that the selection effect is sufficient to explain the differences in positions between specialists and non-specialists.

comment by IlyaShpitser · 2013-05-01T16:01:38.487Z · LW(p) · GW(p)

Philosophers working in decision theory are drastically worse at Newcomb

Listen, this is like someone who believes the Axiom of Choice saying "constructivist mathematicians are drastically worse at set theory" (because they reject Choice). Newcomb is all about how you view free will. This is not a settled question yet.

Replies from: RobbBB, RobbBB, Jack, wedrifid, Desrtopa, Juno_Watt
comment by Rob Bensinger (RobbBB) · 2013-05-01T16:47:07.231Z · LW(p) · GW(p)

Why does 'free will' make any difference? If Omega can only predict you with e.g. 60% accuracy, that's still enough to generate the problem.

I'm not saying the right answer, i.e., the right decision theory, is a settled question. I'm just saying they lose. This matters. If their family members' or friends' welfare were on the line, as opposed to some spare cash, I strongly suspect philosophers would be less blasé about privileging their pet formal decision-making theory over actually making the world a better place. The units of value don't matter; what matters is that causal decision theory loses, and loses by arbitrarily large amounts.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-05-01T17:26:19.571Z · LW(p) · GW(p)

I'm just saying they lose.

I once took a martial arts class (taught by a guy who once appeared on the "ninja episode" of Mythbusters, where they tried to figure out if a human can catch an arrow out of the air). He knew this trick called "choshi dori" (I think it roughly means 'attention/initiative grabbing'). How exactly this trick works is a long story, but it has to do with "hacking the lower brain" of the opponent in various ways. One of the things he could do was have a guy punch him in the face and have the punch instead land on empty air, completely contrary to the volition of the puncher. Note: it would work even if he told you exactly what he was doing.

He could do this because of the way punch targeting works (the largely subconscious system responsible has certain rules it follows that could be influenced in a way that causes you to miss).


There are various ways to defeat "choshi dori," although the gentleman in question could certainly get the vast majority of randomly chosen people to fall for it. Whatever "free will" is, it's probably more complicated than just taking Omega at its word. Perhaps Omega achieved his accuracy by a similar defeatable hack. Omega claims to "open up the agent," and my response is to try to "open up Omega," to see what's behind his prediction percentage.

Replies from: Eliezer_Yudkowsky, RobbBB, wedrifid, atorm
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-01T17:47:46.190Z · LW(p) · GW(p)

I don't see why it would be at all difficult or mysterious for Omega to predict that I one-box. I mean, it's not like my thought processes there are at all difficult to understand or predict.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-05-02T09:40:37.215Z · LW(p) · GW(p)

My point is exactly that it is not mysterious. Omega used some concrete method to win his game, much in the same way that the fellow in question uses a particular method to win the punching game. The interesting question in the Newcomb problem is (a) what is the method, and (b) is the method defeatable. The punching game is defeatable. Giving up too early on the punching game is a missed chance to learn something about volition.

The right response to a "magic trick" is to try to learn how the trick works, not go around for the rest of one's life assuming strangers can always pick out the ace of spades.

Replies from: Eliezer_Yudkowsky, loup-vaillant, Richard_Kennaway, shminux
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-02T17:17:26.388Z · LW(p) · GW(p)

Omega's not dumb. As soon as Omega knows you're trying to "come up with a method to defeat him", Omega knows your conclusion - coming to it by some clever line of reasoning isn't going to change anything. The trick can't be defeated by some future insight because there's nothing mysterious about it.

Free-will-based causal decision theory: The simultaneous belief that two-boxing is the massively obvious, overdetermined answer output by a simple decision theory that everyone should adopt for reasons which seem super clear to you, and that Omega isn't allowed to predict how many boxes you're going to take by looking at you.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-07-31T22:07:24.796Z · LW(p) · GW(p)

Omega's not dumb.

I am not saying anything weird, merely that the statements of Newcomb's problem I have heard do not specify how Omega wins the game, merely that it has won a high percentage (all?) of previous attempts. The same can be said for the punching game, played by a human (who, while quite smart about the volition of punching, is still defeatable).

There are algorithms that Omega could follow that are not defeatable (people like to discuss simulating players, and some others are possible too). Others might be defeatable. The correct decision theory in the punching game would learn how to defeat the punching game and walk away with $$$. The right decision theory in Newcomb's problem ought to first try to figure out if Omega is using a defeatable algorithm, and only one-box if it is not, or if it is not possible to figure this out.

comment by loup-vaillant · 2013-05-02T11:55:05.637Z · LW(p) · GW(p)

Okay, let's try and defeat Omega. The goal is to do better than Eliezer Yudkowsky, who seems trustworthy about doing what he publicly says all over the place. Omega will definitely predict that Eliezer will one-box, and Eliezer will get the million.

The only way to do better is to two-box while making Omega believe that we will one-box, so we can get the $1,001,000 with more than 99.9% certainty. And of course,

  1. Omega has access to our brain schematics
  2. We don't have access to Omega's schematics. (optional)
  3. Omega has way more processing power than we do.

Err, short of building an AI to beat the crap out of Omega, that looks pretty impossible. $1000 is not enough to make me do the impossible.

comment by Richard_Kennaway · 2013-05-02T14:45:11.342Z · LW(p) · GW(p)

Omega used some concrete method to win his game, much in the same way that the fellow in question uses a particular method to win the punching game.

A crucial difference is that the punching game is real, while Newcomb's problem is fiction, a thought experiment.

In the punching game, you can try to learn how the trick is done and how to defeat the opponent, and you are still engaged in the punching game.

In Newcomb's problem, Omega is not a real thing that you could discover something about, in the way that there is something to discover about a real choshi dori master. There is no such thing as what Omega is really doing. If you think up different things that an Omega-like entity might be doing, and how these might be defeated to win $1,001,000, then you are no longer thinking about Newcomb's problem, but about a different thought experiment in some class of Newcomb-like problems. I expect a lot of such thinking goes on at MIRI, and is more useful than endlessly debating the original problem, but it is not the sort of thing that you are doing to defeat choshi dori.

comment by Shmi (shminux) · 2013-05-02T17:39:06.510Z · LW(p) · GW(p)

The right response to a "magic trick" is to try to learn how the trick works, not go around for the rest of one's life assuming strangers can always pick out the ace of spades.

Here is a trivial model of the "trick" being fool-proof (and I do mean "fool" literally), which I believe has been discussed here a time or ten. Omega runs a perfect simulation of you, terminates it right after you make your selection or if you refuse to choose (he is a mean one), checks what it outputs, uses it to place money in the boxes. Omega won't even offer the real you the game if you are one of those stubborn non-choosers. The termination clause is to prevent you from enjoying the spoils in case YOU are that simulation, so only the "real you" will know if he won or not. And to avoid any basilisk-like acausal trade. He is not that mean.

EDIT: if you think that the termination is a cruel cold-blooded murder, note that you do that all the time when evaluating what other people would do, then stop thinking about them once you have your answer. The only difference is the fidelity level. If you don't require 100% accuracy, you don't need a perfect simulation.
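
For concreteness, here is a minimal Python sketch of that model (the names, and deepcopy standing in for "perfect simulation," are illustrative only, not part of the original problem):

    import copy

    def omega_offers_game(you, run):
        """shminux's model: simulate the player, read off the choice,
        terminate the simulation, then fill the boxes accordingly."""
        sim = copy.deepcopy(you)          # the 'perfect simulation'
        choice = run(sim)                 # 'one-box', 'two-box', or None
        del sim                           # terminated right after choosing
        if choice is None:                # stubborn non-chooser:
            return None                   # no game is offered at all
        box_b = 1000000 if choice == 'one-box' else 0
        return {'A': 1000, 'B': box_b}    # what the real you will face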

comment by Rob Bensinger (RobbBB) · 2013-05-01T17:42:19.194Z · LW(p) · GW(p)

Do you think that gets rid of the problem? 'It might be possible to outsmart Omega' strikes me as fairly irrelevant. As long as it's logically possible that you don't successfully outsmart Omega, the original problem can still be posed. You still have to make a decision, in those cases where you don't catch Omega in a net.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-05-01T18:23:19.706Z · LW(p) · GW(p)

I am not saying there isn't a problem; I am saying the problem is about clarifying volition (in a way not too dissimilar to the "choshi dori" trick in my anecdote). Punching empty air is "losing." Does this then mean we should abstain from punching? Seems a bit drastic.

Many problems/paradoxes are about clarification. For example, Simpson's paradox is about clarifying causal vs. statistical intuitions.


More specifically, what I am saying is that depending on what commitments you want to make about volition, you would either want to one-box, or two-box in such a way that Omega can be defeated. The problem is "non-identified" as stated. This is equivalent to choosing axioms in set theory. You don't get to say someone fails set theory if they don't like Choice.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T00:02:57.297Z · LW(p) · GW(p)

1 - Supposing I have no philosophical views at all about volition, I would be rationally obliged to one-box. In a state of ignorance, the choice is clear simply provided that I value whatever is being offered. Why should I then take the time to form a theory of volition, if you're right and at most it can only make me lose more often?

We don't know what the right answer to Newcomb-like problems will look like, but we do know what the wrong answers will look like.

2 - Supposing I do have a view about volition that makes me think I should two-box, I'll still be rationally obliged to one-box in any case where my confidence in that view is low enough relative to the difference between the options' expected values.

For instance, if we assign to two-boxing the value 'every human being except you gets their skin ripped off and is then executed, plus you get $10' and assign to one-boxing the value 'nobody gets tortured or killed, but you miss out on the $10', no sane and reasonable person would choose to two-box, no matter how confident they (realistically) thought they were that they had a clever impossibility proof. But if two-boxing is the right answer sometimes, then, pace Nozick, it should always be the right answer, at least in cases where the difference between the 2B and 1B outcomes is dramatic enough to even register as a significant decision. Every single one of the arguments for two-boxing generalizes to the skin-ripping-off case, e.g., 'I can't help being (causal-decision-theory-)rational!' and 'it's unfair to punish me for liking CDT; I protest by continuing to employ CDT'.

3 - You seem to be under the impression that there's something implausible or far-fetched about the premise of Newcomb's Problem. There isn't. If you can't understand a 100% success rate on Omega's part, then imagine a 99% success rate, or a 50% one. The problem isn't altered in substance by this.

Replies from: Jack, IlyaShpitser
comment by Jack · 2013-05-02T00:15:03.642Z · LW(p) · GW(p)

A 50% success rate would recommend two-boxing.

Edit: and come to think of it I am somewhat less sure about the lower success rates in general. If I can roughly estimate Omega's prediction about me, that would seem to screen off any timeless effect. Like, you could probably pretty reliably predict how someone would answer this question based on variables like Less Wrong participation and having a PhD in philosophy. Using this information, I could conclude that an Omega with 60% accuracy is probably going to classify me as a one-boxer no matter what I decide... and in that case why not two-box?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T02:39:13.281Z · LW(p) · GW(p)

Sorry, by a 50% success rate I meant that Omega correctly predicts your action 50% of the time, and the other half of the time just guesses. Guessing can also yield the right answer, so this isn't equivalent to a 50% success rate in the sense you meant, which was simply 'Does Omega put the money in the box he would have wished to?'

If you know that Omega will take into account that you're a LessWronger, but also know that he won't take into account any other information about you (including not taking into account the fact that you know that he knows you're a LessWronger!), then yes, you should two-box. But that's quite different from merely knowing that Omega has a certain success rate. Let's suppose we know that 60% of the time Omega makes the decision it would have wished were it omniscient. Then we get:

  • If I one-box: 60% chance of $1,000,000, 40% chance of $0.
  • If I two-box: 60% chance of $1000, 40% chance of $1,001,000.

Then the expected value of one-boxing is $600,000 (a wrong prediction leaves the opaque box empty). Expected value of two-boxing is $401,000. So you should one-box in this situation.
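
(A quick sanity check of that arithmetic in Python, with 0.6 standing for the assumed chance that Omega makes the decision it would have wished to:)

    p = 0.6                                    # Omega decides as it would wish
    ev_one_box = p * 1000000 + (1 - p) * 0     # a miss leaves box B empty
    ev_two_box = p * 1000 + (1 - p) * 1001000
    print(ev_one_box, ev_two_box)              # 600000.0 401000.0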

Replies from: Jack
comment by Jack · 2013-05-02T02:41:22.966Z · LW(p) · GW(p)

This makes sense.

comment by IlyaShpitser · 2013-05-02T08:50:53.297Z · LW(p) · GW(p)

You are not listening to me. Suppose this fellow comes by and offers to play a game with you. He asks you to punch him in the face, where he is not allowed to dodge or push your hand. If you hit him, he gives you 1000 dollars; if you miss, you give him 1000 dollars. He also informs you that he has a success rate of over 90% playing this game with randomly sampled strangers. He can show you videos of previous games, etc.

This game is not a philosophical contrivance. There are people who can do this here in physical reality where we both live.

Now, what is the right reaction here? My point is that if your right reaction is to not play, then you are giving up too soon. The reaction of not playing is to assume a certain model of the situation and leave it there. In fact, all models are wrong, and there is much to be learned about e.g. how punching works by digging deeper into how this fellow wins this game. To not play and leave it at that is incurious.

Certainly the success rate this fellow has with the punching game has nothing to do with any grand philosophical statement about the lack of physical volition by humans.

Learning about how punching works, rather than winning 1000 dollars, is the entire point of this game.


My answer to Newcomb's problem is to one-box if and only if Omega is not defeatable and two-box in a way that defeats Omega otherwise. Omega can be non-defeatable only if certain things hold. For example, if it is possible to fully simulate in physical reality a given human's decision process at a particular point in time, and have this simulation be "referentially transparent."

edit: fixed a typo.

Replies from: arundelo, pjeby
comment by arundelo · 2013-05-02T14:53:10.429Z · LW(p) · GW(p)

If you hit him, he gives you 1000 dollars, if you miss, he gives you 1000 dollars.

There is a typo here.

comment by pjeby · 2013-05-02T23:51:19.252Z · LW(p) · GW(p)

My answer to Newcomb's problem is to one-box if and only if Omega is not defeatable and two-box in a way that defeats Omega otherwise

But now you've laid out your decision-making process, so all Omega needs to do now is to predict whether you think he's defeatable. ;-)

In general, I expect Omega could actually be implemented just by being able to tell whether somebody is likely to overthink the problem, and if so, predict they will two-box. That might be sufficient to get better-than-chance predictions.

To put it yet another way: if you're trying to outsmart Omega, that means you're trying to figure out a rationalization that will let you two-box... which means Omega should predict you'll two-box. ;-)
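
For concreteness, a toy sketch of that idea (the feature names are entirely hypothetical; nothing here comes from the original problem):

    def omega_predicts(subject):
        # Toy 'overthinking detector': anyone hunting for a rationalization
        # to two-box is, by that very fact, classified as a two-boxer.
        if subject['tries_to_outsmart_omega'] or subject['overthinks']:
            return 'two-box'
        return 'one-box'    # everyone else just takes the million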

comment by wedrifid · 2013-05-02T13:59:10.011Z · LW(p) · GW(p)

There are various ways to defeat "choshi dori," although the gentleman in question could certainly get the vast majority of randomly chosen people to fall for it. Whatever "free will" is, it's probably more complicated than just taking Omega at its word. Perhaps Omega achieved his accuracy by a similar defeatable hack.

You are (merely) fighting the hypothetical.

Omega claims to "open up the agent," and my response is to try to "open up Omega," to see what's behind his prediction %.

Let's try using your martial arts analogy. Consider the following:

You find yourself in a real-world physical confrontation with a ninja who demands your wallet. You have seen this ninja fight several other ninjas, a pirate, and a Jedi in turn, and each time he used "choshi dori" upon them, then proceeded to break both of their legs and take their wallet. What do you do?

  • Punch the ninja in the face.
  • Shout "I have free will!" and punch the ninja in the face.
  • Think "I want to open up the ninja and see how his choshi dori works" then try to punch the ninja in the face.
  • Toss your wallet to the ninja and then run away.

This isn't a trick question. All the answers that either punch the ninja in the face or take two boxes are wrong. They leave you with two broken legs or an otherwise less desirable outcome.

Replies from: Prismattic
comment by Prismattic · 2013-05-03T02:46:22.134Z · LW(p) · GW(p)

Sometimes people fight a hypothetical because the hypothetical is problematic. I lean toward two-boxing in Newcomb's problem, basically because I can't not fight this hypothetical. My reasoning is more or less as follows. If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I'm almost certainly a simulation. One-boxing would reveal that I know that and risk getting me turned off, making the money in the box rather beside the point, so I two-box. If I'm not a simulation, I don't accept the possibility of Omega existing in the first place, so I two-box. Basically, I think Newcomb's problem is not a particularly useful hypothetical, because I don't see it as predictive of decision-making in other circumstances.

Replies from: wedrifid, TheOtherDave
comment by wedrifid · 2013-05-03T03:51:15.813Z · LW(p) · GW(p)

One-boxing would reveal that I know that and risk getting me turned off, making the money in the box rather beside the point, so I two-box.

It seems to me that if Omega concludes that you are aware that you are in a simulation based on the fact that you take one box, then Omega is systematically wrong when reasoning about a broad class of agents that happens to include all the rational agents (and some others). This is rather a significant flaw in an Omega implementation.

Basically, I think Newcomb's problem is not a particularly useful hypothetical, because I don't see it as predictive of decision-making in other circumstances.

For agents with coherent decision-making procedures it is equivalent to playing a Prisoner's Dilemma against a clone of yourself. That is something that feels closer to a real-world scenario for some people. It is similarly equivalent to Parfit's Hitchhiker when said hitchhiker is at the ATM.

Replies from: Prismattic
comment by Prismattic · 2013-05-03T03:59:31.129Z · LW(p) · GW(p)

That's why I don't like Newcomb's problem. In a prisoner's dilemma with myself, I'd cooperate (I trust me to cooperate with myself). Throwing Omega in confuses this pointlessly. I suspect if people substituted "God" for "Omega" I'd get more sympathy on this.

comment by TheOtherDave · 2013-05-03T03:03:48.222Z · LW(p) · GW(p)

Are you suggesting that if you are a simulation, two-boxing reduces your risk of being turned off?
If not, I don't understand your reasoning at all.
If so, I guess I understand your reasoning from that point on (presumably you feel no particular loyalty to the entity you're simulating?), but I don't understand how you arrive at that point.

Replies from: Prismattic
comment by Prismattic · 2013-05-03T03:41:59.526Z · LW(p) · GW(p)

At a minimum, I can't see how two-boxing could be worse in terms of risk of being turned off. I suppose Omega could think I was trying to be tricky by two-boxing specifically to avoid giving away my awareness that I'm being simulated, but at that point the psychology becomes infinitely recursive. I'll take my chances while the simulator puzzles that out.

I'm not sure I understand your parenthetical. Does the existence of a simulation imply the existence of an outside entity being simulated?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-03T05:22:47.096Z · LW(p) · GW(p)

can't see how two-boxing could be worse in terms of risk of being turned off.

Neither can I. Nor can I see how it could be better. In fact, I see no likely correlation between one/two-boxing and likelihood of being turned off at all. But if my chances of being turned off aren't affected by my one/two-box choice, then "One-boxing would [..] risk getting me turned off [..] so I two-box" doesn't make much sense.

You clearly have a scenario in mind wherein I get turned off if my simulator is aware that I'm aware that I'm being simulated and not otherwise, but I don't understand why I should expect that.

Does the existence of a simulation imply the existence of an outside entity being simulated?

To be honest, I've never quite understood what the difference is supposed to be between the phrases "existing in a simulation" and "existing".

But regardless, my understanding of "If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I'm almost certainly a simulation" had initially been something like "If Omega can perfectly model Dave's mental processes in order to determine Dave's likely actions, then Omega will probably create lots of simulated Daves in the process. Since those simulated Daves will think they are Dave, and there are many more of them than there are of Dave, and I think I'm Dave, the odds are (if Omega exists and can do this stuff) that I'm in a simulation."

All of which also implies that there's an outside entity being simulated in this scenario, in which case if I feel loyalty to that entity (or otherwise have some basis for caring about how my choices affect it) then whether I get turned off or not isn't my only concern anyway.

I infer from your question that I misunderstood you in the first place, though, in which case you can probably ignore my parenthetical. Let me back up and ask, instead, why if the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I'm almost certainly a simulation?

Replies from: Prismattic
comment by Prismattic · 2013-05-04T02:25:38.780Z · LW(p) · GW(p)

My thinking here is that if a being suddenly shows up and can perfectly model me, despite not having scanned my neural pathways, taken any tissue samples, observed my life history, or gathered any other data whatsoever, then it's cheating somehow -- i.e. I'm a simulation and it has my source code.

This doesn't require there to be a more real Prismattic one turtle down, as it were. I could be a simulation created to test a set of parameters, not necessarily a model of another entity.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-04T04:01:52.408Z · LW(p) · GW(p)

Ah, I see.
OK, thanks for clarifying.

comment by atorm · 2013-05-02T00:40:23.827Z · LW(p) · GW(p)

I would like to know more about this "choshi dori". Do you know of videos or useful write-ups of the technique?

Replies from: khafra
comment by khafra · 2013-05-03T17:20:09.392Z · LW(p) · GW(p)

Discussion from a ninjutsu (Bujinkan) forum

Discussion from a general martial arts forum

A fat Russian guy demonstrating the same thing, from a different system.

In general, you can't make people miss or fall over without touching them unless they know you can make them miss or fall over when touching is allowed.

comment by Rob Bensinger (RobbBB) · 2013-05-02T02:52:12.982Z · LW(p) · GW(p)

I don't think controversies over the Axiom of Choice are similar in the right ways to controversies over Newcomb's Problem. In pragmatic terms, we know that true two-boxers will willingly take on arbitrarily large disutility (or give up arbitrarily large utility), inasmuch as they're confident that two-boxing is the right answer. The point can even be put psychologically: To the extent that it's a psychological fact that humans don't assign infinite value to being Causal Decision Theorists, the utility (relative to people's actual values) of following CDT can't outweigh the bad consequences of consistently two-boxing.

I know of no correspondingly strong demonstration that weakening Choice or eliminating LEM leads demonstrably to irrationality (relative to how the world actually is and, in particular, what preferences people actually have).

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-05-02T11:29:20.175Z · LW(p) · GW(p)

In pragmatic terms, we know that true two-boxers will willingly take on arbitrarily large disutility

This is only the case in a world-view that accepts that Omega cannot be tricked. How do you know Omega cannot be tricked? This view corresponds to a certain view of how choices get made, how the choice-making algorithm is simulated, and various properties of this simulation as embodied in physical reality. Absent an actual proof, this view is just that -- a view.

Two-boxers aren't (necessarily!) stupid, they simply adhere to commitments that make it possible to fool Omega.

Replies from: wedrifid, DaFranker
comment by wedrifid · 2013-05-02T13:31:22.643Z · LW(p) · GW(p)

Two-boxers aren't (necessarily!) stupid, they simply adhere to commitments that make it possible to fool Omega.

No, they don't. You seem to be confused not just about Newcomb's Problem but also about why the (somewhat educated subset of) people who two-box make that choice. They emphatically do not do it because they believe they are able to fool Omega. They expect to lose (i.e. not get the $1,000,000).

comment by DaFranker · 2013-05-02T14:08:36.583Z · LW(p) · GW(p)

This is only the case in a world-view that accepts that Omega cannot be tricked. How do you know Omega cannot be tricked?

By hypothesis, this is how it works. Omega can predict your choice with >0.5 accuracy (strictly more than half the time). Regardless of Free Will or Word of God or trickery or Magic.

The whole point of the thought experiment is to analyze a choice under some circumstances where the choice causes the outcomes to have been laid out differently.

If you fight the hypothesis by asserting that some other worldviews grant players Magical Powers From The Beyond to deceive Omega (who is just a mental tool for the thought experiment), then I can freely assert that Omega has Magical Powers From The Outer Further Away Beyond that can neutralize those lesser powers or predict them altogether. Or maybe Omega just has a time machine. Or maybe Omega just fucking can, don't fight the premises damnit!

And as wedrifid pointed out, this is not even the main reason why the smarter two-boxers two-box. It's certainly one of the common reasons why the less-smart ones do though, in my experience. (Since they never read the Sequences, aren't scientists, and never learned to not fight the premises! Ahem.)

comment by Jack · 2013-05-01T16:08:49.847Z · LW(p) · GW(p)

I would say Newcomb is all about how you view personal identity. But I'm not sure why this comment was directed at me.

Replies from: RobbBB, Douglas_Reay
comment by Rob Bensinger (RobbBB) · 2013-05-01T17:46:05.755Z · LW(p) · GW(p)

Why would you say personal identity is relevant?

Replies from: Jack
comment by Jack · 2013-05-01T18:46:34.746Z · LW(p) · GW(p)

I think the ease with which this community adopts one-boxing has to do with us having internalized a computationalist view of the mind and the person. This has a lot in common with the psychological view of personhood. Basically, we treat agents as decision algorithms, which makes it much easier to see how decisions could have non-causal properties.

This is, incidentally, related to my platonism you asked me about. Computationalism leads to a Platonic view of personhood (where who you are is basically an algorithm that can have multiple instantiations). One-boxing falls right out of this theory. The decision you make in Newcomb's problem is determined by your decision algorithm. Your decision algorithm can be wholly or partly instantiated by Omega, and that's what allows Omega to predict your behavior.
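
A minimal sketch of that picture, with illustrative names (nothing here is canonical): the agent just is a decision algorithm, and Omega "predicts" by running another instantiation of the same algorithm:

    def my_decision_algorithm():
        # On this view, 'I' just am this algorithm; every instantiation
        # of it, including the one Omega runs, returns the same output.
        return 'one-box'

    def omega_fills_box_b(algorithm):
        prediction = algorithm()           # Omega's instantiation of you
        return 1000000 if prediction == 'one-box' else 0

    box_b = omega_fills_box_b(my_decision_algorithm)
    choice = my_decision_algorithm()       # your instantiation agrees
    payoff = box_b if choice == 'one-box' else box_b + 1000
    print(payoff)                          # 1000000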

Replies from: Jiro
comment by Jiro · 2013-05-01T19:19:41.947Z · LW(p) · GW(p)

My problem with thinking of Newcomb's paradox this way is that it is possible that my decision algorithm will be "try to predict what Omega does, and...." For Omega to predict my behavior by running through my algorithm will involve a self-reference paradox; it may be literally impossible, even in principle, for Omega to predict what I do.

Of course, you can always say "well, maybe you can't predict what Omega does", but the problem as normally posed implies that there's an algorithm for producing the optimal result and that I am capable of running such an algorithm; if there are some algorithms I can't run, I may be incapable of properly choosing whether to one-box or two-box at all.

Replies from: Jack
comment by Jack · 2013-05-01T19:37:23.418Z · LW(p) · GW(p)

Your prediction of what Omega does is just as recursive as Omega's prediction. But if you actually make a decision at some point, that means that your decision algorithm has an escape clause (ow! my brain hurts!), which means that Omega can predict what you're going to do (by modelling all the recursions you did).

but the problem as normally posed implies that there's an algorithm for producing the optimal result and that I am capable of running such an algorithm

It doesn't actually. The optimal result is two-boxing when Omega thinks you are going to one-box. But since Omega is a God-like supercomputer and you aren't, that isn't going to happen. If you happen to have more information about Omega than it has about you, and the hardware to run a simulation of Omega, then you can win like this. But that isn't the thought experiment.
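
One way to picture that escape clause: a bounded agent that tries to out-simulate Omega must stop recursing somewhere and just answer, and an Omega with more compute can simply run the whole recursion to its end. A toy sketch, purely illustrative:

    def agent(depth=0, max_depth=3):
        # Tries to predict the predictor; the escape clause fires once
        # the recursion budget runs out and the agent must just answer.
        if depth >= max_depth:
            return 'one-box'
        return agent(depth + 1, max_depth)

    def omega(agent_code):
        # Omega has the compute to model all the recursions you did.
        return agent_code()

    print(omega(agent))                    # one-box -- predicted exactly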

Replies from: Jiro
comment by Jiro · 2013-05-01T20:51:10.987Z · LW(p) · GW(p)

My point (or the second part of it) is that simply by asking "what should you do to achieve an optimal result", the question assumes that your reasoning capacity is good enough to compute the optimal result. If computing the optimal result requires being able to simulate Omega, then the original question implicitly assumes that you are able to simulate Omega.

Replies from: Jack, RobbBB
comment by Jack · 2013-05-01T22:29:13.441Z · LW(p) · GW(p)

Right, I just don't agree that the question assumes that.

comment by Rob Bensinger (RobbBB) · 2013-05-02T00:12:26.475Z · LW(p) · GW(p)

Where does the question assume that you can compute the optimal result? Newcomb's Problem simply poses a hypothetical and asks 'What would you do?'. Some people think they've gotten the right answer; others are less confident. But no answer should need to presuppose at the outset that we can arrive at the very best answer no matter what; if it did, that would show the impossibility of getting the right answer, not the trustworthiness of the 'I can optimally answer this question' postulate.

Replies from: Jiro
comment by Jiro · 2013-05-02T02:05:33.709Z · LW(p) · GW(p)

I once had a man walk up to me and ask me if I had the correct time. I looked at my watch and told him the time. But it seemed a little odd that he asked for the correct time. Did he think that if he didn't specify the qualifier "correct", I might be uncertain whether I should give him the correct or incorrect time?

I think that asking what you would do, in the context of a reasoning problem, carries the implication "figure out the correct choice" even if you are not being explicitly asked what is correct. Besides, the problem is seldom worded exactly the same way each time and some formulations of it do ask for the correct answer.

For the record, I would one-box, but I don't actually think that finding the correct answer requires simulating Omega. But I can think of variations of the problem where finding the correct answer does require being able to simulate Omega (or worse yet, produces a self-reference paradox without anyone having to simulate Omega.)

comment by Douglas_Reay · 2013-05-03T08:37:07.519Z · LW(p) · GW(p)

I would say Newcomb is all about how you view personal identity.

See the sequence:

Replies from: Jack
comment by Jack · 2013-05-03T18:42:45.851Z · LW(p) · GW(p)

When you suggest someone read three full-length posts in response to a single sentence, some context is helpful, especially if they weren't upvoted. Maybe summarize their point or something.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2013-05-09T10:01:08.197Z · LW(p) · GW(p)

If it was easy to summarize, it wouldn't have required a three-part sequence. :-)

However, perhaps one relevant point from it is:

For the purposes of Newcomb's problem, and the rationality of Fred's decisions, it doesn't matter how close to that level of power Omega actually is. What matters, in terms of rationality, is the evidence available to Fred about how close Omega is to having that level of power; or, more precisely, the evidence available to Fred relevant to Fred making predictions about Omega's performance in this particular game.

Since this is a key factor in Fred's decision, we ought to be cautious. Rather than specify when setting up the problem that Fred knows with a certainty of 1 that Omega does have that power, it is better to specify a concrete level of evidence that would lead Fred to assign a probability of (1 - δ) to Omega having that power, then examine the effect upon which option in the box problem it is rational for Fred to pick, as δ tends towards 0.

comment by wedrifid · 2013-05-02T13:25:45.689Z · LW(p) · GW(p)

Listen, this is like someone who believes the Axiom of Choice saying "constructivist mathematicians are drastically worse at set theory" (because they reject Choice). Newcomb is all about how you view free will. This is not a settled question yet.

To the extent that Newcomb's Problem is 'about how you view free will' people who two box on Newcomb's Problem are confused about free will.

This isn't like constructivist mathematicians being worse at set theory because they reject choice. It's closer to a kindergarten child scribbling in crayon on a Math exam then insisting "other people are bad at Math too therefore you should give me full marks anyway".

Replies from: None, IlyaShpitser
comment by [deleted] · 2013-05-02T14:46:54.716Z · LW(p) · GW(p)

To the extent that Newcomb's Problem is 'about how you view free will' people who two box on Newcomb's Problem are confused about free will.

I don't think that's fair (though I also don't think Newcomb's problem has anything to do with free will either). The question is whether one-boxing or two-boxing is rational. It's not fair to respond simply with 'One-boxing is rational because you get more money', because two-boxers know one-boxing yields more money. They still say it's irrational. It would be question begging to try to dismiss this view because rationality is just whatever gets you more money, since that's exactly what the argument is about.

comment by IlyaShpitser · 2013-05-02T13:40:20.877Z · LW(p) · GW(p)

To the extent that Newcomb's Problem is 'about how you view free will' people who two box on Newcomb's Problem are confused about free will.

If you say so. If I learn enough about "choshi dori" to fool the punch-avoiding algorithm and win 1000 dollars, and you don't play, who is confused? Rationalists are supposed to win, remember, not stick to a particular view of a problem.

Replies from: wedrifid
comment by wedrifid · 2013-05-02T14:10:28.497Z · LW(p) · GW(p)

If you say so. If I learn enough about "choshi dori" to fool the punch-avoiding algorithm and win 1000 dollars, and you don't play, who is confused? Rationalists are supposed to win, remember, not stick to a particular view of a problem.

Rational agents who play Newcomb's Problem one box. Rational agents who are in entirely different circumstances make entirely different decisions as determined by said circumstances. They also tend to have a rudimentary capability of noticing the difference between problems.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-05-02T14:33:09.254Z · LW(p) · GW(p)

(a) You are being a dick. I certainly did not insult anyone in this thread.

(b) The isomorphism is exact. The point is granularity. If the guy can avoid the punch 90% of the time (or more precisely guess what your punch decision algorithm will do in response to some inputs 90% of the time), and Omega guesses what you will do correctly 90% of the time, that ought to be sufficient to do the math on expected values, if you want to leave it there.

Or, alternatively, you can try to "open up the agent you are playing against" and try to trick it. It's certainly possible in the punching game. It may or may not be possible in the game with Omega -- the problem doesn't specify.

If you say "well, rational people do X and not Y, end of story" that's fine. I am going to make my updates on you and move on.

A typical example of irrational behavior is intransitive preference. As the money pump thread shows, people often don't actually fall for money pumping, even if they have intransitive preferences. In other words, the map doesn't fully reflect the territory of what people actually do.

Another example is gwern's example with correlation and causation. Correlation does not imply causation, says gwern, but if we knew how often it does imply it, we may well be rational to conclude the latter from the former if the odds are good enough. He's right -- but no one does this (I don't think!).

I used the example of the punching game on purpose -- it makes the theoretical situation with Omega practical, as in you can go and try this game if you wanted. My response to trying the game was to learn how it works, rather than give up playing it. This is what people actually do. If your model doesn't capture it, it's not a good model.


A broader comment: I do math for a living. The issues of applicability of math to practical problems, and changing math models around is something I think about quite a bit.

Replies from: wedrifid
comment by wedrifid · 2013-05-03T02:27:37.974Z · LW(p) · GW(p)

(a) You are being a dick.

It took a non-trivial exertion in the direction of politeness to refrain from answering the rhetorical question "who is confused?" with a literal answer.

I certainly did not insult anyone in this thread.

Arguable. I would concede at least that you did not say anything insulting that you do not sincerely believe is warranted.

(b) The isomorphism is exact. The point is granularity. If the guy can avoid the punch 90% of the time (or more precisely guess what your punch decision algorithm will do in response to some inputs 90% of the time), and Omega guesses what you will do correctly 90% of the time, that ought to be sufficient to do the math on expected values, if you want to leave it there.

Doing expected value calculations on probabilistic variants of Newcomb's problem is also old news, and results in one-boxing unless the probability gets quite close to random guessing. Once again, if you choose a sufficiently different problem than Newcomb's (such as by choosing an accuracy sufficiently close to 0.5, reducing the payoff ratio, or by positing that you are in fact more intelligent than Omega) then you have failed to respond to a relevant question (or an interesting question, for that matter).
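
For concreteness, a sketch of that calculation under the standard payoffs, assuming Omega is right with probability p whichever way you choose: one-boxing already wins once p exceeds 0.5005, barely better than a coin flip:

    def evs(p):
        # p = probability that Omega correctly predicts your actual choice
        ev_one_box = p * 1000000                  # a miss leaves box B empty
        ev_two_box = (1 - p) * 1001000 + p * 1000
        return ev_one_box, ev_two_box

    # break-even: p * 1000000 == (1 - p) * 1001000 + p * 1000
    print(1001000.0 / 2000000)                    # 0.5005
    print(evs(0.51))                              # (510000.0, 491000.0)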

If you say "well, rational people do X and not Y, end of story" that's fine. I am going to make my updates on you and move on.

Please do. I have likewise updated. Evidence suggests you are ill suited to considering counterfactual problems and unlikely to learn. My only recourse here is to minimize the damage you can do to the local sanity waterline. I'll leave further attempts at verbal interaction to the half a dozen others who have been attempting to educate you, assuming they have more patience than I.

A broader comment: I do math for a living. The issues of applicability of math to practical problems, and changing math models around is something I think about quite a bit.

See.

comment by Desrtopa · 2013-05-01T23:01:06.046Z · LW(p) · GW(p)

I would be interested in seeing how philosophers do on tests of analytical versus intuitive reasoning (I forget the name of the test normally used for gauging this) and ability to narrow down hypotheses when the answers are known and easily verifiable.

Replies from: satt, JonathanLivengood
comment by satt · 2013-05-02T02:42:18.844Z · LW(p) · GW(p)

I would be interested in seeing how philosophers do on tests of analytical versus intuitive reasoning (I forget the name of the test normally used for gauging this)

Cognitive Reflection Test?

Replies from: Desrtopa
comment by Desrtopa · 2013-05-02T02:47:40.880Z · LW(p) · GW(p)

That was the one, thanks.

comment by JonathanLivengood · 2013-05-02T06:07:18.190Z · LW(p) · GW(p)

We do pretty well, actually (pdf). (Though I think this is a selection effect, not a positive effect of training.)

comment by Juno_Watt · 2013-05-01T16:51:05.429Z · LW(p) · GW(p)

Upvoted for understatement.

comment by Rob Bensinger (RobbBB) · 2013-05-01T17:07:36.923Z · LW(p) · GW(p)

Out of curiosity, do you think mathematical platonism is true for Tegmark-style reasons? Or some other reason?

Replies from: Jack, endoself
comment by Jack · 2013-05-01T17:24:10.839Z · LW(p) · GW(p)

Quinean reasons. Tegmark's position, as far as I can tell, is that all abstract objects are also physically instantiated (or that the only difference between concrete and abstract objects is indexical). Which I think is plausible -- but I think abstract objects could be an entirely different sort of thing from concrete, physically existing objects, and still exist.

Replies from: RobbBB, Juno_Watt
comment by Rob Bensinger (RobbBB) · 2013-05-01T17:37:26.800Z · LW(p) · GW(p)

Do you think abstract objects have anything causally to do with the things (about our universe, or about mathematical practice) that convinced you they exist? My worry is that in the absence of a causal connection, if there weren't such abstract objects, mathematics would be just as 'unreasonably effective'. The numbers aren't doing anything to us to make mathematics work, so their absence wouldn't deprive us of anything (causally). If a hypothesis can't predict the data any more reliably than its negation can, then the data can't be used to support the hypothesis.

In general, I'd like to hear more talk about what sorts of relations these number things enter into with our own world.

Replies from: Jack
comment by Jack · 2013-05-01T18:14:18.805Z · LW(p) · GW(p)

Do you think abstract objects have anything causally to do with the things (about our universe, or about mathematical practice) that convinced you they exist?

No. But that is essentially true by definition. On the other hand, I think all causal claims are claims about abstract facts. E.g. when you say "The match caused the barn to burn to the ground" you're invoking a causal model of the world and models of the world are abstractions (though obviously they can be represented).

My worry is that in the absence of a causal connection, if there weren't such abstract objects, mathematics would be just as 'unreasonably effective'.

To me this is like hearing "If mass and velocity didn't exist Newtonian physics would be just as 'unreasonably effective'." Mathematical objects are part of mathematics. The fact that math is unreasonably effective is why we can say mathematical facts are true and mathematical entities exist. Just like the fact that quantum theory is unreasonably effective is the reason we can say that quarks exist. This is true of everyday objects too. We say your chair exists because the chair is the best way of explaining some of your sensory impressions. It just happens that not all entities are particulars embedded in the causal world.

Replies from: Juno_Watt, RobbBB
comment by Juno_Watt · 2013-05-10T12:21:50.066Z · LW(p) · GW(p)

No. But that is essentially true by definition. On the other hand, I think all causal claims are claims about abstract facts. E.g. when you say "The match caused the barn to burn to the ground" you're invoking a causal model of the world and models of the world are abstractions (though obviously they can be represented).

Causal claims may be expressed with abstract models, but that does not mean they are about abstract models. Causal models do not refer to themselves (in which case they would be about the abstract); they refer to whatever real-world thing they refer to.

To me this is like hearing "If mass and velocity didn't exist Newtonian physics would be just as 'unreasonably effective'." Mathematical objects are part of mathematics. The fact that math is unreasonably effective is why we can say mathematical facts are true and mathematical entities exist.

Maths isn't unreasonably effective at understanding the world in the sense that any given mathematical truth is automatically also a physical truth. If one mathematical statement (e.g. an inverse square law of gravity) is physically true, an infinity of others (inverse cube law, inverse power of four...) are automatically false. So when we reify our best theories, we are reifying a small part of maths for reasons which aren't purely mathematical. There is no path from the effectiveness of some maths at describing the physical universe to the reification of all maths, because physical truth is a selection of the physically applicable parts of maths.

comment by Rob Bensinger (RobbBB) · 2013-05-02T03:09:16.633Z · LW(p) · GW(p)

No. But that is essentially true by definition.

Sure, but it's not true by definition that numbers are abstract. Given your analogy to mass and velocity, and your view that mathematical objects help explain the unreasonable effectiveness of mathematics, it seems to me that it would make much more sense to treat these number things as playing a causal or constitutive role in the makeup of our universe itself, e.g., as universals. Then it would no longer just be a coincidence that our world conveniently accompanies a causally dislocated Realm of correlates for our mathematical discourse.

To me this is like hearing "If mass and velocity didn't exist Newtonian physics would be just as 'unreasonably effective'."

But it makes a difference to how our world is that objects have velocity and mass. By hypothesis, it doesn't make a difference to how our world is that there are numbers. (And from this it follows that it wouldn't make a difference if there weren't numbers.) If numbers do play a role as worldly 'difference-makers' of some special sort, then could you explain more clearly what that role is, since it's not causal?

Mathematical objects are part of mathematics.

I don't know what that means. If by 'mathematics' you have in mind a set of human behaviors or mental states, then mathematics isn't abstract, so its objects are neither causally nor constitutively in any relation to it. On the other hand, if by 'mathematics' you have in mind another abstract object, then your statement may be true, but I don't see the explanatory relevance to mathematical practice.

The fact that math is unreasonably effective is why we can say mathematical facts are true and mathematical entities exist.

Sure, but it's also why we can assert doctrines like mathematical fictionalism and nominalism. A condition for saying anything at all is that our world exhibit the basic features (property repetition, spatiotemporal structure...) that suffice for there to be worldly quantities at all. I can make sense of the idea that we need to posit something number-like to account in some causality-like way for things like property repetition and spatiotemporal structure themselves. But I still haven't wrapped my head around why, assuming numbers are not difference-makers for the physical world (unlike the presence of e.g. velocity), we should posit them to explain the efficacy of theories whose efficacy they have no impact upon.

Just like the fact that quantum theory is unreasonably effective is the reason we can say that quarks exist.

The properties of quarks causally impact our quantum theorizing. In a world where there weren't quarks, we'd be less likely to have the evidence for them that we do. If that isn't true of mathematics (or, in some ways even worse, if we can't even coherently talk about 'mathless worlds'), then I don't see the parity.

Replies from: Jack
comment by Jack · 2013-05-02T20:02:04.148Z · LW(p) · GW(p)

Sure, but it's not true by definition that numbers are abstract.

Huh?

it seems to me that it would make much more sense to treat these number things as playing a causal or constitutive role in the makeup of our universe itself, e.g., as universals.

I don't recognize a difference between universals and abstract objects but neither plays a causal role in the make up of the universe.

Then it would no longer just be a coincidence that our world conveniently accompanies a causally dislocated Realm of correlates for our mathematical discourse.

You're taking metaphors way too literally. There is no "Realm".

The properties of quarks causally impact our quantum theorizing. In a world where there weren't quarks, we'd be less likely to have the evidence for them that we do. If that isn't true of mathematics (or, in some ways even worse, if we can't even coherently talk about 'mathless worlds'), then I don't see the parity.

It's not that complicated. We have successful theories that posit certain entities. I think believing in those theories requires believing in those entities. Some of those entities figure causally and spatio-temporally in our theories. Some don't. When you say "in a world where there weren't quarks" I have no idea what you're talking about. It appears to be some kind of possible world where the laws of physics are different. But now we're making statements of fact about abstract objects. It is very difficult to say this about mathematics since math appears likely to work the same way in all possible worlds. But that's a really strange reason to conclude mathematical objects don't exist. Numbers and quarks are both theoretically posited entities that we need to explain our world.

As far as I can tell everything you have said is just different forms of "but mathematical objects aren't causal!". I readily agree with this but since abstract objects aren't causal by definition and the entire question is about abstract objects it seems like you're begging the question.

(Edit: Not my downvote btw)

Replies from: RobbBB, Juno_Watt, RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T22:02:58.581Z · LW(p) · GW(p)

Huh?

If in axiomatizing arithmetic we are ontologically committed to saying that 1 exists, 2 exists, 3 exists, etc., then we may say that there are numbers even if it is not axiomatic that 1, 2, 3, etc. are causally inert, nonphysical, etc.

Instead of being a platonist and treating numbers as abstract, you could treat them as occupying spacetime (like immanent universals or tropes), you could treat them as non-spatiotemporal but causally efficacious (like the actual Forms of Plato), or you could assert both. (You could also treat them as useful fictions, but I'll assume that fictionalism is an error theory of mathematics.)

I think many of the views on which mathematical objects have some causal (or, if you prefer, 'difference-making') effect on our mathematical discourse are reasonable. The views on which it's just a coincidence are not reasonable, and I don't think abstract numbers can easily escape the 'just a coincidence' concern (unless, perhaps, accompanied by a larger Tegmark-style framework).

I don't recognize a difference between universals and abstract objects but neither plays a causal role in the make up of the universe.

Let's take the property 'electrically charged' as an example. If charge is a universal, then it's something wholly and constitutively shared in common between every charged thing; universals occur exactly in the spatiotemporal locations where their instances are, and they are exhausted by these worldly things. So there's no need to posit anything outside our universe to believe in universals. Redness is, as it were, 'in' every red rose. Generally, universals are assumed to play causal roles (it's because roses instantiate redness that I respond to them as I do), though in principle you could posit a causally inert one. (Such a universal still wouldn't be abstract, because it would still occur in our universe.)

If electric charge is instead an abstract object, then it exists outside space and time, and has no effect at all on the electrically charged things in our world. (So abstract electric charge serves absolutely no explanatory role in trying to understand how things in our world are charged.) However, it might be a useful posit for the nominalist about universals, just to provide a (non-nominalistic) correlate for our talk in terms of abstract nouns like 'charge'.

A third option would be to treat electric charge as a Platonic Form, i.e., something outside spacetime but causally responsible for the distribution of charge instances in our universe. (This is confusing, because Platonic Forms aren't 'platonic' in the sense in which mathematical platonism is 'platonic'. Plato himself was a nominalist about abstract objects, and also a nominalist about universals. His Forms are a totally different thing from the sorts of posits philosophers these days generally entertain.)

A natural way to think of bona-fide ancient Platonism (as opposed to the lowercase-p 'platonism' of modern mathematicians) is as cellular automata; for Plato, our universe is an illusion-like epiphenomenon arising from much simpler, lower-level relationships that are not temporal. (Space still plays a role, but as an empty geometry that comes to bear properties only in a derivative way, via its relationships to particular Forms.)

You're taking metaphors way too literally. There is no "Realm".

Hm? How do you know I'm taking it too literally? First, how do you know that 'Realm' isn't just part of the metaphor for me? What signals to you when I stop talking about 'objects' and start talking about 'Realms' that I've crossed some line? (Knowing this might help tell me about which parts of your talk you take seriously, and which you don't.)

Second, as long as we don't interpret 'Realm' spatially, what's wrong with speaking of a Realm of abstract objects, literally? Physical things occur in spacetime; abstract things exist just as physical ones do, but outside spacetime. Perhaps they occupy their own non-spatial structure, or perhaps they can't be said to 'occupy' anything at all. Either way, we've complicated our ontology quite a bit.

Replies from: Jack
comment by Jack · 2013-05-03T00:27:16.056Z · LW(p) · GW(p)

If in axiomatizing arithmetic we are ontologically committed to saying that 1 exists, 2 exists, 3 exists, etc., then we may say that there are numbers even if it is not axiomatic that 1, 2, 3, etc. are causally inert, nonphysical, etc.

I'm still lost here.

Instead of being a platonist and treating numbers as abstract, you could treat them as occupying spacetime (like immanent universals or tropes), you could treat them as non-spatiotemporal but causally efficacious (like the actual Forms of Plato), or you could assert both. (You could also treat them as useful fictions, but I'll assume that fictionalism is an error theory of mathematics.)

I'm not sure I would say Plato's forms are causally efficacious in the way we understand that concept -- but that isn't really important. Anyway, I have issues with the various alternatives to modern Platonism, immanent realism, trope theory, etc. -- though not the time to go into each one. If I were to make a general criticism I would say all involve different varieties of torturous philosophizing and the invention of new concepts to solve different problems. Platonism is easier and doesn't cost me anything.

I think many of the views on which mathematical objects have some causal (or, if you prefer, 'difference-making') effect on our mathematical discourse are reasonable. The views on which it's just a coincidence are not reasonable, and I don't think abstract numbers can easily escape the 'just a coincidence' concern (unless, perhaps, accompanied by a larger Tegmark-style framework).

Ah! This seems like a point of traction. I certainly don't think there is anything coincidental about the fact that mathematical truths tell us things about physical truths. I just don't think the relationship is causal. I believe causal facts are facts about possible interventions on variables. Since there is no sense in which we can imagine intervening on mathematical objects, I don't see how that relationship can be causal. But that doesn't mean it is a coincidence or isn't sense-making. Mathematics is effective because everything in the natural world is an instantiation of an abstract object. Instantiations have the properties of the abstract object they're instantiating. This kind of information can be used in a straightforward, explanatory way.

universals occur exactly in the spatiotemporal locations where their instances are, and they are exhausted by these worldly things.

This is a particular way of understanding universals. You need to specify immanent realism. Plenty of philosophers believe in universals as abstract objects.

comment by Juno_Watt · 2013-05-10T12:28:46.076Z · LW(p) · GW(p)

We have successful theories that posit certain entities. I think believing in those theories requires believing in those entities. Some of those entities figure causally and spatio-temporally in our theories. Some don't.

We think the ones that don't figure causally or spatio-temporally aren't actually being posited at all. That's how you read physics. If you know how to read a map, you know that rivers and mountains on the map are supposed to be in the territory, but lines of latitude and contour lines aren't.

comment by Rob Bensinger (RobbBB) · 2013-05-02T22:09:10.555Z · LW(p) · GW(p)

When you say "in a world where there weren't quarks" I have no idea what you're talking about. It appears to be some kind of possible world where the laws of physics are different. But now we're making statements of fact about abstract objects.

No, when I say 'in a world where there weren't quarks' I mean in an imagined scenario in which quarks are imagined not to occur. I'm not committed to real non-actual worlds. (If possible worlds were abstract, then they'd have no causal relation to my thoughts about them, so I'd have no reason to think my thoughts about modality were at all on the right track. It's because modality is epistemic and cognitive and 'in the head' that I can reason about hypothetical and counterfactual situations productively.) I'm a modal fictionalist, and a mathematical fictionalist.

In imagined scenarios where we sever the causal links between agents and quarks, e.g., by replacing quarks with some other mechanism that can produce reasoning agents, it seems less likely that the agents would have hypothesized quarks. When we remove abstract numbers from a hypothetical scenario, on the other hand, nothing about the physical world seems to be affected (since, inasmuch as they are causally inert, abstract numbers are in no way responsible for the way our world is).

That suggests that positing numbers is wholly unexplanatory. It might happen to be the case that there are such things, but it can't do anything to account for the unreasonable effectiveness of mathematics, because of the lack of any causal link.

Abstract objects play a similar role in current physical theories to that which luminiferous aether used to play. The problem with aether isn't just that it was theoretically dispensable; it was that, even if we weren't smart enough to figure out how to reformulate our theories without assuming aether, it would still be obvious that the theoretical successes that actually motivated us to form such theories would have arisen in exactly the same way even if there were no aether. Aether doesn't predict aether-theories like ours, because our aether theory is not based on empirical evidence of aether.

(Aether might still be reasonable to believe in, but only if it deserves a very high prior, such that the lack of direct empirical confirmation is OK. But you haven't argued for platonism based on high priors, e.g., via a Tegmark hypothesis; you've argued for it empirically, based on the real-world successes of mathematicians. That doesn't work, unless you add some kind of link between the successes and the things you're positing to explain those successes.)

Modern-day platonists try to make their posits appear 'metaphysically innocent' by depriving them of causal roles, but in the process they do away with the only features that could have given us positive reasons to believe such things. It would be like if someone objected to string theory because it's speculative and lacks evidence, and string theorists responded by replacing strings with non-spatiotemporal, causally inert structures that happen to resemble the physical world's structures. The whole point of positing strings is that they be causally or constitutively linked to our beliefs about strings, so that the success of our string theory won't just be a coincidence; likewise, the whole point of reifying mathematical objects should be to treat them as causally or constitutively responsible for the success of mathematics. Without that responsibility, the posit is unmotivated.

math appears likely to work the same way in all possible worlds.

What do you mean by "work the same way"? I can pretty easily imagines world where mathematicians consistently fail to get reliable results. There may even be actual planets like that in the physical universe, if genetic drift eroded the mathematical reasoning capabilities of some species, or if there are aliens who rely heavily on math but don't relate it to empirical reality in sensible ways. If such occurrences don't falsify platonism, then our own mathematicians' remarkable successes don't verify platonism. So what phenomenon is it that you're really claiming we need platonism to explain? What kind of 'unreasonable effectiveness' is relevant?

Replies from: Jack
comment by Jack · 2013-05-03T00:52:52.570Z · LW(p) · GW(p)

When we remove abstract numbers from a hypothetical scenario, on the other hand, nothing about the physical world seems to be affected (since, inasmuch as they are causally inert, abstract numbers are in no way responsible for the way our world is).

I can come up with possible worlds without quarks (in a vague, non-specific way). I have no idea what it means to "remove abstract numbers from a hypothetical scenario". I don't think abstract objects have modal variation, which is closely related to their not being causal. But insofar as mathematics posits abstract entities and mathematics is explanatory, I don't think there is anything mysterious about the sense in which abstract objects are explanatory.

Abstract objects play a similar role in current physical theories to that which luminiferous aether used to play. The problem with aether isn't just that it was theoretically dispensable; it was that, even if we weren't smart enough to figure out how to reformulate our theories without assuming aether, it would still be obvious that the theoretical successes that actually motivated us to form such theories would have arisen in exactly the same way even if there were no aether. Aether doesn't predict aether-theories like ours, because our aether theory is not based on empirical evidence of aether.

I disagree. I think the problem with aether is entirely just that it was theoretically dispensable. And I think the sentences that follow that are just a way of saying "aether was theoretically dispensable".

Modern-day platonists try to make their posits appear 'metaphysically innocent' by depriving them of causal roles, but in the process they do away with the only features that could have given us positive reasons to believe such things.

Their utility in our explanations is sufficient reason to believe they exist even if their role in those explanations is not causal. Your string theory comparison doesn't sound like a successful scientific theory.

What do you mean by "work the same way"?

As in we can't develop models of possible worlds in which mathematics works differently. This has nothing to do with the abilities of hypothetical mathematicians.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T13:08:09.175Z · LW(p) · GW(p)

As in we can't develop models of possible worlds in which mathematics works differently.

Or we can't develop models of mathematically possible worlds where maths works differently. Or maybe we can, since we can imagine the Axiom of Choice (AoC) being either true or false. Actually, it is easier for realists to imagine maths being different in different possible worlds, since, for realists, the existence of numbers makes an epistemic difference. For them, some maths that is formally valid (deducible from axioms) might be transcendentally incorrect (e.g., the AoC was assumed but is actually false in Plato's Heaven).
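(For reference, the axiom in question, in one standard formulation: every family of nonempty sets admits a choice function.)

$$\forall \mathcal{F}\,\Big[\,\varnothing \notin \mathcal{F} \;\rightarrow\; \exists f : \mathcal{F} \to \textstyle\bigcup \mathcal{F}\ \text{ such that }\ \forall A \in \mathcal{F},\; f(A) \in A\,\Big]$$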

comment by Juno_Watt · 2013-05-01T17:39:06.390Z · LW(p) · GW(p)

but I think abstract objects could be an entirely different sort of thing from concrete, physically existing objects, and still exist.

It's logically possible... like so many things.

Either these non-physical things interact with matter (e.g., the brains of mathematicians) or they don't. If they do, that is supernaturalism. If they don't, they succumb to Occam's razor.

Replies from: Jack
comment by Jack · 2013-05-01T18:18:26.570Z · LW(p) · GW(p)

If they don't, they succumb to Occam's razor.

No. They don't. Stating scientific theories without abstract objects makes theories vastly more complicated when they can even be stated at all.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-01T18:48:12.125Z · LW(p) · GW(p)

I didn't say delete numbers from theories. I meant: don't reify them. There is stuff in theories that you are supposed not to reify, like centres of gravity.

Replies from: Jack
comment by Jack · 2013-05-01T19:01:49.789Z · LW(p) · GW(p)

Centers of gravity are an even better example of a real abstract object. I'm definitely not reifying anything according to the dictionary definition of that word: neither numbers nor centers of gravity are at all concrete. They're abstract.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-01T20:33:06.574Z · LW(p) · GW(p)

OK. So, in what sense do these "still exist", and in what sense are they "entirely different" from concrete objects? And are common-or-garden numbers included?

Replies from: Jack
comment by Jack · 2013-05-01T23:49:16.467Z · LW(p) · GW(p)

I think it might be best if you read the above-linked SEP article and some of the related pieces. But here's the short form:

  1. We should believe our best scientific theories.
  2. Our best scientific theories make reference to or quantify over abstract objects: mathematical objects like numbers, sets, and functions, and non-mathematical abstract objects like types, forces, and relations. Entities that theories refer to or quantify over are called their ontic commitments.
  3. Belief in our best scientific theories means belief in their ontic commitments.

C: We should believe in the existence of the abstract objects in our best scientific theories.

Premises 1 and 2 seem uncontroversial. Premise 3 can certainly be quibbled with, and I spent a few years as a nominalist trying to think of ways to paraphrase out or find reasons to ignore the abstract objects among science's ontic commitments. Lots of people have done this and have occasionally demonstrated a bit of success. A guy named Hartry Field wrote a pretty cool book in which he axiomatizes Newtonian mechanics without reference to numbers or functions. But he was still incredibly far away from getting rid of abstract objects altogether (lots of second-order logic), and the resulting theory is totally unwieldy. At some point, personally, I just stopped seeing any reason to deny the existence of abstract objects. Letting them exist costs me nothing. It doesn't lead to false beliefs and requires far less philosophizing.

The concrete-abstract distinction still gets debated but a good first approximation is that concrete objects can be part of causal chains and are spatio-temporal while abstract objects are not. As for common-or-garden numbers: I see no reason to exclude them.

Replies from: Juno_Watt, bogus
comment by Juno_Watt · 2013-05-09T22:30:25.393Z · LW(p) · GW(p)

Quine has a logician's take on physics: he assumes that the formal expression of a physical law is complete in itself, and therefore seeks a purely formal criterion of ontological commitment, or objecthood. However, physics doesn't work like that. Physical formalisms have semantic implications that aren't contained in the formalism itself: for instance, f=ma is mathematically identical to p=qr or a=bc, or whatever. But the f, the m, and the a all have their own meaning, their own relation to measurement, as far as a physicist is concerned.

I spent a few years as a nominalist trying to think of ways to paraphrase out or find reasons to ignore the abstract objects among science's ontic commitments.

The reasons are already part of the theory, in the sense that the theory is more than the written formalism. Physics students are taught that centers of gravity should not be reified; that is part of the theory. No physics student is taught that any pure number is a reifiable object, and few hit upon the idea themselves.

Letting them exist costs me nothing. It doesn't lead to false beliefs and requires far less philosophizing.

No philosophizing is required to get rid of abstract objects; one only needs to follow the instructions about what is reifiable that are already part of the informal part of a theory.

I can't see how you can claim that Platonism doesn't lead to false beliefs without implicitly claiming omniscience. If abstract entities do not exist, then belief in them is false, by a straightforward correspondence theory. Moreover, if Platonism is true, then some common formulations of physicalism, such as "everything that exists, exists spatio-temporally", are false. Perhaps you meant Platonism doesn't lead to false beliefs with any practical upshot, but violations of Occam's razor generally don't.

The concrete-abstract distinction still gets debated but a good first approximation is that concrete objects can be part of causal chains and are spatio-temporal while abstract objects are not.

OK, but that means that centres of gravity aren't abstract: the center of gravity of the Earth has a location. That doesn't mean they are fully concrete either. Jerrold Katz puts them into a third category, that of the mixed concrete-and-abstract. (His favoured example is the equator.)

As for common-or-garden numbers: I see no reason to exclude them.

If you are going to include centers of gravity, and Katz's categorisation is correct, then there is still no reason to include fully abstract entities. And there is a reason to exclude centers of gravity, which is the informal semantics of physics.

Replies from: Jack
comment by Jack · 2013-05-09T23:47:44.216Z · LW(p) · GW(p)

The reasons are already part of the theory, in the sense that the theory is more than the written formalism. Physics students are taught that centers of gravity should not be reified; that is part of the theory. No physics student is taught that any pure number is a reifiable object, and few hit upon the idea themselves.

There's that word again. I'm not reifying numbers. Abstract objects aren't "things". They aren't concrete. Platonists don't want to reify centers of gravity or numbers.

I can't see how you can claim that Platonism doesn't lead to false beliefs without implicitly claiming omniscience. If abstract entities do not exist, then belief in them is false, by a straightforward correspondence theory. Moreover, if Platonism is true, then some common formulations of physicalism, such as "everything that exists, exists spatio-temporally", are false. Perhaps you meant Platonism doesn't lead to false beliefs with any practical upshot, but violations of Occam's razor generally don't.

Platonism and nominalism don't differ in anticipations of future sensory experiences. The difference is entirely about theory and methodology. I've already replied to the Occam's razor point: our theories that include abstract objects are radically simpler and easier to use than the attempts that exclude them.

OK, but that means that centres of gravity aren't abstract: the center of gravity of the Earth has a location. That doesn't mean they are fully concrete either. Jerrold Katz puts them into a third category, that of the mixed concrete-and-abstract. (His favoured example is the equator.)

I'm not sure they have a location in the same way as is generally meant by 'spatio-temporal', but the exact classification of centers of gravity isn't that important to me. I'm not claiming to have the details of that figured out.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T09:23:49.403Z · LW(p) · GW(p)

There's that word again. I'm not reifying numbers. Abstract objects aren't "things". They aren't concrete. Platonists don't want to reify centers of gravity or numbers.

There has to be some content to Platonism. You seem to be assuming that by "reifying" I must mean "treat as concretely existent". In context, what I mean is "treat as being existent in whatever sense Platonists think abstracta are existent". I am not sure what that is, but there has to be something to it, or there is no content to Platonism, and in any case it is not my job to explain it.

Platonism and nominalism don't differ in anticipations of future sensory experiences. The difference is entirely about theory and methodology.

I am not sure what you mean by that. The difference is about ontology. If two theories make the same predictions, and one of them has more entities, one of them is multiplying entities unnecessarily.

I've already replied to the Occam's razor point: our theories that include abstract objects are radically simpler and easier to use than the attempts that exclude them.

And I have replied to the reply. The Quinean approach incorrectly takes a scientific theory to be a formalism. It is only methodologically simpler to reify whatever is quantified over, formally, but that approach is too simple, because it leaves out the semantics of physics: it doesn't distinguish between f=ma and p=qr.

I'm not sure they have a location in the same way as is generally meant by 'spatio-temporal', but the exact classification of centers of gravity isn't that important to me. I'm not claiming to have the details of that figured out.

Such details are what could bring Platonism down.

Replies from: Jack
comment by Jack · 2013-05-10T10:00:19.205Z · LW(p) · GW(p)

You seem to be assuming that by "reifying" I must mean "treat as concretely existent".

Oh come on now. That's literally what the word means. It's the dictionary definition. Don't complain about me assuming things if you're using words contrary to their dictionary definition and not explaining what you mean.

In context, what I mean is "treat as being existent in whatever sense Platonists think abstracta are existent".

As I've said a thousand times I think all there is to "being existent" is to be an entity quantified over in our best scientific theories. So in this case treating abstract objects as being existent requires scientists to literally do nothing different.

I am not sure what you mean by that. The difference is about ontology. If two theories make the same predictions, and one of them has more entities, one of them is multiplying entities unnecessarily.

Neither nominalism nor platonism makes predictions. Scientific theories make predictions, and there are no nominalist scientific theories.

The Quinean approach incorrectly takes a scientific theory to be a formalism. It is only methodologically simpler to reify whatever is quantified over, formally, but that approach is too simple, because it leaves out the semantics of physics: it doesn't distinguish between f=ma and p=qr.

Honestly, I don't see how this is relevant. I don't agree that the Quinean approach leaves out the semantics of physics and I don't see how including the semantics would let you have a simple scientific theory that didn't reference abstract objects.

Such details are what could bring Platonism down.

Obviously it is possible that there are arguments that could convince me I'm wrong. I'm not obligated to have a preemptive reply to all of them.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T10:11:21.238Z · LW(p) · GW(p)

As I've said a thousand times I think all there is to "being existent" is to be an entity quantified over in our best scientific theories.

The point of Quinean Platonism is to inflate the formal criterion of quantification into an ontological claim of existence, not to deflate existence into a mere formalism.

So in this case treating abstract objects as being existent requires scientists to literally do nothing different.

It requires them to ignore part of the informal interpretation of a theory.

Neither nominalism nor platonism makes predictions.

Then one of them is unnecessarily complicated as an ontology. You seem to think Platonism isn't ontology. I have no idea what you would then think it is.

there are no nominalist scientific theories.

Whether theories are nominalist, or whatever, depends on how you read them. They don't have their own interpretation built in, as I have pointed out a thousand times.

I don't agree that the Quinean approach leaves out the semantics of physics and I don't see how including the semantics would let you have a simple scientific theory that didn't reference abstract objects.

Theories can include numbers and centers of gravity, and reference them in that sense, and that is not the slightest argument for Platonism. Platonism requires that certain symbols have real referents, which is another sense of "reference".

Looking at a symbol on a piece of paper doesn't tell you that the symbol has a real referent. Non-Platonism isn't the claim that such symbols need to be deleted; it is an interpretation whereby some symbols get reified (have real-world referents) and others don't. Platonism is not the claim that there are abstract symbols in formalisms; it is an ontological claim about what exists.

comment by bogus · 2013-05-02T00:42:37.109Z · LW(p) · GW(p)

Doesn't this imply that equivalent scientific theories may have quite different implications wrt. what abstract objects exist, depending on how exactly they are formulated (i.e. the extent to which they rely on quantifying over variables)?

Also, given the context, it's not clear that rejecting theories which rely on second-order and higher-order logics makes sense. The usual justification for dismissing higher-order logics is that you can always translate such theories to first-order logic, and doing so is a way of "staying honest" wrt. their expressiveness. But any such translation is going to affect how variables are quantified over in the theory, hence what 'commitments' are made.

Replies from: Jack
comment by Jack · 2013-05-02T01:03:22.506Z · LW(p) · GW(p)

Doesn't this imply that equivalent scientific theories may have quite different implications wrt. what abstract objects exist, depending on how exactly they are formulated (i.e. the extent to which they rely on quantifying over variables)?

I'm not sure what you mean by "equivalent" here. If you mean "makes the same predictions" then yes-- but that isn't really an interesting fact. There are empirically equivalent theories that quantify over different concrete objects too. Usually we can and do adjudicate between empirically equivalent theories using additional criteria: generality, parsimony, ease of calculation etc.

comment by endoself · 2013-05-01T17:22:22.585Z · LW(p) · GW(p)

I think Jack meant the sort of modern platonism that philosophers believe, not Tegmark-style platonism. Modern platonism is the position that, as Wikipedia says, abstract objects exist in a sense "distinct both from the sensible external world and from the internal world of consciousness", while in Tegmark's platonism, abstract objects exist in the same sense as the external world, and the external world is a mathematical structure.

Replies from: fubarobfusco, RobbBB
comment by fubarobfusco · 2013-05-01T21:05:40.631Z · LW(p) · GW(p)

This seems to be a question of "How are we allowed to use the word 'exist' in this conversational context without being confusing?" or "What sort of definition do we care to assign to the word 'exist'?" rather than an unquoted question of what exists.

In other words, I would be comfortable saying that my office chair and the number 3 both plexist (Platonic-exist), whereas my office chair mexists (materially exists) and 3 does not.

Replies from: Jack, Juno_Watt, endoself
comment by Jack · 2013-05-04T21:13:04.881Z · LW(p) · GW(p)

Well, it is certainly the case that knowing how to use the word "exist" is helpful for answering the question: "what exists?" And a consistent application of the usage of the word "exist" is how the modern platonic argument gets its start. We look at universally agreed upon cases of the usage of "exist", formulate criteria for something to exist, and apply those criteria. The modern Platonist generally has a criterion along the lines of "If and only if an entity is quantified over by our best scientific theories then it exists." Since our best scientific theories quantify over abstract objects, the modern Platonist concludes that abstract objects exist.

One can deny the criterion and come up with a different one, or deny that abstract objects meet the criterion. But what advantage do these neologisms give us? Does using two different words, plexist and mexist, do anything more than recognize that material objects and abstract objects are two different kinds of things? If so, why isn't calling one "material" and the other "abstract" sufficient for making that distinction? Presumably we wouldn't want to come up with a different word for every way something might exist: quark-exist, chair-exist, triangle-exist, three-exist, and so on.

Why not just have one word and distinguish entities from each other with adjectives?

Replies from: fubarobfusco
comment by fubarobfusco · 2013-05-04T22:18:33.635Z · LW(p) · GW(p)

Why not just have one word and distinguish entities from each other with adjectives?

Because what we're saying about our descriptions of things is different. For some nouns, saying that it "exists" means that it has mass and takes up space, can be bumped into and such. For other nouns, "exists" means it can be defined without contradiction, or some such.

The verb "exist" is being used polysemously, even metaphorically — in the manner that "run" is used of sprinters, computer programs, and the dyed color of a laundered shirt. A sprinter, program, and dye are not actually doing anything like the same thing when they "run", but we use the same word for them. This is a fact about our language, not about the things those three entities are doing. If there were any confusion what we meant, we would not hesitate to say that the program is "executing" and the dye is "spreading" or some such.

Replies from: Jack
comment by Jack · 2013-05-05T00:15:29.780Z · LW(p) · GW(p)

For some nouns, saying that it "exists" means that it has mass and takes up space, can be bumped into and such. For other nouns, "exists" means it can be defined without contradiction, or some such.

The whole Platonist position begins from a definition of "exists" that works equally well for abstract and concrete objects. Your alternative definitions are bad: "has mass and takes up space, can be bumped into and such" isn't even a necessary set of criteria for a wide variety of concrete objects. Photons and gluons, for instance.

Replies from: Juno_Watt, shminux
comment by Juno_Watt · 2013-05-09T22:44:47.578Z · LW(p) · GW(p)

We don't know that it "works equally well", since we don't have omniscient knowledge about the existence of abstract objects. If abstract objects don't exist, then the quantification criterion is too broad, and therefore does not work.

Replies from: Jack
comment by Jack · 2013-05-09T23:37:22.593Z · LW(p) · GW(p)

This straightforwardly begs the question. I say "What it means to exist is to be quantified over in our best scientific theories". Your reply is basically "If you're wrong about the definition then you're wrong about the definition."

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T09:27:20.569Z · LW(p) · GW(p)

Your claim was "If we are right about the definition, we are right about the definition".

comment by Shmi (shminux) · 2013-05-05T06:29:17.911Z · LW(p) · GW(p)

The whole Platonist position begins from a definition of "exists" that works equally well for abstract and concrete objects.

I've yet to see such a definition. Do you mean the "definition" (a postulate, really) such as the one on Wikipedia? (SEP isn't any better.)

With a lower case "p", "platonism' refers to the philosophy that affirms the existence of abstract objects, which are asserted to "exist" in a "third realm distinct both from the sensible external world and from the internal world of consciousness..."

If so, then it's a separate definition, not something that "works equally well". Besides, I have trouble understanding why one needs to differentiate between the abstract world and "the world of consciousness".

Replies from: Juno_Watt, Jack
comment by Juno_Watt · 2013-05-09T22:46:42.553Z · LW(p) · GW(p)

Besides, I have trouble understanding why one needs to differentiate between the abstract world and "the world of consciousness".

It's just a way of categorising Platonists. Conceptualists think 3 is just a concept in their mind; Platonists don't.

comment by Jack · 2013-05-05T10:47:41.396Z · LW(p) · GW(p)

No, I don't mean that. I've given a definition/criterion like eight times in this thread, including two comments up :-).

The modern Platonist generally has a criterion along the lines of "If and only if an entity is quantified over by our best scientific theories then it exists."

In other words, theories about the world generally make reference to entities of various kinds. They say "Some x are y" or "There is an x that y's", etc. These x's are a theory's ontological commitments. To say "the number 3 is prime" implies 3 exists just as "some birds can fly" implies birds exist. Existence is simply being an entity posited by a true scientific theory. Making anything more out of "existence" gives it a metaphysical woo-ness the concept isn't entitled to.
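A rough sketch of the regimentation being assumed here (the predicate names are placeholders of my own, not anything official):

$$\text{"the number 3 is prime"} \;\rightsquigarrow\; \exists x\,\big(x = 3 \wedge \mathrm{Prime}(x)\big)$$

$$\text{"some birds can fly"} \;\rightsquigarrow\; \exists x\,\big(\mathrm{Bird}(x) \wedge \mathrm{Flies}(x)\big)$$

On Quine's criterion, a theory is committed to whatever its bound variables must range over for its sentences to come out true; the two claims above have the same logical form, which is why they get treated on a par.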

Replies from: Juno_Watt, shminux
comment by Juno_Watt · 2013-05-09T22:50:08.332Z · LW(p) · GW(p)

To say "the number the 3 is prime" implies 3 exists just as "some birds can fly" implies birds exist.

What does "Sherlock Holmes is a bachelor" imply?

Existence is simply being an entity posited by a true scientific theory.

"Sherlock Holmes is married" is false. But the truth of "Sherlock Holmes is a bachelor" doesn't imply much about his existence.

A lot of lifting seems to be being done by the "scientific" in "scientific theory".

Replies from: Jack
comment by Jack · 2013-05-09T23:31:47.528Z · LW(p) · GW(p)

"Sherlock Holmes is a bachelor" implies that Sherlock Holmes exists. But when you say that you're simply taking part in a fictitious story. It's story telling and everyone knows you're not trying to describe the universe. If the fiction of Arthur Conan Doyle turned out to be a good theory of something-- say it was an accurate description of events that really took place in the late 19th century-- and accurately predicted lots of historic discoveries and Sherlock Holmes and the traits attributed to him were essential for that theory, then we would sat Sherlock Holmes existed.

A lot of lifting seems to be being done by the "scientific" in "scientific theory".

I am rightly shifting the criteria of "what exists" to people who actually seem to know what they're doing.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T09:34:30.276Z · LW(p) · GW(p)

"Sherlock Holmes is a bachelor" implies that Sherlock Holmes exists

That is not uncontentious.

But when you say that, you're simply taking part in a fictitious story.

In which case SH is not implied to exist. But I knew that it is a fictitious story. The point was that "the number 3 is prime" doesn't imply that 3 exists, since properties can be correctly or incorrectly ascribed to fictive entities. There is no obvious inference from a statement's being true to the entities it involves actually existing. Mathematical formalism and fictivism hold 3 to be no more existent than SH, and are not obviously false.

I am rightly shifting the criteria of "what exists" to people who actually seem to know what they're doing.

You are not, because you are ignoring them when they say centres don't exist. You are trying to read ontology from formalism, without taking into account the interpretation of the formalism, the semantics.

Replies from: Jack
comment by Jack · 2013-05-10T10:19:21.895Z · LW(p) · GW(p)

You are not, because you are ignoring them when they say centres don't exist.

I don't agree that I am.

In which case SH is not implied to exist. But I knew that it is a fictitious story. The point was that "the number 3 is prime" doesn't imply that 3 exists, since properties can be correctly or incorrectly ascribed to fictive entities. There is no obvious inference from a statement's being true to the entities it involves actually existing. Mathematical formalism and fictivism hold 3 to be no more existent than SH, and are not obviously false.

I don't understand what you're trying to accomplish with this line of reasoning. Obviously, "truths" about fictitious stories do not imply the existence of the entities they quantify over. A fiction is a sort of mutually agreed upon lie. (I don't agree, btw, that a statement about Sherlock Holmes is true in the same way that "There are white swans" is true.) But it is nonetheless the case that the assertion "Sherlock Holmes is a bachelor" implies the existence of Sherlock Holmes. It just so happens that everyone plays along with the story. But unlike the stories of Sherlock Holmes, I really do believe in quantum mechanics, and so take the theory's word for it that the entities it implies exist actually do exist.

I'm obviously aware there are alternatives to Platonism and that there is plenty of debate. I presumably have reasons for rejecting the alternatives. But instead of actually asserting a positive case for any alternative you seem to just be picking at things and disagreeing with me without explaining why (plus a decent amount of misunderstanding the position). If you'd like to continue this discussion please do that instead of just complaining about my position. It's unpleasant and not productive.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T11:01:25.063Z · LW(p) · GW(p)

So do I. But I take "the entities it implies" to mean "the entities that you are supposed to believe in according to the informal interpretation of the formalism", not "the entities quantified over".

"Maddy's first objection to the indispensability argument is that the actual attitudes of working scientists towards the components of well-confirmed theories vary from belief, through tolerance, to outright rejection (Maddy 1992, p. 280). The point is that naturalism counsels us to respect the methods of working scientists, and yet holism is apparently telling us that working scientists ought not have such differential support to the entities in their theories. Maddy suggests that we should side with naturalism and not holism here. Thus we should endorse the attitudes of working scientists who apparently do not believe in all the entities posited by our best theories. We should thus reject P1."

SEP

comment by Shmi (shminux) · 2013-05-05T20:24:04.146Z · LW(p) · GW(p)

I've given a definition/criterion like eight times in this thread, including two comments up :-).

Sorry, I should have looked first.

The modern Platonist generally has a criterion along the lines of "If and only if an entity is quantified over by our best scientific theories then it exists." Since our best scientific theories quantify over abstract objects, the modern Platonist concludes that abstract objects exist.

Ah, I see. How is it different from "we define stuff we think about that is not found in nature as "abstract""?

To say "the number the 3 is prime" implies 3 exists just as "some birds can fly" implies birds exist.

I guess that's where I am having problems with this approach. "Number 3 is prime" is a well-formed string in a suitable mathematical model, whereas "some birds can fly" is an observation about the external world. Basically, it seems to me that the term "exist" is redundant here. Everything you can talk about "exists" in Platonism, so the term is devoid of meaningful content.

Hmm, where do pink unicorns exist? Not in the external world, so somewhere in the internal world then? Or do they not exist at all? Then what definition of existence do they fail? For example, "our best scientific theories" imply that people can think about pink unicorns as if they were experimental facts. Thus they must exist in our imagination. Which seems uncontroversial, but vacuous and useless.

Replies from: Juno_Watt, Jack
comment by Juno_Watt · 2013-05-09T22:52:10.759Z · LW(p) · GW(p)

Everything you can talk about "exists" in Platonism, so the term is devoid of meaningful content.

I can talk about a Highest Prime. Specifically, I can say it doesn't exist.

Replies from: shminux, shminux
comment by Shmi (shminux) · 2013-05-11T04:39:38.561Z · LW(p) · GW(p)

Would a Platonist think that a tulpa exists?

Replies from: Jack, Juno_Watt
comment by Jack · 2013-05-13T14:11:14.253Z · LW(p) · GW(p)

I don't think the hypothesis that there is an independent conscious person existing along with you in your mind (or whatever those people think they're doing) is the best explanation for the experiences they're describing. If they just want to use it as shorthand for a set of narratively consistent hallucinations then I suppose I could be okay with saying a tulpa exists. But either way: I don't think a tulpa is an abstract object. It's a mental object, like an imaginary friend or a hallucination. Like any entity, I think the test for existence is how it figures in scientific explanation, but I think Platonists and non-Platonists are logically free to admit or deny a tulpa's existence.

comment by Juno_Watt · 2013-05-12T12:13:20.894Z · LW(p) · GW(p)

A Tegmarkian would.

Replies from: wedrifid
comment by wedrifid · 2013-05-13T12:41:49.941Z · LW(p) · GW(p)

A Tegmarkian would.

Really? The 'existence' status of that kind of mental entity seems to be an orthogonal issue to what (I am guessing) you mean by Tegmarkian considerations.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-13T23:40:29.585Z · LW(p) · GW(p)

Tegmarkia includes every possible arrangement of physical law, including forms of psycho-physical parallelism whereby what is thought automatically becomes real.

comment by Shmi (shminux) · 2013-05-10T01:53:27.437Z · LW(p) · GW(p)

Ah, fair point. I went too far. Still, I'm dubious about conflating the logical and the physical definition of existence. But hey, go wild, it's of no consequence.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-10T14:55:57.664Z · LW(p) · GW(p)

Have you noticed that, although you and Jack have completely opposite (minimal and maximal) ontologies, you both have the same motivation of avoiding "philosophising"? Well, I suppose "everything exists" and "nothing exists" both impose minimal cognitive burden; if you believe some non-trivial subset exists, you have to put effort into populating it.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-11T04:41:45.847Z · LW(p) · GW(p)

I haven't noticed that Jack has a motivation of "avoiding philosophizing". And I don't say that "nothing exists", I just avoid the term as mostly vacuous, except in specific narrow cases, like math.

comment by Jack · 2013-05-08T20:22:19.996Z · LW(p) · GW(p)

I would say pink unicorns do not exist at all. The term, for me, describes a concrete entity that does not exist. "The Unicorn" could be type language (types are abstract objects), like "the Indian Elephant" or "the Higgs Boson"; but unlike the Indian Elephant, the Unicorn is not something quantified over in zoology, and it is hard to think of a useful scientific process which would ever involve an ontological commitment to unicorns (aside from studying the mythology of unicorns, which is clearly something quite different). "3 is prime" is a well-formed string in a suitable mathematical model, which is to say a system of manipulating symbols. But this particular method of symbol manipulation is utterly essential to the scientific enterprise, and it is trivial to construct methods of symbol manipulation that are not.

Our best scientific theories imply that people can think about pink unicorns as if they were experimental facts. So thoughts about pink unicorns certainly exist. It may also be the case that unicorns possibly exist. But our best scientific theories certainly do not imply the actual existence of unicorns. So pink unicorns do not exist (bracketing modal concerns).

How is it different from "we define stuff we think about that is not found in nature as "abstract""?

So to conclude: it's different in that the criterion for existence requires that the entity actually figures in scientific explanation, in our accurate model of the universe, not simply that it is something we can think about.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-08T21:28:58.888Z · LW(p) · GW(p)

So, if a theory of pink unicorns were useful for constructing an "accurate model of the universe" (presumably not including the part of the universe that is you and me discussing pink unicorns?), these imaginary creatures would be as real as imaginary numbers?

Replies from: Jack
comment by Jack · 2013-05-09T02:40:48.599Z · LW(p) · GW(p)

Sure! Another way of saying that: If we discovered pink unicorns on another planet they would be as real as imaginary numbers.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-09T22:57:04.311Z · LW(p) · GW(p)

A lot of lifting is being done by "scientific" here. It's uncontroversial that scientific theories have to be about the real world in some sense, but it doesn't follow from that that every term mentioned in them successfully refers to something real.

comment by Juno_Watt · 2013-05-01T21:42:41.681Z · LW(p) · GW(p)

But if "plexists" means something like "I have an idea of it in my head", then there is no substance to the claim that 3 plexists..3 is then no more real than a unicorn.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-05-01T22:40:52.961Z · LW(p) · GW(p)

The number 3 has well-defined properties, such that I can be pretty sure that if I talk about 3 and you talk about 3, we're talking about the same sort of thing. Sources on unicorns vary rather more broadly in the properties ascribed to them.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-05-01T22:54:16.858Z · LW(p) · GW(p)

I don't see what that has to do with existence. We could cook up a well-defined fubarobfusco-Juno unicorn.

comment by endoself · 2013-05-02T03:03:30.474Z · LW(p) · GW(p)

In other words, I would be comfortable saying that my office chair and the number 3 both plexist (Platonic-exist), whereas my office chair mexists (materially exists) whereas 3 does not.

I agree that this is useful, but it is essential to recognize that these words are just wrapping up our confusion, and that there are other questions that are still left unanswered when we have answered yours. It can sometimes help to determine which things plexist and which mexist, but we still don't really know what we mean when we say these, and having words for them can sometimes cause us to forget that. (I suppose I should refer to phlogiston here.) I think that Tegmark-platonism is probably a step towards resolving that confusion, but I doubt that any current metaphysical theory has completed the job; I certainly don't know of any that doesn't leave me confused.

Replies from: Jack
comment by Jack · 2013-05-04T21:14:51.036Z · LW(p) · GW(p)

We can wonder about the nature of concrete objects and the nature of abstract objects without quarreling about whether or not one exists.

Replies from: endoself
comment by endoself · 2013-05-05T17:44:54.603Z · LW(p) · GW(p)

I don't think we really can. The categories of concrete and abstract objects are supposed to carve reality at its joints: I see a chair, I prove a theorem. You can't really do this sort of analysis without reference to the chairs and the theorems, and if you do make those references, you must have already settled the question of whether a chair is concrete, and a fortiori whether concrete objects exist. The alternative, studying concepts that were originally intended to carve reality at its joints without intending to do so yourself, has historically been unproductive, except to some extent in math.

Replies from: Jack
comment by Jack · 2013-05-05T19:15:15.137Z · LW(p) · GW(p)

Right, so accept that both abstract and concrete objects exist. While you're not doing science, feel free to think about what abstraction is, what concrete means, and so on.

Replies from: endoself
comment by endoself · 2013-05-05T19:25:21.612Z · LW(p) · GW(p)

I don't think I've been clear. I'm saying that the categories of abstract and concrete objects are themselves generated by experience and are intended to reflect natural categories, and that it's not useful to think about what abstraction is without thinking about particular abstract objects and what makes us consider them abstract.

comment by Rob Bensinger (RobbBB) · 2013-05-01T17:29:24.591Z · LW(p) · GW(p)

Wikipedia's fine, but I'd rely more on SEP for quick stuff like this. The question of what makes something 'mathematical' is a difficult one, but it's not important for evaluating abstract-object realism. What makes something abstract is just that it's causally inert and non-spatiotemporal. Tegmark's MUH asserts things like that. Sparser mathematical platonisms also assert things like that. For present purposes, their salient difference is how they motivate realism about abstract objects, not how they conceive of the nature of our own world.

Replies from: endoself, None
comment by endoself · 2013-05-01T18:56:51.944Z · LW(p) · GW(p)

If I understand this correctly, I disagree. Modern philosophical platonism means different things by 'abstract' than Tegmark's platonism. In philosophical platonism, I accept your definition that something is abstract if it is causally inert and non-spatiotemporal. For Tegmark, this doesn't really make sense though, since the universe is causal in the same sense that a mathematical model of a dynamical system is causal, and it is spatiotemporal in the same sense that the mathematical concept of Minkowski spacetime is spatiotemporal, since the universe is just (approximately) a dynamical system on (approximately) Minkowski spacetime. The usual definition of an abstract object implies that physical, spatiotemporal objects are not abstract, which contradicts the MUH. I don't think we really have a precise definition of abstract object that makes sense in Tegmark's platonism, since something like 'mathematical structure' is obviously imprecise.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T03:16:28.456Z · LW(p) · GW(p)

For Tegmark, this doesn't really make sense though, since the universe is causal in the same sense that a mathematical model of a dynamical system is causal, and it is spatiotemporal in the same sense that the mathematical concept of Minkowski spacetime is spatiotemporal

I don't think that means that abstract objects in the ordinary sense don't make sense. It just means that he counts a lot of things as concrete that most people might think of as abstract. We don't need a definition of 'mathematical structure' for present purposes, just mathematically precise definitions of 'causal' and 'spatiotemporal'.

comment by [deleted] · 2013-05-02T03:07:45.520Z · LW(p) · GW(p)

The abstract/concrete distinction is actually a separate ontic axis from the mathematical/physical one. You can have abstract (platonic) physical objects, and concrete mathematical objects.

Example of abstract physical objects: Fields

Example of concrete mathematical objects: Software

My definitions:

Abstract: universal, timeless, and acausal (always and everywhere true, outside time and space, and not causally connected to concrete things).
Concrete: can be located in space and time, is causal, has moving parts.

Mathematical: concerned with categories, logics and models
Physical: concerned with space, time, and matter

My take on modern Platonism is that abstract objects are considered the only real (fundamental) objects. Abstract objects can't interact with concrete objects, because concrete objects don't actually exist! Rather, concrete things should be thought of as particular parts (cross-sections, aspects) of abstract things. Abstract objects encompass concrete objects. But the so-called concrete objects are really just categories in our own minds (a feature of the way we have chosen to 'carve reality at the joints').

Replies from: Jack
comment by Jack · 2013-05-04T21:19:58.513Z · LW(p) · GW(p)

My take on modern Platonism is that abstract objects are considered the only real (fundamental) objects. Abstract objects can’t interact with concrete objects, because concrete objects don’t actually exist!

This isn't modern Platonism.

Example of concrete mathematical objects: Software

A program is an abstract object. Particular copies of a program stored on your hard drive are concrete.

Replies from: None
comment by [deleted] · 2013-05-05T05:20:51.988Z · LW(p) · GW(p)

This isn't modern Platonism.

OK, then it's Geddesian Platonism ;) The easiest solution is to do away with concrete dynamic objects as anything fundamental and just regard reality as a timeless Platonia. I thought that's more or less what Julian Barbour suggests.

http://en.wikipedia.org/wiki/Platonia_(philosophy)

A program is an abstract object. Particular copies of a program stored on your hard drive are concrete.

The actual timeless (abstract) math objects are the mathematical relations making up the algorithm in question. But the particular model or representation of a program stored on a computer can be regarded as a concrete math object. And an instantiated (running) program can be viewed as a concrete math object also (a dynamical system with input, processing, and output).

These analogies are exact:

Space is to physics as categories are to math

Time is to physics as dynamical systems (running programs) are to math

Matter is to physics as data models are to math

comment by buybuydandavis · 2013-05-02T08:11:23.917Z · LW(p) · GW(p)

Similarly, if you believe a question has a very simple answer that does not need to be fleshed out you are unlikely to dedicate your life to answering it.

And you are unlikely to be able to make discussing the simple solution with others into a viable career in academic publishing.

comment by Emile · 2013-05-01T17:01:16.920Z · LW(p) · GW(p)

Nice summary; a minor nitpick:

Anti-Naturalists tend to work in philosophy of religion (.3) or Greek philosophy (.11). They avoid philosophy of mind (-.17) and cognitive science (-.18) like the plague. They hate Hume (-.14), Lewis (-.13), Quine (-.12), analytic philosophy (-.14), and being from Australasia (-.11).

"avoid like the plague" and "hate" are pretty strong terms to use for such low correlations...

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-01T17:06:24.608Z · LW(p) · GW(p)

Yes. During that part of my post I was being hilarious.

Replies from: atorm, jkaufman, Randy_M, Emile
comment by atorm · 2013-05-02T00:46:14.882Z · LW(p) · GW(p)

I'm afraid that during this comment you are being hilarious, but during that part of your post you were being unintentionally misleading.

comment by jefftk (jkaufman) · 2013-05-01T18:46:45.652Z · LW(p) · GW(p)

Until I went back and compared the numbers, I took your description to mean that their dislikes here were much stronger than the ones for other clusters.

comment by Randy_M · 2013-05-01T22:02:46.759Z · LW(p) · GW(p)

And here I just thought you had really strong priors for perfectly equal distribution.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T02:06:39.615Z · LW(p) · GW(p)

I thought the tongue-in-cheekery was obvious given 'and being from Australasia'.

comment by Emile · 2013-05-02T08:55:27.419Z · LW(p) · GW(p)

It totally flew over my head then (and I'm probably not the only one). I guess I'm too used to popular science journalism that takes a study with a small correlation or a small effect or a small sample size, and rounds it to the nearest cliche to put a triumphant headline like "SMOKING CAUSES HOMOSEXUALITY", whereas the original study was that in a group of 20 rats, filling their cage with smoke made male-female rat interaction frequency decrease by 7%.

comment by [deleted] · 2013-05-01T15:05:39.327Z · LW(p) · GW(p)

but it's genuinely troubling to see non-expertise emerge as a predictor of getting any important question in an academic field right.

Perhaps it is a self-selection effect. Here is a relevant link.

If the example mentioned in the link can be extrapolated, perhaps non-experts in [insert any relevant philosophical subfield] avoid that particular subfield precisely because they think it is a fraud.

Replies from: magfrump
comment by magfrump · 2013-05-01T17:11:47.976Z · LW(p) · GW(p)

perhaps non-experts in [philosophy as a whole] avoid that particular subfield precisely because they think it is a fraud.

This was my default expectation beforehand, and still is. People form their opinions first and then talk about justifying them. Making it more complicated than this doesn't help explain the data.

comment by Douglas_Reay · 2013-05-03T08:31:26.034Z · LW(p) · GW(p)
  1. Anti-Naturalists
  2. Objectivists
  3. Rationalists
  4. Anti-Realists
  5. Externalists
  6. Star Trek Haters
  7. Logical Conventionalists

Finally, an objective way of deciding the division into character classes for the roleplaying game Dungeons & Discourse.

comment by Protagoras · 2013-05-01T16:04:40.938Z · LW(p) · GW(p)

I grant that anti-naturalism is a cluster of crazy, but several of the other views you seem to categorize as obviously wrong seem either not wrong or at least not obviously so to me. One problem is that many of the terms are much less clear in their meanings than they seem, and what a view means and whether it's plausible or crazy depends heavily on how that meaning is clarified ("objective," for example, is an extremely troublesome word in that regard).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-01T16:26:58.801Z · LW(p) · GW(p)

I'd say the Anti-Naturalism and Anti-Realism clusters are obviously wrong. Trekophobia and Logical Conventionalism are less obvious, though they clearly go against a lot of the basic views and tendencies on LW. Objectivism and Rationalism are more debatable, and Externalism seems the most LWy on a cursory look. But even Externalism only gets a couple of things consistently right, at best. (And in a fairly arbitrary manner.)

Perhaps I under-emphasized a really crucial point: All of these clusters should be criticized not just for the views you see, but for the ones you don't see. Why don't Externalists knock physicalism out of the ballpark? Why aren't Rationalists winning at Newcomb's Problem 80, 90, 100% of the time? Defensibility is an indefensibly low standard; expecting mediocrity is expecting too little even of high school philosophy students, to say nothing of those who have devoted 30, 40, 50 years to grasping these topics, with all the resources of human civilization at their disposal.

Too slow.

Replies from: Protagoras, JonathanLivengood, shminux
comment by Protagoras · 2013-05-01T19:52:18.680Z · LW(p) · GW(p)

Anti-realism is one of the examples I was thinking of. The survey found more anti-realists in philosophy of science than outside it, probably because those in philosophy of science were more likely to be thinking of instrumentalism, operationalism, or various forms of positivism vs. realism while those outside philosophy of science were more likely to be thinking of idealism vs. realism. Admittedly idealism is another vague, murky concept, but I suppose the most plausible interpretations would include some of the mistakes of the anti-naturalists. But since most anti-realists aren't idealists (especially not those in the philosophy of science), the problems with idealism aren't relevant.

Ernest Nagel[1] discussed the instrumentalism vs. realism debate in his very influential The Structure of Science, and I personally agree with both his conclusion that the issue is far less substantial than it may appear, and his comment that "many scientists as well as philosophers have indeed often used the term 'real' in an honorific way to express a value judgment and attribute a 'superior' status to the things asserted to be real. There is perhaps an aura of such honorific connotations whenever the word is employed, despite explicit avowals to the contrary and certainly to the detriment of clarity. For this reason it would be desirable to ban the use of the word altogether" (151). Nagel's views in the philosophy of science were close to those of the positivists, and of course many of the positivists were scientists by training (and it remains true that philosophers of science often have strong backgrounds in science).

It seems to me that some of the debate over reductionism is worsened by realist biases. While many anti-reductionists are motivated by a more general anti-naturalism, there have been many prominent advocates of "non-reductive physicalism" of one kind or another (people like Fodor and Putnam and their followers) who have a different motivation. They seem to be very concerned that reductionism somehow claims that psychological laws and phenomena are somehow not "real" because on reductionism only the physical base is really "real." It seems to me that those mistaken views are an example of the problem Nagel talks about.

[1] Not to be confused with Thomas Nagel.

comment by JonathanLivengood · 2013-05-02T05:59:16.200Z · LW(p) · GW(p)

I'm guessing that you don't really know what anti-realism in philosophy of science looks like. I suspect that most of the non-specialist philosophers who responded also don't really know what the issues are, so this is hardly a knock against you. Scientific realism sounds like it should be right. But the issue is more complicated, I think.

Scientific realists commit to at least the following two theses:

(1) Semantic Realism. Read scientific theories literally. If one theory says that space-time is curved and there are no forces, while the other says that space-time is flat and there are primitive forces (so the two have exactly the same observational consequences in all cases), then the realist says that at most one of the two is true.

(2) Epistemic Realism. In every case, observation and experimentation can provide us with good epistemic (as opposed to pragmatic) reasons to believe that what some single theory, read literally, says about the world.

Denying either of these leads to some form of anti-realism, broadly construed. Positivists, instrumentalists, and pragmatists deny (1), as Einstein seems to have done in at least two cases. Constructive empiricists deny (2) in order to keep a commitment to (1) while avoiding inflationary metaphysics. Structural realists deny one or both of these commitments, meaning that they are anti-realists in the sense of the question at stake.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T06:14:16.716Z · LW(p) · GW(p)

Jonathan, Anti-Realism here isn't restricted to the view in philosophy of science. It's also associated with a rejection of the correspondence and deflationary theories of truth and of external-world realism. I'm currently somewhere in between a scientific realist and a structural realist, and I'm fine with classifying the latter as an anti-realism, though not necessarily in the sense of the 'Anti-Realism' label Chalmers coined above for one of the factors.

Your characterization of scientific realism, though, is way too strong. "In every case" should read "In most cases" or "In many cases", for Epistemic Realism. That's already a difficult enough view to defend, without loading it with untenable absolutism.

My main concern with Anti-Realists isn't that they're often skeptical about whether bosons exist; it's that they're often skeptical about whether tables exist, and/or about whether they're mind-independent, and/or about whether our statements about them are true in virtue of how the world outside ourselves is.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2013-05-02T06:50:16.000Z · LW(p) · GW(p)

Ah, I see that I misread. Somehow I had it in my head that you were talking about the question on the philpapers survey specifically about scientific realism. Probably because I've been teaching the realism debate in my philosophy of science course the last couple of weeks.

I am, however, going to disagree that I've given a too strong characterization of scientific realism. I did (stupidly and accidentally) drop the phrase "... is true or approximately true" from the end of the second commitment, but with that in place, the scientific realist really is committed to our being able to uniquely determine by evidence which of several literal rivals we ought to believe to be true or approximately true. Weakening to "most cases" or "many cases" deflates scientific realism significantly. Even constructive empiricists are going to believe that many scientific theories are literally true, since many scientific theories do not say anything about unobservable entities.

Also, without the "in every case," it is really hard to make sense of the concern realists have about under-determination. If realists thought that sometimes they wouldn't have good reasons to believe some one theory to be true or approximately true, then they could reply to real-life under-determination arguments (as opposed to the toy examples sometimes offered) by saying, "Oh, this is an exceptional case."

Anyway, the kinds of anti-realist who oppose scientific realism almost never deny that tables exist. (Though maybe they should for reasons coming out of material object metaphysics.)

comment by Shmi (shminux) · 2013-05-01T18:08:29.240Z · LW(p) · GW(p)

I'd say the Anti-Naturalism and Anti-Realism clusters are obviously wrong.

I wonder what your definition of obviously wrong is. Is it instrumental, like two-boxing on Newcomb? Bayesian, like theism failing Occam's razor? Or something else? Or a combination?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-01T18:19:06.858Z · LW(p) · GW(p)

Generally it's Bayesian. If at this point in the history of civilization statements like 'there are chairs' don't get to count as obviously right, or 'physics be damned, I don't need no stinkin' causes for my volition!' as obviously wrong, then I confess I no longer find it obvious what 'obvious' is even supposed to mean.

I'm not saying Anti-Naturalists and Anti-Realists aren't extremely sophisticated, or in a number of cases well worth reading; sophistication is compatible with obvious wrongness.

comment by JoshuaZ · 2013-05-01T19:06:35.195Z · LW(p) · GW(p)

Regarding specialists sometimes being wrong, and in some categories being more likely to be wrong, there are two issues here. First, it might be that this should be evidence to cause us to wonder whether they are in fact wrong and the LW conventional wisdom is correct (or at least whether it is as obviously correct as we think).

Second, many of these issues become substantially less interesting as fields if one accepts the LW view. This is most obvious in the case of philosophy of religion, where 79.13% of specialists are theists but only 13.22% of non-specialists are. Simply put, philosophy of religion is much harder to justify as a subject with much content if theism is wrong.

I'm a little confused why you describe Humean as wrong. Although LW doesn't buy into some of Hume's ideas (e.g. the inherent unreliability of induction), a lot of what is discussed here are ideas somewhat compatible or at least not in conflict with Hume. For example, the fact that an AI might not share human values at all is pretty Humean in its approach to moral questions.

Replies from: Protagoras, RobbBB
comment by Protagoras · 2013-05-01T20:14:30.261Z · LW(p) · GW(p)

The only question which used the word "Humean" concerned the laws of nature, so none of Hume's other views are relevant, though I admit to not being sure why RobbBB is so certain that the Humean view of laws of nature is wrong. The Humean view is sometimes called the regularity view; the short version is that it is the view that what it is for a law of nature to be true just is for there to be a pattern of events in the universe conforming to the law. The non-Humean view insists that there is something more, some additional metaphysical component, which must be present in order for it to be true that a regular pattern reveals a genuine law of nature.

One simple argument for the non-Humean view is that it seems that some patterns are coincidences, and we need some way to distinguish the coincidences from the genuine laws of nature. The standard Humean response to this is to say that there are ways for a Humean to identify something as a coincidence. A Humean identifies a coincidence by noting that the pattern fails to cohere with broader theory; laws of nature fit fairly neatly together into larger wholes up to the general theory of everything, while coincidences are patterns which in light of broader theory look more like chance. Further, the Humean will argue that when there aren't clues like broader theoretical concerns, saying that it's a law of nature if it's got the right metaphysical extra and not otherwise is useless, since there's no way to detect the metaphysical extra directly.

Hope that helps.

Replies from: Jack, RobbBB, JoshuaZ
comment by Jack · 2013-05-01T23:55:46.977Z · LW(p) · GW(p)

Just to make everything more confusing, it turns out that David Hume was not a Humean about laws of nature.

comment by Rob Bensinger (RobbBB) · 2013-05-02T05:49:34.995Z · LW(p) · GW(p)

This is a great summary! My confidence that Humeanism is false is not extremely high; but it's high enough for me to think philosophers of science are far too confident of Humeanism relative to the evidence. The main source of uncertainty for me here is that I'm really not clear on what it takes for something to be a 'law of nature'. But if the essential question here is whether extremely strong correlations in fundamental physics call for further explanation, then I side with the 'yeah, some explanation would be great' side. The main worry is that if these events are brute coincidences, we have no Bayesian reason to expect the coincidence to continue into the future. The core intuition underlying non-Humeanism is that some simpler and more unitary mechanism is far more likely to have given rise to such empirical consistency than is such consistency to be the end of the story.
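To put that worry in toy-model terms, here is a minimal sketch; the prior and the 'coincidence' likelihood are illustrative assumptions, not anything from the survey. A lawlike hypothesis that guarantees the pattern keeps gaining probability as the pattern persists, whereas a pure brute-coincidence view never licenses any expectation that the pattern will continue.

```python
# Toy comparison (illustrative numbers only): after n observations that all
# fit a pattern, how confident should we be that it continues?
#   H_law:  some law/mechanism guarantees the pattern -> P(fit) = 1
#   H_coin: each fit is a brute coincidence           -> P(fit) = 0.5, independently

def prob_pattern_continues(n, prior_law=0.01):
    prior_coin = 1 - prior_law
    like_law, like_coin = 1.0, 0.5 ** n  # likelihood of n consecutive fits
    post_law = (prior_law * like_law) / (prior_law * like_law + prior_coin * like_coin)
    return post_law * 1.0 + (1 - post_law) * 0.5  # predictive probability of another fit

for n in (0, 10, 50):
    print(n, round(prob_pattern_continues(n), 4))
# With any nonzero prior on H_law, confidence climbs toward 1 as fits accumulate.
# A pure brute-coincidence view (prior_law = 0) stays at 0.5 forever: observed
# regularity gives it no reason at all to expect the regularity to continue.
```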

My concern with philosophers of science confidently endorsing Humeanism is that I expect this to be a case of 'I'm supposed to think like a Scientist, and Scientists are skeptical of things, especially weird things we can't directly observe'. Even if Humeanism itself is true, I would be very surprised if most Humeans believe in it for the right reasons.

In some ways this argument parallels the argument Eliezer has with himself over whether our universe is more like first-order logic or more like second-order; first-order logic is similar in some ways to Humeanism, because it doesn't think we need to subsume the instances within a larger generalization in order to fully explain and predict them. (Or, more precisely, it thinks such generalizations are purely anthropocentric, human constructs for practical ends; they give us little if any reason to update in favor of any metaphysical posits.)

comment by JoshuaZ · 2013-05-01T20:17:37.136Z · LW(p) · GW(p)

Hmm, I may need to reread the paper, but my understanding was that they were also using Hume in the context of the question which asked which philosophers or philosophical schools people most identified with.

Replies from: Protagoras
comment by Protagoras · 2013-05-01T22:17:40.741Z · LW(p) · GW(p)

True, but when RobbBB comments on experts being wrong, he specifically mentions the tendency of philosophers of science to be Humeans, so I'm pretty sure he means to say they are mistaken in being Humean in this specific sense, not that it's just generally a mistake to be influenced by Hume (I don't get the impression that he is that down on Hume generally).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T05:40:06.102Z · LW(p) · GW(p)

Yes, thanks for pointing this out. I'm not criticizing anyone for being a follower of Hume in general. I am decidedly not down on Hume, usually.

comment by Rob Bensinger (RobbBB) · 2013-05-02T05:35:55.310Z · LW(p) · GW(p)

it might be that this should be evidence to cause us to wonder whether they are in fact wrong and the LW conventional wisdom is correct (or at least whether it is as obviously correct as we think).

It's evidence for both. If philosophers disagree with us, we should be less confident that we're right, and also less confident that they're right. The examples I provided were ones where I had high enough priors on the issues to not be dragged into agnosticism by specialists' disagreeing with me. (But, of course, I could still change my mind if we talked about it more. I don't mean to encourage lock-step adherence to some vague idea of LW Consensus as a shortcut for avoiding actually evaluating these philosophical doctrines.)

Regarding Humeanism, I was voicing my own view (and Eliezer's), not speaking for LessWrong as a whole. I'm more worried about philosophers being wrong than about their being un-LWy as such. Note that Humeanism here only refers to skepticism or reductionism about laws of nature; it doesn't refer to any of Hume's other views. (In fact, Hume himself was not a Humean in the sense used in the PhilPapers Survey. 'Humeanism' is like 'Platonism' in a modern context; a view only vaguely and indirectly inspired by the person for whom it's named.)

comment by EricHerboso · 2013-05-01T16:29:31.102Z · LW(p) · GW(p)

After comparing my own answers to the clusters Bourget & Chalmers found, I don't appear to fit well in any one of the seven categories.

However, I did find the correlations between philosophical views outlined in section 3.3 of the paper to be fairly predictive of my own views. Nearly everything in Table 4 that I agree with on the left side corresponds to an accurate prediction of what I'd think about the issue on the right side.

Interestingly, not all of these correlations seem like they have an underlying reason why they should logically go together. Does this mean that I've fallen prey to agreeing with the greens over the blues for something other than intellectual reasons?

comment by Decius · 2013-05-02T05:12:02.402Z · LW(p) · GW(p)

Why were the names 'Objectivists' and 'Rationalists' chosen as cluster names, when those are names of rather specific systems?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T05:27:25.733Z · LW(p) · GW(p)

Chalmers and Bourget chose the first five names. 'Objectivism' and 'Rationalism' are already used by a large number of very different world-views, which I imagine made them less worried about adding two more to the pile. Also, a higher value in Rationalism does tend to make you more likely to self-identify as a rationalist, in the 'not an empiricist' sense.

Replies from: Decius
comment by Decius · 2013-05-02T05:32:36.590Z · LW(p) · GW(p)

Thanks; the names were a factor which had me turned around a bit before I saw the 'dimension' clarification, because I thought that those names were referring to one of the groups which claims that name as their referent.

comment by falenas108 · 2013-05-01T15:50:33.213Z · LW(p) · GW(p)

I'd be interested in a breakdown of what percent of philosophers fall into each category. A quick scan of the linked paper doesn't show that they included this, though I could have missed it.

Replies from: badger, diegocaleiro
comment by badger · 2013-05-01T23:30:31.023Z · LW(p) · GW(p)

See Douglas Knight's comment and my reply. These are more like dimensions, not categories. It makes sense to talk about the distribution on each dimension (i.e. most people tend to be high on the anti-naturalism scale, with a long tail on the low end), but everyone sits somewhere on the dimension.
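To illustrate the 'dimension' point, here's a minimal sketch of how a factor score works; the loadings and answers below are made-up numbers for illustration, not Bourget & Chalmers' actual values.

```python
import numpy as np

# A factor score is a weighted sum of a respondent's standardized answers,
# using the factor's loadings as weights. Illustrative numbers only.
loadings = np.array([0.7, 0.6, 0.5, -0.6])  # loadings on four survey questions
answers = np.array([1.0, -1.0, 0.5, -1.0])  # one respondent's standardized answers

score = answers @ loadings  # a continuous position, not a category membership
print(score)                # 0.7 - 0.6 + 0.25 + 0.6 = 0.95
```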

comment by diegocaleiro · 2013-05-01T23:19:54.519Z · LW(p) · GW(p)

Me too.

comment by buybuydandavis · 2013-05-02T07:57:10.237Z · LW(p) · GW(p)

Time to trot out Jaynes:

He quoted a colleague:

Philosophers are free to do whatever they please, because they don’t have to do anything right.

I just realized he was wrong, though. He's right that they don't have to do anything right, but they're only free to do what gets them tenure. It needs to be an area that allows for a lot of publishing. If you're the type of philosopher who dissolves problems, you're likely out of luck. When a problem is dissolved, there's nothing left to write about. The more your theories promote and enable an intellectual circle jerk, the better your career prospects. Not only will you have a lot to write about, you're enabling others as well, and so will have allies in asserting the value of the topic.

Replies from: RobbBB, PhilGoetz
comment by Rob Bensinger (RobbBB) · 2013-05-02T08:25:36.952Z · LW(p) · GW(p)

I think you're overstating your case. Successfully (persuasively, accurately, etc.) dissolving questions is a fantastic way to get tenure in philosophy. The problem is that it's a lot of work. (And justifying putting so much work into something is really hard to do when you're in a feedback loop with other philosophers' arguments more so than with independent empirical data.) Dissolving questions like this is not an afternoon's affair; in many cases it takes months, years, decades to complete this project, even when it's intuitively obvious from the get-go that some dissolution must be possible. (Remember, dissolving questions requires that one be able to understand and explain where others went wrong.)

It's also a mistake to think that rejecting the terms of debates is a novel or revolutionary notion somehow foreign to philosophy. Almost the opposite is the truth; in many cases philosophers have failed to make progress precisely because they've been too quick to flatly accept or flatly dismiss questions, rather than trying to take questions apart and see how they work without rushing to assert that they Must Be Meaningful or Must Be Meaningless ab initio. The history of the 20th century is in many ways a war between academics trying to dissolve one another's questions, and I think a lot of the recent mistakes we see in philosophy (e.g., the overreliance on Quine over Bayes, the insistence on treating philosophy and science as separate disciplines. . .) are in fact a byproduct of how crude, lazy, and simply unconvincing the historical positivists' treatment of these issues was, often relying more on whether views sound Sciencey than on whether they're well-defined or true.

Replies from: buybuydandavis
comment by buybuydandavis · 2013-05-02T08:37:05.227Z · LW(p) · GW(p)

dissolving questions is a fantastic way to get tenure in philosophy.

Can you give examples?

Replies from: None, diegocaleiro
comment by [deleted] · 2013-05-02T16:10:34.829Z · LW(p) · GW(p)

Wittgenstein? The arch-dissolver.

Would you count Gettier?

Replies from: diegocaleiro
comment by diegocaleiro · 2013-05-04T01:45:17.142Z · LW(p) · GW(p)

Derrida was a famous dissolver.

comment by diegocaleiro · 2013-05-04T01:43:45.196Z · LW(p) · GW(p)

Tenure usually happens early on, so it is hard to detect if that person planned on dissolving questions beforehand.

If you think about the highest h-index philosophers, David Lewis and Daniel Dennett (67 and 66 respectively), you'll see that spending one's lifetime dissolving questions gets you loads of impact. Parfit was awesome from the beginning (earning a special scholarship granted to at most four outstanding kids per year) and is world famous for dissolving personal identity and mathematizing some aspects of ethics. His h-index is lower because his books are unbelievably long.

Even if you don't publish a lot of papers (contra whoever said the opposite in another comment), you still get impact by dissolving. Pinker's is 66; Nick's is 28.

Notable exceptions would be Chalmers, Searle, and Putnam, I suppose.

Someone outside philosophy of mind may speak for other areas. From what I recall, in philosophy of math you really only go forward by not dissolving stuff.

comment by PhilGoetz · 2013-05-03T01:03:50.158Z · LW(p) · GW(p)

When John Searle and Jerry Fodor spoke at U. of Buffalo, they each gave me the impression that they were trying to be talked about. Fodor began with a reasonable position on the existence of faculties of the mind, and turned it into a caricature of itself that he didn't really seem to believe in, one that seemed intended to be more outrageous than what Chomsky was saying at the time about universal grammar. Searle leered gleefully whenever he said something particularly provocative or deceptive, or dodged a question with a witty remark. Judging from what made him smile, he was interested in philosophy only as a competition.

Replies from: None, buybuydandavis
comment by [deleted] · 2013-05-03T01:10:14.766Z · LW(p) · GW(p)

I've had bad experiences of this kind with a bunch of famous philosophers, though the problem doesn't seem to extend to their writing. I think being famous, especially when you're in the presence of the people with whom you are famous, is really, really hard on your rationality.

comment by buybuydandavis · 2013-05-05T16:54:08.416Z · LW(p) · GW(p)

Judging from what made him smile, he was interested in philosophy only as a competition.

As a display of virtuosity. An instrumental value (verbal dexterity) becomes an end in itself. Technique for technique's sake. Cleverness for cleverness' sake. Not necessarily competition against anyone in particular, but evaluation versus a standard and the general population distribution of those evaluations.

comment by DanielLC · 2013-05-01T23:22:18.281Z · LW(p) · GW(p)

I would expect philosophers to believe what they're paid to believe. Most of them aren't likely to shift their opinions for grant money, but the ones who have the right opinions are more likely to become philosophers, and will be better known. They will also make their mark on philosophy, and other philosophers will follow that.

What are philosophers paid to believe? What exactly is their job?

Replies from: RobbBB, AspiringRationalist
comment by Rob Bensinger (RobbBB) · 2013-05-02T01:56:19.679Z · LW(p) · GW(p)

They're paid in large part to come up with minor variants on, and argue persuasively for and against, existing beliefs and practices that other philosophers strongly disagree about. The kinds of arguments, and to some extent the kinds of views, are determined to a significant extent by conventions about what it means to be a 'philosopher'. E.g., one's immediate goal is to try to persuade and earn the respect of dialectical opponents and near-allies, not to reliably answer questions like 'If my life were on the line, and I really had to come up with the right answer and not just an Interesting and Intuitively Appealing reef of arguments, how confident would I actually be that teleportation is death, is determinately death, and that that's the end of the story?'

If I could change just two small things about philosophers, it would probably be to (1) make them stop thinking of themselves (and being thought of by others) as a cohesive lump called 'Philosophy', and (2) make them think of their questions as serious, life-or-death disputes, not as highly refined intellectual recreation or collaborative play.

Replies from: None
comment by [deleted] · 2013-05-02T13:21:20.987Z · LW(p) · GW(p)

What makes this question

'If my life were on the line, and I really had to come up with the right answer and not just an Interesting and Intuitively Appealing reef of arguments, how confident would I actually be that teleportation is death, is determinately death, and that that's the end of the story?'

different from this question?

Is teleportation death?

Also, what effect do you suppose identifying as philosophers has on philosophers, or adhering to conventions about what it means to be a philosopher? Do you mean that this produces methodological problems?

comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-02T03:23:12.152Z · LW(p) · GW(p)

Relatedly, how are they funded? I'm assuming the large majority of professional philosophers are academics, but that still leaves it open. Is it primarily from tuition / university general funds, from endowed faculty positions, from government grants, from private grants or something else entirely?

comment by Luke_A_Somers · 2013-05-01T16:18:02.092Z · LW(p) · GW(p)

Is there a cluster that has more than 1 position in common with LW norms? None of these fit more than a little.

Replies from: Jack, shminux
comment by Jack · 2013-05-01T16:27:30.863Z · LW(p) · GW(p)

We should give the same survey to LW.

Replies from: endoself, Kaj_Sotala
comment by endoself · 2013-05-01T17:28:21.816Z · LW(p) · GW(p)

The problem with that is that people here aren't familiar with many of the concepts. For example, I like Hume's work on the philosophy of science, but I'm not a philosopher and I have no idea what it means for a position to be Humean or non-Humean. I think more people would answer without really understanding what they are answering than would take the time to figure out the questions.

Replies from: None, Jack, RobbBB
comment by [deleted] · 2013-05-01T17:52:29.653Z · LW(p) · GW(p)

I would argue that this was a problem for the professional philosophers who took this survey as well. A moral philosopher may have a passing knowledge of the philosophy of time, but not enough to defend the particular position she reports in the survey.

comment by Jack · 2013-05-01T17:30:44.060Z · LW(p) · GW(p)

Yes. It would be important to at least have respondents provide some self-assessment of how well they understand each question.

comment by Rob Bensinger (RobbBB) · 2013-05-01T17:52:09.616Z · LW(p) · GW(p)

I agree. I think it would make more sense to just have discussions about whichever of the topics interested people, rather than having a fixed poll. If there were such a poll, it should be one designed to encourage 'other' views and frequent revisions of one's view.

I might make something like this at some point, if only as a pedagogical tool or conversation-starter. At the moment, I have good introductions and links explaining all the PhilPapers questions up here.

comment by Kaj_Sotala · 2013-05-01T18:39:05.035Z · LW(p) · GW(p)

http://lesswrong.com/lw/56q/how_would_you_respond_to_the_philpapers_what_are/

Replies from: AspiringRationalist, Jack
comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-02T03:17:21.273Z · LW(p) · GW(p)

Glancing at the survey, it looks like it contains a large amount of jargon that, while very likely accessible to professional philosophers, would leave most people here (myself included) not knowing what most of the questions are asking, so I don't think it would be practical to run this survey as-is on LW.

comment by Jack · 2013-05-01T18:47:31.805Z · LW(p) · GW(p)

Right, but I meant in an accessible way that would let us analyze the data-- e.g. a google survey.

comment by Shmi (shminux) · 2013-05-01T18:11:45.936Z · LW(p) · GW(p)

They think the content of our mental lives in general (.66) and perception in particular (.55), and the justification for our beliefs (.64), all depend significantly on the world outside our heads. They also think that you can fully understand a moral imperative without being at all motivated to obey it (.5).

All these seem to be vaguely LW-like.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-01T18:25:31.452Z · LW(p) · GW(p)

I guess it depends what you mean by 'depending significantly on the world outside our heads'. If they mean it in the trivial sense, then the fractions in all schools should be so close to 1 that you shouldn't be able to get significant differences in correlation out (a covariance, I suppose). Since there was significant variation, I took them to mean something else. If so, that would be likely to mess us up first.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T01:44:45.040Z · LW(p) · GW(p)

By 'depend' I don't primarily mean causal dependence. One heuristic: If you're an internalist, you're likely to think that a brain in a vat could have the same mental states as you. If you're an externalist, you're likely to think that a brain in a vat couldn't have the same mental states as you even if its physical state and introspective semblance were exactly alike, because the brain in a vat's environment and history constitutively (and not just causally) alter which mental states it counts as having.

Perhaps the clearest example of this trend is disjunctivism, which is in the Externalism cluster. Disjunctivists think that a hallucination as of an apple, and a veridical perception of an apple, have nothing really in common; they may introspectively seem the same, and they may have a lot of neurological details in common, but any class that groups those two things (and only those two things) together will be a fairly arbitrary, gerrymandered collection. The representational, causal, historical, etc. links between my perception and the external world play a fundamental role in individuating that mental state, and you can't abstract away from those contextual facts and preserve a sensible picture of minds/brains.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-02T13:28:28.926Z · LW(p) · GW(p)

Thanks.

So yeah, Externalism isn't particularly close to an LW norm.

comment by Shmi (shminux) · 2013-05-01T16:03:28.431Z · LW(p) · GW(p)

Looks like Phenomenal Externalism is the closest to the prevailing views here. (My personal views have some elements of Anti-Realism mixed in.)

Also, I find it quite unscientific that different groups of philosophers cannot even agree on the framework in which the Big Questions can be answered.

comment by Decius · 2013-05-02T05:21:17.887Z · LW(p) · GW(p)

Philosophers working in decision theory are drastically worse at Newcomb than are other philosophers, two-boxing 70.38% of the time where non-specialists two-box 59.07% of the time (normalized after getting rid of 'Other' answers). Philosophers of religion are the most likely to get questions about religion wrong — 79.13% are theists (compared to 13.22% of non-specialists), and they tend strongly toward the Anti-Naturalism dimension. Non-aestheticians think aesthetic value is objective 53.64% of the time; aestheticians think it's objective 73.88% of the time. Working in epistemology tends to make you an internalist, philosophy of science tends to make you a Humean, metaphysics a Platonist, ethics a deontologist. This isn't always the case; but it's genuinely troubling to see non-expertise emerge as a predictor of getting any important question in an academic field right.

Decision theory supposes free will (equivalently, unpredictability of a future agent's decisions); Newcomb's problem supposes the opposite: predictability of an agent's decisions even when the prediction is a factor in the decision. It makes sense that decision theorists' answers would differ on Newcomb's problem. Theism is untestable and therefore not even wrong, rather than being objectively wrong. Likewise with objective aesthetics.

None of the specific things you say that experts get 'wrong' is objective or testable in a meaningful manner. Wouldn't it be better to say that you generally disagree with experts?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-05-02T06:04:03.275Z · LW(p) · GW(p)

If an agent only has cause to reason decision-theoretically when it is operating under uncertainty, then that might show that Omega itself has no need for decision theory. But even then it would do nothing to show that we have no need for decision theory. Knowing that some other agent has access to knowledge that we lack about our future decisions can't erase the need for us to make a decision. This is basically the same reason decision theory works if we assume determinism; the fact that the universe is deterministic doesn't matter to me so long as I myself am ignorant of the determining factors.

Also, if decision theorists think Newcomb's Problem is incoherent on decision theory (i.e., it violates some basic assumption you need to do decision theory properly), then their response should be 'Other' or 'I can't answer that question' or 'That's not a question'. It should never be 'I take both boxes'. Taking both boxes is just admitting that you do think that decision theory outputs an answer to this question.
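For comparison, here's the evidential expected-value computation that makes one-boxing look so compelling to many here. A minimal sketch: the usual $1,000,000/$1,000 payoffs and a single predictor-accuracy parameter p are assumptions, not anything from the survey.

```python
# Newcomb's problem, evidential expected value (illustrative sketch).

def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing and filled the opaque box.
    return p * 1_000_000

def ev_two_box(p):
    # With probability p the predictor foresaw two-boxing and left the opaque box empty.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing wins whenever p > 0.5005. Causal decision theorists reject this
# use of p as conflating evidence with causal influence; that disagreement,
# not the arithmetic, is the crux.
```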

Most theisms are testable, and untestable statements can still be wrong (i.e., false, unreasonable to believe, etc.).

Replies from: Decius
comment by Decius · 2013-05-02T15:36:34.514Z · LW(p) · GW(p)

"Omega predicts that you take both boxes, but you are ignorant of the fact. What do you do, given that Omega predicted correctly?"

"Omega makes a prediction that you don't know. What do you do, given that Omega predicted correctly?"

I fail to see the difference between the decision theory used in these two scenarios.

And can you give an example of an untestable statement that could be true but is objectively false? What does it mean for a statement to be objectively unreasonable to believe?

Replies from: benelliott
comment by benelliott · 2013-05-02T22:23:42.182Z · LW(p) · GW(p)

The first is contradictory: you've just told me something, then told me I don't know it, which is obviously false.