Philosophy: A Diseased Discipline

post by lukeprog · 2011-03-28T19:31:58.441Z · LW · GW · Legacy · 448 comments

Contents

    Failed methods
    A diseased discipline
  Philosophy: the way forward

Part of the sequence: Rationality and Philosophy

Eliezer's anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that Eliezer and I strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers - perhaps most.

That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)

Failed methods

Large swaths of philosophy (e.g. continental and postmodern philosophy) often don't even try to be clear, rigorous, or scientifically respectable. This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to 'prove' things irrelevant to the actual scientific data or the equations used.

Analytic philosophy is clearer, more rigorous, and better with math and science, but only does a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and this reliance displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

A diseased discipline

What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don't need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians.

Like I said, a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:

  1. Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless. If it's useful, then it's science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.
  2. Most philosophers don't understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don't trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone's invisible magical friend.
  3. Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.
  4. Because they were trained in traditional philosophical ideas, arguments, and frames of mind, naturalists will anchor and adjust from traditional philosophy when they make progress, rather than scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science. Sometimes, philosophical work is useful to build from: Judea Pearl's triumphant work on causality built on earlier counterfactual accounts of causality from philosophy. Other times, it's best to ignore the past confusions. Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.
  5. Many naturalists aren't trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don't know how your tool works then you'll use it poorly. AI is useful because it keeps you honest: you can't write confused concepts or non-natural hypotheses in a programming language (see the toy sketch after this list).
  6. Mainstream philosophy publishing favors the established positions and arguments. You're more likely to get published if you can write about how intuitions are useless in solving Gettier problems (which is a confused set of non-problems anyway) than if you write about how to make a superintelligent machine preserve its utility function across millions of self-modifications.
  7. Even much of the useful work naturalistic philosophers do is not at the cutting-edge. Chalmers' update for I.J. Good's 'intelligence explosion' argument is the best one-stop summary available, but it doesn't get as far as the Hanson-Yudkowsky AI-Foom debate in 2008 did. Talbot (2009) and Bishop & Trout (2004) provide handy summaries of much of the heuristics and biases literature, just like Eliezer has so usefully done on Less Wrong, but of course this isn't cutting edge. You could always just read it in the primary literature by Kahneman and Tversky and others.
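
To make point 5 concrete, here is a toy sketch (the predicate and the utility function are invented for illustration, not anyone's actual proposal) of how a programming language refuses to host a confused concept:

    # A hypothetical sketch: a "non-natural" predicate cannot be coded without
    # being reduced to something operational.

    def is_intrinsically_valuable(thing):
        # Python happily accepts the signature, but the body cannot be written
        # without cashing out "intrinsic value" in observable, computable
        # terms; the vague concept has nowhere to hide.
        raise NotImplementedError("no operational definition given")

    def is_valuable_to(thing, utility):
        # A naturalized replacement is easy: value is the output of some
        # explicitly represented agent's utility function.
        return utility(thing) > 0

    # Runs only once we commit to a concrete utility function.
    print(is_valuable_to("warm meal", lambda t: 1 if t == "warm meal" else -1))

The toy code isn't deep; the point is that the act of implementation forces every concept to pay rent in operational terms.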

Of course, there is mainstream philosophy that is both good and cutting-edge: the work of Nick Bostrom and Daniel Dennett stands out. And of course there is a role for those who keep arguing for atheism and reductionism and so on. I was a fundamentalist Christian until I read some contemporary atheistic philosophy, so that kind of work definitely does some good.

But if you're looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first, or try to solve the problem by applying rationalist thinking: like this.

Swimming the murky waters of mainstream philosophy is perhaps a job best left for those who have already spent several years studying it - that is, people like me. I already know what things are called and where to look, and I have an efficient filter for skipping past the 95% of philosophy that isn't useful to me. And hopefully my rationalist training will protect me from picking up bad habits of thought.

Philosophy: the way forward

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable. How can we proceed?

First, we must remain vigilant with our rationality training. It is not easy to overcome millions of years of brain evolution, and as long as you are human there is no final victory. You will always wake up the next morning as Homo sapiens.

Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality. Ask a fellow rationalist who is knowledgeable about philosophy what the standard positions and arguments in philosophy are on your topic. If any of them seem really useful, grab those particular works and read them. But again: you're probably better off trying to solve the problem by thinking like a cognitive scientist or an AI programmer than by ingesting mainstream philosophy.

However, I must say that I wish so much of Eliezer's cutting-edge work weren't spread out across hundreds of Less Wrong blog posts and long SIAI articles written with an idiosyncratic style and vocabulary. I would rather these ideas were written in standard academic form, even if they transcended the standard game of mainstream philosophy.

But it's one thing to complain; another to offer solutions. So let me tell you what I think cutting-edge philosophy should be. As you might expect, my vision is to combine what's good in LW-style philosophy with what's good in mainstream philosophy, and toss out the rest:

  1. Write short articles. One or two major ideas or arguments per article, maximum. Try to keep each article under 20 pages. It's hard to follow a hundred-page argument.
  2. Open each article by explaining the context and goals of the article (even if you cover mostly the same ground in the opening of 5 other articles). What topic are you discussing? Which problem do you want to solve? What have other people said about the problem? What will you accomplish in the paper? Introduce key terms, cite standard sources and positions on the problem you'll be discussing, even if you disagree with them.
  3. If possible, use the standard terms in the field. If the standard terms are flawed, explain why they are flawed and then introduce your new terms in that context so everybody knows what you're talking about. This requires that you research your topic so you know what the standard terms and positions are. If you're talking about a problem in cognitive science, you'll need to read cognitive science literature. If you're talking about a problem in social science, you'll need to read social science literature. If you're talking about a problem in epistemology or morality, you'll need to read philosophy.
  4. Write as clearly and simply as possible. Organize the paper with lots of headings and subheadings. Put in lots of 'hand-holding' sentences to help your reader along: explain the point of the previous section, then explain why the next section is necessary, etc. Patiently guide your reader through every step of the argument, especially if it is long and complicated.
  5. Always cite the relevant literature. If you can't find much work relevant to your topic, you almost certainly haven't looked hard enough. Citing the relevant literature not only lends weight to your argument, but also enables the reader to track down and examine the ideas or claims you are discussing. Being lazy with your citations is a sure way to frustrate precisely those readers who care enough to read your paper closely.
  6. Think like a cognitive scientist and AI programmer. Watch out for biases. Avoid magical categories and language confusions and non-natural hypotheses. Look at your intuitions from the outside, as cognitive algorithms. Update your beliefs in response to evidence. [This one is central. This is LW-style philosophy.]
  7. Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).
  8. Don't dwell too long on what old dead guys said, nor on semantic debates. Dissolve semantic problems and move on.
  9. Conclude with a summary of your paper, and suggest directions for future research.
  10. Ask fellow rationalists to read drafts of your article, then re-write. Then rewrite again, adding more citations and hand-holding sentences.
  11. Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Meeting journal standards is not the most important reason to follow the suggestions above. Write short articles because they're easier to follow. Open with the context and goals of your article because that makes it easier to understand, and lets people decide right away whether your article fits their interests. Use standard terms so that people already familiar with the topic aren't annoyed at having to learn a whole new vocabulary just to read your paper. Cite the relevant positions and arguments so that people have a sense of the context of what you're doing, and can look up what other people have said on the topic. Write clearly and simply and with much organization so that your paper is not wearying to read. Write lots of hand-holding sentences because we always communicate less effectively than we thought we did. Cite the relevant literature as much as possible to assist your most careful readers in getting the information they want to know. Use your rationality training to remain sharp at all times. And so on.

That is what cutting-edge philosophy could look like, I think.

Next post: How You Make Judgments

Previous post: Less Wrong Rationality and Mainstream Philosophy

448 comments

Comments sorted by top scores.

comment by djc · 2011-03-30T05:22:56.581Z · LW(p) · GW(p)

As a professional philosopher who's interested in some of the issues discussed in this forum, I think it's perfectly healthy for people here to mostly ignore professional philosophy, for reasons given here. But I'm interested in the reverse direction: if good ideas are being had here, I'd like professional philosophy to benefit from them. So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources.

(The two main contributions that I'm aware of are ideas about friendly AI and timeless/updateless decision theory. I'm sure there are more, though. Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.)

Replies from: lukeprog, None, XiXiDu, Vladimir_Nesov, radical_negative_one
comment by lukeprog · 2011-03-30T05:32:31.812Z · LW(p) · GW(p)

Yes, this is one reason I'm campaigning to have LW / SIAI / Yudkowsky ideas written in standard form!

comment by [deleted] · 2013-06-09T20:43:42.565Z · LW(p) · GW(p)

As a professional philosopher who's interested in some of the issues discussed in this forum. . .

Oh wow. The initials 'djc' match up with David (John) Chalmers. Carnap and PhilPapers are mentioned in this user's comments. Far from conclusive evidence, but my bet is that we've witnessed a major analytic philosopher contribute to LW's discussion. Awesome.

Replies from: enye-word
comment by enye-word · 2017-05-10T08:51:59.127Z · LW(p) · GW(p)

In the comment he links to above, djc states "One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas."

When asked about LessWrong in a reddit AMA, David Chalmers stated "i think having subcommunities of this sort that make their own distinctive assumptions is an important mechanism of philosophical progress" and expressed an interest in TDT/UDT.

(See also: https://slatestarcodex.com/2017/02/06/notes-from-the-asilomar-conference-on-beneficial-ai/)

(Sorry to dox you, David Chalmers. Hope you're doing well these days.)

comment by XiXiDu · 2011-03-30T13:23:59.862Z · LW(p) · GW(p)

So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources.

Actually in one case this "forum" could benefit from the help of professional philosophers, as the founder Eliezer Yudkowsky especially asks for help on this problem:

I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers...

I think that if you show that professional philosophy can dissolve that problem then people here would be impressed.

comment by Vladimir_Nesov · 2011-03-30T10:51:20.254Z · LW(p) · GW(p)

Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.

Do you know about the TDT paper?

comment by radical_negative_one · 2011-03-30T06:23:59.320Z · LW(p) · GW(p)

Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.

Just in case you haven't seen it, here is Eliezer's Timeless Decision Theory paper. It's over a hundred pages, so I'd hope that it represents a "clear statement". (Although I can't personally comment on anything in it because I don't currently have time to read it.)

Replies from: djc, None
comment by djc · 2011-03-30T06:45:48.897Z · LW(p) · GW(p)

That's the one. I sent it to five of the world's leading decision theorists. Those who I heard back from clearly hadn't grasped the main idea. Given the people involved, I think this indicates that the paper isn't a sufficiently clear statement.

comment by [deleted] · 2011-03-30T06:31:40.856Z · LW(p) · GW(p)

It's somewhat painful to read. I've tried to read it in the past and my eyes get a bit sore after the first twenty pages.

Doing the math, I realize it's probably irrational for Yudkowsky-san to spend time learning LaTeX or some other serious typesetting system, but I can dream, right?

Replies from: lukeprog, Richard_Kennaway
comment by lukeprog · 2012-07-13T04:58:53.751Z · LW(p) · GW(p)

Your dream has come true.

Replies from: None, gmpalmer
comment by [deleted] · 2012-07-13T05:41:24.635Z · LW(p) · GW(p)

Happiness is too general a term to express my current state of mind.

May the karma flow through you like so many grains of sand through a sieve.

Replies from: wedrifid
comment by wedrifid · 2012-07-13T05:50:06.042Z · LW(p) · GW(p)

May the karma flow through you like so many grains of sand through a sieve.

Not quite sure how this one works. Usually I associate sieve with "leaking like a sieve", generally a bad thing---do you want all his karma to be assassinated away as fast as it comes?

Replies from: None
comment by [deleted] · 2012-07-13T06:03:17.911Z · LW(p) · GW(p)

Oh, no. Lukeprog is the sieve, and the grains of sand are whatever fraction of a hedon he gets from being upvoted.

comment by gmpalmer · 2012-12-10T14:23:43.277Z · LW(p) · GW(p)

I hope this is corrected later in the paper, and my apologies if this is a stupid question, but could you please explain how the example of gum chewing and abscesses makes sense?

That is, in the explanation you are making your decision based on evidence. Indeed, you'd be happy--or anyone would be happy--to hear you're chewing gum once the results of the second study are known. How is that causal and not evidential?

I see later in the paper that gum chewing is evidence for the CGTA gene, but that doesn't make any sense. You can't change whether or not you have the gene, and the gum chewing is better for you at any rate. Still confused about the value of the gum-chewing example.

comment by Richard_Kennaway · 2011-03-30T10:14:56.832Z · LW(p) · GW(p)

The LaTeX to format a document like that can be learnt in an hour or two with no previous experience, assuming at least basic technically-minded smarts.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-03-30T12:10:42.199Z · LW(p) · GW(p)

The LaTeX to format a document like that can be learnt in an hour or two

And the learning (and formatting of the document) does not have to be done by the author of the document.

comment by prase · 2011-03-28T22:56:41.296Z · LW(p) · GW(p)

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable.

Isn't this true just because of the way philosophy is effectively defined? It's a catch-all category for poorly understood problems which have nothing in common except that they aren't properly investigated by some branch of science. Once a real question is answered, it no longer feels like a philosophical question; today philosophers don't investigate the motion of celestial bodies or the structure of matter any more.

In other words, I wonder what the fundamentally philosophical questions are. The adverb fundamentally creates the impression that those questions will still be regarded as philosophical after being uncontroversially answered, which I doubt will ever happen.

Replies from: ata, Technoguyrob, quen_tin, ksolez
comment by ata · 2011-03-29T00:16:46.178Z · LW(p) · GW(p)

Strongly agreed. I think "philosophical questions" are the ones that are fun to argue endlessly about even if we're too confused to actually solve them decisively and convincingly. Thinking that any questions are inherently philosophical (in that sense) would be mind projection; if a question's philosophicalness can go away due to changes in facts about us rather than facts about the question, then we probably shouldn't even be using that as a category.

Replies from: prase, Vladimir_Nesov, shokwave, Perplexed
comment by prase · 2011-03-29T09:25:27.430Z · LW(p) · GW(p)

I would say that the sole thing which philosophical questions have in common is that the only imaginable way of solving them is intuition. Once a superior method exists (experiment, formal proof), the question doesn't belong to philosophy.

comment by Vladimir_Nesov · 2011-03-29T00:20:44.992Z · LW(p) · GW(p)

due to changes in facts about us rather than facts about the question

Nice pattern.

comment by shokwave · 2011-03-29T08:10:05.985Z · LW(p) · GW(p)

if a question's philosophicalness can go away due to changes in facts about us rather than facts about the question, then we probably shouldn't even be using that as a category.

I think that's a good reason to keep using the category. By looking at current philosophy, we can determine what facts about us need changing. Cutting-edge philosophy (of the kind lukeprog wants) would be strongly determining what changes need to be made.

To illustrate: that there is a "philosophy of the mind" and a "free will vs determinism debate" tells us there are some facts about us (specifically, what we believe about ourselves) that need changing. Cutting-edge philosophy would be demonstrating that we should change these facts to ones derived from neuroscience and causality. Diagrams like this would be cutting-edge philosophy.

comment by Perplexed · 2011-03-29T01:13:43.671Z · LW(p) · GW(p)

I think "philosophical questions" are the ones that are fun to argue endlessly about even if we're too confused to actually solve them decisively and convincingly.

The thing that I find attractive about logic and 'foundations of mathematics' is that no one argues endlessly about philosophical questions, even though the subject matter is full of them.

Instead, people in this field simply assume the validity of some resolution of the philosophical questions and then proceed on to do the real work.

What I think most fans of philosophy fail to realize is that answers to philosophical questions are like mathematical axioms. You don't justify them. Instead, you simply assume them and then work out the consequences.

Don't care for the consequences? Well then choose a different set of axioms.

comment by robertzk (Technoguyrob) · 2011-10-03T04:44:12.639Z · LW(p) · GW(p)

Are you suggesting that philosophy lies in the orthogonal complement to science and potential science (the questions science is believed to be capable of eventually answering)?

Replies from: prase
comment by prase · 2011-10-04T11:07:36.336Z · LW(p) · GW(p)

I am suggesting that the label philosophical is usually attached to problems where we have no agreed-upon methodology of investigation. Therefore, whether a question belongs to philosophy or science isn't defined solely by its objective properties, but also by our knowledge, and as our knowledge grows the formerly philosophical question is more likely to move into the "science" category. The point thus was that potential science isn't orthogonal to philosophy; on the contrary, I have expressed the belief that those categories may be identical (when nonsensical parts of philosophy are excluded).

On the other hand, I assume philosophy and actual (in contrast to potential) science are disjoint. This is just how the words are used.

comment by quen_tin · 2011-03-29T14:30:40.204Z · LW(p) · GW(p)

In a sense, science is nothing but experimental philosophy (in a broad sense), and the job of non-experimental philosophy (what we label philosophy) is to make any question become an experimental question... But I would say that philosophy remains important as the framework where science and fundamental scientific concepts (truth, reality, substance) are defined and discussed.

Replies from: prase
comment by prase · 2011-03-29T15:41:01.296Z · LW(p) · GW(p)

science is nothing but experimental philosophy

Not universally. It's hard to find experiments in mathematics.

Replies from: None, Vladimir_M, Richard_Kennaway, Marius
comment by [deleted] · 2011-03-29T20:57:25.073Z · LW(p) · GW(p)

It's hard to find experiments in mathematics.

You'd have to look inside mathematicians' heads.

comment by Vladimir_M · 2011-03-30T16:28:47.458Z · LW(p) · GW(p)

It's hard to find experiments in mathematics.

In a sense, computers are nothing but devices for doing experimental mathematics.

Replies from: Clippy
comment by Clippy · 2011-03-30T16:29:52.353Z · LW(p) · GW(p)

In a sense, apes are nothing but devices for making ape DNA.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-03-30T16:32:24.812Z · LW(p) · GW(p)

I think Richard Dawkins made that observation a while ago at book length.

Replies from: Clippy
comment by Clippy · 2011-03-30T16:45:27.645Z · LW(p) · GW(p)

In a sense, Richard Dawkins is nothing but a device for making books.

Replies from: Friendly-HI
comment by Friendly-HI · 2011-11-15T23:17:52.855Z · LW(p) · GW(p)

In a sense, a book is nothing but a device for copying memes into other brains.

comment by Richard_Kennaway · 2011-03-30T10:33:40.374Z · LW(p) · GW(p)

Experimental Mathematics.

Replies from: Clippy
comment by Clippy · 2011-03-30T16:17:48.204Z · LW(p) · GW(p)

I do a lot of that when I experiment with various strings to find preimages for a class of hashes.

comment by Marius · 2011-03-29T15:50:58.185Z · LW(p) · GW(p)

Which is why mathematics isn't science.

Replies from: cousin_it
comment by cousin_it · 2011-03-29T16:10:35.193Z · LW(p) · GW(p)

I sense an argument about definitions of words. Please don't.

Replies from: Marius
comment by Marius · 2011-03-29T20:42:36.998Z · LW(p) · GW(p)

"what is science" is not a mere matter of definitions. It's fundamental to how we decide how certain we are of various propositions.

Replies from: cousin_it
comment by cousin_it · 2011-03-29T20:44:42.606Z · LW(p) · GW(p)

Um... no it isn't? A Bayesian processes evidence the same way whether or not it's labeled "science".

If you're talking about the word "science" as some sort of FDA seal of approval, invented so people can quickly see who to trust without examining the claims in detail, then I see no reason to exclude math. Do you think math gives less reliable conclusions than empirical disciplines?

Replies from: Marius
comment by Marius · 2011-03-29T22:22:40.944Z · LW(p) · GW(p)

A Bayesian may process probabilities the same way, but information is not evaluated the same way. Determining that a piece of information was derived scientifically does not provide a "seal of approval", it tells us how to evaluate the likelihood of that information being true.

For instance, if I know that a piece of information was derived via scientific methods, I know to look at related studies. A single study is never definitive, because science involves reproducible results based on empirical evidence. Further studies may alter my understanding of the information the first study produced.

On the other hand, if I know that a piece of information was derived mathematically, I need only look at a single proof. If the proof is sound, I know that the premises lead inexorably to the conclusion. On the other hand, encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian - a new proof must be created. On the other hand, experiments may yield some useful evidence even if the study has flawed premises or methods; precisely what parts are useful requires an understanding of what science is.

So this is actually important - it's not just a matter of definitions.

Replies from: cousin_it, JoshuaZ, twanvl
comment by cousin_it · 2011-03-29T22:50:01.703Z · LW(p) · GW(p)

Thanks, that's a valid argument that I didn't think of.

But it's sorta balanced by the fact that a lot of established math is really damn established. For example, compare Einstein's general relativity with Brouwer's fixed point theorem. Both were invented at about the same time, both are really important and have been used by lots and lots of people. Yet I think Brouwer's theorem is way more reliable and less likely to be overturned than general relativity, and I'm not sure if anyone anywhere thinks otherwise.

Replies from: Dreaded_Anomaly, Marius
comment by Dreaded_Anomaly · 2011-03-30T02:52:56.932Z · LW(p) · GW(p)

I'm not sure if "overturning" general relativity is the appropriate description. We may well find a broader theory which contains general relativity as a limiting case, just as general relativity has special relativity and Newtonian mechanics as limiting cases. With the plethora of experimental verifications of general relativity, however, I wouldn't expect to see it completely discarded in the way that, e.g., phlogiston theory was.

comment by Marius · 2011-03-30T02:15:21.327Z · LW(p) · GW(p)

Oh, I'm not calling mathematics more or less reliable than science. I'm saying that the ways in which one would overturn an established useful theorem would be very different from the ways in which one would overturn an established scientific theory. Another way in which mathematics is more reliable is that bias is irrelevant. Scientists have to disclose their conflicts of interest because it's easy for those conflicts to interfere with their objectivity during data collection or analysis, and so others must pay special attention. Mathematicians don't need to because all their work can be contained in one location, and can be checked in a much more rigorous fashion.

comment by JoshuaZ · 2011-03-30T02:36:43.964Z · LW(p) · GW(p)

On the other hand, if I know that a piece of information was derived mathematically, I need only look at a single proof. If the proof is sound, I know that the premises lead inexorably to the conclusion. On the other hand, encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian - a new proof must be created. On the other hand, experiments may yield some useful evidence even if the study has flawed premises or methods; precisely what parts are useful requires an understanding of what science is.

This doesn't follow. If, for example, one has a single proof and encounters a hole in it, and the hole looks like it rests on plausible assumptions, then one should still increase one's confidence that the claim is true. Thus, physicists are very fond of assuming that terms in series are of lower order even when they can't actually prove it. Very often, under reasonable assumptions, their claims are correct. To use a specific example, Kempe's "proof" of the four color theorem had a hole, and so a repaired version could only prove that planar maps require at most five colors. But the general thrust of the argument provided a strong plausibility heuristic for believing the claim as a whole.

Similarly, from a Bayesian stand-point, seeing multiple distinct proofs of a claim should make one more confident in the claim since even if one of the proofs has an unseen flaw, the others are likely to go through.

(There are complicating factors here. No one seems to have a good theory of confidence for mathematical statements that allows for objective priors, since most standard objective priors (such as those based on some notion of computability) only make sense if one can perform arbitrary calculations correctly. Similarly, it isn't clear how one can meaningfully talk about, say, the probability that Peano arithmetic is consistent.)
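
To make the multiple-proofs point concrete, here is a toy calculation (the flaw probability and the independence assumption are both invented for illustration):

    # Toy sketch: if each of n purported proofs independently has a 10% chance
    # of containing a fatal flaw (an assumed figure), the probability that
    # every proof fails shrinks geometrically with n.

    p_flawed = 0.10  # assumed chance that a single proof is fatally flawed

    for n in range(1, 4):
        p_all_flawed = p_flawed ** n  # independence is an idealization
        print(f"{n} proof(s): P(all flawed) = {p_all_flawed:.4f}")

Real proofs share techniques and background assumptions, so independence overstates the effect, but the direction of the update is the same.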

Replies from: Marius
comment by Marius · 2011-03-30T03:12:39.812Z · LW(p) · GW(p)

I don't think we actually disagree at all. Your "hole" is really the introduction of additional premises. If the premises are true and the reasoning sound, the conclusions follow. If they are shown to be untrue, you can discard the conclusion. Mathematics rarely has a way of evaluating the likelihood that its premises are true - usually the best it can do is to show that certain premises are or are not compatible with one another. What you are saying regarding multiple distinct proofs of a claim is true according to some informal logic, but not in any strict mathematical sense. Mathematically, you've either proven something or you haven't. Mathematicians may still be convinced by scientific, theologic, literary, financial, etc. arguments of course.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-03-30T03:20:34.030Z · LW(p) · GW(p)

I don't think we actually disagree at all. Your "hole" is really the introduction of additional premises.

Not really. Consider, for example, someone who has seen Kempe's argument. They should have higher confidence in, say, "The four color theorem is true in ZFC" than someone who has not seen Kempe's argument. There's no additional premise being added, but Kempe's argument is clearly wrong.

What you are saying regarding multiple distinct proofs of a claim is true log

Not sure what you mean here. It looks like the sentence was cut off?

Replies from: Marius
comment by Marius · 2011-03-30T03:28:25.054Z · LW(p) · GW(p)

Would you mind explaining in a little more detail why you say a person who has seen Kempe's flawed proof should have higher confidence than one who has not? Do you mean that it's so emotionally compelling that one's mind is convinced even if the math doesn't add up? Or that the required (previously-hidden) premise that allows Kempe to ignore the degree 5 vertex has some possibility of truth, so that the conclusion has an increased likelihood of truth?

also: fixed the end.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-03-30T03:46:13.570Z · LW(p) · GW(p)

Explain better why you say a person who has seen Kempe's flawed proof should have higher confidence than one who has not.

Hmm, I'm not sure how to do so without just going through the whole proof. Essentially, Kempe's proof showed that a smallest counterexample graph couldn't have certain properties. One part of the proof was showing that the graph could not contain a vertex of degree 5, but this part was flawed. Kempe did show that it couldn't contain a vertex of degree 4, and moreover that any minimal counterexample must have a vertex of degree 5. This makes us more confident in the original claim, since a minimal counterexample has to have a very restricted-looking form.

Replying to the fixed end here so as to minimize confusion:

What you are saying regarding multiple distinct proofs of a claim is true according to some informal logic, but not in any strict mathematical sense. Mathematically, you've either proven something or you haven't. Mathematicians may still be convinced by scientific, theologic, literary, financial, etc. arguments of course.

Well, yes, but I was addressing the claim you made that "encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian", which is wrong. I agree that a flawed proof is not a proof.

And yes, the logic is in any case informal. See my earlier parenthetical remark. I actually consider the problem of confidence in mathematical reasoning to be one of the great difficult open problems within Bayesianism. One reason I don't (generally) self-identify as a Bayesian is due to an apparent lack of this theory. (This itself deserves a disclaimer that I'm by no means at all an expert in this field and so there may be work in this direction but if so I haven't seen any that is at all satisfactory.)

Replies from: Marius
comment by Marius · 2011-03-30T03:51:03.775Z · LW(p) · GW(p)

"encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian" which is wrong

I think you are assuming I count a dubious premise as an incorrect premise. Obviously, a merely dubious premise allows the conclusion to have some utility to the Bayesian.

I really don't think we actually disagree.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-03-30T04:02:51.073Z · LW(p) · GW(p)

I think you are assuming I count a dubious premise as an incorrect premise. Obviously, a merely dubious premise allows the conclusion to have some utility to the Bayesian.

Really? Even incorrect premises can be useful. For example, one plausibility argument for the Riemann hypothesis rests on assuming that the Möbius function behaves like a random variable. But that's a false statement. Nevertheless, it acts close enough to being a random variable that many find this argument to be evidence for RH. And there's been very good work trying to take this false statement and make true versions of it.

Similarly, if one believes what you have said, then one would have to conclude that if one lived in the 1700s, all of calculus would have been useless because it rested on the notion of infinitesimals, which didn't exist. The premise was incorrect, but the results were sound.

Replies from: Sniffnoy, Marius
comment by Sniffnoy · 2011-03-31T02:52:39.047Z · LW(p) · GW(p)

Incidentally, as more evidence, apparently this AC0 conjecture has just been proved true by Ben Green (rather, he noticed that other people had already done stuff that had this as a consequence, which the people asking the question hadn't known about).

comment by Marius · 2011-03-30T10:10:15.838Z · LW(p) · GW(p)

Ok, I need to refine my description of math a bit. I'd claimed that an incorrect premise gives useless conclusions; actually as you point out if we have a close-to-correct premise instead, we can have useful conclusions. The word "instead" is important there, because otherwise we can then add in a correct contradictory premise, generating new and false conclusions. In some sense this is necessary to all math, most evidently geometry: we don't actually have any triangles in the world, but we use near-triangles all the time, pretending they're triangles, with great utility.

Also, to look again at Kempe's "proof": we can see where we can construct a vertex of degree 5 where his proof does not hold up. And we can try to turn that special case back into a map. The fact that nobody's managed to construct an actual map relying on that flaw does not give any mathematical evidence that an example can't exist. Staying within the field of math, the Bayesian is not updated and we can discard his conclusion. But we can step outside math's rules and say "there's a bunch of smart mathematicians trying to find a counterexample, and Kempe shows them exactly where the counterexample would have to be, and they can't find one." That fact updates the Bayesian, but reaches outside the field of math. The behavior of mathematicians faced by a math problem looks like part of mathematics, but actually isn't.

comment by twanvl · 2011-03-29T22:31:34.555Z · LW(p) · GW(p)

A single study is never definitive, because science involves reproducible results based on empirical evidence.

That simply doesn't follow: why does involving reproducible results imply not being definitive?

Empirical results are never 'definitive' as in being 100.0% certain, but we can get very close. Whether this is done in a single study or with multiple studies doesn't matter at all. In practice there are good reasons to want multiple studies, but they have more to do with questions not addressed in a single study, trustworthiness of the authors, etc.

On the other hand, encountering a single incorrect premise or step means that the conclusion has zero utility

Even wrong mathematical proofs have a non-zero utility, because they often lead to new insights. For example, if only the last of 100 steps is wrong, then you are 99 steps closer to some goal.

Replies from: Marius
comment by Marius · 2011-03-29T22:39:34.696Z · LW(p) · GW(p)

A single study can't get close to 100% certainty, because that's just not how science works. If you look at all the studies that reported conclusions at 95% confidence, you'll find that well over 5% reached conclusions now believed to be false. There are issues of trust, issues of data-collection errors, issues of statistical evaluation, the fact that scientific methods are designed under the assumption that studies will be repeated, etc.
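
A back-of-the-envelope Bayesian sketch shows why this is expected even when the statistics are done honestly (every number below is an illustrative assumption, not an estimate):

    # Illustrative sketch (all numbers assumed): the fraction of significant
    # findings that are false depends on the prior odds that tested hypotheses
    # are true, not just on the 5% significance threshold.

    prior_true = 0.10  # assumed share of tested hypotheses that are true
    alpha = 0.05       # false-positive rate at the 95% confidence threshold
    power = 0.80       # assumed chance a real effect reaches significance

    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha

    fdr = false_positives / (true_positives + false_positives)
    print(f"Share of significant results that are false: {fdr:.0%}")  # ~36%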

The steps within unsound mathematical proofs may be valuable, but their conclusions are not.

Replies from: twanvl
comment by twanvl · 2011-03-29T22:51:27.638Z · LW(p) · GW(p)

A single study can't get close to 100% certainty, because that's just not how science works. ... the fact that scientific methods are designed under the assumption that studies will be repeated, etc.

The current scientific method is in no way ideal. If a study were properly Bayesian, then you should be able to confidently learn from its results. That still leaves issues of trust and the possibility of human error, but there might also be ways to combat those. But in a human society, repeating studies is perhaps the best thing one can hope for.

The steps within unsound mathematical proofs may be valuable, but their conclusions are not.

Agreed. That is the one part of an unsound proof that is useless.

Replies from: Marius
comment by Marius · 2011-03-30T02:28:13.783Z · LW(p) · GW(p)

Can you describe a better, more Bayesian scientific method? The main way I would change it is to increase the number of studies that are repeated, to improve the accuracy of our knowledge. How would you propose to improve our confidence other than by showing that an experiment has reproducible results?

comment by ksolez · 2011-03-29T02:03:30.763Z · LW(p) · GW(p)

In a recent interview on Singularity One on One http://singularityblog.singularitysymposium.com/question-everything-max-more-on-singularity-1-on-1/ (first video), Max More, one of the founders of transhumanism, talks about how important philosophy was as the starting point for the important things he has done. Philosophy provided an important vantage point from which he wrote the influential papers which started transhumanism. Philosophy is not something to give up or shun; you just need to know what parts of it to ignore in pursuing important objectives.

Replies from: prase
comment by prase · 2011-03-29T09:20:01.648Z · LW(p) · GW(p)

I am not questioning the importance of philosophy, but the use of the label "philosophical" together with "fundamental". If someone drew a map of human knowledge, mathematics and biology and physics and history would form wedges starting from well-established facts near the center and reaching more recent and complex theories further away; philosophy, on the other hand, would be the whole ever-receding border region of uncertain conjectures still out of reach of scientific methods. To expand human knowledge these areas indeed must be explored, but once that happens, some branch of science will claim their possession and there will be no reason to call them philosophy any longer.

comment by wedrifid · 2011-03-29T04:05:20.471Z · LW(p) · GW(p)

Eliezer's anti-philosophy rant Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

This opening paragraph set off a huge warning klaxon in my bullshit filter. To put it generously, it is heavy on 'spin'. Specifically:

  • It sets up a comparison based on upvotes between a post written in the last month and a post written on a different blog.
  • Luke's post is presented as a contrast to controversy despite being among the most controversial posts to ever appear on the site. This can be measured based on the massive series of replies and counter replies, most of which were heavily upvoted - which is how controversy tends to present itself here. (Not that controversy is a bad thing.)
  • Upvotes for a well-written post that contains useful references are equated with support for the agenda that prompted the author to write it.
  • The first 3.5 words were "Eliezer's anti-philosophy rant". Enough said.

All of the above is unfortunate because the remainder of this post was overwhelmingly reasonable and a promise of good things to come.

Replies from: lukeprog
comment by lukeprog · 2011-03-29T04:32:19.982Z · LW(p) · GW(p)

Interesting, thanks.

By the way, what is 'the agenda that prompted the author to write it'?

Replies from: lukeprog
comment by lukeprog · 2011-03-29T04:59:52.444Z · LW(p) · GW(p)

I just realized that 'rant' doesn't have the usual negative connotations for me that it probably does for others. For example, here is my rant about people changing the subject in the middle of an argument.

For the record, the article originally began "Eliezer's anti-philosophy rant..." but I'm going to change that.

Replies from: FAWS, lessdazed
comment by FAWS · 2011-03-29T05:26:52.768Z · LW(p) · GW(p)

Rant doesn't necessarily have negative connotations for me either; it really depends on the context. Your usage didn't look pejorative at all to me. It's sort of like a less intense version of "vitriol", and there is no problem (implied) if the target deserves it (or is presented so).

comment by lessdazed · 2011-03-30T10:00:31.019Z · LW(p) · GW(p)

It is similar to the word "extremist": the technical definition is rarely the only thing people mean to invoke, and it's acquiring further connotations.

Losing precise meaning is the way to newspeak, and it distresses me. It is sometimes the result of being uncomfortable with or incapable of discussing specific facts, which is harder than the inside view.

comment by nhamann · 2011-03-28T21:42:32.021Z · LW(p) · GW(p)

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Your vision of how to do philosophy suspiciously conforms to how philosophy has traditionally been done, i.e. in journals. Have you read Michael Nielsen's Doing Science Online? It's written specifically about science, but I see no reason why it couldn't be applied to any kind of scholarly communication. He makes a good argument for incorporating blog posts into scientific communication, which, at present, doesn't seem compatible with writing journal articles (is it kosher to cite blog posts?):

Many of the best blog posts contain material that could not easily be published in a conventional way: small, striking insights, or perhaps general thoughts on approach to a problem. These are the kinds of ideas that may be too small or incomplete to be published, but which often contain the seed of later progress.

You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes equations, why not have a few thousand talented people listen in? Why not enable the most insightful to contribute their insights back?

I would much rather see SIAI form an open-access online journal or scholarly FAI/existential risks wiki or blog for the purposes of disseminating writings/thoughts on these topics. This likely would not reach as many philosophers as publishing in philosophy journals, but would almost certainly reach far more interested outsiders. Plus, philosophers have access to the internet, right?

Replies from: lukeprog, alfredmacdonald
comment by lukeprog · 2011-03-28T22:20:48.182Z · LW(p) · GW(p)

No, I agree that much science and philosophy can be done in blogs and so on. Usually, it's going to be helpful to do some back-and-forth in the blogosphere before you're ready to publish a final 'article.' But the well-honed article is still very valuable. It is much easier for people to read, it cites the relevant literature, and so on.

Articles could be, basically, very well-honed and referenced short summaries of positions and arguments that have developed over dozens of conversations and blog posts and mailing list discussions and so on.

Replies from: Dustin
comment by Dustin · 2011-03-29T03:25:39.872Z · LW(p) · GW(p)

I often get lost in back-and-forth on blogs because it jumps from here to there and assumes the reader has kept track of everything everyone involved has said on the subject.

My point being, that I agree that both the blogosphere and article are important.

comment by alfredmacdonald · 2012-12-15T08:33:40.076Z · LW(p) · GW(p)

YeahOKButStill has an interesting take on the interaction between philosophy done in blogs and philosophy done in journals:

"... Many older philosophers lament the current lack of creativity and ingenuity in the field (as compared to certain heady, action-packed periods of the 20th century), yet, it is a well-established fact that in order to be published in a major journal or present at a major conference, a young philosopher has to load their paper/presentation with enormous amounts of what is called the "relevant literature". This means that even the most creative people among us (a group I do not count myself as belonging to) must spend huge amounts of time, space and energy trying to demonstrate just how widely they have read and just how many possible objections to their view they can consider, lest some irritable senior philosopher think that their view has not been given a fair shake. Of course, there is no evidence whatsoever that the great philosophers of the 20th century wrote and thought in this manner, as a quick survey of that relevant literature will show.

Blogs are a space for young philosophers to explore their ideas without these sorts of constraints, to try ideas on for size and to potentially get feedback from a wide audience. Indeed, the internet has the potential to host forums that could make reading groups at Oxford and Cambridge look positively stultifying. Yet, this is not how things are playing out: most young philosophers I know are afraid to even sign their real names to a comment thread. This, as anyone can see, is an absurd situation. However, since I have no control over it, I must bid this public space adieu."

comment by FAWS · 2011-03-28T20:12:29.688Z · LW(p) · GW(p)

Eliezer's anti-philosophy rant Against Modal Logics hovers near 0 karma points, while my recent pro-philosophy (by LW standards) post and my list of mainstream philosophy contributions were massively upvoted.

The karma of pre-LW OvercomingBias posts that were ported over should not be compared to that of LW posts proper. Most of Eliezer's old posts are massively under-voted that way, though some frequently-linked-to posts less so.

Replies from: lukeprog
comment by lukeprog · 2011-03-28T20:26:51.857Z · LW(p) · GW(p)

True, but most of Eliezer's substantive pre-LW posts seem to have karma in the low teens, and the comments section of Against Modal Logics also shows that post was highly controversial.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-03-28T20:51:57.696Z · LW(p) · GW(p)

Not exactly. Most posts published around the same time have similar Karma levels. Earliest posts or highly linked-to posts get more Karma, but people either rarely get far in reading the archives, or their impulse to upvote atrophies by the time they've read hundreds of posts, and as a result the Karma level of a typical post starting from about April 2008 currently stands at about 0-10. The post in question currently stands at 4 Karma.

Replies from: Normal_Anomaly, lukeprog
comment by Normal_Anomaly · 2011-03-30T13:37:18.158Z · LW(p) · GW(p)

Most posts published around the same time have similar Karma levels. Earliest posts or highly linked-to posts get more Karma, but people either rarely get far in reading the archives, or their impulse to upvote atrophies by the time they've read hundreds of posts,...

Also, many users read the early posts while still in the lurker stage, at which point they can't upvote.

Replies from: David_Gerard
comment by David_Gerard · 2011-03-30T13:57:05.583Z · LW(p) · GW(p)

Also, many users read the early posts while still in the lurker stage, at which point they can't upvote.

Do we actually know this?

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-03-30T14:05:40.877Z · LW(p) · GW(p)

Well, whenever somebody starts posting and doesn't act like they've already read the sequences, they get told to go read the sequences and come back afterward.

Also, in the past year or so many new users have joined the site from MoR, and the link in the MoR author's notes goes to the main sequences list. I know that I at least decided to join LW when MoR linked me to the sequences and I liked them.

Replies from: David_Gerard
comment by David_Gerard · 2011-03-30T14:30:46.487Z · LW(p) · GW(p)

they get told to go read the sequences and come back afterward.

I haven't seen this in several months (and I've been watching); the admonishment seems to have vanished from the local meme selection. More often, someone links to a specific apposite post, or possibly sequence.

It's just entirely unclear how we'd actually measure whether people who read the sequences do so before or after logging in.

(I'd suspect not, given they're a million words of text and a few million of accompanying comments, but then that's not even an anecdote ...)

Replies from: Normal_Anomaly, wedrifid, Normal_Anomaly, Normal_Anomaly, Desrtopa, Normal_Anomaly, Normal_Anomaly
comment by Normal_Anomaly · 2011-03-30T14:42:23.239Z · LW(p) · GW(p)

Poll: If you read the sequences before opening your account, upvote this comment.

comment by wedrifid · 2011-03-30T15:16:25.729Z · LW(p) · GW(p)

If you read the sequences before LessWrong was created upvote this comment.

comment by Normal_Anomaly · 2011-03-30T14:42:16.652Z · LW(p) · GW(p)

Poll: If you read the sequences after opening your account, upvote this comment.

comment by Normal_Anomaly · 2011-03-30T14:41:12.149Z · LW(p) · GW(p)

I haven't seen this in several months (and I've been watching); the admonishment seems to have vanished from the local meme selection. More often, someone links to a specific apposite post, or possibly sequence.

You may be right. I think there has been less of that lately.

It's just entirely unclear how we'd actually measure whether people who read the sequences do so before or after logging in.

I wouldn't say it's entirely unclear. I'm curious enough to start a poll.

Replies from: David_Gerard
comment by David_Gerard · 2011-03-30T14:59:58.898Z · LW(p) · GW(p)

Could also do with "Poll: If you still haven't read the sequences, upvote this comment."

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-03-30T15:22:02.932Z · LW(p) · GW(p)

I'd been considering that, and since you agree I went and added it.

comment by Desrtopa · 2011-03-30T22:20:50.030Z · LW(p) · GW(p)

I think this has mainly declined after a number of posts discussing the sheer length of the sequences and the deceptive difficulty of the demand, and potential ways to make the burden easier.

comment by Normal_Anomaly · 2011-03-30T15:21:21.750Z · LW(p) · GW(p)

Poll: If you haven't read the sequences yet, upvote this comment.

Replies from: Desrtopa
comment by Desrtopa · 2011-03-30T22:21:48.120Z · LW(p) · GW(p)

Should this perhaps be made into a discussion article where it will be noticed more?

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-03-31T00:20:33.187Z · LW(p) · GW(p)

I'm tempted to start a poll to see if people think I should make this a discussion article, but I will restrain myself. I'll just go ahead and post the discussion article: there's been enough traffic in the poll that it apparently interests people.

comment by Normal_Anomaly · 2011-03-30T14:42:03.757Z · LW(p) · GW(p)

This is a karma balance.

comment by lukeprog · 2011-03-28T20:58:23.927Z · LW(p) · GW(p)

Funny, I remember it having 0 points, and then when I published this post it had 2 points.

Anyway, thanks FAWS and Vladimir_Nesov for the correction. I've changed the wording of the original post.

Replies from: FAWS
comment by FAWS · 2011-03-28T21:06:27.628Z · LW(p) · GW(p)

Funny, I remember it having 0 points, and then when I published this post it had 2 points.

Yes, that's an example of the effect of linking. I guess the post in question will easily break 10 now, perhaps even 20.

comment by David_Gerard · 2011-03-28T21:07:58.260Z · LW(p) · GW(p)

Have you considered taking some of EY's work and jargon-translating it into journal-suitable form?

Replies from: lukeprog
comment by lukeprog · 2011-03-28T21:13:12.080Z · LW(p) · GW(p)

I'd love to do that if I had time, and if Yudkowsky was willing to answer lots of questions.

Replies from: Jordan, None, Miller
comment by Jordan · 2011-03-29T00:50:40.600Z · LW(p) · GW(p)

You could probably find other philosophers to help out. The end result, if supported properly by Eliezer, could be very helpful to SIAI's cause.

If SIAI donations could be earmarked for this purpose I would double my monthly contribution.

comment by [deleted] · 2011-03-28T21:40:49.650Z · LW(p) · GW(p)

.

Replies from: Emile, atucker, atucker
comment by Emile · 2011-03-30T15:26:30.580Z · LW(p) · GW(p)

For what it's worth, I greatly enjoy Eliezer's style, and usually find him quite clear and understandable (except maybe some older texts like the intuitive explanation of Bayes and the technical explanation of technical explanations).

Replies from: None
comment by [deleted] · 2011-03-30T17:57:12.640Z · LW(p) · GW(p)

.

comment by atucker · 2011-03-29T03:23:11.590Z · LW(p) · GW(p)

If you and/or atucker could remedy this, I'd be much obliged. You guys are super readable.

I'm flattered that you think so, and to be mentioned in the same sentence as lukeprog.

Out of a mixture of a desire to help and curiosity (probably along with a dash of vanity), what comments have you found particularly readable?

That would actually help me a lot in improving my writing style.

Replies from: None
comment by [deleted] · 2011-03-29T03:29:04.237Z · LW(p) · GW(p)

.

comment by atucker · 2011-03-29T11:34:25.024Z · LW(p) · GW(p)

Presumably, Eliezer's upcoming book would do the same.

comment by Miller · 2011-03-28T23:16:39.201Z · LW(p) · GW(p)

What would you start with?

comment by ata · 2011-03-29T00:10:30.259Z · LW(p) · GW(p)

Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

I'd add "Learn LaTeX" to this one; if you're publishing in a journal, that matters more than your font preferences and formatting skills (which won't be used in the published version), and if you're publishing online, it can make your paper look like a journal article, which is probably good for status. Even TeX's default Computer Modern font, which I wouldn't call beautiful, has a certain air of authority to it — maybe due to some of its visual qualities, but possibly just by reputation.

Replies from: None
comment by [deleted] · 2011-03-29T01:38:14.842Z · LW(p) · GW(p)

The ironic bit is that I don't know a modern philosophy journal that accepts TeX.

EDIT: Minds and Machines does, as mentioned below. Also, Mind doesn't.

Replies from: lukeprog, ata, thomblake
comment by lukeprog · 2011-03-29T01:44:02.789Z · LW(p) · GW(p)

You just export to PDF. LyX is a fairly easy-to-use LaTeX editor.

Replies from: lukstafi, None
comment by lukstafi · 2011-03-29T18:02:24.274Z · LW(p) · GW(p)

I personally recommend TeXmacs over LaTeX, even if the LaTeX is edited in LyX or AUCTeX, although I use Emacs for programming and wiki editing.

comment by [deleted] · 2011-03-29T14:42:23.819Z · LW(p) · GW(p)

That doesn't make sense to me. One can't re-typeset PDF -- well, perhaps you can, but I can't imagine it would be easy.

Replies from: None, Manfred
comment by [deleted] · 2011-03-29T17:26:01.130Z · LW(p) · GW(p)

I'm a bit confused. What I'm used to is, I make a TeX document (editable) then I typeset it into a PDF document. Anybody can read the PDF, but can't edit it. If I want the receiver to be able to edit, I send both the TeX file and the PDF.

Did you mean that philosophy journals won't accept the .tex file format, or that they'll reject a .pdf written in LaTeX for stylistic reasons?

comment by Manfred · 2011-03-29T15:42:20.580Z · LW(p) · GW(p)

Adobe's business model is to give away the reader for free and then sell the editor for a profit. So I would guess most publishers would have no problem.

comment by ata · 2011-03-29T17:52:53.847Z · LW(p) · GW(p)

The ironic bit is that I don't know a modern philosophy journal that accepts TeX.

Hey, I didn't say it wasn't a diseased discipline. :P

comment by thomblake · 2011-03-29T13:48:54.349Z · LW(p) · GW(p)

Last I checked, Minds and Machines requires LaTeX.

Replies from: None
comment by [deleted] · 2011-03-29T14:40:27.726Z · LW(p) · GW(p)

Ah, okay. I knew Mind didn't, and now I realize I was generalizing from one example. Oops.

comment by lukeprog · 2011-06-13T05:18:59.410Z · LW(p) · GW(p)

This paragraph, from Eugene Mills' 'Are Analytic Philosophers Shallow and Stupid?', made me laugh out loud:

The paradox of analysis concludes that

(PA) A conceptual analysis is correct only if it is trivial.

Philosophers from Socrates onward have [provided] conceptual analyses of knowledge, freedom, truth, goodness, and more. The paradox of analysis suggests that these philosophers... are shallow and stupid: shallow because they stalk triviality, stupid because it so often eludes them.

Mills goes on to defend philosophers, with two sections entitled 'Embracing Triviality, Part I' and 'Embracing Triviality, Part II.'

comment by JonathanLivengood · 2011-08-30T19:17:30.837Z · LW(p) · GW(p)

I agree with a lot of the content -- or at least the spirit -- of the post, but I worry that there is some selectivity that makes philosophy come off worse than it actually is. Just to take one example that I know something about: Pearl is praised (rightly) for excellent work on causation, but very similar work developed at the same time by philosophers at Carnegie Mellon University, especially Peter Spirtes, Clark Glymour, and Richard Scheines, isn't even mentioned.

Lots of other philosophers could be added to the list of people making interesting, useful contributions to causation research: Christopher Hitchcock at Caltech, James Woodward at Pitt HPS, John Norton at Pitt HPS, Frederick Eberhardt at WashU, Luke Glynn at Konstanz, David Danks at CMU, Ned Hall at Harvard, Jonathan Schaffer at Rutgers, Nancy Cartwright at the LSE, and many others (maybe even including my own humble self).

I am not trying to defend philosophy on the whole. I agree that we have some disease in philosophy that ought to be cut away. But I don't think that philosophy is in as bad a shape as the post suggests. More importantly, there is a lot of good, interesting, useful work being done in philosophy, if you know where to look for it.

Replies from: None
comment by [deleted] · 2011-08-30T19:43:47.798Z · LW(p) · GW(p)

Thanks for your comment; I'm working on learning causation theory at the moment, and I didn't know anyone in the field other than Pearl.

Replies from: JonathanLivengood
comment by JonathanLivengood · 2011-08-30T22:24:51.583Z · LW(p) · GW(p)

You're welcome, of course. Pearl's book on causality is a great place to start. I also recommend Spirtes, Glymour, and Scheines' Causation, Prediction, and Search. Depending on your technical level and your interests, you might find Woodward's book Making Things Happen a better place to start. After that, there are many excellent papers to choose from.

Replies from: None
comment by [deleted] · 2011-08-30T22:59:11.725Z · LW(p) · GW(p)

I'm a graduate student in mathematics; the more technical, the better. I'm currently three chapters into Pearl. After that in my queue comes Tversky and Kahneman, and now I'll add Spirtes et al. to the end of that.

comment by curiousepic · 2011-03-29T15:24:17.419Z · LW(p) · GW(p)

I posted this on Reddit r/philosophy, if anyone would like to upvote it there.

comment by RichardChappell · 2011-03-30T01:19:01.194Z · LW(p) · GW(p)

philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

What makes you think this? It's true that many philosophers recognize the genetic fallacy, and hence don't take "you judge that P because of some fact about your brain" to necessarily undermine their judgment. But it's ludicrously uncharitable to interpret this principled epistemological disagreement as a mere factual misunderstanding.

Again: We can agree on all the facts about how human psychology works. What we disagree about (some of us, anyway -- there's much dispute here within philosophy too, as seen e.g. if you browse the archives of the Arche methodology weblog) is the epistemological implications.

Similar objections apply to the claim that "Most philosophers don't understand the basics... that people are made of atoms and intuitions don't trump science." Are you serious?

Replies from: lukeprog, Eliezer_Yudkowsky
comment by lukeprog · 2011-03-30T01:44:30.488Z · LW(p) · GW(p)

Richard Chappell,

Of course, you know how intuitions are generally used in mainstream philosophy, and why I think facts about where our intuitions come from undermine the epistemic usefulness of those intuitions, and thereby most such arguments. (So does the cross-checking problem.)

I'll break the last part into two bits:

What I'm saying with the 'people are made of atoms' bit is that it looks like a slight majority of philosophers may now think that there is at least a component of a person that is not made of atoms - usually consciousness.

As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.

I'm sure this post must have been highly annoying to a pro such as yourself, and I appreciate the cordial tone of your reply.

Replies from: RichardChappell, jhuffman, ohwilleke
comment by RichardChappell · 2011-03-30T16:58:00.715Z · LW(p) · GW(p)

As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.

Ah, you mean capital-S 'Science', as opposed to just the empirical data. One might have a view compatible with all the scientific data without buying into the ideological picture that we can't use non-empirical methods (viz. philosophy) when investigating non-empirical questions.

Replies from: lukeprog
comment by lukeprog · 2011-03-30T19:25:50.032Z · LW(p) · GW(p)

Non-empirical questions like... what? Mathematical questions?

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T21:48:34.813Z · LW(p) · GW(p)

Like, whether phenomenal properties just are certain physical/functional properties, or whether the two are merely nomologically co-extensive (going together in all worlds with the same natural laws as our own). This is obviously neither mathematical nor empirical. Similarly with normative questions: what's a reasonable credence to have given such-and-such evidence, etc.

See: Overcoming Scientism

comment by jhuffman · 2011-03-30T14:40:56.941Z · LW(p) · GW(p)

As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.

The comments on your linked article really do a good job of demonstrating the enormous gulf between many philosophical thinkers and the LW community. I especially enjoyed the comments about how physicalism always triumphs because it expands to include new strange ideas. So, the dualists understand that their beliefs are not based on evidence, and in fact they sneer at evidence as if it's a form of cheating.

Sorry but I do not think this patient can be saved.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-03-30T19:40:00.402Z · LW(p) · GW(p)

Which comments do you agree or disagree with?

What is the patient? LW? Many-philosophers? The idea of LW-contributing-to-philosophy (or conversely)?

comment by ohwilleke · 2011-03-31T01:40:36.293Z · LW(p) · GW(p)

It seems to me that philosophy is most important for refining mere intuitions and bumbling around until we find a rigorous way of posing the questions that are associated with those intuitions. Once you have a well-posed question, any old scientist can answer it.

But philosophy is necessary to turn the undifferentiated mass of unprocessed data and potential ideas into something that is susceptible to being examined.

Rationality is all fine and good, but reason applies known facts and axioms with accepted logical relationships to reach conclusions.

The importance of hypothesis generation is much underappreciated by scientists, but critical to the enterprise, and to generate a hypothesis, one needs intuition as much as reason.

Genius, meanwhile, comes from being able to intuitively generate a hypothesis that nobody else would, breaking the mold of others' intuitions, and building new conceptual structures from which to generate novel intuitive hypotheses - and eventually to formulate the conceptual structure well enough that it can be turned over to the rationalists.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-03-30T02:47:33.272Z · LW(p) · GW(p)

Richard, I'm pretty sure I remember you treating the apparent conceivability of zombies as a primary fact about the conceivability of zombies to which you have direct access, rather than treating it as an output of some cognitive algorithm in your brain and asking what sort of thought process might have produced it.

Replies from: CuSithBell, None, Peterdjones, RichardChappell
comment by CuSithBell · 2011-04-22T14:14:05.380Z · LW(p) · GW(p)

It seems like some people are using "conceivable" to mean "imaginable at some resolution", and some to mean "coherently imaginable at any resolution", or something. By which I mean, the first group would say that they could conceive of "America lost the Revolutionary War" or "heavier objects fall faster" or "we are composed of sentient superstrings, and the properties of matter are their tiny, tiny emotions" or "the president has been kidnapped by ninjas"; whereas the second group would say these things are not conceivable.

As a result, group A wouldn't really consider the conceivability of p-zombies as evidence of their possibility (well, it'd technically be extremely weak evidence), whereas group B would consider the problem of the conceivability of p-zombies as essentially equivalent to the actuality of p-zombies. (There may be other groups, such as those who think "If it's imaginable, then it's coherent," but based on my brief glance the discussion hasn't actually made it that far yet.)

Is this right? I'd think the whole thing could be resolved if you taboo'd "conceivable"...?

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T14:30:38.672Z · LW(p) · GW(p)

Talking about "the" possibility of p-zombies is pretty pointless, because of the important difference between logical and physical impossibility. Even Chalmers thinks PZs are physically/naturally impossible.

I don't think the coherent/incoherent distinction you are making is clear. Of course, in a universe where everything is exactly the same, heavier objects would not fall faster in vacuo. But then we understand gravity and acceleration, so we can say what the contradictions would be. We don't understand what the contradictions would be in the case of p-zombies, because we don't have the psychophysical laws. Physicalism is Not An Explanation.

Replies from: CuSithBell
comment by CuSithBell · 2011-04-22T15:30:58.642Z · LW(p) · GW(p)

By 'coherent', I mean something like 'consistent' (to make an analogy to logic) - given all our observations, and extrapolating the concept as needed, there are no contradictions. "Heavier objects fall faster" leads to contradictions pretty quickly. Some people believe that "p-zombies are possible" (in some sense, which might match up with what you mean by either logical or physical) also leads to a contradiction, though we of course don't understand the laws that would cause this.

This is beside the point! I'm not arguing for or against p-zombies (here), I'm saying I think the people in this argument are talking past each other because they have diverging definitions.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:41:39.800Z · LW(p) · GW(p)

"Heavier objects fall faster" leads to contradictions with a theory,

If we don't know the laws that would contradict p-zombies, there is no see-able contradiction in them, and conceivability=logical possibility follows.

Replies from: CuSithBell
comment by CuSithBell · 2011-04-22T16:02:00.899Z · LW(p) · GW(p)

"Heavier objects fall faster" is imaginable at a particular resolution. Once you ask, say, "what happens if you glue two stones together?", it contradicts more deeply-held notions, and the concept falls apart at that resolution.

Some people believe that p-zombies are incoherent if analyzed sufficiently, or expect that they necessitate a severe contradiction of much more deeply-held beliefs.

Moreover, it is possible to hold that we don't know the laws that would contradict p-zombies but that they are nevertheless contradicted - as it is possible to hold that things should not fall up without knowing the laws of gravitation (leaving aside that some things do fall up).

Do you disagree with my central assertion, or just my definition of coherence?

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T16:08:13.406Z · LW(p) · GW(p)

The stone-gluing can be worked around with auxiliary laws. To assume those laws are absent is to assume some other laws.

People can believe what they like. If you are going to stake a claim that there is a literal self-contradiction in p-zombies, you need to say what it is. However, most cases of alleged self-contradiction turn out to be contradiction with unexamined background assumptions--laws, again. Talk of "resolution" is misleading: this is cognitive, not pictorial.

It is in fact the philosopher's point that p-zombies are really, for unknown reasons, impossible. They are not arguing zombies in order to argue zombies! Non-philosophers keep misunderstanding that.

Replies from: CuSithBell
comment by CuSithBell · 2011-04-22T16:53:18.898Z · LW(p) · GW(p)

So, ah, just the latter then?

That's all right, and I admit it's a fuzzy term. But if you want to make any progress, I suggest you consider the former point instead.

comment by [deleted] · 2011-03-30T02:59:20.347Z · LW(p) · GW(p)

Can you make the connection between Richard's comment and yours clearer?

comment by Peterdjones · 2011-04-22T13:58:08.507Z · LW(p) · GW(p)

That's a difference that doesn't make a difference. That I can (not) conceive of p-zombies can only mean that my cognitive processes produce a certain output. Whether it is somehow a mistaken output is another matter entirely.

comment by RichardChappell · 2011-03-30T03:08:32.201Z · LW(p) · GW(p)

Distinguish two questions: (1) Are zombies logically coherent / conceivable? (2) What cognitive processes make it seem plausible that the answer to Q1 is 'yes'?

I'm fully aware that one can ask the second, cogsci question. But I don't believe that cogsci answers the first question.

Replies from: komponisto, None
comment by komponisto · 2011-03-30T05:01:42.492Z · LW(p) · GW(p)

The first question should really be: what does the apparent conceivability of zombies by humans imply about their possibility?

Philosophers on your side of the debate seem to take it for granted (or at least end up believing) that it implies a lot, but those of us on the other side think that the answer to the cogsci question undermines that implication considerably, since it shows how we might think zombies are conceivable even when they are not.

It's been quite a while since I was actively reading philosophy, so maybe you can tell me: are there any reasons to believe zombies are logically possible other than people's intuitions?

Replies from: RichardChappell, NancyLebovitz, Peterdjones
comment by RichardChappell · 2011-03-30T22:16:57.243Z · LW(p) · GW(p)

The first question should really be: what does the apparent conceivability of zombies by humans imply about their possibility?

I'm aware that the LW community believes this, but I think it is incorrect. We have an epistemological dispute here about whether non-psychological facts (e.g. the fact that zombies are coherently conceivable, and not just that it seems so to me) can count as evidence. Which, again, reinforces my point that the disagreement between me and Eliezer/Lukeprog concerns epistemological principles, and not matters of empirical fact.

For more detail, see my response to TheOtherDave downthread.

Replies from: komponisto
comment by komponisto · 2011-03-31T03:04:51.748Z · LW(p) · GW(p)

We have an epistemological dispute here about whether non-psychological facts (e.g. the fact that zombies are coherently conceivable, and not just that it seems so to me) can count as evidence

At least around here, "evidence (for X)" is anything which is more likely to be the case under the assumption that X is true than under the assumption that X is false. So if zombies are more likely to be conceivable if non-physicalism is true than if physicalism is true, then I for one am happy to count the conceivability of zombies as evidence for non-physicalism.
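In symbols (nothing beyond Bayes' theorem is assumed here, plus 0 < P(X) < 1):

    E \text{ is evidence for } X \iff P(E \mid X) > P(E \mid \neg X)
                                 \iff P(X \mid E) = \frac{P(E \mid X)\,P(X)}{P(E)} > P(X)

That is, E is evidence for X exactly when updating on E raises the probability of X.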

But again, the question is: how do you know that zombies are conceivable? You say that this is a non-psychological fact; that's fine perhaps, but the only evidence for this fact that I'm aware of is psychological in nature, and this is the very psychological evidence that is undermined by cognitive science. In other words, the chain of inference still seems to be

people think zombies are conceivable => zombies are conceivable => physicalism is false

so that you still ultimately have the "work" being done by people's intuitions.

Replies from: RichardChappell
comment by RichardChappell · 2011-03-31T03:57:13.184Z · LW(p) · GW(p)

How do you know that "people think zombies are conceivable"? Perhaps you will respond that we can know our own beliefs through introspection, and the inferential chain must stop somewhere. My view is that the relevant chain is merely like so:

zombies are conceivable => physicalism is false

I claim that we may non-inferentially know some non-psychological facts, when our beliefs in said facts meet the conditions for knowledge (exactly what these are is of course controversial, and not something we can settle in this comment thread).

Replies from: komponisto
comment by komponisto · 2011-03-31T04:58:28.539Z · LW(p) · GW(p)

I know that people think zombies are conceivable because they say they think zombies are conceivable (including, in some cases, saying "zombies are conceivable").

To say that we may "non-inferentially know" something appears to violate the principle that beliefs require justification in order to be rational. By removing "people think zombies are conceivable", you've made the argument weaker rather than stronger, because now the proposition "zombies are conceivable" has no support.

In any case, you now seem as eminently vulnerable to Eliezer's original criticism as ever: you indeed appear to think that one can have some sort of "direct access" to the knowledge that zombies are conceivable that bypasses the cognitive processes in your brain. Or have I misunderstood?

Replies from: RichardChappell
comment by RichardChappell · 2011-03-31T17:21:45.389Z · LW(p) · GW(p)

Depending on what you mean by 'direct access', I suspect that you've probably misunderstood. But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.

Replies from: SilasBarta
comment by SilasBarta · 2011-03-31T17:32:22.162Z · LW(p) · GW(p)

You're still getting voted up on net, despite not explaining how, as you've claimed, the psychological fact of p-zombie plausibility is evidence for their conceivability (at least beyond references to long descriptions of your general beliefs).

Replies from: komponisto, None
comment by komponisto · 2011-03-31T18:24:43.776Z · LW(p) · GW(p)

the psychological fact of p-zombie plausibility is evidence for their conceivability

Actually he seems to have denied this here, so at this point I'm stuck wondering what the evidence for zombie-conceivability is.

Replies from: antigonus, TheOtherDave, SilasBarta
comment by antigonus · 2011-04-02T07:00:13.336Z · LW(p) · GW(p)

I believe he's trying to draw a distinction between two potential sources of evidence:

  1. The factual claim that people believe zombies are conceivable, and
  2. The actual private act of conceiving of zombies.

Richard is saying that his justification for his belief that p-zombies are conceivable lies in his successful conception of p-zombies. So what licenses him to believe that he's successfully conceived of zombies after all? His answer is that he has direct access to the contents of his conception, in the same way that he has access to the contents of his perception. You don't need to ask, "How do I know I'm really seeing blue right now, and not red?" Your justification for your belief that you're seeing blue just is your phenomenal act of noticing a real, bluish sensation. This justification is "direct" insofar as it comes directly from the sensation, and not via some intermediate process of reasoning which involves inferences (which can be valid or invalid) or premises (which can be true or false). Similarly, he thinks his justification for his belief that p-zombies are conceivable just is his p-zombie-ish conception.

A couple of things to note. One is that this evidence is wholly private. You don't have direct access to his conceptions, just as you don't have direct access to his perceptions. The only evidence Richard can give you is testimony. Moreover, he agrees that testimony of this sort is extremely weak evidence. But it's not the evidence he claims that his belief rests on. The evidence that Richard appeals to can be evidence-for-Richard only.

Another thing is that the direct evidence he appeals to is not "neutral." If p-zombies really are inconceivable, then he's in fact not conceiving of p-zombies at all, and so his conception, whatever it was, was never evidence for the conceivability of p-zombies in the first place (in just the same way that seeing red isn't evidence that you're seeing blue). So there's no easy way to separate the question of whether Richard's conception is evidence-for-him from the question of whether p-zombies are in general conceivable. The worthiness of Richard's source of evidence is inextricable from the actual truth or falsehood of the claim in contention, viz., that p-zombies are conceivable. But he thinks this isn't a problem.

If you want to move ahead in the discussion, then the following are your options:

  1. You simply deny that Richard is in fact conceiving of p-zombies. This isn't illegitimate, but it's going to be a conversation-stopper, since he'll insist that he does have them but that they're private.
  2. You accept that Richard can successfully conceive of p-zombies, but that this isn't good evidence for their possibility (or that the very notion of "possibility" in this context is far too problematic to be useful).
  3. You deny that we have direct access to anything, or that access to conceptions in particular is direct, or that one can ever have private knowledge. If you go this route, you have to be careful not to set yourself up for easy reductio. Specifically, you'd better not be led to deny the rationality of believing that you're seeing blue when, e.g., you highlight this text.

I hope this helps clear things up. It pains me when people interpret their own confusion as evidence of some deep flaw in academic philosophy.

Replies from: komponisto, wnoise, SilasBarta
comment by komponisto · 2011-04-04T19:07:18.544Z · LW(p) · GW(p)

I believe he's trying to draw a distinction between two potential sources of evidence:

  1. The factual claim that people believe zombies are conceivable, and
  2. The actual private act of conceiving of zombies.

I was very deliberately ignoring this distinction: "people" includes Richard, even for Richard. The point is that Richard cannot simply trust his intuition; he has to weigh his apparent successful conception of zombies against the other evidence, such as the scientific success of reductionism, the findings from cognitive science that show how untrustworthy our intuitions are, and in particular specific arguments showing how we might fool ourselves into thinking zombies are conceivable.

The evidence that Richard appeals to can be evidence-for-Richard only

This would appear to violate Aumann's agreement theorem.

If p-zombies really are inconceivable, then he's in fact not conceiving of p-zombies at all, and so his conception, whatever it was, was never evidence for the conceivability of p-zombies in the first place...The worthiness of Richard's source of evidence is inextricable from the actual truth or falsehood of the claim in contention

This is a confusion of map and territory. It is possible to be rationally uncertain about logical truths; and probability estimates (which include the extent to which a datum is evidence for a proposition) are determined by the information available to the agent, not the truth or falsehood of the proposition (otherwise, the only possible probability estimates would be 1 and 0). It may be rational to assign a probability of 75% to the truth of the Riemann Hypothesis given the information we currently have, even if the Riemann Hypothesis turns out to be false (we may have misleading information).

If you want to move ahead in the discussion, then the following are your options:

My position could be described by any of those three options -- in other words, they seem to differ only in the interpretation of terms like "conceivable", and don't properly hug the query.

1. You simply deny that Richard is in fact conceiving of p-zombies.

I must do so to the extent I believe zombies are in fact inconceivable. But I don't see why it should be a conversation-stopper: if Richard is right and I am wrong, Richard should be able to offer evidence that he is unusually capable of determining whether his apparent conception is in fact successful (if he can't, then he should be doubting his own successful conception himself).

2. You accept that Richard can successfully conceive of p-zombies, but that this isn't good evidence for their possibility

I can assent to this if "conceive" is interpreted in such a way that it is possible to conceive of something that is logically impossible (i.e. if it is granted that I can conceive of Fermat's Last Theorem being false).

3. You deny that we have direct access to anything, or that access to conceptions in particular is direct, or that one can ever have private knowledge.

"Private knowledge" in this sense is ruled out by Aumann, as far as I can tell. As for "direct access", well, that was Eliezer's original point, which I agree with: all knowledge is subject to some uncertainty due to the flaws in human psychology, and in particular all knowledge claims are subject to being undermined by arguments showing how the brain could generate them independently of the truth of the proposition in question. (In other words, the "genetic fallacy" is no fallacy, at least not necessarily.)

Specifically, you'd better not be led to deny the rationality of believing that you're seeing blue when, e.g., you highlight this text.

I think it's overwhelmingly likely that I'm seeing blue, but I could turn out to be mistaken.

Replies from: antigonus
comment by antigonus · 2011-04-05T00:59:17.232Z · LW(p) · GW(p)

I was very deliberately ignoring this distinction: "people" includes Richard, even for Richard. The point is that Richard cannot simply trust his intuition; he has to weigh his apparent successful conception of zombies against the other evidence, such as the scientific success of reductionism, the findings from cognitive science that show how untrustworthy our intuitions are, and in particular specific arguments showing how we might fool ourselves into thinking zombies are conceivable.

I don't think Richard said anything to dispute this. He never said that his direct access to the conceivability of zombies renders his justification indefeasible.

This would appear to violate Aumann's agreement theorem.

"Private knowledge" in this sense is ruled out by Aumann, as far as I can tell.

This is not a case in which you share common priors, so the theorem doesn't apply. You don't have, and in fact can never have, the information Richard (thinks he) has. Aumann's theorem does not imply that everyone is capable of accessing the same evidence.

This is a confusion of map and territory. It is possible to be rationally uncertain about logical truths; and probability estimates (which include the extent to which a datum is evidence for a proposition) are determined by the information available to the agent, not the truth or falsehood of the proposition (otherwise, the only possible probability estimates would be 1 and 0). It may be rational to assign a probability of 75% to the truth of the Riemann Hypothesis given the information we currently have, even if the Riemann Hypothesis turns out to be false (we may have misleading information).

That's certainly true, but I can't see its relevance to what I said. In part because of some of the very reasons you name here, we can be mistaken about whether an observation O confirms a hypothesis H or not, hence whether an observation is evidence for a hypothesis or not. If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I've made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O' and have been mistaking it for O. And since the observability of O' doesn't have any evidentiary bearing on H, he'll say, my observation wasn't actually the evidence that I took it to be. That's the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H. [Edited this sentence to make it clearer.]

But I don't see why it should be a conversation-stopper: if Richard is right and I am wrong, Richard should be able to offer evidence that he is unusually capable of determining whether his apparent conception is in fact successful (if he can't, then he should be doubting his own successful conception himself).

I don't really see what this could mean.

As for "direct access", well, that was Eliezer's original point, which I agree with: all knowledge is subject to some uncertainty due to the flaws in human psychology, and in particular all knowledge claims are subject to being undermined by arguments showing how the brain could generate them independently of the truth of the proposition in question. (In other words, the "genetic fallacy" is no fallacy, at least not necessarily.)

Richard didn't state that his evidence for the conceivability of zombies is absolutely incontrovertible. He just said he had direct access to it, i.e., he has extremely strong evidence for it that doesn't follow from some intermediary inference.

Replies from: komponisto
comment by komponisto · 2011-04-05T03:33:04.081Z · LW(p) · GW(p)

This is not a case in which you share common priors

Why not?

Postulating uncommon priors is not to be done lightly: it imposes specific constraints on beliefs about priors. See Robin Hanson's paper "Uncommon Priors Require Origin Disputes".

In any case, what I want to know is how I should update my beliefs in light of Richard's statements. Does he have information about the conceivability of zombies that I don't, or is he just making a mistake?

If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I've made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O' and have been mistaking it for O. And since the observability of O' doesn't have any evidentiary bearing on H, he'll say, my observation wasn't actually the evidence that I took it to be. That's the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H.

In such a dispute, there is some observation O'' that (both parties can agree) you made, which is equal to (or implies) either O or O', and the dispute is about which one of these it is the same as (or implies). But since O implies H and O' doesn't, the dispute reduces to the question of whether O'' implies H or not, and so you may as well discuss that directly.

In the case at hand, O is "Richard has conceived of zombies", O' is "Richard mistakenly believes he has conceived of zombies", and O'' is "Richard believes he has conceived of zombies". But in the discussion so far, Richard has been resisting attempts to switch from discussing O (the subject of dispute) to discussing O'', which obviously prevents the discussion from proceeding.
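Schematically, in notation I'm introducing just to restate the reduction:

    O \Rightarrow H, \qquad O' \not\Rightarrow H, \qquad O'' \text{ is granted by both sides;}
    \text{disputed: } O'' = O \text{ or } O'' = O'

Since classifying O'' as O rather than O' just is affirming that O'' implies H, the classification dispute and the dispute over H are the same dispute.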

Replies from: antigonus
comment by antigonus · 2011-04-05T04:31:28.754Z · LW(p) · GW(p)

Why not?

Because, again, you do not have access to the same evidence (if Richard is right about the conceivability of zombies, that is!). Robin's paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree. To reiterate, Richard (as I understand him) believes that you and he do not share the same information.

In any case, what I want to know is how I should update my beliefs in light of Richard's statements.

Well, you shouldn't take his testimony of zombie conceivability as very good evidence of zombie conceivability. In that sense, you don't have to sweat this conversation very much at all. This is less a debate about the conceivability of zombies and more a debate about the various dialectical positions of the parties involved in the conceivability debate. Do people who feel they can "robustly" conceive of p-zombies necessarily have to found their beliefs on publicly evaluable, "third-person" evidence? That seems to me the cornerstone of this particular discussion, rather than: Is the evidence for the conceivability of p-zombies any good?

In such a dispute, there is some observation O'' that (both parties can agree) you made, which is equal to (or implies) either O or O', and the dispute is about which one of these it is the same as (or implies). But since O implies H and O' doesn't, the dispute reduces to the question of whether O'' implies H or not, and so you may as well discuss that directly.

Yes, that's the "neutral" view of evidence Richard professed to deny.

The actual values of O and O' at hand are "That one particular mental event which occurred in Richard's mind at time t [when he was trying to conceive of zombies] was a conception of zombies," and "That one particular mental event which occurred in Richard's mind at time t was a conception of something other than zombies, or a non-conception." The truth-value of the O'' you provide has little bearing on either of these.

EDIT: Here's a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you're experiencing red?

EDIT 2: Looking over this once again, I think I should be less glib in my first paragraph. Note that I'm denying that you share common priors, but then appealing to the difference in your evidence to explain why this can be rational. If the difference in priors is a result of the difference in evidence, aren't they just posteriors?

The answer I personally would give is that there are different kinds of evidence. Posteriors are the result of conditionalizing on propositional evidence, such as "Snow is white." But not all evidence is propositional. In particular, many of our introspective beliefs are justified (when they are justified at all) by the direct access we have to our own experiences. Experiences are not propositions! You cannot conditionalize on an experience. You can conditionalize on a sentence like "I am having experience E," of course, but the evidence for that sentence is going to come from E itself, not another proposition.

Replies from: komponisto
comment by komponisto · 2011-04-06T01:15:46.302Z · LW(p) · GW(p)

Robin's paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree.

This is not correct. Even the original Aumann theorem only assumes that the Bayesians have (besides common priors) common knowledge of each other's probability estimates -- not that they share all the same information! (In fact, if they have common priors and the same information, then their posteriors are trivially equal.)

Robin's paper imposes restrictions on being able to postulate uncommon priors as a way of escaping Aumann's theorem: if you want to assume uncommon priors, certain consequences follow. (Roughly speaking, if Richard and I have differing priors, then we must also disagree about the origin of our priors.)
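For reference, the standard statement being invoked (Aumann, "Agreeing to Disagree", 1976):

    \text{If two agents have a common prior and their posteriors } q_1 = P(A \mid I_1),\; q_2 = P(A \mid I_2)
    \text{for an event } A \text{ are common knowledge, then } q_1 = q_2,
    \text{even though the private information } I_1, I_2 \text{ may differ.}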

In any event, you do get closer to what I regard as the point here:

Experiences are not propositions! You cannot conditionalize on an experience.

Another term for "conditionalize" is "update". Why can't you update on an experience?

The sense I get is that you're not wanting to apply the Bayesian model of belief to "experiences". But if our "experiences" affect our beliefs, then I see no reason not to.

The actual values of O and O' at hand are "That one particular mental event which occurred in Richard's mind at time t [when he was trying to conceive of zombies] was a conception of zombies," and "That one particular mental event which occurred in Richard's mind at time t was a conception of something other than zombies, or a non-conception." The truth-value of the O'' you provide has little bearing on either of these.

In these terms, O'' is simply "that one particular mental event occurred in Richard's mind" -- so again, the question is what the occurrence of that mental event implies, and we should be able to bypass the dispute about whether to classify it as O or O' by analyzing its implications directly. (The truth-value of O'' isn't a subject of dispute; in fact O'' is chosen that way.)

Here's a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridiculous and point to all sorts of icky human biases tainting our judgment. How do you update your belief that you're experiencing red?

It goes down, since the tribespeople would be more likely to say that if there is no visual experience than if there is. Of course, the amount it goes down by will depend on my other information (in particular, if I know they're congenitally blind, that significantly weakens this evidence).
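With purely illustrative numbers (made up only to make the arithmetic concrete), let H = "I am experiencing red" and E = "the tribespeople call visual experience ridiculous":

    P(H) = 0.99, \qquad P(E \mid H) = 0.6, \qquad P(E \mid \neg H) = 0.9
    P(H \mid E) = \frac{0.6 \times 0.99}{0.6 \times 0.99 + 0.9 \times 0.01} = \frac{0.594}{0.603} \approx 0.985

The belief drops, but only slightly, because congenitally blind tribespeople are nearly as likely to say that either way -- which is the point about the evidence being weakened.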

comment by wnoise · 2011-04-04T18:02:59.999Z · LW(p) · GW(p)

I would categorize my position as somewhere between 1 and 2, depending on what you mean by "conceiving". I think he has a name attached to some properties associated with p-zombies and a world in which they exist, but this doesn't mean a coherent model of such a world is possible, nor that he has one. That is, I believe that following out the necessary implications will eventually lead to contradiction. My evidence for this is quite weak, of course.

I can certainly talk about an even integer larger than two that is not expressible as the sum of two primes. But that doesn't mean it's logically possible. It might be, or it might not. Does a name without a full-fledged model count as conceiving, or not? Either way, it doesn't appear to be significant evidence in favor.

comment by SilasBarta · 2011-04-04T15:21:53.039Z · LW(p) · GW(p)

I think the critics of Richard Chappell here are taking route 2 in your categorization.

Replies from: antigonus
comment by antigonus · 2011-04-04T17:39:29.690Z · LW(p) · GW(p)

komponisto and TheOtherDave appear to have been taking route 3 (challenging Richard's purported access to evidence for zombie conceivability).

Replies from: SilasBarta
comment by SilasBarta · 2011-04-04T18:47:25.358Z · LW(p) · GW(p)

I think they were stuck on the task of getting him to explain what that evidence was (and what evidence the access he does have gives him), which in turn was complicated by his insistence that he wasn't referring to a psychological fact of ease of conceivability.

comment by TheOtherDave · 2011-03-31T18:49:16.784Z · LW(p) · GW(p)

If it helps (which I don't expect it does), I've been pursuing the trail of this (and related things) here.

Thus far his response seems to be that certain beliefs don't require evidence (or, at least, don't require "independent justification," which may not be the same thing), and that his beliefs about zombies "cohere well" with his other beliefs (though I'm not sure which beliefs they cohere well with, or whether they cohere better with them than their negation does), and that there's no reason to believe it's false (though it's not clear what role reasons for belief play in his decision-making in the first place).

Replies from: komponisto
comment by komponisto · 2011-03-31T19:06:59.052Z · LW(p) · GW(p)

So, the Bayesian translation of his position would seem to be that he has a high prior on zombies being conceivable. But of course, that in turn translates to "zombies are conceivable for reasons I'm not being explicit about". Which is, naturally, the point: I'd like to know what he thinks he knows that I don't.

Regarding coherence, and reasons to believe it's false: the historical success of reductionism is a very good reason to believe it's false, it seems to me. Despite Richard's protestations, it really does appear to me that this is a case of undue reluctance on the part of philosophers to update their intuitions, or at least to let them be outweighed by something else.

comment by SilasBarta · 2011-03-31T18:30:40.988Z · LW(p) · GW(p)

Good point. I think my biggest frustration is that I can't tell what point Richard Chappell is actually making so I can know whether I agree with it. It's one thing to make a bad argument; it's quite another to have a devastating argument that you keep secret.

comment by [deleted] · 2011-03-31T18:43:25.921Z · LW(p) · GW(p)

You would probably have had more opportunity to draw it out of him if it weren't for the karma system discouraging him from posting further on the topic. Remember that next time you're tallying the positives and negatives of the karma system.

Replies from: SilasBarta
comment by SilasBarta · 2011-03-31T18:54:00.037Z · LW(p) · GW(p)

I don't follow: he's getting positive net karma from this discussion, just not as much as other posters. Very few of his comments, if any, actually went negative. In what sense is the karma system discouraging him?

Replies from: None
comment by [deleted] · 2011-03-31T19:02:59.726Z · LW(p) · GW(p)

I don't follow: he's getting positive net karma

Yes, slightly positive. Whether something encourages or discourages a person is a fact, not about the thing considered in itself, but about its effect on the person. The fact that the karma is slightly net positive is a fact about the thing considered in itself. The fact that he himself wrote:

But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.

tells us something about its effect on the person.

Replies from: SilasBarta
comment by SilasBarta · 2011-03-31T19:21:48.140Z · LW(p) · GW(p)

Yes, he's taking that as evidence that his posts are not valued. And indeed, like most posts that don't (as komponisto and I noted) clearly articulate what their argument is, his posts aren't valued (relative to others in the discussion). And he is correctly reading the evidence.

I was interpreting the concerns about "low karma being discouraging" as saying that if your karma goes negative, you actually get posting restrictions. But that's not happening here; it's just that Richard Chappell is being informed that his posts aren't as valued as the others on this topic. Still positive value, mind you -- just not as high as others.

In the absence of a karma system, he would either be less informed about his unhelpfulness in articulating his position, or be informed through other means. I don't understand what your complaint is.

Yes, people who cannot articulate their position rigorously are going to have their feelings hurt at some level when people aren't satisfied with their explanations. What does that have to do with the merits of the karma system?

Replies from: None
comment by [deleted] · 2011-03-31T19:46:06.447Z · LW(p) · GW(p)

Yes, people who cannot articulate their position rigorously are going to have their feelings hurt at some level when people aren't satisfied with their explanations

You are speculating about possible reasons that people might have had for failing to award karma points.

What does that have to do with the merits of the karma system?

The position of your sentence implies that "that" refers to your speculation about the reasons that people might have had for withholding karma points. But my statement concerning the merits of the karma system had not referred to that speculation. Here is my statement again:

You would probably have had more opportunity to draw it out of him if it weren't for the karma system discouraging him from posting further on the topic.

I am pointing out that had he not been discouraged as early as he was in the exchange, then you would probably have had more opportunity to draw him out. Do you dispute this? And then I wrote:

Remember that next time you're tallying the positives and negatives of the karma system.

I have left it up to you to decide whether your loss of this opportunity is on the whole a positive or a negative.

Replies from: SilasBarta
comment by SilasBarta · 2011-03-31T20:07:21.804Z · LW(p) · GW(p)

You are speculating about possible reasons that people might have had for failing to award karma points.

Kind of. I was drawing on my observations about how the karma system is used. I've generally noticed (as have others) that people with outlier views do get modded up very highly, so long as they articulate their position clearly. For example: Mitchell Porter on QM, pjeby on PCT, lukeprog on certain matters of mainstream philosophy, Alicorn on deontology and (some) feminism, byrnema on theism, XiXiDu on LW groupthink.

Given that history, I felt safe in chalking up his "insufficiently" high karma to inscrutability rather than "He's deviating from the party line -- get him!" And you don't get to ignore that factor (of controversial, well-articulated positions being voted up) by saying you "weren't referring to that speculation".

I am pointing out that had he not been discouraged as early as he was in the exchange, then you would probably have had more opportunity to draw him out. Do you dispute this?

My response is that, to the extent that convoluted, error-obscuring posting is discouraged, I'm perfectly fine with such discouragement, and I don't want to change the karma system to be more favoring of that kind of posting.

If Richard couldn't communicate his insight about "p-zombies being so easy to conceive of" on the first three tries, we're probably not missing out on much by him being discouraged to post the fifty-third.

My most recent comment directed toward him was not saying, "No! Please don't leave us! I love your deep insights!" Rather, it was saying, "Hold on -- there's an easy way to dig yourself out of this hole, as there has been the whole time. Just tell us why [...]."

Moreover, to the extent that the karma system doesn't communicate to him what it did, that just means we'd have to do it another way, or fail to communicate it at all, neither of which is particularly appealing to me.

comment by NancyLebovitz · 2011-03-30T07:16:55.088Z · LW(p) · GW(p)

Thanks for laying this out. I'm one of the people who thinks philosophical zombies don't make sense, and now I understand why-- they seem like insisting that a result is possible while eliminating the process which leads to the result.

This doesn't explain why it's so obvious to me that pz are unfeasible and so obvious to many other people that pz at least make enough sense to be a basis for argument. Does the belief or non-belief in pz correlate with anything else?

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T14:11:52.960Z · LW(p) · GW(p)

Since no physical law is logically necessary, it is always logically possible that an effect could fail to follow from a cause.

comment by Peterdjones · 2011-04-22T14:00:47.742Z · LW(p) · GW(p)

since "logically possible" just means "conceviable" there doesn't need to be.

comment by [deleted] · 2011-03-30T07:29:36.643Z · LW(p) · GW(p)

It's hard to be sure that I'm using the right words, but I am inclined to say that it's actually the connection between epistemic conceivability and metaphysical possibility that I have trouble with. To illustrate the difference as I understand it, someone who does not know better can epistemically conceive that H2O is not water, but nevertheless it is metaphysically impossible that H2O is not water.

I am not confident I know the meanings of the philosophical terms of the preceding comment, but if we employ mathematics-based meanings of the words "logic" and "coherent", then it is perfectly logically coherent for someone who happens to be ignorant of the truth to conceive that H2O is not water, but this of course tells us very little of any significant interest about the world. It is logically coherent because try as he might, there is no way for someone ignorant of the facts to purely logically derive a contradiction from the claim that H2O is not water, and therefore reveal any logical incoherence in the claim.

To my way of understanding the words, there simply is no logical incoherence in a claim considered against the background of your (incomplete) knowledge unless you can logically deduce a contradiction from inside the bounds of your own knowledge. But that's simply not a very interesting fact if what you're interested in is not the limitations of logic or of your knowledge but rather the nature of the world.

I know Chalmers tries to bridge the gap between epistemic conceivability and metaphysical possibility in some way, but at critical points in his argument (particularly right around where he claims to "rescue" the zombie argument and brings up "panprotopsychism") he loses me.

Replies from: AlephNeil, AlephNeil, RichardChappell
comment by AlephNeil · 2011-03-30T10:04:25.262Z · LW(p) · GW(p)

My view on this question is similar to that of Eric Marcus (pdf).

When you think you're imagining a p-zombie, all that's happening is that you're imagining an ordinary person and neglecting to imagine their experiences, rather than (impossibly) imagining the absence of any experience. (You can tell yourself "this person has no experiences" and then it will be true in your model that HasNoExperiences(ThisPerson) but there's no necessary reason why a predicate called "HasNoExperiences" must track whether or not people have experiences.)

Here, I think, is how Chalmers might drive a wedge between the zombie example and the "water = H2O" example:

Imagine that we're prescientific people familiar with a water-like substance by its everyday properties. Suppose we're shown two theories of chemistry - the correct one under which water is H2O and another under which it's "XYZ" - but as yet have no way of empirically distinguishing them. Then when we epistemically conceive of water being XYZ, we have a coherent picture in our minds of 'that wet stuff we all know' turning out to be XYZ. It isn't water, but it's still wet.

To epistemically but not metaphysically conceive of p-zombies would be to imagine a scenario where some physically normal people lack 'that first-person experience thing we all know' and yet turn out to be conscious after all. But whereas there's a semantic gap between "wet stuff" and "real water" (such that only the latter is necessarily H2O), there doesn't seem to be any semantic gap between "that first-person thing" and "real consciousness". Consciousness just is that first-person thing.

Perhaps you can hear the sound of some hairs being split. I don't think we have much difference of opinion, it's just that the idea of "conceiving of something" is woolly and incapable of precision.

Replies from: None, RichardChappell
comment by [deleted] · 2011-03-30T12:04:48.031Z · LW(p) · GW(p)

Thanks, I like the paper. I understand the core idea is that to imagine a zombie (in the relevant sense of imagine) you would have to do it first person - which you can't do, because there is nothing first person to imagine. I find the argument for this persuasive.

And this is just what I have been thinking:

the idea of "conceiving of something" is woolly and incapable of precision.

comment by RichardChappell · 2011-03-30T16:41:16.434Z · LW(p) · GW(p)

When you think you're imagining a p-zombie, all that's happening is that you're imagining an ordinary person and neglecting to imagine their experiences, rather than (impossibly) imagining the absence of any experience. (You can tell yourself "this person has no experiences" and then it will be true in your model that HasNoExperiences(ThisPerson) but there's no necessary reason why a predicate called "HasNoExperiences" must track whether or not people have experiences.)

This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness? It's not like we can imagine a microphysical duplicate of our world that's lacking chairs. Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair. It's analytic. But there's no such conceptual connection between neurons-instantiating-computations and consciousness, which arguably precludes identifying the two.

Replies from: FAWS, TheOtherDave, None, komponisto, Clippy, quen_tin, Peterdjones
comment by FAWS · 2011-03-30T16:58:19.081Z · LW(p) · GW(p)

But there's no such conceptual connection between neurons-instantiating-computations and consciousness

Only for people who haven't properly internalized that they are brains. Just like people who haven't internalized that heat is molecular motion could imagine a cold object with molecules vibrating just as fast as in a hot object.

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T21:27:18.129Z · LW(p) · GW(p)

Distinguish physical coldness from phenomenal coldness. We can imagine phenomenal coldness (i.e. the sensation) being caused by different physical states -- and indeed I think this is metaphysically possible. But what's the analogue of a zombie world in the case of physical heat (as defined in terms of its functional role)? We can't coherently imagine such a thing, because physical heat is a functional concept; anything with the same microphysical behaviour as an actual hot (cold) object would thereby be physically hot (cold). Phenomenal consciousness is not a functional concept, which makes all the difference here.

Replies from: FAWS
comment by FAWS · 2011-03-30T21:43:51.477Z · LW(p) · GW(p)

You are simply begging the question. For me philosophical zombies make exactly as much sense as cold objects that behave like hot objects in every way. I can even imagine someone accepting that molecular movement explains all observable heat phenomena, but still confused enough to ask where hot and cold come from, and whether it's metaphysically possible for an object with a lot of molecular movement to be cold anyway. The only important difference between that sort of confusion and the whole philosophical zombie business in my eyes is that heat is a lot simpler so people are far, far less likely to be in that state of confusion.

Replies from: RichardChappell, SilasBarta
comment by RichardChappell · 2011-03-30T22:09:11.770Z · LW(p) · GW(p)

This comment is unclear. I noted that our heat concepts are ambiguous between what we can call physical heat (as defined by its causal-functional role) and phenomenal heat (the conscious sensations). Now you write:

I can even imagine someone accepting that molecular movement explains all observable heat phenomena, but still confused enough to ask where hot and cold come from...

Which concept of 'hot' and 'cold' are you imagining this person to be employing? If the phenomenal one, then they are (in my view) correct to see a further issue here: this is simply the consciousness debate all over again. If the physical-functional concept, then they are transparently incoherent.

Now, perhaps you are suggesting that you only have a physical-function conception of consciousness, and no essentially first-personal (phenomenal) concepts at all. In that case, we are talking past each other, because you do not have the concepts necessary to understand what I am talking about.

Replies from: FAWS
comment by FAWS · 2011-03-30T22:37:11.022Z · LW(p) · GW(p)

You are over-extending the analogy. The heat case has a dichotomy (heat vs. molecular movement) analogous to first and third person, but if you try to replace it with the very same dichotomy the analogy breaks. The people I imagine are thinking about heat as a property of the objects themselves, so non-phenomenal, but using words like functional or physical would imply accepting molecular movement as the thing itself, which they are not doing. They are talking about the same thing as physical heat, but conceptualize it differently.

Now, perhaps you are suggesting that you only have a physical-function conception of consciousness, and no essentially first-personal (phenomenal) concepts at all.

No, and I imagine you also have some degree of separation between the concepts of physical heat and molecular movement even though you know them to be the same, so you can e.g. make sense of cartoons with freeze rays fueled by "cold energy". The fact that I understand "first-person" and "third-person consciousness" to be the same thing doesn't mean I have no idea at all what people who (IMO confusedly) treat them as different things mean when they are talking about first person consciousness.

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T22:58:00.159Z · LW(p) · GW(p)

but using words like functional or physical would imply accepting molecular movement as the thing itself, which they are not doing.

Yes and no. It's a superficially open question what microphysical phenomenon fills the macro-level functional role used to define physical heat (causing state changes, making mercury expand in the thermometer, or whatever criteria we use to identify 'heat' in the world). So they can have a (transparently) functional concept of heat without immediately recognizing what fills the role. But once they have all the microphysical facts -- the Laplacean demon's knowledge, say -- it would clearly be incoherent for them to continue to see a micro-macrophysical "gap" the way that we (putatively) find a physical-phenomenal gap.

Replies from: FAWS
comment by FAWS · 2011-03-30T23:21:26.634Z · LW(p) · GW(p)

(knowledge that molecular movement is sufficient to explain observable macro-phenomena was assumed, so the first half of the reply does not apply)

But once they have all the microphysical facts -- the Laplacean demon, say -- it would clearly be incoherent for them to continue to see a micro-macrophysical "gap" the way that we (putatively) find a physical-phenomenal gap.

You and I would agree on that, but presumably they would disagree on being incoherent. And I see no important distinction between their claim to coherence and that of philosophical zombies, other than simplicity of the subject matter.

Replies from: RichardChappell
comment by RichardChappell · 2011-03-31T00:01:59.517Z · LW(p) · GW(p)

You can show that they're incoherent by (i) explicating their macro-level functional conception of heat, and then (ii) showing how the micro functional facts entail the macro functional facts.

The challenge posed by the zombie argument is to get the physicalist to offer an analogous response. This requires either (i) explicating our concept of phenomenal consciousness in functional terms, or else (ii) showing how functional-physical facts can entail non-functional phenomenal facts (by which I mean, facts that are expressible using non-functional phenomenal concepts).

Do you think you can do one of these? If so, which one?

Replies from: None, FAWS
comment by [deleted] · 2011-03-31T00:36:02.003Z · LW(p) · GW(p)

You can show that they're incoherent by (i) explicating their macro-level functional conception of heat, and then (ii) showing how the micro functional facts entail the macro functional facts.

Okay, let's imagine this. First, to explicate "macro functional facts", we have the examples:

causing state changes, making mercury expand in the thermometer, or whatever criteria we use to identify 'heat' in the world

So, you try to show someone that jiggling around the molecules of mercury will cause the mercury to expand. How exactly would you do this? I'll try to imagine it. You present them with some mercury. You lend them an instrument which lets them see the individual molecules of the mercury. Then you start jiggling the molecules directly by some means (demonic powers maybe), and the mercury expands. Or, alternatively, you apply what they recognize as heat to mercury, and you show them that the molecules are jiggling faster. So, in experience after experience, you show them that what they recognize as heat rises if and only if the molecules jiggle faster.

This is not mere observation of correlation, because you are manipulating the molecules and the mercury by one means or another rather than passively observing.

But what they can say to you is, "I accept that there seems to be some sort of very tight relationship between the jiggling and the heat, but this doesn't mean that the jiggling is the heat. After all, we already know that there is a tight relationship between manipulations of the brain and conscious experiences, but that doesn't disprove dualism."

What could you say in response? Maybe: "if you jiggle the molecules, the molecules spread apart, i.e., the mercury expands." They could reply, "you are assuming that the molecules are identical with the mercury. But all I see is nothing but a tight correlation between where the molecules are and where the mercury is - similar to the tight correlation between where the brain is and where the conscious mind finds itself, but that doesn't disprove dualism."

How do you force a reluctant person to accept the identification of certain macro facts with certain micro facts?

But of course, you don't really have to, because when people see such strong correlations, their natural inclination is to stop seeing two things and start seeing one thing. They might even lose the ability to see two things - for example, when we look at the world with our two eyes, what we see is one image with depth, rather than two flat images (though we can see the individual images by closing one eye). So of course, someone who has experienced the correlation between a micro fact and macro fact will have no trouble merging them into one fact merely seen from two perspectives (micro versus macro).

In principle, the brain could be manipulated in all sorts of ways. Nobody would be willing to submit to arbitrary manipulations, but in principle it could be done, and someone who had undergone such manipulations might develop a strong identification with his physical brain.

comment by FAWS · 2011-03-31T01:56:30.812Z · LW(p) · GW(p)

You can show that they're incoherent by (i) explicating their macro-level functional conception of heat, and then (ii) showing how the micro functional facts entail the macro functional facts.

They already agree on that, just as zombie postulators will (usually?) grant that a functional view is sufficient to explain all outward signs of consciousness. Their postulated opinion that there is something more to the question is IMO only more transparently incoherent than the equivalent opinion about zombies. If you were claiming that the functional view was insufficient to explain people writing about conscious experience, that would mean not sharing the same incoherence.

For example, assume I stubbed my toe. From my first person perspective I feel pain. From a third person perspective a nerve signal is sent to the brain and causes various parts of the neural machinery to do things. If I look at what I call "pain" from my first person perspective I can discriminate various, but perhaps not all, parts of the sensation. I can feel where it comes from, spatially, and that the part of my body it comes from is that toe. From a third person perspective this information must be encoded somewhere, since the person can answer the corresponding questions, or simply point, and perhaps we can already tell from neuroimaging? From an evolutionary perspective it's obvious why that information is present.

Back to first person, I strongly want it to stop. Also verifiable and explainable. I have difficulty averting my attention, find myself physically reacting in various ways unless I consciously stop it, I have pain related associations like the word "ouch" or the color red, and so on. Nothing I can observe first person except the base signal and baggage I can deduce to have a correlate third person stands out.

The signal itself seems uninteresting enough that I'm not sure I would even notice if it was replaced with a different signal, as long as all the baggage was kept the same (and that didn't imply my memories changed to match). I'm not even completely sure that I really perceive such a base signal and that it's not just the various types of baggage bleeding together. If such a base signal is there for me to perceive and is what made me write this, it obviously must also be part of the functional side. If it isn't, it doesn't require any explanation.

comment by SilasBarta · 2011-03-30T22:19:31.826Z · LW(p) · GW(p)

still confused enough to ask ... whether it's metaphysically possible for an object with a lot of molecular movement to be cold anyway.

Not so fast! That is possible, and that was EY's point here:

Suppose there was a glass of water, about which, initially, you knew only that its temperature was 72 degrees. Then, suddenly, Saint Laplace reveals to you the exact locations and velocities of all the atoms in the water. You now know perfectly the state of the water, so, by the information-theoretic definition of entropy, its entropy is zero. Does that make its thermodynamic entropy zero? Is the water colder, because we know more about it?

Ignoring quantumness for the moment, the answer is: Yes! Yes it is!

And then he gave the later example of the flywheel, which we see as cooler than a set of metal atoms that has the same velocity profile but is not constrained to move in a circle:

But the more important point: Suppose you've got an iron flywheel that's spinning very rapidly. That's definitely kinetic energy, so the average kinetic energy per molecule is high. Is it heat? That particular kinetic energy, of a spinning flywheel, doesn't look to you like heat, because you know how to extract most of it as useful work, and leave behind something colder (that is, with less mean kinetic energy per degree of freedom).

Replies from: FAWS
comment by FAWS · 2011-03-30T22:40:02.256Z · LW(p) · GW(p)

Doesn't touch the point of the analogy though. Add "disordered" or something wherever appropriate.

Replies from: SilasBarta
comment by SilasBarta · 2011-03-30T22:55:05.197Z · LW(p) · GW(p)

Doesn't touch the point of the analogy though.

I think it does. Richard was making the point that your analogy blurs an important distinction between phenomenal heat and physical heat (thereby regressing to the original dilemma).

And it turns out this is important even in the LW perspective: the physical facts about the molecular motion are not enough to determine how hot you experience it to be (i.e. the phenomenal heat); it's also a function of how much you know about the molecular motion.

comment by TheOtherDave · 2011-03-30T17:01:45.809Z · LW(p) · GW(p)

If you met someone who said with a straight face "Of course I can imagine something that is physically identical to a chair, but lacks the fundamental chairness that chairs in our experience partake of... and is therefore merely a fake chair, although it will pass all our physical tests of being-a-chair nevertheless," would you consider that claim sufficient evidence for the existence of a non-physical chairness?

Or would you consider other explanations for that claim more likely?

Would you change your mind if a lot of people started making that claim?

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T21:29:32.280Z · LW(p) · GW(p)

You misunderstand my position. I don't think that people's claims are evidence for anything.

When I invite people to imagine the zombie world, this is not because once they believe that they can do so, this belief (about their imaginative capabilities) is evidence for anything. Rather, it's the fact that the zombie world is coherently conceivable that is the evidence, and engaging in the appropriate act of imagination is simply a psychological precondition for grasping this evidence.

That's not to say that whenever you believe that you've coherently imagined X, you thereby have the fact that X is coherently conceivable amongst your evidence. For this may not be a fact at all.

(This probably won't make sense to anyone who doesn't know any epistemology. Basically I'm rejecting the dialectical or "neutral" view of evidence. Two participants in a debate may be unable to agree even about what the evidence is, because sometimes whether something qualifies as evidence or not will depend on which of the contending views is actually correct. Which is to reiterate that the disagreement between me and Lukeprog, say, is about epistemological principles, and not any empirical matter of fact.)

Replies from: TheOtherDave, Tyrrell_McAllister, TheOtherDave, Alicorn
comment by TheOtherDave · 2011-03-30T21:51:49.963Z · LW(p) · GW(p)

I agree that your belief that you've coherently imagined X does not imply that X is coherently conceivable.

I agree that, if it were a fact that the zombie world were coherently conceivable, that could be evidence of something.

I don't understand your reasons for believing that the zombie world is coherently conceivable.

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T22:02:00.287Z · LW(p) · GW(p)

Are you assuming that in order for me to be able to justifiedly believe and reason from the premise that the zombie world is conceivable, I need to be able to give some independent justification for this belief? That way lies global skepticism.

I can tell you that the belief coheres well with my other beliefs, which is a necessary but not sufficient condition for my being justified in believing it. There's no good reason to think that it's false. (Though again, I don't mean to suggest that this fact suffices to make it reasonable to believe.) Whether it's reasonable to believe depends, in part, on facts that cannot be agreed upon within this dialectic: namely, whether there really is any contradiction in the idea.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-03-31T02:54:31.537Z · LW(p) · GW(p)

At the moment, I'm asking you what your reasons are for believing that the zombie world is coherently conceivable; I will defer passing judgment on them until I'm confident that I understand them, as I try to avoid judging things I don't understand.

So, no, I'm not making that assumption, though I'm not rejecting that assumption either.

Which of your other beliefs cohere better with a belief that the zombie world is coherently conceivable than with a belief that it isn't?

comment by Tyrrell_McAllister · 2011-03-31T20:24:25.666Z · LW(p) · GW(p)

When I invite people to imagine the zombie world, this is not because once they believe that they can do so, this belief (about their imaginative capabilities) is evidence for anything. Rather, it's the fact that the zombie world is coherently conceivable that is the evidence, and engaging in the appropriate act of imagination is simply a psychological precondition for grasping this evidence.

If someone were to claim the following, would they be making the same point as you are making?

"The non-psychological fact that 'SS0 + SS0 = SSSS0' is a theorem of Peano arithmetic is evidence that 2 added to 2 indeed yields 4. A psychological precondition for grasping this evidence is to go through the process of mentally verifying the steps in a proof of 'SS0 + SS0 = SSSS0' within Peano arithmetic.

"This line of inquiry would provide evidence to the verifier that 2+2 = 4. However, properly speaking, the evidence would not be the psychological fact of the occurrence of this mental verification. Rather, the evidence is the logical fact that 'SS0 + SS0 = SSSS0' is a theorem of Peano arithmetic."

comment by TheOtherDave · 2011-03-30T21:45:49.453Z · LW(p) · GW(p)

Yes, when you make statements about how easy it is to imagine this thing or that thing, you do indeed seem to me to be presenting those statements as evidence of something.

If I've misunderstood that, then I'll drop the subject here.

comment by Alicorn · 2011-03-30T21:38:41.255Z · LW(p) · GW(p)

I don't think that people's claims are evidence for anything.

I claim to be wearing blue today.

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T21:53:40.111Z · LW(p) · GW(p)

It's a restricted quantifier :-)

comment by [deleted] · 2011-03-30T19:13:13.849Z · LW(p) · GW(p)

This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness? It's not like we can imagine a microphysical duplicate of our world that's lacking chairs.

But these kinds of imagining are importantly dissimilar. Compare:

1) imagine the physical properties without imagining consciousness

2) imagine a microphysical duplicate of our world that's lacking chairs

The key phrases are: "without imagining" and "that's lacking". It is one thing to imagine one thing without imagining another, and quite another to imagine one thing that's lacking another. For example, I can imagine a ball without imagining its color (indeed, as experiments have shown, we can see a ball without seeing its color), but I may not be able to imagine a ball that's lacking color.

This is no small distinction.

To bring (2) into line with (1) we would need to change it to this:

2a) imagine a microphysical duplicate of our world without imagining chairs

And this, I submit, is possible. In fact it is possible not only to imagine a physical duplicate of our world without imagining chairs, it is (in parallel to the ball example above) possible to see a duplicate of our world (namely the world itself) without seeing (i.e. perceiving, recognizing) chairs. It's a regular occurrence that we fail to see (to recognize) what's right in front of us in plain view. It is furthermore possible for a creature like Laplace's Demon to imagine every particle in the universe and all their relations to each other without recognizing, in its own imagined picture, that a certain group of particles make up a chair, etc. The Demon can in other words fail to see the forest for the trees in its own imagined world.

Now, if instead of changing (2) to bring it into line with (1), we change (1) to bring it into line with (2), we get:

1a) imagine a microphysical duplicate of our world that's lacking consciousness

Now, your reason for denying (2) was:

Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair.

Converting this, we have the following proposition:

Once we've imagined the atoms-arranged-personwise, that's all it is to be a person.

But this seems to be nothing other than the issue in question, namely, the issue of whether there is anything more to being a person than atoms-arranged-personwise. If you assume that there is, then you are assuming the possibility of philosophical zombies. In other words, this particular piece in the argument for the possibility of philosophical zombies assumes the possibility of philosophical zombies.

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T22:32:13.986Z · LW(p) · GW(p)

we get: 1a) imagine a microphysical duplicate of our world that's lacking consciousness

Right, that's the claim. I explain why I don't think it's question-begging here and here

Replies from: pjeby
comment by pjeby · 2011-03-31T01:36:31.523Z · LW(p) · GW(p)

we get: 1a) imagine a microphysical duplicate of our world that's lacking consciousness

Right, that's the claim. I explain why I don't think it's question-begging here and here

How can you perform that step unless you've first defined consciousness as something that's other-than-physical?

If the "consciousness" to be imagined were something we could point to and measure, then it would be a physical property, and would thus be duplicated in our imagining. Conversely, if it is not something that we can point to and measure, then where does it exist, except in our imagination?

The logical error in the zombie argument comes from failing to realize that the mental models we build in our minds do not include a term for the mind that is building the model. When I think, "Richard is conscious", I am describing a property of my map of the world, not a property of the world. "Conscious" is a label that I apply, to describe a collection of physical properties.

If I choose to then imagine that "Zombie Richard is not conscious", then I am saying, "Zombie Richard has all the same properties, but is not conscious." I can imagine this in a non-contradictory way, because "conscious" is just a label in my brain, which I can choose to apply or not apply.

All this is fine so far, until I try to apply the results of this model to the outside world, which contains no label "conscious" in the first place. The label "conscious" (like "sound" in the famous tree-forest-hearing question) is strictly something tacked on to the physical events to describe a common grouping.

In other words, my in-brain model is richer than the physical world - I can imagine things that do not correspond to the world, without contradiction in that more-expressive model.

For example, I can label Charlie Sheen as "brilliant" or "lunatic", and ascribe these properties to the exact same behaviors. I can imagine a world in which he is a genius, and one in which he is an idiot, and yet, he remains exactly the same and does the same things. I can do this because it's just my label -- my opinion -- that changes from one world to the other.

The zombie world is no different: in one world, you have the opinion that I'm conscious, and in the other, you have the opinion that I'm not. It's your failure to notice that "conscious" is an opinion or judgment -- specifically, your opinion or judgment -- that makes it appear as though it is proving something more profound than the proposition that people can hold contradictory opinions about the same thing.

If you map the argument from your imagination to the real world, then you can imagine/opine that people are conscious or zombies, while the physical world remains the same. This isn't contradictory, because it's just an opinion, and you can change your opinion whenever you like.

The reason the zombie world doesn't then work as an argument for non-materialism, is that it cheats by dropping out the part where the person doing the experiment is the one holding the opinion of consciousness. In your imagined world, you are implicitly holding the opinion, then when you switch to thinking about the real world, you're ignoring the part that it's still just you, holding an opinion about something.

comment by komponisto · 2011-03-30T17:21:12.500Z · LW(p) · GW(p)

we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness?

And that is a question of cognitive science, is it not?

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T22:19:52.299Z · LW(p) · GW(p)

Ha, indeed, poorly worded on my part :-)

Replies from: SilasBarta
comment by SilasBarta · 2011-03-30T22:24:41.797Z · LW(p) · GW(p)

What was poor about it? The rest of your point is consistent with that wording. What would you put there instead so as to make your point more plausible?

Replies from: RichardChappell
comment by RichardChappell · 2011-03-30T22:48:37.579Z · LW(p) · GW(p)

Good question. It really needed to be stated in more objective terms (which will make the claim less plausible to you, but more logically relevant):

It's a fact that a scenario containing a microphysical duplicate of our world but lacking chairs is incoherent. It's not a fact that the zombie world is incoherent. (I know, we dispute this, but I'm just explaining my view here.)

With the talk of what's easily imaginable, I invite the reader to occupy my dialectical perspective, and thus to grasp the (putative) fact under dispute; but I certainly don't think that anything I'm saying here forces you to take my position seriously. (I agree, for example, that the psychological facts are not sufficient justification.)

Replies from: SilasBarta
comment by SilasBarta · 2011-03-30T22:59:28.489Z · LW(p) · GW(p)

Okay, but there was some evidence you were trying to draw on that you previously phrased as "it's easy to imagine p-zombies..." -- and presumably that evidence can be concisely stated, without having to learn your full dialectical perspective. Whether or not you think it's "not a fact that the zombie world is incoherent", there was something you thought was relevant, and that something was related (though not equivalent!) to the ease of imagining p-zombies. What was that?

(And FWIW, I do notice you are replying to many different people here and appreciate your engagement.)

comment by Clippy · 2011-03-30T16:47:27.093Z · LW(p) · GW(p)

This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness?

World-models that are deficient at this aspect of world representation in ape brains.

comment by quen_tin · 2011-03-30T20:07:14.379Z · LW(p) · GW(p)

Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair. It's analytic. But there's no such conceptual connection between neurons-instantiating-computations and consciousness, which arguably precludes identifying the two.

That's true. The difference between chairs and consciousness is that chair is a 3rd person concept, whereas consciousness is a 1st person concept. Imagining a world without consciousness is easy, because we never know whether there are consciousnesses in the world or not - consciousness is not an empirical datum, it's something we speculate others have by analogy with ourselves.

comment by Peterdjones · 2011-04-22T14:15:22.740Z · LW(p) · GW(p)

Or in simpler terms: we can't see how particular physics produces particular consciousness, even if we accept in general that physics produces consciousness. The conceivability of p-zombies doesn't mean they are really possible, or that physicalism is false, but it does mean that our explanations are inadequate. Reductivism is not, as it stands, an explanation of consciousness, but only a proposal of the form an explanation would have.

comment by AlephNeil · 2011-03-30T09:05:00.267Z · LW(p) · GW(p)

My view on this question is similar to that of Eric Marcus (pdf).

When you think you're imagining a p-zombie, all that's happening is that you're imagining an ordinary person and neglecting to imagine their experiences, rather than (impossibly) imagining the absence of any experience. (You can tell yourself "this person has no experiences" and then it will be true in your model that "HasNoExperiences(ThisPerson)" but there's no necessary reason why an inert predicate called "HasNoExperiences" must track whether or not people have no experiences.)

comment by RichardChappell · 2011-03-30T16:34:51.791Z · LW(p) · GW(p)

Aleph basically has it right in his reply: 'water' is a special case because it's a rigid designator, picking out the actual watery stuff in all counterfactual worlds (even when some other stuff, XYZ, is the watery stuff instead of our water).

Conceiving of the "twin earth" world (where the watery stuff isn't H2O) is indeed informative, since if this really is a coherent scenario then there really is a metaphysically possible world where the watery stuff isn't H2O. It happens that we shouldn't call that stuff "water", if it differs from the watery stuff in our world, but that's mere semantics. The reality is that there is a possible world corresponding to the one we're (coherently) conceiving of.

For more detail, see Misusing Kripke; Misdescribing Worlds, or my undergrad thesis on Modal Rationalism

comment by mtraven · 2011-03-29T16:57:18.827Z · LW(p) · GW(p)

A few points:

  • Philosophy is (by definition, more or less) meta to everything else. By its nature, it has to question everything, including things that here seem to be unquestionable, such as rationality and reductionism. The elevation of these into unquestionable dogma creates a somewhat cult-like environment.

  • Often people who dismiss philosophy end up going over the same ground philosophers trod hundreds or thousands of years ago. That's one reason philosophers emphasize the history of ideas so much. It's probably a mistake to think you are so smart you will avoid all the pitfalls they've already fallen into.

  • I agree with the linked post of Eliezer's that much of analytic philosophy (and AI) is mostly just slapping formal terms over unexamined everyday ideas, which is why most of it bores me to tears.

  • Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don't assume that just because you can't understand it, it doesn't have anything to say. Complaining because they use what seems like an impenetrable language is about on the level of an American traveling to Europe and complaining that the people there don't speak English. That said, Sturgeon's law definitely applies, perhaps at the 99% level.

  • I'm recommending Bruno Latour to everyone these days. He's a French sociologist of science and philosopher, and if you can get past the very French style of abstraction he uses, he can be mind-blowing in the manner described above.

Replies from: jwdink, lukeprog, ohwilleke, alfredmacdonald
comment by jwdink · 2011-03-30T21:36:09.075Z · LW(p) · GW(p)

Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don't assume that just because you can't understand it, it doesn't have anything to say.

It's not that people coming from the outside don't understand the language. I'm not just frustrated that Hegel uses esoteric terms and writes poorly. (Much the same could be said of Kant, and I love Kant.) It's that, when I ask "hey, okay, if the language is just tough, but there is content to what Hegel/Heidegger/etc is saying, then why don't you give a single example of some hypothetical piece of evidence in the world that would affirm/disprove the putative claim?", I never get one. In other words, my accusation isn't that continental philosophy is hard, it's that it makes no claims about the objective hetero-phenomenological world.

Typically, I say this to a Hegelian (or whoever), and they respond that they're not trying to talk about the objective world, perhaps because the objective world is a bankrupt concept. That's fine, I guess-- but are you really willing to go there? Or would you claim that continental philosophy can make meaningful claims about actual phenomena, which can actually be sorted through?

I guess I'm wholeheartedly agreeing with the author's statement:

You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses.

Replies from: mtraven
comment by mtraven · 2011-03-31T01:06:33.099Z · LW(p) · GW(p)

I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy.

So the question is whether philosophy's position as meta to science and everything else can provide utility. I've found it useful, YMMV.

BTW here is the latest round of Heideggerian critique of AI (pdf) which, again, you may or may not find useful.

Replies from: jwdink
comment by jwdink · 2011-03-31T21:37:03.754Z · LW(p) · GW(p)

I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy.

Hmm... I suspect the phrasing "evidence/phenomena in the world" might give my assertion an overly mechanistic sound to it. I don't mean verifiable/disprovable physical/atomistic facts must be cited -- that would be begging the question. I just mean any meaningful argument must make reference to evidence that can be pointed to in support of, or in criticism of, the given argument. Note that "evidence" doesn't exclude "mental phenomena." If we don't ask that philosophy cite evidence, what distinguishes it from meaningless nonsense, or fiction?

I'm trying to write a more thorough response to your statement, but I'm finding it really difficult without the use of an example. Can you cite some claim of Heidegger's or Hegel's that you would assert is meaningful, but does not spring out of an argument based on empirical evidence? Maybe then I can respond more cogently.

Replies from: jwdink, mtraven
comment by jwdink · 2011-03-31T21:58:46.076Z · LW(p) · GW(p)

Unless you think the "Heideggerian critique of AI" is a good example. In which case I can engage that.

comment by mtraven · 2011-04-01T04:10:42.542Z · LW(p) · GW(p)

I'm not at all a fan of Hegel, and Heidegger I don't really understand, but I linked to a paper that describes the interaction of Heideggerian philosophy and AI which might answer your question.

I still think you don't have your categories straight. Philosophy does not make "claims" that are proved or disproved by evidence (although there is a relatively new subfield called "experimental philosophy"). Think of it as providing alternate points of view.

To illustrate: your idea that the only valid utterances are those that are supported by empirical evidence is a philosophy. That philosophy itself can't be supported by empirical evidence; it rests on something else.

Replies from: jwdink
comment by jwdink · 2011-04-01T18:11:55.571Z · LW(p) · GW(p)

That philosophy itself can't be supported by empirical evidence; it rests on something else.

Right, and I'm asking you what you think that "something else" is.

I'd also re-assert my challenge to you: if philosophy's arguments don't rest on some evidence of some kind, what distinguishes it from nonsense/fiction?

Replies from: mtraven
comment by mtraven · 2011-04-02T01:19:24.628Z · LW(p) · GW(p)

Right, and I'm asking you what you think that "something else" is.

Hell, how would I know? Let's say "thinking" for the sake of argument.

I'd also re-assert my challenge to you: if philosophy's arguments don't rest on some evidence of some kind, what distinguishes it from nonsense/fiction?

People think it makes sense.

"Definitions may be given in this way of any field where a body of definite knowledge exists. But philosophy cannot be so defined. Any definition is controversial and already embodies a philosophic attitude. The only way to find out what philosophy is, is to do philosophy." -- Bertrand Russell

comment by lukeprog · 2011-03-29T18:16:20.588Z · LW(p) · GW(p)

A reply on just one point:

I don't mean to make reductionism unquestionable, I'm just not making reductionism "my battle" so much anymore. Heck, for several years I spent my time arguing about theism. I'm just moving on to other subjects, and taking for granted the non-existence of magical beings, and so on. Like I say in my original post, I'm glad other people are working those out, and of course if I was presented with good reason to believe in magical beings or something, I hope I would have the honesty to update. Nobody's suggesting discrimination or criminal charges for not "believing in" reductionism.

comment by ohwilleke · 2011-03-31T01:56:49.104Z · LW(p) · GW(p)

"Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago."

Really? When I look at Aquinas or Plato or Aristotle, I see people mostly asking questions that we no longer care about because we have found better ways of dealing with the issues that made those questions worth thinking about.

Scholastic discourse about the Bible or angels makes much less sense when you have a historical-critical context to explain how it emerged in the way that it did, and a canon of contemporaneous secular works to make sense of what was going on in their world at the time.

Philosophical atomism is irrelevant once you've studied modern physics and chemistry.

The notion that we have Platonic a priori knowledge looks pretty silly without a great deal of massaging as we learn more about the mechanism of brain development.

Also, not all new perspectives on the world have value. Continental philosophy and post-modernism are to philosophy what mid-20th century art music is to music composition. It is a rabbit hole that a whole generation of academics got sucked into and wasted their time on. It turned out that the future of worthwhile music was elsewhere, in people like Elvis and the Beatles and rappers and Nashville studios and Motown artists and resurrections of the greats of the classical and romantic periods in new contexts, and the tone poems and dissonant musics and other academic experiments of that era were just garbage. They lost sight of what music was for, just as the continental philosophers and post-modernist philosophers lost sight of what philosophy was for.

The language is impenetrable because they have nothing to say. I know what it is like to read academic literature, for example, in the sciences or economics, that is impenetrable because it is necessarily so, but that isn't it. People who use sophisticated jargon when it is really necessary are also capable of speaking much more clearly about the essence of what is going on - people like Richard Feynman. But our modern-day philosophical sophisticates are known to no one but each other and are not adding to our larger understanding. Instead, all of the other disciplines are busy purging themselves of all that dreck so that they can get back on solid ground.

Replies from: gjm, TheAncientGeek, mtraven
comment by gjm · 2017-04-13T22:31:38.646Z · LW(p) · GW(p)

mid-20th century art music [...] tone poems and dissonant musics [...] were just garbage

wat?

Here are a few pieces of mid-20th century art music. I'm taking "mid-20th-century" to mean 1930 to 1970. Some of them are quite dissonant. None of them is actually a tone poem, as it happens. They are all pieces that (1) I like, (2) are well regarded by the classical music "establishment", (3) are pretty accessible even to (serious) listeners of fairly conservative taste, (4) are still being performed, recorded, etc., (5) are clearly part of the mainstream of mid-20th-century art music, and (6) seem to me to show no lack of awareness of what music is for.

  • 1930: Stravinsky, Symphony of Psalms
  • 1936: Barber, Adagio for strings
  • 1941: Tippett, A child of our time
  • 1942: Prokofiev, Piano sonata #7
  • 1945: Britten, Peter Grimes
  • 1948: Strauss, Four last songs
  • 1960: Shostakovich, String quartets #7 & 8
  • 1965: Bernstein, Chichester Psalms

(I make no claim that these are the best or most important works by their composers. I wanted things reasonably well spread out over the period in question, and subject to that picked fairly randomly.)

Are these all garbage? Perhaps you had in mind only music "weirder" than those: Second Viennese School twelve-tone music (though I'd call that early rather than mid 20th century), Cage-style experimentalism, and so forth. I'm not at all convinced that that stuff had no value or influence, but in any case it's far from all that was happening in western art music in the middle of the 20th century.

Replies from: g_pepper
comment by g_pepper · 2017-04-14T02:07:39.153Z · LW(p) · GW(p)

Great list of 20th century compositions! 20th century art music gets an undeservedly bad rap, IMO. I would add a few more composers:

  • 1930: Kurt Weill: Aufstieg und Fall der Stadt Mahagonny
  • 1935: George Gershwin: Porgy and Bess
  • 1940-1941: Olivier Messiaen: Quatuor pour la fin du temps
  • 1944: Aaron Copland: Appalachian Spring

Kurt Weill's work might be considered theater music rather than art music, but I would argue that it is both of those things. Messiaen is admittedly avant garde and a bit outside of the mainstream, but is approachable by a wide range of audiences, including many who would not care for the composers of the Second Viennese School. Many of Messiaen's compositions could have been added to the list, so I picked one of the best known.

Replies from: gjm
comment by gjm · 2017-04-14T14:46:57.763Z · LW(p) · GW(p)

For what it's worth, I omitted Weill and Gershwin because I thought ohwilleke might not consider them arty enough, Messiaen because I wasn't confident enough ohwilleke would concede that his music sounds good, and Copland because Appalachian Spring was the obvious work to use and I already had enough from around that time :-). Of course I agree that otherwise those works are all worthy of inclusion in any list like mine.

comment by TheAncientGeek · 2017-04-08T13:59:25.397Z · LW(p) · GW(p)

"Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago."

Really

E.g. reinventing logical positivism!

Replies from: gjm
comment by gjm · 2017-04-13T22:02:52.772Z · LW(p) · GW(p)

hundreds or thousands of years ago

reinventing logical positivism

Logical positivism isn't even one hundred years old yet.

comment by mtraven · 2011-03-31T03:26:01.402Z · LW(p) · GW(p)

"Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago."

See the paper on the Heideggerian critique of AI I posted earlier.

The notion that we have Platonic a priori knowledge looks pretty silly without a great deal of massaging as we learn more about the mechanism of brain development.

Oh? I would think that one of the lessons of neuroscience is that we are in fact hardwired for a great many things.

The language is impenetrable because they have nothing to say.

How do you know? That is, what evidence other than your lack of understanding do you have for this?

comment by alfredmacdonald · 2012-12-15T08:41:42.221Z · LW(p) · GW(p)

Often people who dismiss philosophy end up going over the same ground philosophers trod hundreds or thousands of years ago. That's one reason philosophers emphasize the history of ideas so much. It's probably a mistake to think you are so smart you will avoid all the pitfalls they've already fallen into.

While I agree that it's important to avoid succumbing to these pitfalls, philosophy curricula tend to emphasize not just the history of ideas but the history of philosophers, which makes the process of getting up to speed on where contemporary philosophy is take entirely too long. It is not so important that we know what Augustine or Hume thought so much as why, by today's lights, their ideas can't be right.

Also, "the history of ideas" is really broad, because there are a lot of ideas that by today's standards are just absurd. Including the likes of Anaximander and Heraclitus in "the history of ideas" is probably a waste of time and cognitive energy.

comment by atucker · 2011-03-29T03:18:08.159Z · LW(p) · GW(p)

Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).

Could you please write a translation key for these?

I think it would help LWers read mainstream philosophy, and people with philosophy backgrounds read LW.

Replies from: lukeprog
comment by lukeprog · 2011-03-29T03:21:06.271Z · LW(p) · GW(p)

Not a bad idea, though it's far more complicated than termX = termY.

Replies from: atucker
comment by atucker · 2011-03-29T23:07:40.013Z · LW(p) · GW(p)

Fair enough.

I think that reading about how the terms differ would actually help a lot with getting a brief background in the subject, more than a direct but inaccurate one-to-one mapping.

comment by Bugmaster · 2011-11-28T22:30:08.326Z · LW(p) · GW(p)

Think like a cognitive scientist and AI programmer.

Is it possible to think "like an AI programmer" without being an AI programmer? If the answer is "no", as I suspect it is, then doesn't this piece of advice basically say, "don't be a philosopher, be an AI programmer instead"? If so, then it directly contradicts your point that "philosophy is not useless".

To put it in a slightly different way, is creating FAI primarily a philosophical challenge, or an engineering challenge?

Replies from: TimS, lessdazed
comment by TimS · 2011-11-29T03:30:03.328Z · LW(p) · GW(p)

Creating AI is an engineering challenge. Making FAI requires an understanding of what we mean by Friendly. If you don't think that is a philosophy question, I would point to the multiplicity of inconsistent moral theories throughout history to try to convince you otherwise.

Replies from: Bugmaster
comment by Bugmaster · 2011-11-29T03:50:24.220Z · LW(p) · GW(p)

Thanks, that does make sense. But, in this case, would "thinking like an AI programmer" really help you answer the question of "what we mean by Friendly"? Of course, once we do get an answer, we'd need to implement it, which is where thinking like an AI programmer (or actually being one) would come in handy. But I think that's also an engineering challenge at that point.

FWIW, I know there are people out there who would claim that friendliness/morality is a scientific question, not a philosophical one, but I myself am undecided on the issue.

Replies from: Vaniver
comment by Vaniver · 2011-11-29T04:02:37.608Z · LW(p) · GW(p)

But, in this case, would "thinking like an AI programmer" really help you answer the question of "what we mean by Friendly"? Of course, once we do get an answer, we'd need to implement it, which is where thinking like an AI programmer (or actually being one) would come in handy. But I think that's also an engineering challenge at that point.

If you don't think like an AI programmer, you will be tempted to use concepts without understanding them well enough to program them. I don't think that's reduced to the level of 'engineering challenge.'

Replies from: Bugmaster
comment by Bugmaster · 2011-11-29T04:12:55.879Z · LW(p) · GW(p)

Are you saying that it's impossible to correctly answer the question "what does 'friendly' mean?" without understanding how to implement the answer by writing a computer program? If so, why do you think that?

Edit: added "correctly" in the sentence above, because it's trivially possible to just answer "bananas !" or something :-)

Replies from: DSimon
comment by DSimon · 2011-11-29T04:27:02.434Z · LW(p) · GW(p)

I don't think the division is so sharp as all that. Rather, what Vaniver is getting at, I think, is that one is capable of correctly and usefully answering the question "What does 'Friendly' mean?" in proportion to one's ability to reason algorithmically about subproblems of Friendliness.

Replies from: Bugmaster, Vaniver
comment by Bugmaster · 2011-11-29T21:35:02.861Z · LW(p) · GW(p)

I see, so you're saying that a philosopher who is not familiar with AI might come up with all kinds of philosophically valid definitions of friendliness, which would still be impossible to implement (using a reasonable amount of space and time) and thus completely useless in practice. That makes sense. And (presumably) if we assume that humans are kind of similar to AIs, then the AI-savvy philosopher's ideas would have immediate applications, as well.

So, that makes sense, but I'm not aware of any philosophers who have actually followed this recipe. It seems like at least a few such philosophers should exist, though... do they?

Replies from: DSimon
comment by DSimon · 2011-11-29T23:19:43.161Z · LW(p) · GW(p)

[P]hilosophically valid definitions of friendliness, which would still be impossible to implement (using a reasonable amount of space and time) and thus completely useless in practice.

Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as-yet no known algorithmic implementation.

Programmers like to say "You don't truly understand how to perform a task until you can teach a computer to do it for you". A computer, or any other sort of rigid mathematical mechanism, is unable to make the 'common sense' connections that a human mind can make. We humans are so good at that sort of thing that we often make many such leaps in quick succession without even noticing!

Implementing an idea on a computer forces us to slow down and understand every step, even the ones we make subconsciously. Otherwise the implementation simply won't work. One doesn't get as thorough a check when explaining things to another human.

Philosophy in general is enriched by an understanding of math and computation, because it provides a good external view of the situation. This effect is of course only magnified when the philosopher is specifically thinking about how to represent human mental processes (such as volition) in a computational way.
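
To make that concrete with a deliberately mundane toy example (mine; the function names are invented): even an instruction as "obvious" as "find the most similar word" hides choices -- what similarity means, how ties break -- that an implementation forces us to spell out.

```python
# "Find the most similar word" sounds obvious to a human, but a program
# forces an explicit choice of what "similar" means. Here: edit distance.
def edit_distance(a: str, b: str) -> int:
    # Levenshtein distance via dynamic programming -- one choice among many.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]

def most_similar(word: str, candidates: list[str]) -> str:
    # A second hidden choice: ties are broken by list order, silently.
    return min(candidates, key=lambda c: edit_distance(word, c))

print(most_similar("frendly", ["friendly", "fiendish", "trendy"]))  # friendly
```

A human doing this task never notices making either choice; the computer refuses to proceed until both are made.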

Replies from: Bugmaster
comment by Bugmaster · 2011-11-29T23:26:12.584Z · LW(p) · GW(p)

I agree with most of what you said, except for this:

Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as-yet no known algorithmic implementation.

Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques. Assuming the techniques are demonstrated to work reliably, of course.

Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.

Replies from: DSimon
comment by DSimon · 2011-11-29T23:39:08.786Z · LW(p) · GW(p)

Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques.

Indeed, I should have been more specific; not all processes used in AI need to be analogous to humans, of course. All I meant was that it is very easy, when trying to provide a complete spec of a human process, to accidentally lean on other human mental processes that seem on zeroth-glance to be "obvious". It's hard to spot those mistakes without an outside view.

Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.

To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy-out individual techniques, since they're all likely to be non-locally-cohesive and heavily interdependent.

Replies from: Bugmaster
comment by Bugmaster · 2011-11-30T00:37:07.693Z · LW(p) · GW(p)

It's hard to spot those mistakes without an outside view.

Right, that makes sense.

To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy-out individual techniques...

True, but I wasn't thinking of using an uploaded mind to extract and study those ideas, but simply to plug the mind into your overall architecture and treat it like a black box that gives you the right answers, somehow. It's a poor solution, but it's better than nothing -- assuming that the Singularity is imminent and we're all about to be nano-recycled into quantum computronium, unless we manage to turn the AI into an FAI in the next 72 hours.
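A minimal sketch of what I mean by the black-box hack (the interface is hypothetical, not a real design):

```python
from abc import ABC, abstractmethod

class VolitionOracle(ABC):
    """Anything that can answer 'is this action acceptable?'"""
    @abstractmethod
    def judge(self, proposed_action: str) -> bool: ...

class UploadedHuman(VolitionOracle):
    """The hack: run the emulation and read off its verdict, with no
    explicit theory of *why* it answers the way it does."""
    def judge(self, proposed_action: str) -> bool:
        raise NotImplementedError("requires whole-brain emulation")

class Agent:
    def __init__(self, oracle: VolitionOracle):
        self.oracle = oracle  # the black box plugged into the loop

    def act(self, candidate_actions):
        # Keep only the actions the black box approves of.
        return [a for a in candidate_actions if self.oracle.judge(a)]
```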

comment by Vaniver · 2011-11-29T14:57:20.768Z · LW(p) · GW(p)

Endorsed.

comment by lessdazed · 2011-11-29T01:54:27.670Z · LW(p) · GW(p)

is creating FAI primarily a philosophical challenge, or an engineering challenge?

An analogy:

http://eccc.hpi-web.de/report/2011/108/

Computational complexity theory is a huge, sprawling field; naturally this essay will only touch on small parts of it... One might think that, once we know something is computable, whether it takes 10 seconds or 20 seconds to compute is obviously the concern of engineers rather than philosophers. But that conclusion would not be so obvious, if the question were one of 10 seconds versus 10^(10^10) seconds!
And indeed, in complexity theory, the quantitative gaps we care about are usually so vast that one has to consider them qualitative gaps as well. Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number.
More precisely, complexity theory asks the question: how do the resources needed to solve a problem scale with some measure n of the problem size...

Need it be primarily one or the other? But if I must pick one, I pick philosophy.

Replies from: Bugmaster
comment by Bugmaster · 2011-11-29T02:29:32.846Z · LW(p) · GW(p)

An analogy: http://eccc.hpi-web.de/report/2011/108/

I'm afraid I don't see how this article is analogous. The article points out that computational complexity puts a very real limit on what can be computed in practice. Thus, even if you'd proved that something is computable in principle, it may not be computable in our current Universe, with its limited lifespan. You can apply computational complexity to practical problems (e.g., devising an optimal route for inspecting naval buoys) as well as to theoretical ones (e.g., discarding the hypothesis that the human brain is a giant lookup table). But these are still engineering and scientific concerns, not philosophical ones.
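To put a number on the lookup-table point, here is a back-of-the-envelope sketch (the vocabulary and conversation-length figures are arbitrary assumptions, not data):

```python
VOCABULARY = 10_000   # distinct words a speaker might use (assumed)
WORDS_PER_TURN = 10   # assumed
TURNS = 50            # one short conversation (assumed)

# One table entry per possible conversation history:
entries = VOCABULARY ** (WORDS_PER_TURN * TURNS)   # = 10^2000
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

print(f"entries needed: about 10^{len(str(entries)) - 1}")
print("fits in the universe:", entries < ATOMS_IN_OBSERVABLE_UNIVERSE)
# entries needed: about 10^2000; fits in the universe: False
```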

Need it be primarily one or the other? But if I must pick one, I pick philosophy.

I still don't understand why. If you want to know the probability of FAI being feasible at all, you're asking a scientific question; in order to answer it, you'll need to formulate a hypothesis or two, gather evidence, employ Bayesian reasoning to compute the probability of your hypothesis being true, etc. If, on the other hand, you are trying to actually build an FAI, then you are solving a specific engineering problem; of course, determining whether FAI is feasible or not would be a great first step.
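Concretely, the Bayesian step might look like this (the numbers are made-up priors and likelihoods, purely for illustration):

```python
prior_feasible = 0.5        # assumed prior that FAI is feasible
p_e_given_feasible = 0.8    # P(evidence | feasible), assumed
p_e_given_not = 0.3         # P(evidence | not feasible), assumed

p_e = (p_e_given_feasible * prior_feasible
       + p_e_given_not * (1 - prior_feasible))
posterior = p_e_given_feasible * prior_feasible / p_e
print(f"P(feasible | evidence) = {posterior:.3f}")   # 0.727
```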

So, I can see how you'd apply science or engineering to the problem, but I don't see how you'd apply philosophy.

Replies from: lessdazed
comment by lessdazed · 2011-11-29T03:02:54.668Z · LW(p) · GW(p)

If you want to know the probability of FAI being feasible at all, you're asking a scientific question

To fill in the content the term "FAI" stands for, science isn't enough. Engineering is by guess and check, I suppose, but not really.

Replies from: Bugmaster
comment by Bugmaster · 2011-11-29T03:52:29.520Z · LW(p) · GW(p)

Sorry, I couldn't parse your comment at all; I'm not sure what you mean by "content". My hunch is that you meant the same thing as TimS, above; if so, my reply to him should be relevant. If not, my apologies, but could you please explain what you meant?

Replies from: lessdazed
comment by lessdazed · 2011-11-29T04:09:05.817Z · LW(p) · GW(p)

I meant what I think he did, so you got it.

comment by Emile · 2011-03-28T21:24:58.978Z · LW(p) · GW(p)

I'd welcome more quality discussion of philosophical topics such as morality here. You occasionally see people pop up and say confused things about morality, like

It has been suggested that animals have less subjective experience than people. For example, it would be possible to have an animal that counts as half a human for the purposes of morality.

... that got downvoted, but I still get the impression that confused thinking like that pops up more often on the topic of morality than on others (except Friendly AI), and that Eliezer didn't do a good enough job teaching sane and clear thinking about morality to his readers - including myself.

And morality is a topic that's whack in the middle of philosophy, and AI and statistics don't teach us much about it (though cognitive science and experimental philosophy do). So I have the hope that more input from academic philosophy might raise the quality of thinking here about morality.

Replies from: lukeprog, Oscar_Cunningham
comment by lukeprog · 2011-03-28T21:29:06.549Z · LW(p) · GW(p)

Metaethics is my specialty, so I've got some 'dissolving moral problems' posts coming up, but I need to write some dependencies first.

comment by Oscar_Cunningham · 2011-04-01T19:20:12.454Z · LW(p) · GW(p)

Why is the quoted example confused? It seems to me that subjective experience has something to do with morality, and in such a way that having less of it would make you less morally significant.

Replies from: Emile
comment by Emile · 2011-04-01T20:11:42.416Z · LW(p) · GW(p)

Possibly "something to do with morality", yes, but moral worth isn't equal to subjective experience to the point that you can use that to calculate the ratio between "how much some animal is worth" and "how much a human is worth". Or, maybe it is, but we'd need an actual argument, not just assuming it's so.

comment by buybuydandavis · 2014-11-24T20:27:37.266Z · LW(p) · GW(p)

Seems like an appropriate article to relay a bit of wisdom from E.T. Jaynes.

Jaynes quotes a colleague: “Philosophers are free to do whatever they please, because they don’t have to do anything right.”

comment by Daniel_Burfoot · 2011-03-28T22:56:00.241Z · LW(p) · GW(p)

This feels more like a style guide than a "vision of how to do philosophy".

Replies from: Perplexed
comment by Perplexed · 2011-03-28T23:05:46.506Z · LW(p) · GW(p)

This feels more like a style guide than a "vision of how to do philosophy".

I agree, though it might be redeemed by (1) an argument why this style is the best for doing philosophy successfully, and (2) an explanation of how success at doing philosophy ought to be measured and why anyone should seek this kind of success.

The question that Prase asks, nearby, seems to be related.

Replies from: lukeprog
comment by lukeprog · 2011-03-29T01:45:30.249Z · LW(p) · GW(p)

All throughout the 'style guide', I gave reasons for why these suggestions matter. Then in the penultimate paragraph, I repeated these reasons.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-03-29T19:53:13.147Z · LW(p) · GW(p)

I'd phrase the complaint this way: the "vision" part said much about how to communicate philosophical results once you've obtained them, but little about how to obtain those results in the first place. Out of the 11 items, only two (6 and 8) are about the latter instead of the former.

Of course how to obtain philosophical results is a much harder problem, so you can't really be blamed for not having a huge amount to say on that. It's really just an expectation management issue. If you declare a "vision of how to do philosophy", people will naturally expect more than writing tips.

Replies from: lukeprog
comment by lukeprog · 2011-03-29T20:26:33.500Z · LW(p) · GW(p)

Yes, I understand, but the subject of how to do philosophy is, like, half of Less Wrong. That's why I kept talking about dissolving semantic problems, reductionism, thinking like a cognitive scientist and an AI programmer. Those are all part of my vision of how to do philosophy, and I talked about them in the post and linked to articles on those subjects, but of course I can't repeat all of that content in this little blog post.

Don't be fooled by the count of items on the list devoted to style vs. content. Item #6 is really, really important, and covered in detail throughout the archives of Less Wrong.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-03-29T20:55:46.431Z · LW(p) · GW(p)

In that case, the problem is ironically one of style. Given that #6 is really, really important, you didn't indicate its importance in any way stylistically. It's listed smack in the middle of a bunch of writing tips. It's not bolded or italicized. It doesn't link to or cite any other articles.

Replies from: lukeprog
comment by lukeprog · 2011-03-29T22:36:06.196Z · LW(p) · GW(p)

Sheesh you guys are picky. :)

Seriously though, I've improved the original post in response. Thanks.

comment by Vladimir_Nesov · 2011-03-28T22:24:30.170Z · LW(p) · GW(p)

Use your rationality training, but avoid language that is unique to Less Wrong. All these terms and ideas have standard names outside of Less Wrong

Most, probably not all. Universal statements like this are brittle and rarely correct.

Replies from: lukeprog
comment by lukeprog · 2011-03-28T22:27:46.535Z · LW(p) · GW(p)

True, dat.

Fixed. Thanks.

comment by Vladimir_Nesov · 2011-03-28T21:35:46.360Z · LW(p) · GW(p)

justification bottoms out in the lens that can see its flaws

This statement seems misleading, since justification doesn't actually "hit bottom", doesn't stop. For contrast, a quotation from the post:

So what I did in practice, does not amount to declaring a sudden halt to questioning and justification. I'm not halting the chain of examination at the point that I encounter Occam's Razor, or my brain, or some other unquestionable. The chain of examination continues - but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques.

Replies from: SilasBarta
comment by SilasBarta · 2011-03-28T21:51:41.112Z · LW(p) · GW(p)

I think that sentence was written just to include the names of the articles when linking them, not because it should be taken literally.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-03-28T22:01:51.798Z · LW(p) · GW(p)

I don't expect Luke misunderstood the posts in question to the point of making this error, so I'm not talking about his intention behind writing the statement. I'm merely pointing out that, as written, it's somewhat misleading, whatever the circumstances that shaped it.

Replies from: lukeprog
comment by lukeprog · 2011-03-28T22:30:16.629Z · LW(p) · GW(p)

I'm struggling to come up with a very quick way to say this more accurately in the post. Can you think of anything?

Replies from: Vaniver, Vladimir_Nesov
comment by Vaniver · 2011-03-28T22:48:46.537Z · LW(p) · GW(p)

Does "justification rests in the lens that can see its flaws" work?

Replies from: lukeprog
comment by lukeprog · 2011-03-29T01:48:04.963Z · LW(p) · GW(p)

I chose this one, thanks.

comment by Vladimir_Nesov · 2011-03-28T23:00:07.385Z · LW(p) · GW(p)

Say, "The process of justification should never stop, not even flaws in the lens itself are to be overlooked."

comment by i _ i * · 2020-05-26T23:22:19.749Z · LW(p) · GW(p)

Why would it be bad for philosophy to work (primarily) with intuitions? And why would philosophy need empirical evidence? (This relates to the point in the linked post criticizing dualists for not having any evidence.) Empirical evidence is not what is (primarily) used in mathematics. If everything could be solved with empirical evidence, there would be no need for philosophy. I don't see how scientific evidence is better than intuition, or even possible without it... In case you mean not only empirical evidence but also logical/mathematical (?) evidence: why is intuition not also evidence itself, i.e., why not let it be a PART of the reasoning?

Also, I don't get why we would need training in the cognitive sciences. What do you mean by saying that the brain is the tool and we need to know how to use it? How will it help my reasoning to know (for example) how neural networks are connected? And why is it necessary? (To the point where it is all you need, and one can just discard philosophical thinking?)

Replies from: TAG
comment by TAG · 2020-05-27T13:45:20.681Z · LW(p) · GW(p)

Before saying intuitions are bad, you need to show that you can manage without them -- entirely.

comment by MarkusRamikin · 2013-11-27T19:40:09.032Z · LW(p) · GW(p)

as long as you are human there is no final victory.

Hm, that makes a nifty quote.

comment by lukeprog · 2011-05-14T02:31:41.802Z · LW(p) · GW(p)

One thing I mean by saying that philosophers could benefit from 'thinking like AI programmers' is that forcing yourself to think about the algorithm that would generate a certain reality can guard against superstition, because magic doesn't reduce to computer code.
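A toy illustration of what I mean (every function here is hypothetical): a natural phenomenon keeps decomposing into smaller mechanisms, while the "magical" explanation bottoms out in a primitive nobody can write.

```python
def water_boils(temp_c: float, pressure_atm: float) -> bool:
    # Reduces further: vapor pressure, molecular kinetics, ...
    return temp_c >= boiling_point(pressure_atm)

def boiling_point(pressure_atm: float) -> float:
    # Crude linear approximation; the point is that it *can* be refined
    # into ever-smaller, fully specified mechanisms.
    return 100.0 - 28.0 * (1.0 - pressure_atm)

def curse_takes_effect(victim, witch) -> bool:
    # Step one of writing the algorithm for magic... and we're stuck:
    # there is no sub-mechanism to call.
    raise NotImplementedError("magic doesn't reduce to computer code")
```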

I recently came across Leibniz saying much the same thing in a passage where he imagines a future language of symbolic logic that had not yet been invented:

The characters would be quite different from what has been imagined up to now... The characters of this script should serve invention and judgment as in algebra and arithmetic... It will be impossible to write, using these characters, chimerical notions.

For the record, I didn't get this little gem from reading Leibniz. I stumbled onto it in Gleick's new history of information, The Information.

Replies from: Vladimir_Nesov, rhollerith_dot_com
comment by Vladimir_Nesov · 2011-05-14T13:16:27.389Z · LW(p) · GW(p)

For the record, I didn't get this little gem from reading Leibniz.

I appreciate this disclaimer.

comment by RHollerith (rhollerith_dot_com) · 2011-05-14T03:55:09.563Z · LW(p) · GW(p)

What I take Leibniz to have meant was that when he used math he was much less prone to self-deception, and to mistakenly believing he'd had an insight, than when he used natural language - so he tried (and failed) to extend math so that he could use it to talk and think about all of the things he used language to talk about, including human and personal things.

Gottlob Frege, the creator of predicate logic, had a similar ambition.

Note that creating FAI that will extrapolate the volition of humans requires using math (broadly construed) or formal language to talk about some human things. In particular, you must formally define "human", "volition", and the extrapolation process. The fact that Leibniz and Frege did not get very far with their ambition (although the creation of predicate logic strikes me as some progress) suggests that for us to teach ourselves how to do that might require nontrivial effort -- although I tend to think that we have a head start in some of our mathematical tools. In particular, the AIXI formalism and (to a lesser extent) some of the more intellectually deep traditions we have for designing programming languages and writing programs strike me as superior to any of the "head starts" (including predicate logic) that Leibniz or Frege (who died in 1925) had at their disposal.

(Pearl's technical explanation of causality is another thing that seems to me like it might possibly assist in this enterprise.)

SIAI has not included me in their private or not-completely-public discussions of Friendliness theory to any significant degree, so they might have insights that render my speculations here obsolete.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-14T04:21:47.369Z · LW(p) · GW(p)

Another person who seems to have had the same general ambition as Leibniz and Frege is the Free Software Foundation's lawyer, the man who with Richard Stallman created the General Public License: Eben Moglen. Here's Moglen in 2000:

I was committed to the idea that what we were doing with computers was making languages that were better than natural languages for procedural thought. The idea was to do for whole ranges of human thinking what mathematics has been doing for thousands of years in the quantitative arrangement of knowledge, and to help people think in more precise and clear ways.

comment by byrnema · 2011-03-28T21:20:48.080Z · LW(p) · GW(p)

What is an example of a magical category being used in philosophy? (That is, a concrete handle I can use to represent the term 'magical category' when I read it.)

Replies from: lukeprog
comment by lukeprog · 2011-03-28T21:32:27.982Z · LW(p) · GW(p)

Yudkowsky gives some good examples. Or, consider "objectification." Really, they are ubiquitous in philosophy.

Replies from: Tyrrell_McAllister, byrnema
comment by Tyrrell_McAllister · 2011-03-28T22:10:55.402Z · LW(p) · GW(p)

I'm not sure that it's fair to apply the "magical categories" critique to philosophers who discuss "objectification".

Nussbaum would have committed the fallacy of magical categories if she had thought that her discussion would suffice to teach an AI to recognize instances of objectification. But the most that she would purport to have done is to teach humans in her intellectual community how to recognize instances of objectification. So she is allowed the "anthropomorphic optimism" that would be fallacious if she were trying to train an AI. And probably, after reading her article, you could do a very reliable job of categorizing (what she would call) instances of objectification.

Replies from: lukeprog
comment by lukeprog · 2011-03-28T22:17:05.867Z · LW(p) · GW(p)

Fair enough; it's a magical category in one sense, and not a magical category in another sense.

Replies from: byrnema
comment by byrnema · 2011-03-28T23:35:06.441Z · LW(p) · GW(p)

In what sense is it a magical category?

Replies from: Strange7
comment by Strange7 · 2011-03-29T12:47:47.983Z · LW(p) · GW(p)

Would it qualify as ironic if "magical categories" turned out to be a member of the set of all sets that contain themselves as members?

Replies from: Jack, byrnema
comment by Jack · 2011-03-29T20:53:51.162Z · LW(p) · GW(p)

I'm not sure I believe in non-magical categories.

comment by byrnema · 2011-03-29T15:30:32.300Z · LW(p) · GW(p)

Would it qualify as ironic if "magical categories" turned out to be a member of the set of all sets that contain themselves as members?

I guess what is ironic is that if "magical categories" are themselves magical, we could never know that they are.

Further, not knowing the meaning of a magical category (not even knowing if the meaning is knowable), it is possible that the set of all sets that contain themselves is magical.

I'm trying to guess from the context, but I think that being a magical category means that there is no universal algorithm that could be applied to determine whether an object x is contained within it. Suppose that this is the definition, and that being strongly magical means that there is also no algorithm to determine that an object x is not contained within it.

All this to quip that if magical categories are magical, then they are contained in the set of all sets containing themselves. If magical categories are strongly magical, they are contained in and contain the set of sets containing themselves. (Since, by the strongness property, it would be impossible to determine whether the set-of-sets-containing-themselves is magical or not, making the set-of-sets-containing-themselves itself magical.)
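If my guessed definition is read computationally (my gloss, not anything from the post), the classic example is the halting set: membership can be confirmed by running the program, but no algorithm confirms non-membership. A minimal sketch under that assumption:

```python
def semi_decide_halts(program_steps, budget: int):
    """program_steps: an iterator yielding once per computation step,
    exhausted iff the program halts. True = membership confirmed;
    None = still unknown (perhaps forever). Never returns False."""
    for _ in range(budget):
        try:
            next(program_steps)
        except StopIteration:
            return True
    return None

def halting_program():
    for _ in range(10):
        yield

def looping_program():
    while True:
        yield

print(semi_decide_halts(halting_program(), 100))  # True
print(semi_decide_halts(looping_program(), 100))  # None
```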

comment by byrnema · 2011-03-28T23:42:27.614Z · LW(p) · GW(p)

Those examples don't have citations.* I would like to see how magical categories actually appear in an argument in a philosophy article.

This is how I like to handle assimilating generalizations: I will accept a generalization as true, but I tie it to an actual example. That way, if the generalization is later challenged, I can look to see if the context/meaning/framing is different.

I am also curious as to whether there is any self awareness of this problem of magical categories in philosophy.

* I see now that your post did. However, I still haven't studied enough of your post to gather the details of the magical category there.

comment by jo lima (jo-lima) · 2021-04-19T20:26:01.303Z · LW(p) · GW(p)

Philosophy was for a long time the leading discipline and a cornerstone of the rationalistic ideals you lesswrong folks seem to follow - from Aristotle to Kant to Wittgenstein. But then there is a growing number of philosophers who have started questioning rationality itself, or the attempt to create a "system" to explain everything (from Nietzsche through Freud to Derrida and Foucault).

For people who seem so concerned to 'figure it all out' based on rational thinking, it baffles me how little effort I see to understand what fundamental flaws others have seen in the kind of thinking you base your entire project on. (Please correct me if I have simply overlooked this kind of earnest discussion, or misunderstood the lesswrong way altogether.)

Does it not make you wonder that a smart cookie like Wittgenstein changed his mind about having once and for all logically solved the question of 'what can be said sensibly' with his Tractatus Logico-Philosophicus?

comment by Polytopos · 2021-02-08T21:16:26.821Z · LW(p) · GW(p)

I'm curious how many people here think of rationalism as synonymous with something like Quinean naturalism (or just naturalism/physicalism in general). It strikes me that naturalism/physicalism is a specific view one might come to hold on the basis of a rationalist approach to inquiry, but it should not be mistaken for rationalism itself. In particular, when it comes to investigating foundational issues in epistemology/ontology, a rationalist should not simply take it as a dogma that naturalism answers all those questions. Quine's Epistemology Naturalized is an instructive text because it actually attempts to produce a rational argument for approaching foundational philosophical issues naturalistically. This is something I haven't seen much of on LW; it usually seems like this is taken as an assumed axiom, with no argument.

The value of attempting to make the arguments for naturalized epistemology explicit is that they can then be critiqued and evaluated. As it happens, when one reads Quine's work on this and thinks carefully about it, it becomes pretty evident that it is problematic for various reasons as many mainstream philosophers have attempted to make clear (e.g., the literature around the myth of the given).

I'd like to see more of that kind of foundational debate here, but maybe that's just because I've already been corrupted by the diseased discipline of philosophy ; )

Replies from: TAG
comment by TAG · 2021-02-10T01:36:41.319Z · LW(p) · GW(p)

There's a way of doing rationality which is maximally open and undogmatic, but that isn't the Less Wrong way. There's a way of doing naturalism where you first make sure that science has a firm epistemic foundation and only then accept its results, and that's not the Less Wrong way either.

If you look at this passage

As Michael Vassar observes, philosophers are “spectacularly bad” at understanding that their intuitions are generated by cognitive algorithms.

...it generalises. Logic and probability and interpretation and theorisation and all that are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience.

Replies from: Polytopos
comment by Polytopos · 2021-02-10T15:30:09.076Z · LW(p) · GW(p)

it generalises. Logic and probability and interpretation and theorisation and all that, are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience.

I find this claim interesting. I’m not entirely sure what you intend by the word “downstream”, but I will interpret it as saying that logic and probability are epistemically justified by neuroscience. In particular, I understand this to include the claim that a priori intuition unverified by neuroscience is not sufficient to justify mathematical and logical knowledge. If by "downstream" you have some other meaning in mind, please clarify. However, I will point out that you can't simply mean causally downstream, i.e., the claim that intuition is caused by brain stuff, because a merely causal link does not relate neuroscience to epistemology (I am happy to expand on this point if necessary, but I'll leave it for now).

So given my reading of what you wrote, the obvious question to ask is: do we have to know neuroscience to do mathematics rationally? This would be news to Bayes, who lived in the 18th century, when there wasn't much neuroscience to speak of. Your view implies that Bayes (or Euclid, for that matter) was unjustified epistemically in his mathematical reasoning because he didn't understand the neural algorithms underlying his mathematical inferences.

If this is what you are claiming, I think it's problematic on a number of levels. First, it faces a steep initial plausibility problem, in that it implies mathematics as a field was unjustified for most of its thousands of years of history, until some research in empirical science validated it. That is of course possible, but I think most rationalists would balk at seriously claiming that Euclid didn't know anything about geometry because of his ignorance of cognitive algorithms.

But a second, deeper problem affects the claim even if one leaves off historical considerations and only looks at the present state of knowledge. Even today, when we do know a fair amount about the brain and cognitive mechanisms, the idea that math and logic are epistemically grounded in this knowledge is viciously circular. Any sophisticated empirical science relies on the validity of mathematical inference to establish its theories. You can't use neuroscience to validate statistics when the validity of neuroscientific empirical methods themselves depends on the epistemic bona fides of statistics. With logic the case is even more obvious. An empirical science will rely on the validity of deductive inference in formulating its arguments (read any paper in any scientific journal). So there is no chance that the rules of logic will be ultimately justified through empirical research. Note this isn't the same as saying we can't know anything without assuming the prior validity of math and logic. We might have lots of basic kinds of knowledge about tables and chairs and such, but we can't have sophisticated knowledge of the sort gained through rigorous scientific research, as this relies essentially on complex reasoning for its own justification.

An important caveat to this is that of course we can have fruitful empirical research into our cognitive biases. For example, the famous Wason selection task showed that humans in general are not very reliable at applying the logical rule of modus tollens in an abstract context. However, crucially, in order to reach this finding, Wason (and other researchers) had to assume that they themselves knew the right answer on the task. That is, the cognitive science researchers assumed the a priori validity of the deductive inference rule, based on their knowledge of formal logic. The same is true for Kahneman and Tversky's studies of bias in the areas of statistics and probability.
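For readers who haven't seen it, the task itself is tiny; here is a sketch of the standard setup (rule: "if a card shows a vowel, its other side shows an even number"; the encoding is mine):

```python
def must_flip(face: str) -> bool:
    if face.isalpha():
        return face.lower() in "aeiou"  # P: must check the consequent
    return int(face) % 2 == 1           # not-Q: the modus tollens card

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # ['E', '7']
# Most subjects pick 'E' and '4'; flipping '4' tests nothing, while
# skipping '7' is exactly the modus tollens failure Wason found.
```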

In summary, I am wholeheartedly in favour of using empirical research to inform our epistemology (in the way that the cognitive biases literature does). But there is a big difference between this and the claim that epistemology doesn't need anything in addition to empirical science. This is simply not true. Mathematics is the clearest example of why this argument fails, but once one has accepted its failure in the case of mathematics, one can start to see how it might fail in other less obvious ways.

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-26T21:48:49.850Z · LW(p) · GW(p)

As a scientist, not a philosopher, I still don't see much virtue in writing "simply". This is a particularly Anglo-Saxon tradition, whereas I (and most of the German-Russian tradition, AFAIK) have always felt that when you try writing simply you lose at least the speed of your train of thought, and quite likely some of your arguments' power. "No math - no science" is a specific example, but not the only one.

comment by bubblesort · 2019-02-26T10:30:00.637Z · LW(p) · GW(p)

Sorry, I'm not a professional philosopher, but I did study it at university and still retain an interest in it. I was interested to read this statement: "Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless."

I take that to mean you consider the Tractatus Logico-Philosophicus to be his better work. I do too, and have been mocked for saying so when I was a student. I was taught by some very famous professors at a well-placed university, but I wasn't much of a student - not the brightest crayon in that year's box.

I agree, the professional philosophers look back at some fairly ancient texts, but there is a reason for that. If you are building a car that aims to be better than everything else on the road, you don't start blindly; you look at other cars, check their faults, and build something that does not have the same faults. Philosophy is not a combative sport; its true aim is to build upon knowledge, but what knowledge is also becomes a philosophical debate. I prefer meta-philosophy and scientific method, I prefer igtheism to atheism, and I prefer Hume to Descartes. The old dead guys determine, in part, how I think of the living new guys. The old guys gave me a set of tools to dissect the work of the new guys.

comment by scientism · 2011-04-06T04:09:15.654Z · LW(p) · GW(p)

Peter Hacker is not somebody who thinks "philosophy should be useless." Of the list of "basics" that you cite, Peter Hacker would agree that "things are made of atoms", "that many questions don't need to be answered but instead dissolved", and "that language is full of tricks." He also explicitly states that "Philosophical Foundations of Neuroscience" should be judged on its usefulness (which is why methodological concerns are relegated to the back pages). Indeed, it seems you equate dissolving problems with "thinking philosophy should be useless" (you cite the later Wittgenstein, and dissolution was his method), despite the fact that you also cite it favourably. I find this odd.

Replies from: lukeprog
comment by lukeprog · 2011-04-11T10:41:27.715Z · LW(p) · GW(p)

You're right. I mis-remembered Hacker's positions. I've updated the original post. Thanks for the correction.

comment by wnewman · 2011-03-29T14:33:13.559Z · LW(p) · GW(p)

lukeprog wrote "philosophers are 'spectacularly bad' at understanding that their intuitions are generated by cognitive algorithms." I am pretty confident that minds are physical/chemical systems, and that intuitions are generated by cognitive algorithms. (Furthermore, many of the alternatives I know of are so bizarre that, given that such an alternative is the true reality of my universe, the conditional probability that rationality or philosophy is going to do me any good seems to be low.) But philosophy as often practiced values questioning everything, and so I don't think it's quite fair to expect philosophers to "understand" this (which I read in this context as synonymous with "take this for granted"). I'd prefer a formulation like "spectacularly bad at seriously addressing [or, perhaps, even properly understanding] the obvious hypothesis that their intuitions are generated by cognitive algorithms." It seems to me that the criticism rewritten in this form remains severe.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T14:54:28.160Z · LW(p) · GW(p)

I agree, and I really doubt philosophers fail to deeply question their own intuitions.

comment by ksolez · 2011-03-28T22:37:13.262Z · LW(p) · GW(p)

It may just be my physician's bias, but "diseased" seems like a very imprecise term. The title would be more informative and more widely quoted with another word choice. In medicine you would not find that word in an article title.

There needs to be more cross-talk between philosophy and science. It is not an either-or choice; we need an amalgam of the two. As a scientist, I object strongly to your statement "Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy." Combined approaches are what is needed, not abandonment of philosophy.

Replies from: FAWS
comment by FAWS · 2011-03-29T00:27:14.702Z · LW(p) · GW(p)

It may just be my physician's bias, but "diseased" seems like a very imprecise term.

It's a callback to an earlier Less Wrong article.

comment by Vladimir_Nesov · 2011-03-28T22:08:38.098Z · LW(p) · GW(p)

It is not easy to overcome millions of years of brain evolution [...]

Since evolution, in particular, formed our moral inclination and our reasoning ability, this statement sounds a bit unfair/one-sided.

Replies from: lukeprog
comment by lukeprog · 2011-03-28T22:31:14.849Z · LW(p) · GW(p)

What do you mean?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-03-28T22:50:59.752Z · LW(p) · GW(p)

From The Gift We Give To Tomorrow:

But it still seems to me that, in this creation of humans by evolution, something happened that is precious and marvelous and wonderful. If we cannot call it a physical miracle, then call it a moral miracle.

Replies from: lukeprog
comment by lukeprog · 2011-03-29T01:46:39.795Z · LW(p) · GW(p)

I don't know what you're getting at. Is there a problem with the statement, "It's not easy to overcome millions of years of brain evolution"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-03-29T09:17:49.769Z · LW(p) · GW(p)

You don't want to "overcome" a lot of what millions of years of brain evolution have formed, only some things.

Replies from: XiXiDu
comment by XiXiDu · 2011-03-29T10:48:22.819Z · LW(p) · GW(p)

You don't want to "overcome" a lot of what millions of years of brain evolution have formed, only some things.

But how do we choose what we want to "overcome" and what not? I suppose it depends on the answer to the question: what constitutes winning? If rationality is about winning, and winning means achieving goals given to us by evolution - satisfying the desires that evolution implemented in us - then rationality needs to be able to discern goals and desires from biases and fallacies.

If winning means to satisfy our evolutionary template, all of our complex values, desires and goals, what is it that rationality is doing by helping us to win? How does it measure success? If I give in to Akrasia, how did I fail to satisfy an evolutionary desire? If I procrastinate, how does rationality measure that I am acting irrationally? What is the unit in which success is measured?

Hyperbolic discounting is called a bias. Yet biases are largely a result of our evolution, just as our desire for complexity is. You read Less Wrong and are told to change your mind, overcome bias, and disregard discounting as a biased desire that leads to a suboptimal outcome. The question we have to answer is why we don't go a step further and disregard human nature altogether in favor of something that can value the possibilities of reality maximally. Where do we draw the line?
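For what it's worth, the reason hyperbolic discounting gets labeled a bias is mechanical; a minimal sketch (standard textbook form V = A / (1 + k*delay); the amounts and delays are arbitrary):

```python
def hyperbolic_value(amount: float, delay: float, k: float = 1.0) -> float:
    return amount / (1 + k * delay)

for days_out in (10, 0):  # evaluate the same choice from two vantage points
    small_soon = hyperbolic_value(50, days_out + 1)
    large_late = hyperbolic_value(100, days_out + 5)
    pick = "large-later" if large_late > small_soon else "small-sooner"
    print(f"{days_out} days out: prefer {pick}")
# 10 days out: prefer large-later; 0 days out: prefer small-sooner.
# A consistent exponential discounter never reverses like this.
```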

If rationality is prescriptive and can say No to procrastination and Yes to donating money to the Singularity Institute for Artificial Intelligence, then it is already telling us to disregard most of our desires to realize its own idea of what we ought to do. Why then is it irrational to say No to evolutionary values and Yes to whatever maximizes what rationality is measuring?

ETA: For more see the discussion here.

Replies from: timtyler, Strange7
comment by timtyler · 2011-03-30T20:13:01.661Z · LW(p) · GW(p)

But how do we choose what we want to "overcome" and what not?

Which parts of our evolved preferences we embrace seems to be partly a matter of taste - though most agree that there are problems with consuming too much chocolate gateau.

comment by Strange7 · 2011-03-29T12:44:07.132Z · LW(p) · GW(p)

SIAI's argument for donations could be reduced to "or else everybody dies." Survival of self, offspring, and/or kin is very much an evolutionary value.

Replies from: XiXiDu
comment by XiXiDu · 2011-03-30T11:43:26.876Z · LW(p) · GW(p)

Yes, of course. What I wanted to ask about is why we don't apply rationality to choose our values as well. We already call it rational to most effectively achieve our values. Yet we also use rationality to discern the utility of different values, and choose to maximize those with the highest expected utility while disregarding others. We also use rationality to detect inconsistencies in our actions, thinking, and beliefs. Why don't we apply this to values as well?

And as I said in the previous comment, we already do so to some extent - but where do we draw the line? If utility is strongly correlated with the amount of expected pleasure, then how wouldn't it be rational to change our desires completely, if there was a set of desires that could be more efficiently satisfied to yield the largest amount of pleasure?

That is what I tried to say by comparing procrastination to donating to the SIAI: we pick the SIAI rather than our evolutionary desire to rest because we pick the more important of two mutually exclusive goals. Vladimir Nesov wrote that we don't want to overcome most of what evolution came up with, and I asked why that is the case. Why not pick the highest-order goal, maybe experiencing pleasure, and try to maximize that? Is it really rational to have complex goals if having narrow goals yields more utility?

The comment is now unnecessarily downvoted to -5, so I suppose something is wrong with it. Yet the reason that I posted it is to figure out what is wrong, where I am confused. Sadly nobody commented, and I can't infer what is wrong from mere downvotes in this case.

Replies from: TheOtherDave, jimrandomh, Marius
comment by TheOtherDave · 2011-03-30T14:07:30.738Z · LW(p) · GW(p)

My $0.02: if I believed that I had a single highest-order goal -- that is, something I would always want to maximize no matter what the external situation was -- I would in fact endorse trying to maximize that.

But I don't believe that I have such a thing, and I've seen no convincing evidence that anyone else does either.

Certainly I don't believe that pleasure qualifies.

Incidentally, I also don't believe that equating "procrastination" with "our evolutionary desire to rest" is even remotely correct.

Replies from: XiXiDu
comment by XiXiDu · 2011-03-31T13:47:08.639Z · LW(p) · GW(p)

Incidentally, I also don't believe that equating "procrastination" with "our evolutionary desire to rest" is even remotely correct.

I got that from here. For everything else see this comment, would love to hear your opinion. I am not sure what exactly it is that I am confused about. Thank you.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-03-31T14:42:04.256Z · LW(p) · GW(p)

I might agree that we consciously convince ourselves that procrastination is a form of rest, as one of many ways of rationalizing procrastination. But not that we've evolved procrastination as a way of satisfying our desire to rest, which is what I understood you to be implying.

Then again, I mostly didn't buy that post at all, so take my reaction for what it's worth.

As for the terminal/instrumental goal thing... here, also, I'm probably the wrong guy to ask, as I don't really buy into the standard LW position.

As I've said a few times (most coherently here), I'm not actually sold on the idea that terminal goals exist in the first place.

So, yeah, I would agree that goals change over time, or at least can change, and utility functions (insofar as such things exist) change, and that this whole idea of human values being a fixed fulcrum against which we can measure the motion of the universe isn't quite right. We are part of the world we change, not some kind of transcendent Unmoved Mover that stands outside of it.

Which is not to say I oppose the enterprise of building optimizing agents in a way that preserves our "terminal values": that's the right direction to go in. It's just that I expect the result of that enterprise will in fact be that we preserve our most stable and mutually-reinforcing "instrumental" values, and they will be somewhat modified by the process.

Growing up is like that sometimes... we become something we couldn't have conceived of and wouldn't have approved of, had we been consulted.

To put this in LW terms: I expect that what a sufficiently powerful seed AI extracts from an analysis of humanity's coherent extrapolated volition will not be, technically speaking, a set of terminal values... rather, I expect it will be a set of particularly stable and mutually reinforcing instrumental values, which are the closest approximation human minds contain to terminal values.

And I don't expect that implementing that CEV will be a one-time operation where we do it and we Win and nothing more needs to be done ever... rather, I expect that it will be a radical improvement in our environment, which we will become accustomed to, which will alter the balance of our values, which will cause us to identify new goals that we optimize for.

All that said, I don't object to people making the simplifying assumption that their current targets are actually universal terminal points. For some people (and I'm often one of them!), simplicity is an important motivational factor, and the alternative is to just sit in a paralyzed puddle of inconceivable alternatives.

comment by jimrandomh · 2011-03-30T13:25:59.214Z · LW(p) · GW(p)

Changing terminal values is almost always negative-utility according to the original values, which are the ones you use to decide whether to switch or not. If you delete a goal to focus on some other goal, then the deleted goal won't be fulfilled. While you might not care anymore after it's deleted, you care at the moment you're deciding whether to delete it or not, so you won't do it.

Where rationality helps is with instrumental values, i.e., goals that we have only because we think they'll further our other goals. For example, if I want to get through a television series because I think it'll make me happy, but all the time sitting around actually makes me depressed, then it's rational to give up that goal. On the other hand, if I want to eliminate poverty in Obscureistan, but I find out that achieving this won't make me happy, that doesn't make me change my goal at all.

Replies from: XiXiDu
comment by XiXiDu · 2011-03-31T13:45:38.362Z · LW(p) · GW(p)

I am aware of what it means to be rational.

On the other hand, if I want to eliminate poverty in Obscureistan, but I find out that achieving this won't make me happy, that doesn't make me change my goal at all.

But how do you know that this line of reasoning is not culturally induced, the result of abstract higher-order contemplation about rational conduct? My problem is that I perceive rationality to change and introduce terminal goals. The toolkit that is called 'rationality' - the rules and heuristics developed to help us achieve our terminal goals - is also altering and deleting them.

A stone age hunter-gatherer seems to possess very different values than I do. If he learns about rationality and moral ontology, his values will be altered considerably. Rationality was meant to help him achieve his goals, e.g. become a better hunter. Rationality was designed to tell him what he ought to do (instrumental goals) to achieve what he wants to do (terminal goals). Yet what actually happens is that he is told that he will learn what he ought to want. If an agent becomes more knowledgeable and smarter, this does not leave its goal-reward system intact unless that system is especially designed to be stable. An agent who originally wanted to become a better hunter and feed his tribe would end up wanting to eliminate poverty in Obscureistan.

The question is, how much of this new "wanting" is the result of using rationality to achieve terminal goals, and how much is a side effect of using rationality - how much is left of the original values versus the values induced by a feedback loop between the toolkit and its user? Here I think it would be important to ask how humans assign utility - whether there exists some intrinsic property that makes agents assign more utility to some experiences and outcomes than to others. We have to discern what we actually want from what we think we ought to want.

This might sound contradictory, but I don't think it is. An agent facing the Prisoner's Dilemma might originally tend to cooperate, and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent achieve its goal, or in the sense that it deleted one of its goals in exchange for a more "valuable" goal?

It seems to me that becoming more knowledgeable and smarter gradually alters our utility functions. But what is it that we are approaching if rationality becomes a purpose in and of itself? If we can be biased, if our map of the territory can be distorted, why can't we be wrong about what we value as well? If that is possible, how can we discover better values?

What rationality is doing is extrapolating our volition to calculate the expected utility of different outcomes. But this might distort or alter what we really value, by installing new cognitive toolkits designed to achieve an equilibrium between us and other agents with the same toolkit. This is why I think it might be important to figure out what all high-utility goals have in common. Here happiness is just an example; I am not claiming that happiness is strongly correlated with utility, or that happiness is the highest-order goal. One might argue that we would choose a world state in which all sentient agents are maximally happy over one where all sentient agents achieved arbitrary goals but are on average not happy about it. But is this true? I don't know. I am just saying that we might want to reconsider what we mean by "utility" and objectify its definition.

Otherwise the claim that we don't want to "overcome" a lot of what millions of years of brain evolution have formed is not even wrong, because if we are unable to prove some sort of human goal-stability, then what we want is a fact about our cultural and intellectual evolution more than a fact about us, about human nature. Are we using our tools, or are the tools using us? Are we creating models, or are we modeled? Are we extrapolating our volition, or following our extrapolations?

Replies from: Perplexed, timtyler, timtyler
comment by Perplexed · 2011-04-03T17:25:20.252Z · LW(p) · GW(p)

I was led to this comment by your request for assistance here. You seem to be asking about the relationship between our intuitive values and our attempts to systematize those values rationally. To what extent should we let our intuitions guide the construction of our theories? To what extent should we allow our theories to reform and override our intuitions?

This is the very important and difficult issue of reflective equilibrium as expounded upon by Goodman and Rawls, to say nothing of Yudkowsky. I hope the links are helpful.

My own take on this is that there can be levels (degrees of stability) of equilibria. For example, the foundational idea of utility and expected utility maximization (as axiomatized by Savage or Aumann) strikes me as pretty solid. But when you add on additional superstructure (such as interpersonal comparison of utilities, or universalist, consequentialist ethics) it becomes more and more difficult to bring the axiomatic structure into equilibrium with the raw intuitions of everyone.

comment by timtyler · 2011-04-07T18:00:57.245Z · LW(p) · GW(p)

I think it would be important to ask how humans assign utility, if there exist some sort of intrinsic property that makes agents assign more utility to some experiences and outcomes versus others.

As I understand it, the equation looks something like: warmth + orgasms x 100 - thirst x 5 - hunger x 2 - pain x 10.

comment by timtyler · 2011-04-07T17:57:44.347Z · LW(p) · GW(p)

An agent who originally wanted to become a better hunter and feed his tribe would end up wanting to eliminate poverty in Obscureistan.

Only if they absorbed a bunch of memes about utilitarianism in the process.

comment by Marius · 2011-03-30T12:38:45.842Z · LW(p) · GW(p)

Is there a reason you think that maximizing our own pleasure is our highest order goal? Do you have an explanation for the fact that if you ask a hedonist to name their heroes, those heroes' accomplishments are often not of the form "man, that guy could party," but instead lie elsewhere?

comment by Mirzhan_Irkegulov · 2014-07-11T12:08:51.056Z · LW(p) · GW(p)

Join https://www.reddit.com/r/SneerClub/

Replies from: ike
comment by Laoch · 2013-11-29T15:00:26.793Z · LW(p) · GW(p)

Look at your intuitions from the outside, as cognitive algorithms.

Which Less Wrong post do I need to read to find out how to do that? Also, is there a hard definition of "AI programmer"?

comment by Ben Pace (Benito) · 2012-10-26T12:31:19.128Z · LW(p) · GW(p)

The difference between much of mainstream philosophy and LessWrongian philosophy: http://www.lulztruck.com/43901/the-thinker-and-the-doer/

Replies from: Peterdjones
comment by Peterdjones · 2012-10-26T14:12:38.326Z · LW(p) · GW(p)

Out of the way! The Singularity is coming! http://www.dismuse.com/wp-content/uploads/2010/10/Glacier2_p.jpg

comment by zaph · 2011-03-31T17:54:52.759Z · LW(p) · GW(p)

This is my viewpoint as a philosophical layman. I've liked a lot of the philosophy I've read, but I'm thinking about what the counter-proposal to your post might be, and I don't know that it wouldn't result in a better state of affairs. I don't believe we'd have to stop reading writers from prior eras, or keep reinventing the wheel for "philosophical" questions. But why not just say that, from here on out, the useful bits of philosophy can be categorized into other disciplines, and the general catch-all term is no longer warranted? Philosophy covered just too wide a swath of topics: political science/economics, physics/cosmology, and psychology, just to name a few. I don't really know how to categorize everything Leibniz and Newton were interested in. Now that these topics have more empirical data, there's less room for general speculation than there was in the old days. When you reclassify the useful parts of philosophers' work as science, math, or logic, I think it's very clarifying. All that remains afterwards (in my opinion) is cultural commentary and criticism, and general speculation about life. I wouldn't call them useless; I found Rawls and Nozick to be interesting. But there would be big-picture thinkers, cross-disciplinary studiers, and other types of thinkers even without a formal academic discipline called philosophy.

Replies from: mytyde
comment by mytyde · 2012-11-13T21:20:05.091Z · LW(p) · GW(p)

The decision of which disciplines belong to "science" or "humanities", "art" or "engineering" is significantly a political decision. Indeed, it is a political question which disciplines exist in which organization and how they fit together.

Rationalist philosophers just need to call themselves "Psychologists of Quantitative Reasoning" in order to get funding. In the current political era, it is fashionable to claim 'objectivity' in one's profession despite frequently inquiring into non-empirical matters. This claim of objectivity often serves to hide one's personal biases which, if made explicit, might otherwise be useful in interpretation of research.

The drive to be unconcerned with the political implications of one's work is the ideal paradigm for economic exploitation of a class of highly educated scientists by the institutions and people who control how funding is utilized to enable, disable, or actualize research and engineering.

Fox News is a perfect example of brutally skewing scientific evidence towards political ends: "How Roger Ailes Built the Fox News Fear Factory" http://www.rollingstone.com/politics/news/how-roger-ailes-built-the-fox-news-fear-factory-20110525

(For those of you who would: instead of voting me down because you dislike these ideas, how about trying to engage with them?)

comment by elhelado · 2011-03-31T16:45:10.657Z · LW(p) · GW(p)

The traditional definition of philosophy (in Greek) implied that philosophy's purpose was not to convey information, but to produce a transformation in the individual who practices it. In that sense, it is not supposed to be "useless", but it may appear so to someone who is looking to it for "information" about reality. By this standard, very little of what goes on in academic Philosophy departments today would qualify.

Replies from: mytyde
comment by mytyde · 2012-11-13T21:27:05.300Z · LW(p) · GW(p)

I would charge that the same 'institutionalization' which has neutered psychology has changed philosophy into a funding-chaser.

Psychology was invented as a means of studying society so that the social situation could be improved: Freud was a socialist. Because many disciplines have moved to institutions, they have less freedom to pursue research and less freedom to depart from the views of their institutions.

Also, because funding is dependent on people who have ulterior motives in what they choose to fund, it would be almost impossible for a school of psychology to develop which says, for instance, "there's something seriously wrong with our society", because it would be hard-pressed to find research funding. That the general population surrenders so much initiative to scientists who are so strongly influenced by veiled politics is the true tragedy of our time.

Replies from: wedrifid
comment by wedrifid · 2012-11-14T01:27:41.939Z · LW(p) · GW(p)

Psychology was invented as a means of studying society

That sounds more like "Sociology". If you are actually trying to talk about Psychology, then your claim seems wrong.

Replies from: mytyde
comment by mytyde · 2013-01-15T04:24:22.057Z · LW(p) · GW(p)

No, my claim is literal. The role of the discipline of psychology has shifted over time, away from what we now consider sociology and towards an individualistic approach to mental health. The assumption didn't use to be that mental problems were profoundly unique to the individual, but now mainstream psychology does not take into account the sociological factors which affect mental health in all situations.

Some sources that elaborate on the transformation of the discipline are historians & sociologists like Immanuel Wallerstein and Michel Foucault, but there are plenty of non-mainstream psychologists who still practice holistic psychology, like Helene Shulman & Mary Watkins.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-15T10:12:52.698Z · LW(p) · GW(p)

mainstream psychology does not take into account the sociological factors which affect mental health in all situations.

Really? I often hear dire warnings about how our society contributes to suicides by, e.g., publicizing them. These are generally billed as coming from experts in the field.

Full disclosure: I live in Ireland, it may be different in other countries.

[EDIT: typos]

comment by ohwilleke · 2011-03-31T01:31:06.801Z · LW(p) · GW(p)

"3. Philosophy has grown into an abnormally backward-looking discipline."

Indeed. One of the salutary roles that philosophy served until about the 18th century (think e.g. "natural philosophy") was to serve as an intellectual context within which new disciplines could emerge, and new problems could be formulated into coherent complexes of issues that became their own academic disciplines.

In a world where cosmology and quantum physics and neuroscience and statistics and scientific research methods and psychology and "law and whatever" are vibrant, we don't need philosophers to deal with metaphysics and epistemology, but we may need considerably more philosophical attention to questions like "what about a book has value?", "what obligations do people have to each other in an unequal society?", or "what does it mean to be human?"

One of philosophy's main cutting-edge agendas should be formulating new questions to ask and serving as an incubator from which to outline the boundaries of new disciplines of specialists to answer those questions.

Any summary of the discipline that looks like an index of the last two thousand years of philosophical thought is probably missing the stuff that philosophers should be spending their time considering.

Alternately, one approach that many academic philosophers seem to be fond of taking is to consider themselves to be primarily intellectual historians, with a particularly rich and subtle tradition to understand so that it can be understood by those who are primarily interested in the history of ideas. In the same way, Freud is a bad place to look for someone interested in doing clinical psychology, but a good place to look for someone interested in understanding the conceptual roots of lots of ideas that shaped both lay and professional understanding of the individual mind.

comment by Liosis · 2011-03-30T05:11:52.804Z · LW(p) · GW(p)

The philosophers I study under criticise the sciences for not being rigorous enough. The problem goes both ways. The sciences often do not understand the basic concepts from which they are working. A good scientist will also have a rudimentary understanding of philosophy, in order to fiddle with the background epistemology of their work.

You are correct in thinking that Continental philosophy is not continuous with the sciences, because it is the core of the humanities and as such being continuous with the sciences would be unnatural for it. I still think that asking questions about our connection to existence is interesting and important, although I personally do not find Continental philosophy as potentially fruitful as Analytic.

Intuitions are by no means accepted within the discipline as a whole, and are also an interesting topic of debate within it. Because philosophy is a highly speculative discipline it isn't going to be following a normal scientific model, but instead will model constant discovery. If you want to see where science connects up with philosophy, what you should look at is the disciplines that end up coming out of philosophy as questions that can be answered scientifically. This is what we produce with regard to science.

Philosophy is the core of the academic disciplines. It isn't in the business of scientific inquiry and it should not be. Some philosophers are still looking for universal truths after all. Simply disagreeing with the idea of a priori does not make it go away.

It is good that you recognise there are problems in philosophy. Too many people take it as dogma and do not question the area they have explored. My advice is to take what you can from the discipline while keeping in mind that every piece you take comes with a centuries-long dialogue.

Replies from: Eliezer_Yudkowsky, Emile
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-03-30T05:19:23.248Z · LW(p) · GW(p)

The philosophers I study under criticise the sciences for not being rigorous enough.

Acid test 1: Are they complaining about experimenters using arbitrary subjective "statistical significance" measures instead of Bayesian likelihood functions?

Acid test 2: Are they chiding physicists for not decisively discarding single-world interpretations of quantum mechanics?

Acid test 3: Are all of their own journals open-access?

It may be ad hominem tu quoque, but any discipline that doesn't pass the three acid tests has not impressed me with its superiority to our modern, massively flawed academic science.

Replies from: jimrandomh, quen_tin, Will_Newsome, SilasBarta
comment by jimrandomh · 2011-03-30T16:07:27.176Z · LW(p) · GW(p)

(2) appears to reject any discipline that ignores quantum mechanics entirely, or which pays attention to quantum mechanics but whose practitioners consider themselves too confused about it to challenge the consensus position.

(3) appears to reject almost all of academia. In particular, it rejects disciplines stuck at the common equilibrium of closed-access journals combined with authors publishing the same articles on their own web pages.

comment by quen_tin · 2011-03-30T16:17:00.621Z · LW(p) · GW(p)

Acid tests (1) and (2): this is where dogma starts.

Replies from: Broggly
comment by Broggly · 2011-04-05T00:48:02.850Z · LW(p) · GW(p)

I get the problem with (2), although mostly because I haven't thought about quantum mechanics enough to have an opinion, but (1) is no more dogma than "DNA is transcribed to mRNA which is then translated as an amino acid sequence". There are lots of good reasons to investigate the actual likelihood of the null and alternative hypotheses rather than just assuming it's about 95% likely it's all just a coincidence. Of course, until this becomes fairly standard, doing so would mean turning your paper into a meta-analysis as well as the actual experiment, which is probably hard work and fairly boring.
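
A toy numerical illustration of the contrast (invented data - 60 heads in 100 coin flips - and a hypothetical 0.6-bias alternative, purely to make the arithmetic concrete):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 100, 60  # invented data: 60 heads in 100 flips

# Significance testing asks: how surprising is data this extreme
# under the null hypothesis (fair coin) alone?
p_value = 2 * sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# A Bayesian comparison instead weighs the null against a specific
# alternative (here, a coin biased to land heads 60% of the time).
bayes_factor = binom_pmf(k, n, 0.6) / binom_pmf(k, n, 0.5)

print(f"two-sided p-value: {p_value:.3f}")
print(f"Bayes factor (0.6 vs 0.5): {bayes_factor:.1f}")
```

The difference Broggly gestures at is visible here: the p-value says nothing about how likely any alternative is, while the likelihood ratio directly compares the two hypotheses.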

comment by Will_Newsome · 2011-03-30T16:11:46.411Z · LW(p) · GW(p)

Acid test 2: Are they chiding physicists for not decisively discarding single-world interpretations of quantum mechanics?

ETA: The following comment is mostly off-base due to the reason pointed out in JGWeissman's reply. Mea culpa.

Ugh, it's not like many worlds is even the most elegant interpretation: http://arxiv.org/abs/1008.1066 . Talk of MWI is kind of misleading if people haven't already thought about spatially infinite universes for more than 5 minutes, which they mostly haven't.

I realize that world-eater supporters are almost definitely wrong, but I'm really suspicious of putting people into the irrational bin because they've failed according to a metric that is knowably fundamentally flawed. I doubt the utility lost via setting a precedent (even if you're damn well sure they're wrong in this case) of actually figuring out ways a person could have fundamentally correct epistemology is more than the utility lost by disregarding everyone and going all Only Sane Man. But my experience is with SIAI and not SL4. Maybe I'd think differently if I was Quirrell.

Replies from: JGWeissman
comment by JGWeissman · 2011-03-30T21:20:52.349Z · LW(p) · GW(p)

Ugh, it's not like many worlds is even the most elegant interpretation:

The proposed theory does not seem to be an alternative to MW QM so much as a possible answer to "What adds up to MW QM?". In this light, does pushing MW over Collapse really warrant an "ugh" response?

comment by SilasBarta · 2011-03-30T16:09:59.176Z · LW(p) · GW(p)

[insert pun about philosophers dropping acid]

comment by Emile · 2011-03-30T15:23:50.740Z · LW(p) · GW(p)

This doesn't do much to convince me; for example in these bits you could substitute "philosophy" with "theology", and it would sound the same:

Because philosophy is a highly speculative discipline it isn't going to be following a normal scientific model, but instead will model constant discovery.

[...] It isn't in the business of scientific inquiry and it should not be. Some philosophers are still looking for universal truths after all. Simply disagreeing with the idea of a priori does not make it go away.

It is good that you recognise there are problems in philosophy. Too many people take it as dogma and do not question the area they have explored. My advice is to take what you can from the discipline while keeping in mind that every piece you take comes with a centuries-long dialogue.

The bit about "take what you can" and "every piece comes with a centuries-long dialogue" especially could be said of a lot of things (law, for example), and it's not clear why those are good things in themselves.

comment by Hi there · 2019-10-01T01:10:01.905Z · LW(p) · GW(p)

Philosophy is usually negative. Change my mind.

Replies from: Teerth Aloke
comment by Teerth Aloke · 2019-10-01T01:14:37.534Z · LW(p) · GW(p)

Does the work of FHI come under Philosophy?

Replies from: Hi there
comment by Hi there · 2019-10-01T01:21:12.231Z · LW(p) · GW(p)

No

comment by quen_tin · 2011-03-29T13:40:46.462Z · LW(p) · GW(p)

What's weird is that you begin by criticizing continental philosophy. Then you say that philosophers do not understand how their brains work, and what their intuition is (linking to an article which explains that our intuition of reality is not reality). But one of the main topics of continental philosophy, long before cognitive science existed, was to argue that we are in a sense trapped inside our cognitive situation with no way out, and for that reason, we cannot know what reality-in-itself is. It feels like you rediscovered Kant... I agree that continental philosophy has somehow drifted into something obscure, and that analytic philosophy is much clearer. But it is also argued sometimes that analytic philosophers never read or cite past philosophers, and that they tend to ignore some large areas that have been widely discussed before. I would say this article illustrates that.

Replies from: cousin_it, Jack
comment by cousin_it · 2011-03-29T13:54:08.071Z · LW(p) · GW(p)

Then you say that philosophers do not understand how their brains work, and what their intuition is (linking to an article which explains that our intuition of reality is not reality). But one of the main topics of continental philosophy, long before cognitive science existed, was to argue that we are in a sense trapped inside our cognitive situation with no way out, and for that reason, we cannot know what reality-in-itself is.

These two statements are only superficially similar. If some of our intuitions are sometimes wrong, that doesn't imply that none of our perceptions can give any information about reality.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T14:03:13.562Z · LW(p) · GW(p)

They are very similar. Kant does not claim that we have no information about reality, and the linked article does not only say that our intuitions are sometimes wrong...

This statement for example is very "Kantian" : Before you can question your intuitions, you have to realize that what your mind's eye is looking at is an intuition - some cognitive algorithm, as seen from the inside - rather than a direct perception of the Way Things Really Are.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-03-29T15:07:11.941Z · LW(p) · GW(p)

Kant does not claim that we have no information about reality

Kant says that we can know about the representations that appear in the manifold of appearances provided to us by our senses. But, in his view, we can know nothing, zip, zilch, nada, about whatever it is that stands behind those sensory representations.

In a sense, Kant takes the map/territory distinction to an extreme. For Kant, the territory is so distinct from the map that we know nothing about the territory at all. All of our knowledge is only about the map.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T15:30:57.241Z · LW(p) · GW(p)
  • That is also what the linked article seems to entail. The statement I quoted, as I understand it, says that all the information we have about reality is the result of "some cognitive algorithm" (= the representations that appear (...) provided by our senses)
  • The map is certainly a kind of information about the territory (though we cannot know it with certainty). Strictly speaking, Kant does not say we have no information about reality, he says we cannot know whether we have any or not.
Replies from: Tyrrell_McAllister, cousin_it
comment by Tyrrell_McAllister · 2011-03-30T00:07:22.499Z · LW(p) · GW(p)

Strictly speaking, Kant does not say we have no information about reality, he says we cannot know whether we have any or not.

I don't think that Kant makes the distinction between "knowing" and "having information about" that you and I would make. If he doesn't outright deny that we have any information about the world beyond our senses, he certainly comes awfully close.

On A380, Kant writes,

If, therefore, as the present critique obviously requires of us, we remain true to the rule established earlier not to press our questions beyond that with which possible experience and its objects can supply us, then it will not occur to us to seek information about what the objects of our senses may be in themselves, i.e., apart from any relation to the senses.

And, on A703/B731, he writes,

[I]f charming and plausible prospects did not lure us to reject the compulsion of these doctrines [i.e., doctrines for which Kant has argued], then of course we might have been able to dispense with our painstaking examination of the dialectical witnesses which a transcendent reason brings forward on behalf of its pretensions; for we already knew beforehand with complete certainty that all their allegations, while perhaps honestly meant, had to be absolutely null and void, because they dealt with information which no human being can ever get.

(Emphasis added. These are from the Guyer–Wood translation.)

Replies from: Fivehundred, quen_tin
comment by Fivehundred · 2014-04-05T16:19:51.903Z · LW(p) · GW(p)

Does anyone smell irony in this whole discussion? Considering the OP specifically derided the whole "discussion of old, dead guys" thing?

Ah, I wish this wasn't a three year old post. I have no idea how this site works yet, so who knows whose attention I'll attract by doing this?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2014-04-05T18:33:27.772Z · LW(p) · GW(p)

At least the person whose comment you're replying to sees your reply, so you weren't speaking entirely into the void :).

comment by quen_tin · 2011-03-30T09:54:34.582Z · LW(p) · GW(p)

Ok, it depends on what you mean by "information about". My understanding is that we have no information on the nature of reality, which does not mean that we have no information from reality.

Replies from: Tyrrell_McAllister, TheAncientGeek
comment by Tyrrell_McAllister · 2011-03-30T16:36:23.925Z · LW(p) · GW(p)

I agree that we get information from reality. And I think that we agree that our confidence that we get information from reality is far less murky than our concept of "the nature of reality".

Kant, being a product of his times, doesn't seem to think this way, though. Maybe, if you explained the modern information-theoretic notion of "information" to Kant, he would agree that we get information about external reality in that sense. But I don't know. It's hard to imagine what a thinker like Kant would do in an entirely different intellectual environment from the one in which he produced his work. I'm inclined to think that, for Kant, the noumena are something to which it is not even possible to apply the concept of "having information about".

comment by TheAncientGeek · 2014-04-05T18:07:37.964Z · LW(p) · GW(p)

Suggestion: knowledge of what a thing is in itself is like information that is not coded in any particular scheme.

Replies from: None
comment by [deleted] · 2014-04-05T18:43:27.153Z · LW(p) · GW(p)

I suppose it's a virtue of that interpretation that 'information that cannot be coded in any particular scheme' is a conceptual impossibility (assuming that's what you meant).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-05T19:01:46.443Z · LW(p) · GW(p)

Yes. You can make such an interpretation of the ding-an-sich.

For my money, that lessens its impact.

comment by cousin_it · 2011-03-29T16:00:56.534Z · LW(p) · GW(p)

If you are a cognitive algorithm X that receives input Y, this allows you to "know" a nontrivial fact about "reality" (whatever it is): namely, that it contains an instance of algorithm X that receives input Y. The same extends to probabilistic knowledge: if in one "possible reality" most instances of your algorithm receive input Y and in another "possible reality" most of them receive input Z, then upon seeing Y you come to believe that the former "possible reality" is more likely than the latter. This is a straightforward application of LW-style thinking, but it didn't occur to Kant as far as I know.
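
A minimal sketch of the update being described, with invented numbers (the two "possible realities" and the frequencies with which instances of the algorithm receive input Y are hypothetical, just to make the arithmetic concrete):

```python
# Two candidate "realities"; in A most instances of the algorithm receive
# input Y, in B most receive input Z. All numbers are invented.
priors = {"reality_A": 0.5, "reality_B": 0.5}
prob_of_seeing_Y = {"reality_A": 0.9, "reality_B": 0.2}

def update_on_Y(priors, likelihoods):
    """Bayes' rule: P(reality | Y) is proportional to P(Y | reality) * P(reality)."""
    unnormalized = {r: likelihoods[r] * p for r, p in priors.items()}
    total = sum(unnormalized.values())
    return {r: v / total for r, v in unnormalized.items()}

print(update_on_Y(priors, prob_of_seeing_Y))
# Seeing Y shifts credence toward reality_A (0.5 -> roughly 0.82 here).
```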

Replies from: quen_tin
comment by quen_tin · 2011-03-29T16:17:33.313Z · LW(p) · GW(p)

If I am a cognitive algorithm X that receives input Y, I don't necessarily know what an algorithm is, what an input is, and so on. One could argue that all I know is 'Y'. I don't necessarily have any idea of what a "possible reality" is. I might not have a concept of "possibility" nor of "reality".

Your way of thinking presupposes many metaphysical concepts that have been questioned by philosophers, including Kant. I am not saying that this line of reasoning is invalid (I suspect it is a realist approach, which is a fair option). My personal feeling is that Kant is upstream of that line of reasoning.

Replies from: cousin_it
comment by cousin_it · 2011-03-29T17:22:37.176Z · LW(p) · GW(p)

But I do know what an algorithm is. Can someone be so Kantian as to distrust even self-contained logical reasoning, not just sensations? In that case how did they come to be a Kantian?

Replies from: Vladimir_M, AlephNeil, quen_tin
comment by Vladimir_M · 2011-03-29T20:33:28.217Z · LW(p) · GW(p)

But I do know what an algorithm is.

Do you? I find the unexamined use of this particular concept possibly the most problematic component of what you call "LW-style thinking." (Another term that commonly raises my red flags here is "pattern.")

Replies from: cousin_it, None
comment by cousin_it · 2011-03-29T20:51:36.127Z · LW(p) · GW(p)

What do you find dubious about the use of this concept on LW?

Replies from: Vladimir_M
comment by Vladimir_M · 2011-03-29T21:14:10.574Z · LW(p) · GW(p)

To take a concrete example, the occasional attempts to delineate "real" computation as distinct from mere look-up tables seem to me rather confused and ultimately nonsensical. (Here, for example, is one such attempt, and I commented on another one here.) This strongly suggests deeper problems with the concept, or at least our present understanding of it.

Interestingly, I just searched for some old threads in which I commented on this issue, and I found this comment where you also note that presently we lack any real understanding of what constitutes an "algorithm." If you've found some insight about this in the meantime, I'd be very interested to hear it.

Replies from: None, cousin_it
comment by [deleted] · 2011-03-29T22:32:48.428Z · LW(p) · GW(p)

I don't see that the concept of a computation excludes a lookup table. A lookup table is simply one far end of a spectrum of possible ways to implement some map from inputs to outputs. And if I were writing a program that mapped inputs to outputs, implementing it as a lookup table is at least in principle always one of the options. Even a program that interacted constantly with the environment could be implemented as a lookup table, in principle. In practice, lookup tables can easily become unwieldy. Imagine a chess program implemented as a lookup table that maps each possible state of the board to a move. It would be staggeringly huge. But I don't see why we wouldn't consider it a computation.
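
A minimal sketch of that spectrum (toy function and names invented for illustration): the same input-output mapping implemented once by computation and once as an exhaustive table.

```python
# "Is this 4-bit number even?" implemented two ways.
def is_even_computed(x: int) -> bool:
    return x % 2 == 0

# The lookup-table end of the spectrum: precompute every answer.
# For 4-bit inputs that is 16 entries; for chess positions it would be
# astronomically many, which is the "staggeringly huge" point above.
IS_EVEN_TABLE = {x: x % 2 == 0 for x in range(16)}

def is_even_lookup(x: int) -> bool:
    return IS_EVEN_TABLE[x]

# Extensionally identical on the shared domain:
assert all(is_even_computed(x) == is_even_lookup(x) for x in range(16))
```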

One of your links concerns the idea that a lookup table couldn't possibly be conscious. But the topic of consciousness is a kind of mind poison, because it is tied to strong, strong delusions which corrupt everything they touch. Thinking clearly about a topic once consciousness and the self have been attached to it is virtually impossible. For example, the topic of fission - of one thing splitting into two - is not a big deal as long as you're talking about ordinary things like a fork in the road, or a social club splitting into two social clubs. But if we imagine you splitting into two people (via a Star Trek transporter accident or what have you), then all of a sudden it becomes very hard to think about clearly. A lot of philosophical energy has been sucked into wrapping our heads around the problem of personal identity.

Replies from: Vladimir_M, Eugine_Nier
comment by Vladimir_M · 2011-03-30T03:12:23.358Z · LW(p) · GW(p)

A lookup table is simply one far end of a spectrum of possible ways to implement some map from inputs to outputs.

Yes. In my view, this continuity is best observed through graph-theoretic properties of various finite state machines that implement the same mapping of inputs to outputs (since every computation that occurs in reality must be in the form of a finite state machine). From this perspective, the lookup table is a very sparse graph with very many nodes, but there's nothing special about it otherwise.

comment by Eugine_Nier · 2011-03-30T03:40:22.510Z · LW(p) · GW(p)

The reason people are concerned with the concept of consciousness is that they have terms in their utility functions for the welfare of conscious beings.

If you have some idea how to write out a reasonable utility function without invoking consciousness I'd love to hear it. (Adjust this challenge appropriately if your ethical theory isn't consequentialist.)

Replies from: None
comment by [deleted] · 2011-03-30T04:00:46.793Z · LW(p) · GW(p)

I think it is largely because consciousness is so important to people that it is hard to think straight about it, and about anything tied to it. Similarly, the typical person loves Mom, and if you say bad things about Mom then they'll have a hard time thinking straight, and so it will be hard for them to dispassionately evaluate statements about Mom. But what this means is that if someone wants to think straight about something, then it's dangerous to tie it to Mom. Or to consciousness.

comment by cousin_it · 2011-03-29T21:27:23.002Z · LW(p) · GW(p)

Nope, no new insights yet... I agree that this is a problem, or more likely some underlying confusion that we don't know how to dissolve. It's on my list of problems to think about, and I always post partial results to LW, so if something's not on my list of submitted posts, that means I've made no progress. :-(

comment by [deleted] · 2011-03-29T22:46:09.859Z · LW(p) · GW(p)

Granted, our concepts are often unclear. The Socratic dialogs demonstrate that, when pressed, we have trouble explaining our concepts. But that doesn't mean that we don't know what things are well enough to use the concepts. People managed to communicate and survive and thrive, probably often using some of the very concepts that Socrates was able to shatter with probing questions. For example, a child's concepts of "up" and "down" unravel slightly when the child learns that the planet is a sphere, but that doesn't mean that, for everyday use, the concepts aren't just fine.

comment by AlephNeil · 2011-03-30T03:00:46.084Z · LW(p) · GW(p)

(I know the exchange isn't primarily about Kant, but...)

Kant certainly isn't a "distrusting logical reasoning" kind of guy. He takes for granted that "analytic" (i.e. purely deductive) reasoning is possible and truth-preserving. His mission is to explain (in light of Hume's problem) how "synthetic a priori knowledge" is possible (with a secondary mission of exposing all previous work on metaphysics as nonsense). "Synthetic a priori knowledge" includes mathematics (which he doesn't regard as just a variety or application of deductive logic), our knowledge of space and time, and Newtonian science.

His solution is essentially to argue that having our sensory presentations structured in space and time, and perceiving causal relations among them, is universally necessary in order for consciousness to exist at all. Since we are conscious, we can know a priori that the necessary conditions for consciousness obtain. [Disclaimer: This quick thumbnail sketch doesn't pretend to be adequate. Neither am I convinced that the theory even makes sense.]

What Kant says we cannot know is how things ("really") are, considered independently of the universal and necessary conditions for the possibility of experience. As far as I can tell, this boils down to "it's not possible to know the answers to questions that transcend the limits of possible experience". For instance, according to Kant we cannot know whether the universe is finite or infinite, whether it has a beginning in time, whether we have free will, or whether God exists.

It's important to understand that Kant is an "empirical realist", which means that the objects of experience - the coffee cups, rocks and stars around us - really do exist and we can acquire knowledge of them and their spatiotemporal and causal relations. However, if the universe could be considered 'as it is in itself' - independently of our minds - those spatiotemporal and causal relations would disappear (rather like how co-ordinates disappear when you consider a sphere objectively).

(It's similar to the dust theory.)

comment by quen_tin · 2011-03-29T17:51:50.376Z · LW(p) · GW(p)

The nature of logical reasoning is actually a deep philosophical question...

You know what an algorithm is, but do you know if you are an algorithm? I am not sure I understand why you need algorithms at all. Maybe your point is "If you are a human being X that receives an input Y, this allows you to know a nontrivial fact about reality (...)". I tend to agree with that formulation, but again, this supposes some concepts that do not go without saying, and in particular, it supposes a realist approach. Idealist philosophers would disagree.

I can understand that your idea is to build models of reality, then use a Bayesian approach to validate them. There is a lot to say about this (more than I could say in a few lines). For example: are you able to gather all your "inputs"? What about the qualitative aspects: can you measure them? If not, how can you ever be sure that your model is complete? Are the ideas you have about the world part of your "inputs"? How do you disentangle them from what comes from outside, how do you disentangle your feelings, memory and actual inputs? Is there a direct correspondence between your inputs and scientific data, or do you have presuppositions on how to interpret the data? For example, don't you need to have an idea of what space/time is in order to measure distances and durations? Where does this idea come from? Your brain? Reality? A bit of both? Don't we interpret any scientific data in the light of the theory itself, and isn't there a kind of circularity? etc.

Replies from: cousin_it, jimrandomh
comment by cousin_it · 2011-03-29T18:40:16.972Z · LW(p) · GW(p)

in particular, it supposes a realist approach. Idealist philosophers would disagree.

This is why I talked about algorithms. When a human being says "I am a human being", you may quibble about it being "observational" or "a priori" knowledge. But algorithms can actually have a priori knowledge coded in, including knowledge of their own source code. When such an algorithm receives inputs, it can make conclusions that don't rely on "realist" or "idealist" philosophical assumptions in any way, only on coded a priori knowledge and the inputs received. And these conclusions would be correct more or less by definition, because they amount to "if reality contains an instance of algorithm X receiving input Y, then reality contains an instance of algorithm X receiving input Y".
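
A minimal sketch of what such coded-in a priori knowledge could look like (the agent, its source string, and the conclusion format are all invented for illustration; a genuinely self-referential program would obtain its own source via a quine construction rather than having it pasted in):

```python
# Hypothetical agent whose a priori knowledge includes (a stand-in for)
# its own source code, baked in as a constant at write time.
AGENT_SOURCE = "def agent(source, observation): ..."

def agent(source: str, observation: str) -> str:
    # The conclusion holds by construction, without realist or idealist
    # premises: it just restates that this algorithm received this input.
    return (f"reality contains an instance of the algorithm {source!r} "
            f"receiving the input {observation!r}")

print(agent(AGENT_SOURCE, "Y"))
```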

Your second paragraph seems to be unrelated to Kant. You just point out that our reasoning is messy and complex, so it's hard to prove trustworthy from first principles. Well, we can still consider it "probably approximately correct" (to borrow a phrase from Leslie Valiant), as jimrandomh suggested. Or maybe skip the step-by-step justifications and directly check your conclusions against the real world, like evolution does. After all, you may not know everything about the internal workings of a car, but you can still drive one to the supermarket. I can relate to the idea that we're still in the "stupid driver" phase, but this doesn't imply the car itself is broken beyond repair.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T19:17:16.211Z · LW(p) · GW(p)

I don't think relying on algorithms solves the issue, because you still need someone to implement and interpret the algorithm.

I agree with your second point: you can take a pragmatist approach. Actually, that's a bit how science works. But still, you did not prove in any way that your model is a complete and definitive description of all there is, nor that it can be strictly identified with "reality", and Kant's argument remains valid. It would be more correct to say that a scientific model is a relational model (it describes the relations between things as they appear to observers and their regularities).

Replies from: cousin_it
comment by cousin_it · 2011-03-29T19:55:25.586Z · LW(p) · GW(p)

I don't think relying on algorithms solves the issue, because you still need someone to implement and interpret the algorithm.

You can be the algorithm. The software running in your brain might be "approximately correct by design", a naturally arising approximation to the kind of algorithms I described in previous comments. I cannot examine its workings in detail, but sometimes it seems to obtain correct results and "move in harmony with Bayes" as Eliezer puts it, so it can't be all wrong.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T20:25:44.785Z · LW(p) · GW(p)

No, you cannot be an algorithm. An algorithm is a concept, it only exists inside our representations... You cannot be an object/a concept inside your own representation, that makes no sense...

Replies from: cousin_it, jimrandomh, Jonathan_Graehl
comment by cousin_it · 2011-03-29T20:40:06.145Z · LW(p) · GW(p)

The question whether algorithms "exist" is related to the larger question of whether mathematical concepts "exist". (The former is a special case of the latter.) Many people on LW take seriously the "mathematical multiverse" ideas of Tegmark and others, which hypothesize that abstract mathematical concepts are actually all that exists. I'm not sure what to think about such ideas, but they're not obviously wrong, because they've been subjected to very harsh criticism from many commenters here, yet they're still standing. The closest I've come to a refutation is the pheasant argument (search for "pheasant" on this site), but it's not as conclusive as I'd like.

I think it's very encouraging that we've come to a concrete disagreement at last!

ETA: I didn't downvote you, and don't like the fact that you're being downvoted. A concrete disagreement is better than confused rhetoric.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T21:32:55.640Z · LW(p) · GW(p)

They may not be obviously wrong, but the important point is that it remains pure metaphysical speculation, that other metaphysical systems exist, and that other people even deny that any metaphysical system can ever be "true" (or real or whatever). The last point enjoys a rather broad consensus among modern philosophers: it is commonly assumed that any attempt to build a definitive metaphysical system will necessarily be a failure (because there is no definitive ground on which any concept rests). As a consequence, we have to rely on pragmatism (as you did in a previous comment). But anyway, the important point is that different approaches exist, and none is a definitive answer.

comment by jimrandomh · 2011-03-29T20:31:23.933Z · LW(p) · GW(p)

An algorithm is a concept, it only exists inside our representations

No, an algorithm can exist inside another algorithm as a regularity, and evidence suggests that the universe itself is an algorithm.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T21:19:54.325Z · LW(p) · GW(p)

No, evidence does not suggest that the universe is an algorithm. This is perfectly meaningless.

Replies from: Jack
comment by Jack · 2011-03-29T22:03:53.309Z · LW(p) · GW(p)

You need to actually explain your point and not just keep repeating it.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T22:33:14.425Z · LW(p) · GW(p)

Evidence suggests that the universe is composed of qualia. The ability to build a mathematical model that fits our scientific measurements (= a probabilistic description of the correlations between qualia) does not remotely suggest that the universe is an algorithm.

Replies from: Jack, twanvl
comment by Jack · 2011-03-29T22:41:25.226Z · LW(p) · GW(p)

It may not suggest this to your satisfaction but it certainly suggests it remotely (and the mathematical model involves counterfactual dependencies of qualia, not just correlations). What does it mean to say that the universe is composed of qualia? That sounds like an obvious confusion between representation and reality.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T22:58:56.421Z · LW(p) · GW(p)

Well my opinion is that the confusion between representation and reality is on your side.

Indeed, a scientific model is a representation of reality - not reality. It can be found inside books or learned at school, it is interpreted. On the contrary, qualia are not represented but directly experienced. They are real.

That sounds obvious. No?

Replies from: FAWS, twanvl
comment by FAWS · 2011-03-29T23:19:57.207Z · LW(p) · GW(p)

That sounds obvious. No?

Not at all. What you call "qualia" could be the combination of a mental symbol, the connections and associations this symbol has, and various abstract entities. When you experience experiencing such a "quale", the actual symbol might or might not be replaced with a symbol for the symbol, possibly using a set of neural machinery overlapping with the set for the actual symbol (so you can remember or imagine things without causing all of the involuntary reactions the actual experience causes).

comment by twanvl · 2011-03-29T23:04:50.967Z · LW(p) · GW(p)

qualia are not represented but directly experienced.

Can you give a definition of these qualia?

That sounds obvious. Sounding obvious and being true are two very different things.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T23:27:54.604Z · LW(p) · GW(p)

I define qualia as the elements of my subjective experience. "That sounds obvious" was a euphemism. It's more than obvious that qualia are real; it's given; it is the only truth that does not need to be proven.

comment by twanvl · 2011-03-29T22:41:46.845Z · LW(p) · GW(p)

Evidence suggests that the universe is composed of qualia.

Do you have some links to this evidence, or studies that come to this conclusion?

comment by Jonathan_Graehl · 2011-03-29T20:45:31.461Z · LW(p) · GW(p)

Unless you're making a use-mention distinction (and why would you?), I don't see your point. An algorithm can be realized in a mechanism. Are you saying that he should say "you can be an implementation of an algorithm" instead?

Replies from: quen_tin, quen_tin
comment by quen_tin · 2011-03-29T21:41:33.088Z · LW(p) · GW(p)

What I mean is that the notion of algorithm is always relative to an observer. Something is an algorithm because someone decides to view it as an algorithm. She/he decides what its inputs are and what its outputs are. She/he decides what is the relevant scale for defining what a signal is. All these decisions are arbitrary (say I decide that the text-processing algorithm that runs on my computer extends to my typing fingers and the "calculation" performed by their molecules - why not? My hand is part of my computer. Does my computer "feel it"? Only because I decided to view things like that?). Being, on the contrary, is independent of any observer and is not arbitrary. Therefore being an algorithm is meaningless.

Replies from: jimrandomh, Jack, quen_tin
comment by jimrandomh · 2011-03-29T21:58:29.911Z · LW(p) · GW(p)

"Algorithm" is a type; things can be algorithms in the same sense that 5 is an integer and {"hello","world"} is a list. This does not depend on the observer, or even the existence of an observer.

Replies from: AlephNeil, quen_tin
comment by AlephNeil · 2011-03-29T22:52:03.178Z · LW(p) · GW(p)

I'm not sure you understand where quen tin is coming from. He would regard integers, lists and "algorithms" in your sense as abstract entities, and maintain (as a point so fundamental that it's never spelled out) that abstract entities are not physically real. At most they provide patterns that we can usefully superimpose on various 'systems' in the world.

The point isn't whether or not abstract entities are observer-dependent, the point is that the business of superimposing abstract entities on real things is observer-dependent (on quen tin's view). And observers themselves are "real things" not abstracta.

(Not that I agree with this personally, but it's important to at least understand how others view things.)

Replies from: Jack
comment by Jack · 2011-03-29T23:01:11.408Z · LW(p) · GW(p)

There is a sense in which the view of the universe that just consists of me (an algorithm) receiving input from the universe (another algorithm) feels like it's missing something, it's the intuition the Chinese room argument pumps. I've never really found a good way to unpump it. But attempts to articulate that other component keep falling apart so...

comment by quen_tin · 2011-03-29T22:11:23.111Z · LW(p) · GW(p)

I think it does.

{"hello", "world"} is a set of lighted pixels on my screen, or a list of characters in a text file containing source code, or a list of bytes in my computer's memory, but in any case, there must be an observer so that they can be interpreted as a list of string. The real list of string only exists inside my representation.

Replies from: Jack
comment by Jack · 2011-03-29T22:19:21.906Z · LW(p) · GW(p)

Pretty sure I can write code that makes these same interpretations.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T22:46:17.990Z · LW(p) · GW(p)

Your code is a list of characters in a text file, or a list of bytes in your computer's memory. Only you interpret it as a code that interprets something.

Replies from: Jack
comment by Jack · 2011-03-29T22:51:58.384Z · LW(p) · GW(p)

What does it mean to 'interpret' something?

Edit: or rather, what does it mean for me to interpret something, 'cause I know exactly what it means for code to do it.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T23:15:37.285Z · LW(p) · GW(p)

I will reply several messages at once.

  • Interpreting is giving a meaning to something. Stating that the "code interprets something" is a misuse of language for saying that the code "processes something". You don't know if the code gives meaning to anything since you are not the code, only you give the meaning. "Interpretation" is a first-person concept.

  • "mathematical model involves counterfactual dependencies of qualia" -> I suggest you read David Mermin's "What quantum physics is tring to tell us". It can be found on arxiv. Quantum physics is only about correlations between measurements - or at least it can be successfully interpreted that way, and that solves quite every "paradox" of it...

  • "if you dispute this metaphysics you need to explain what the disadvantage" -> It would require more than a few comments. I just found your self-confidence a bit arrogant, as far as scientific realism is far from being a consensus among philosophers and has many flaws. Personnaly, the main disavantage I see is that its an "objectual" conception, a conception of things as objects, which does not account for any subject, and does not acknowledge that an object merely exist as representations for subjects. It does not address first-person phenomenology (time, ...). It does not seem to consider our cognitive situation seriously by uncritically claiming that our representation is reality, that's all, which I find a bit naive.

(EDIT - formatting)

Replies from: Jack
comment by Jack · 2011-03-29T23:52:21.139Z · LW(p) · GW(p)

Interpreting is giving a meaning to something. Stating that the "code interprets something" is a misuse of language for saying that the code "processes something". You don't know if the code gives meaning to anything since you are not the code, only you give the meaning. "Interpretation" is a first-person concept.

Okay... well what does it mean to give meaning to something? My claim is that I am a (really complex) code of sorts and that I interpret things in basically the same way code does. Now it often feels like this description is missing something and that's the problem of consciousness/qualia for which I, like everyone else, have no solution. But "interpretation is a first-person concept" doesn't let us represent humans.

"if you dispute this metaphysics you need to explain what the disadvantage" -> It would require more than a few comments.

You were disputing someone's claim that 'the universe is an algorithm'... why isn't that reason enough to identify one possible disadvantage? Otherwise you're just saying "Na-ahhhh!"

I just found your self-confidence a bit arrogant, since scientific realism is far from being a consensus among philosophers and has many flaws. Personally, the main disadvantage I see is that it is an "objectual" conception, a conception of things as objects, which does not account for any subject, and does not acknowledge that objects merely exist as representations for subjects. It does not address first-person phenomenology (time, ...). It does not seem to take our cognitive situation seriously: it uncritically claims that our representation is reality, which I find a bit naive.

I'm really bewildered by this and imagine you must have read someone else and taken their position to be mine. I'm a straightforward Quinean ontological relativist, which is why I paraphrased the original claim in terms of ideal representation and dropped the 'is'. I was just trying to explain the claim since it didn't seem like you were understanding it - I didn't even make the statement in question (though I do happen to think the algorithm approach is the best thing going, I'm not confident that that's the end of the story).

But I think we're bumping up against competing conceptions of what philosophy should be. I think philosophy is a kind of meta-science which expands and clarifies the job of understanding the world. As such, it needs to find a way of describing the subject in the language of scientific representation. This is what the cognitive science end of philosophy is all about. But you want to insist on the subject as fundamental - as far as I'm concerned that's just refusing to let philosophy/science do its thing.

Replies from: quen_tin
comment by quen_tin · 2011-03-30T09:25:43.418Z · LW(p) · GW(p)

I also view philosophy as a meta-science. I think language is relational by nature (e.g. red refers to the strong correlation between our respective experiences of red) and is blind to singularity (I cannot explain by means of language what it is like for me to see red, I can only give it a name, which you can understand only if my red is correlated to yours - my singular red cannot be expressed).

Since science is a product of language, its horizon is describing the relational framework of existing things, which are unspeakable. That's exactly what science converges toward (quantum physics is a relational description of measurables - with special relativity, space/time referentials are relative to an observer, etc.). Being a subject is unspeakable (my experience of existing is a succession of singularities) and is beyond the horizon of science; science can only define its contour - the relational framework.

I don't think that we can describe the subject in the language of scientific representation, because I think that the scientific representation is always relative to a subject (therefore the subject is already in the representation, in a sense...). That is why I always insist on the subject. Not that I refuse to let philosophy do its thing, I just want to clarify what its thing exactly is, so that we are not deluded by a mythical scientific description of everything that would be totally independent of our existence (which would make of us an epiphenomenon).

I hope this clarifies my position.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-03-30T19:18:46.946Z · LW(p) · GW(p)

To your 3 paragraphs:

1: Yes - we assume that words mean the same thing to others when we use them, and it's actually quite tricky to know when you've succeeded in communicating meaning.

2: "with special relativity, space/time referentials are relative to an observer, etc." - this is rather sad and makes me think you're trolling. What does this have to do with language? Nothing.

3: Your belief that we can't describe things in certain ways has you preaching, instead of trying to discover what your interlocutor actually means. "which would make of us an epiphenomenon" - so what? It sounds like you're prepared to derail any conversation by insisting everyone remind themselves that these are PEOPLE saying and thinking these things. Or maybe, more reasonably, you think that everyone ought to have a position about why they aren't constantly saying "I think ...", and you'll only derail when they refuse to admit that they're making an aesthetic choice.

Replies from: quen_tin
comment by quen_tin · 2011-03-30T19:41:20.492Z · LW(p) · GW(p)

I only insist that people do not conflate representation and reality. To me, stating that an object is is already a fallacy (though I accept this as a convenient way of speaking). An object appears or is conceived, but we do not know what is, and we should not talk about what we do not know. To me, uncritically assuming that there exists an objective world and trying to figure out what it is is already a fallacy. Why do I think that? Because I think there is no absolute, only relations.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-03-30T19:47:43.210Z · LW(p) · GW(p)

Because I think there is no absolute, only relations.

Who cares?

comment by Jack · 2011-03-29T22:09:10.447Z · LW(p) · GW(p)

So I agree that whether or not an observer views something as an algorithm is, in fact, contingent. But the claim is that people and the universe are in fact algorithms. To put it in pragmatic language: representing the universe as an algorithm and its components as subroutines is a useful and clarifying way of conceptualizing that universe relative to competing views and has no countervailing disadvantages relative to other ways of conceptualizing the universe.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T22:22:12.859Z · LW(p) · GW(p)

I prefer this formulation, because you emphasize the representational aspect. Now a representation (a conceptualization) requires someone who conceptualizes/represents things. I think that this "useful and clarifying way" just forgets that a representation is always relative to a subject. The last part of the sentence only expresses your proud ignorance (sorry)...

Replies from: Jack
comment by Jack · 2011-03-29T22:26:35.278Z · LW(p) · GW(p)

The last part of the sentence only expresses your proud ignorance (sorry)...

What proud ignorance? I haven't proudly asserted anything (I'm not among your downvoters). My point is, if you dispute this metaphysics you need to explain what the disadvantages of it are, and you haven't done that, which is what is frustrating people.

Replies from: twanvl
comment by twanvl · 2011-03-29T22:38:04.504Z · LW(p) · GW(p)

I haven't proudly asserted anything

I am not saying that it is meant in this way, but the following could be construed as a proud assertion:

is a useful and clarifying way of conceptualizing that universe relative to competing views and has no countervailing disadvantages relative to other ways of conceptualizing the universe.

I agree that representing the universe as an algorithm is a useful view. I am not sure what you mean by "its components as subroutines", though. What are the components of the universe?

Replies from: Jack
comment by Jack · 2011-03-29T22:47:58.600Z · LW(p) · GW(p)

Re: the first part, that's just what it means to assert that "the universe is an algorithm".

The components are you, me, the galaxy, socks, etc.

Replies from: twanvl
comment by twanvl · 2011-03-29T23:00:01.961Z · LW(p) · GW(p)

I thought you were only talking about representing the universe as algorithms, which seems like a good idea. You could also claim that "the universe is an algorithm", but I find that statement to be too vague: what does 'is' mean in this sentence?

The components are you, me, the galaxy, socks, etc.

A subroutine in a program is a distinct part that can be executed repeatedly. Are you saying that the universe has a distinct part dedicated to dealing with socks? To me that sounds like the universe would somehow have to know what is and what is not a sock. (Sorry for anthropomorphising the universe there.) It is mainly the word "subroutine" that I have a problem with, not the universe-as-an-algorithm idea per se.

Replies from: Jack
comment by Jack · 2011-03-29T23:27:01.336Z · LW(p) · GW(p)

I thought you were only talking about representing the universe as algorithms, which seems like a good idea. You could also claim that "the universe is an algorithm", but I find that statement to be too vague: what does 'is' mean in this sentence?

Quinean ontological pragmatism just paraphrases existential claims as "x figures in our best explanation of the universe". So 'is' in the sentence "the universe is an algorithm" means roughly the same thing as 'are' in the sentence "there are atoms in the universe".

Are you saying that the universe has a distinct part dedicated to dealing with socks? To me that sounds like the universe would somehow have to know what is and what is not a sock. (sorry for anthropomorphising the universe there.) It is mainly the word "subroutine" that I have a problem with, not the universe-as-an-algorithm idea per se.

I see what you're saying, and on reflection it might be a dangerously misleading thing to say. The best candidate algorithm would not have such subroutines; however, more complex but functionally identical algorithms would.

comment by quen_tin · 2011-03-29T22:00:36.560Z · LW(p) · GW(p)

The downvote corporatist system of this site is extremely annoying. I am proposing a valid and relevant argument. I expect counter-arguments from people who disagree, not downvotes. Why not keep downvotes for unargued/irrelevant comments?

Replies from: Tyrrell_McAllister, Vladimir_M, jimrandomh
comment by Tyrrell_McAllister · 2011-03-30T00:24:04.587Z · LW(p) · GW(p)

The downvote corporatist system of this site is extremely annoying.

I'm really curious: What work is the word "corporatist" doing in this sentence? In what sense is the downvote system "corporatist"?

comment by Vladimir_M · 2011-03-29T22:20:24.808Z · LW(p) · GW(p)

Your above comment could be phrased better (it makes a valid point in a way that can be easily misinterpreted as proposing some mushy-headed subjective relativism), but I agree that people downvoting it are very likely overconfident in their own understanding of the problem.

My impression is that the concept of "algorithm" (and "computation" etc.) is dangerously close to being a semantic stop sign on LW. It is definitely often used to underscore a bottom line without concern for its present problematic status.

comment by jimrandomh · 2011-03-29T22:27:37.027Z · LW(p) · GW(p)

The guideline is to upvote things you want to see more of, and downvote things you want to see less of. That leaves room for interpretation about where the two quality thresholds should be, but in practice they're both pretty high and I think that's a good thing. There are a lot of things that could be wrong with a comment besides being irrelevant or not being argued. In this case, I think the problem is arguing one side of a confusing question rather than trying to clarify or dissolve it.

Replies from: None
comment by [deleted] · 2011-03-29T23:17:22.531Z · LW(p) · GW(p)

Votes are not always for good reasons, whatever the guidelines. Getting good behavior out of people works best if people are accountable for what they do, and tends to fail when they are not. People who comment are accountable in at least two ways that people who vote are not:

1) They have to explain themselves. That, after all, is what a comment is.

2) They have to identify themselves. You can't comment without an account.

Voters have to do neither. Now, even though commenters are doubly accountable, I think most will agree that a certain nonzero proportion of the comments are not very good. Take away accountability, and we should expect the proportion of the bad to increase.

comment by quen_tin · 2011-03-29T21:17:40.018Z · LW(p) · GW(p)

It's a category error. I am not a concept, nor an instance of a concept.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-03-30T19:25:23.703Z · LW(p) · GW(p)

So you're not a person?

Replies from: quen_tin
comment by quen_tin · 2011-03-30T19:51:25.535Z · LW(p) · GW(p)

Inside your representation, I might be a person, and I do represent myself as a person sometimes.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-03-30T22:13:59.109Z · LW(p) · GW(p)

"... and words will never hurt me" :)

comment by jimrandomh · 2011-03-29T17:56:37.846Z · LW(p) · GW(p)

All of those questions have known answers, but you have to take them one at a time. Most of them go away when you switch from discrete (boolean) reasoning to continuous (probabilistic) reasoning.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T18:12:45.150Z · LW(p) · GW(p)

Each of those questions has several known and unknown answers...

Moreover, the same questions apply to your preconception of continuity and probability. How could you know it applies to your inputs? For example: saying "I feel 53% happy" does not make sense, unless you think happiness has a definite meaning and is reducible to something measurable. Both are questionable. Does any concept have a definite meaning? Maybe happiness has a "probabilistic" meaning? But what does it rest upon? How do you know that all your input is reducible to measurable constituents, and how could you prove that?

Replies from: jimrandomh
comment by jimrandomh · 2011-03-29T18:18:57.680Z · LW(p) · GW(p)

But what does it rest upon?

Cox's theorem. Probability reduces to set measure, which requires nothing but a small set of mathematical axioms.
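
For reference, a sketch of the two formal routes being run together here (standard material, not original to this thread): Kolmogorov grounds probability as a normalized measure, while Cox derives the same rules from consistency requirements on degrees of plausibility.

```latex
% Kolmogorov: probability as a normalized measure P on an event algebra.
\begin{align*}
  & P(A) \ge 0, \qquad P(\Omega) = 1, \\
  & P\Big(\textstyle\bigcup_i A_i\Big) = \sum_i P(A_i)
    \quad \text{for pairwise disjoint } A_i.
\end{align*}
% Cox: any consistent assignment of plausibilities is, up to rescaling,
% governed by the product and sum rules:
\begin{align*}
  P(A \wedge B \mid X) &= P(A \mid B \wedge X)\, P(B \mid X), \\
  P(A \mid X) + P(\lnot A \mid X) &= 1.
\end{align*}
```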

Replies from: quen_tin
comment by quen_tin · 2011-03-29T19:00:51.498Z · LW(p) · GW(p)

My question is: what does "happiness" rest upon? A probability of what? You need to have an a priori model of what happiness is in order to measure it (that is, a theory of mind), which you do not have. Verifying your model depends on your model...

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2011-03-30T00:13:08.925Z · LW(p) · GW(p)

You argued that "I believe P with probability 0.53" might be as meaningless as "I am 53% happy". It is a valid response to say, "Setting happiness aside, there actually is a rigorous foundation for quantifying belief—namely, Cox's theorem."

Replies from: quen_tin
comment by quen_tin · 2011-03-30T10:10:52.133Z · LW(p) · GW(p)

The problem here is that "I believe P" supposes a representation / a model of P. There must be a pre-existing model prior to using Cox's theorem on something. My question is semantic: what does this model rest on? The probabilities you get will depend on the model you adopt, and I am pretty sure that there is no definitive model/conception of anything (see the problem of translation analysed by Quine, for example).

comment by Jack · 2011-03-29T21:06:33.328Z · LW(p) · GW(p)

Kant is actually the last philosopher who is part of both the analytic and continental canons; neither embracing nor rejecting his positions is emblematic of one school or the other. This particular bit of skepticism has plenty of precursors in earlier philosophy anyway.

Replies from: quen_tin
comment by quen_tin · 2011-03-29T21:43:53.339Z · LW(p) · GW(p)

Agreed.

comment by marcad · 2011-03-30T16:17:59.343Z · LW(p) · GW(p)

In these articles, I believe Less Wrong is approaching extraordinary levels of groupthink. I had the misfortune of growing up as the child of bona fide cult members, complete with guru. There are many similarities here.

And what is significantly absent is self-awareness of the blatant conceit in believing that some super-smart dude can reinvent all thinking all by himself (don't deny it; that's what's going on). I have been disgusted by articles written by Eliezer which virtually lifted whole swaths of Nietzsche, completely unattributed. There is no way that most of this is original thinking.

And I'll also point out, to all the people with rationality blinders on, that if the poor dumb sheeple (as appears to be the general attitude around here) get wind that you're anywhere close to installing super-awesome robot overlords that you are certain will rule with love and compassion, then we'll see an uprising which will make the French Revolution look like a love-in.

Really super disgusted. And I don't even give a shit about Wittgenstein. Though I think rationalists who believe they have found or are close to finding the key to living and thinking non-metaphorically are living in their own very delusional altered reality.

What's most ironic is that Less Wrong IS mainstream philosophy. Look around, peeps: this IS the zeitgeist of the scientific set. The fact that universities haven't caught up with you means Nothing. Get some self-awareness: this is purely and simply an advanced step in the progression of the Enlightenment, although more accurately it's an advanced step in scientific reason à la the school of Socrates. Of course you're different, advanced, but you are a part of that specific genealogy. And this is the damning lack of awareness most present in this mindset. You are children of the Enlightenment. (Go ahead, murder your fathers ;)

The analysis of mainstream philosophy is missing some key analytical components. Namely, the big picture: the nature, progression, and priorities of academia overall; the political and social world in which that academic progression took shape; the rationalizations and biases supporting major universities as suppliers of ruling classes, as well as the rationalizations and biases of academia at working-class universities; and of course the funding of all of the above, and how those forces shape thinking. Not exploring these issues is tantamount to not exploring the problem. It's just hand-wringing.

And why this glaring lack? Those are the hard problems. Hard to talk about, aren't they? Hell, all this philosophy debate sparks hundreds of comments, but this is a particularly abstract topic: 10% concrete development, 90% repainting the bike shed.

Here's my contribution to LW: the Fallacy of the Single Solution (to society's ills), i.e. AI, i.e. Rationality. Particularly abstract solutions, mind you. A lot of what goes on around here is quite Utilitarian, a point which alone should make people sit back and ask: "do we really have the knowledge and capabilities yet to resolve, through advanced AI, the serious unresolved problems of Utilitarianism?" I'd say that you'd better be Insanely Sure. The bar for evidence had better be high; this is high-risk territory. Or, instead, are the Old Dead Guys who have discussed these problems Not Worth Reading either? "Stick with our dogma, peeps, don't confuse yourselves!"...

...Oh man, when you're telling people "don't confuse yourselves with the old literature," you are in a really altered reality. Wow. Cults.

Quite strange, all the denial. But then again, that's what groupthink is all about.

Replies from: TheOtherDave, wnoise
comment by TheOtherDave · 2011-03-30T16:53:35.291Z · LW(p) · GW(p)

What's also ironic is that luke, who wrote the post you're responding to, has recently argued at some length that it's important to acknowledge the relationships between LW and mainstream philosophy and in particular the places where LW/EY owe debts to mainstream philosophy.

A reasonable man might infer from this that he's not entirely blinded by groupthink on this particular subject.

Of course, that doesn't mean all the rest of us aren't... though we sure do seem to have a lot of internal disagreement for a bona fide cult.

comment by wnoise · 2011-03-30T16:39:27.080Z · LW(p) · GW(p)

all thinking all by himself.

All thinking, all by himself? No. Great chunks, while being immersed in the culture that resulted from that thinking? Sure.

I have been disgusted by articles written by Eliezer which virtually lifted whole swaths of Nietzsche, completely unattributed. There is no way that most of this is original thinking.

For direct influences, Eliezer is quite willing to cite e.g. Feynman, Dennett, Pearl, and Drescher.

I don't see the connection you see to Nietzsche in particular, merely a bunch of things that are tangential at best. Would you be willing to spell out which bits of his writings are like which bits of Nietzsche? I would strongly guess that anything you identify is not particularly unique to Nietzsche: similar points have been made both before and after him, and any ideas that had no antecedents before him have since leaked out into the broader culture.

It depends on what you mean by this being "original thinking". Eliezer almost certainly isn't directly mining 19th-century German philosophers for ideas. I doubt he has read much if any Nietzsche, and would thus not be able to copy Nietzsche directly. Nonetheless, some of Nietzsche's ideas have made their way into the modern worldview. Ideas are generally dense and interconnected. Starting at one idea of a philosopher and thinking about its implications is going to produce new ideas similar to others the philosopher had.

Yes, one should keep clear that one's ideas that apparently arise from within are crucially dependent on previous experiences and culture. But that doesn't extend to a requirement to track down and cite previous articulators of similar ideas. Once an idea is encountered indirectly, it's fair game to build upon. It's long been recognized that certain ideas arise multiple times, apparently independently, when the prerequisites take root in a given culture. Newton and Leibniz independently invented calculus, with no direct connection; I'm sure neither could have cited any direct influence from prior mathematicians that would lead directly to calculus. But there was still enough commonality in mathematical culture that they developed it at roughly the same time.

comment by myron_tho · 2012-10-28T20:57:57.105Z · LW(p) · GW(p)

I can see why philosophy is so bothersome to people who believe that rationality can and should be the only path to knowledge, and who hold the equally troubled belief that reductionism applies to any science outside of physics.

So I have to ask: on what grounds do you rationally justify any of your own claims to Truth and Right? It certainly isn't through empiricism, despite the vile form of scientistic reductionism that is evangelized around these parts. There is a form of reasoning at work behind these claims, of course, but without a clear philosophical grounding you're left with a series of implicit, half-formed assumptions that you simply hold up as self-evident (they are not, which is why we have philosophy in the first place).

You will earn +5 points if you can justify any of your presuppositions regarding the superiority of "objectivity" and "rationality" as modes of inquiry without an appeal to metaphysics and the "magical categories" that your rationality-seeking brains reject a priori.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-28T21:43:17.631Z · LW(p) · GW(p)

Others: downvotes don't fix whatever problems might be in myron's thoughts. They make them worse.

Myron: preliminary note, your comment sounds a bit presumptuous and demanding.

Substance: I don't believe that rationality necessarily has a fundamental undeniable metaphysical grounding. I believe every philosophy lacks this. Problems like the problem of induction (knowledge comes from experiences and it's impossible to know that the future will be like the past) and the turtles all the way down problem (assumptions are either unwarranted or dependent on further assumptions, which makes all forms of thought either infinitely regressive or groundless) are basically insurmountable. These are problems that all philosophies face and cannot answer satisfactorily.

However, I think you're asking the wrong question. Instead of starting by looking for fundamental metaphysical justifications, which we know to be impossible, we should look to a more pragmatic and tangible level. The brute fact of our existence is that some things work and others don't, that some things seem right and others seem wrong. If someone is insistent upon denying reality, then that's their affair, but they should know that there are consequences to this rejection. I consider the rejection of reality to be a vice, because those who do so reject their own current values and intuitions in favor of an embrace of an abstract form of nihilism. Nihilism is much easier than acknowledging reality, but it's also much worse, in my opinion.

Even if nothing that we see or predict or experience or value is real in an absolute and abstract and irrefutable metaphysical sense, it's real and meaningful and useful in the context of our everyday lives. The reality that we face each day is the one that I care about, not the abstract and irrefutable ideological one. That's why I support rationality even if I don't know why rationality works. The fact that it does work, or that it seems to work, is enough for me.

I also think it's relevant that the alternatives to rationality that I've seen have all been worse, in terms of logical metaphysical justification.

Replies from: wedrifid, myron_tho
comment by wedrifid · 2012-10-29T01:03:08.989Z · LW(p) · GW(p)

Others: downvotes don't fix whatever problems might be in myron's thoughts. They make them worse.

The expected value of attempting to fix myron's thoughts is not sufficiently high to warrant adopting it as a goal. In fact it is negative. This is due to the low probability of success and the negative externalities the attempt would produce.
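Schematically, as a back-of-the-envelope sketch (p, B, C, and X are hypothetical placeholders for the probability of success, the benefit if it works, the cost of trying, and the externalities):

\[ \mathbb{E}[\text{attempt}] \approx p \cdot B - C - X \;<\; 0 \quad \text{when } p \text{ is small and } X \text{ is large.} \]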

Downvotes do help fix the problem of undesirable discussion being visible. In sufficient volume they would also have prevented later parts of this discussion entirely, preempting the frustration that prompted "arrogant asshole" labels; downvoting, indeed, tends to be more effective at making people "fuck off" than actually telling them to. Perhaps some voters simply saw the warning signs ahead of time?

Replies from: chaosmosis, myron_tho
comment by chaosmosis · 2012-10-29T01:25:13.749Z · LW(p) · GW(p)

At this point I agree, but at that point there was no real sign he'd turn out this way. What warning signs were there? Because if they were there, then I missed them, and I'd prefer not to miss them in the future.

comment by myron_tho · 2012-10-29T01:27:05.804Z · LW(p) · GW(p)

fix myron's thoughts

And they still see no problem with reducing everything to rationality. As if "fixing" based on some unjustified utopian ideals hasn't led to just about every atrocity in history.

But no really THIS TIME we've got it right. No, really.

comment by myron_tho · 2012-10-28T22:14:33.837Z · LW(p) · GW(p)

The brute fact of our existence is that some things work and others don't, that some things seem right and others seem wrong. If someone is insistent upon denying reality, then that's their affair, but they should know that there are consequences to this rejection. I consider the rejection of reality to be a vice, because those who do so reject their own current values and intuitions in favor of an embrace of an abstract form of nihilism. Nihilism is much easier than acknowledging reality, but it's also much worse, in my opinion.

This paragraph moves from (rightly) noting that we cannot establish certainty to, in the very next sentence, a confident assertion of truth without so much as an attempt at justification. Repeating unjustified claims ad nauseam, despite the LessWrong belief that simply repeating a claim is enough to make it so, only illustrates why this project fails: the lack of justification (minus the invocation of Putnam's "no miracles" argument, which is not as ironclad as you believe) is a very real problem for the brash and sweeping generalization that "philosophy is diseased and useless".

As an aside, I find it interesting that you speak to me of "nihilism" given the argument for reductionism of the worst sort. Talk about "values devaluing themselves"; your own position is incompatible with value and meaning!

The lack of respect for philosophy here is telling; the consensus arguments here aren't even consistent, let alone capable of making informed claims to truth. You cannot simply put forth a metaphysical position -- and you are most certainly doing so despite the unwillingness to acknowledge your beliefs -- and then handwave it away as "well we think it works so we're right".

The fact that it does work, or that it seems to work, is enough for me.

Works for what? In trivial cases of "common sense" where induction is more or less "right"? For some instances of medicine, electronics, other assorted applications of technology? I'll grant you that too.

As a totalizing and unassailable account of humanity, the natural world, all possible knowledge? Absolutely not. Given that the consensus around here is that life and mind are reducible to rationality and technology metaphors, I hardly find this position surprising, although it is all but indefensible by your own stated positions.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-28T22:44:22.562Z · LW(p) · GW(p)

This paragraph moves from (rightly) noting that we cannot establish certainty to, in the very next sentence, a confident assertion of truth without so much as an attempt at justification. Repeating unjustified claims ad nauseam, despite the LessWrong belief that simply repeating a claim is enough to make it so, only illustrates why this project fails: the lack of justification (minus the invocation of Putnam's "no miracles" argument, which is not as ironclad as you believe) is a very real problem for the brash and sweeping generalization that "philosophy is diseased and useless".

There is no logical way that I can prove to you that reality exists. If you want one, I am sorry. Nonetheless, my senses tell me that reality exists and that logic works and that my values are good. I accept those senses because the alternative is to embrace groundlessness and the total destruction of meaning.

You do not show how other philosophies can solve the problems I outline. You have no offense against rationalism. Rationalism has offense against other philosophies because rationalism works. Even if rationalism doesn't work, it appears to, and is the inescapable condition of my life. I can't help but think in terms of logic and induction and empiricism, and I refuse to embrace any abstract form of truth without a tangible connection to my own internal understanding of the universe.

The choice isn't between one philosophy and many that are equally justified, but between one philosophy, my own and the one that I can't help but believe, and others which are so abstract and disconnected from my own experiences and understanding that they fail to provide any sort of value in my life.

As an aside, I find it interesting that you speak to me of "nihilism" given the argument for reductionism of the worst sort. Talk about "values devaluing themselves"; your own position is incompatible with value and meaning!

I don't believe that reductionism destroys value. That seems like a separate debate, anyways.

The lack of respect for philosophy here is telling; the consensus arguments here aren't even consistent, let alone capable of making informed claims to truth. You cannot simply put forth a metaphysical position -- and you are most certainly doing so despite the unwillingness to acknowledge your beliefs -- and then handwave it away as "well we think it works so we're right".

I'm not trying to do that. I'm saying that my reality is inescapably the way it currently is. If I didn't accept the metaphysical condition that I currently accept, I would believe things indiscriminately and have no ability to judge things or discern things or to make choices. However, I want to do those things. Therefore, I accept rationality. This isn't pretty, from a logical standpoint. But it's basically inevitable for anyone who wants purpose in their life.

Your alternative philosophy, whatever it might be, is at least as groundless as rationality, if not more so. If you want to reject rationality, please pick a specific paradigm and explain how it would provide an answer to the problem of induction and turtles all the way down. Otherwise, you're being unfair in your evaluation, because you place a higher burden of proof on rationality than you do on other positions.

Works for what? In trivial cases of "common sense" where induction is more or less "right"? For some instances of medicine, electronics, other assorted applications of technology? I'll grant you that too.

As a totalizing and unassailable account of humanity, the natural world, all possible knowledge? Absolutely not. Given that the consensus around here is that life and mind are reducible to rationality and technology metaphors, I hardly find this position surprising, although it is all but indefensible by your own stated positions.

Can you give me an example of somewhere where rationality doesn't work, where some other paradigm does? In my experience, if rationality doesn't work, it's in an area where nothing else works either. Moreover, rationality has a history of solving problems that were previously thought to be unsolvable. Therefore, I currently trust it more than alternative positions.

Replies from: myron_tho
comment by myron_tho · 2012-10-28T22:58:59.175Z · LW(p) · GW(p)

There is no logical way that I can prove to you that reality exists. If you want one, I am sorry. Nonetheless, my senses tell me that reality exists and that logic works and that my values are good. I accept those senses because the alternative is to embrace groundlessness and the total destruction of meaning.

I have no exceptional quarrel with scientific realism or with the existence of an objective and mind-independent reality. I am, however, skeptical: firstly, of restricting inquiry into that domain to "rationality", which is needlessly constraining; and secondly, of privileging "objective" modes of inquiry, which leaves out very important matters -- like consciousness, ethics, and aesthetics, to name a few.

You do not show how other philosophies can solve the problems I outline. You have no offense against rationalism. Rationalism has offense against other philosophies because rationalism works. Even if rationalism doesn't work, it appears to, and is the inescapable condition of my life. I can't help but think in terms of logic and induction and empiricism, and I refuse to embrace any abstract form of truth without a tangible connection to my own internal understanding of the universe.

Indeed, I do not show how other philosophies may solve the problems because I question their status as problems at all. To treat everything as a "problem" that can and must be solved by Mighty Intellect is to implicitly endorse a particular epistemic, if not metaphysical, position -- a position that takes for granted a particular status of thinking subjects as they relate to mind-independent reality and other beings -- and I simply choose not to endorse that position, or more to the point, not to endorse it as uncritically as the locals here are wont to do.

To repeat my earlier point: why should rationalism be given privileged grounds? The no-miracles argument is about the only thing you've got to hang a hat on, and it is trivial to point out that there are many instances just in science alone where we don't have knowledge and may never be able to acquire it. This is without even getting into arguments about why "progress" and "doing things" should be the ultimate measuring stick of usefulness, let alone truth.

The choice isn't between one philosophy and many that are equally justified, but between one philosophy, my own and the one that I can't help but believe, and others which are so abstract and disconnected from my own experiences and understanding that they fail to provide any sort of value in my life.

I can't speak for everyone of course but I find immense value in aesthetics and in other non-rational modes of human experience, and equally, I find myself wary of philosophies that exclude such values and treat them as meaningless.

In reality all this article has done is show that philosophy is far from dead; LessWrong has simply chosen to adopt a particularly limiting form of it and decry everything outside that sphere.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-28T23:06:45.539Z · LW(p) · GW(p)

I have no exceptional quarrel with scientific realism or with the existence of an objective and mind-independent reality. I am, however, skeptical: firstly, of restricting inquiry into that domain to "rationality", which is needlessly constraining; and secondly, of privileging "objective" modes of inquiry, which leaves out very important matters -- like consciousness, ethics, and aesthetics, to name a few.

I think that rationality encompasses all of those things entirely and don't understand why you believe differently. Rationality is the tool that we use to distinguish the claims about aesthetics and consciousness and ethics that make sense from the ones that don't. A refusal to use this tool seems like it would be crippling. Other tools might still prove useful, and there are issues as to what we should do if our tools conflict, but I think rationality is the ultimate tool because it is very good at making comparisons between different things, since it uses such generalized ideas as logic. If one aesthetic claim contradicts another, only rationality can recognize that as a problem and work towards solving it.

Indeed, I do not show how other philosophies may solve the problems because I question their status as problems at all. To treat everything as a "problem" that can and must be solved by Mighty Intellect is to implicitly endorse a particular epistemic, if not metaphysical, position -- a position that takes for granted a particular status of thinking subjects as they relate to mind-independent reality and other beings -- and I simply choose not to endorse that position, or more to the point, not to endorse it as uncritically as the locals here are wont to do.

This epistemic condition is inevitable, because you ARE a thinking subject. If you prioritize a different epistemic condition above this one I don't understand how you can go about living your life.

To repeat my earlier point: why should rationalism be given privileged grounds? The no-miracles argument is about the only thing you've got to hang a hat on, and it is trivial to point out that there are many instances just in science alone where we don't have knowledge and may never be able to acquire it. This is without even getting into arguments about why "progress" and "doing things" should be the ultimate measuring stick of usefulness, let alone truth.

There's no logical reason to give rationality privileged grounds. But I think that people should choose epistemic systems which connect to their own understanding of the way reality works. I think this on a value-level basis, not a logical metaphysical one. (Side Note: I believe that values, not logical truths, are the ultimate metaphysical justification because they inherently connect to motivational states. However, I arrived at this position through the heavy use of logic, such as by trying to think of a solution to the is-ought problem. Values are the ultimate metaphysical foundation but rationality is the ultimate metaphysical tool that we use to weigh values against each other and to consider the implications of certain values, etc.)

I can't speak for everyone of course but I find immense value in aesthetics and in other non-rational modes of human experience, and equally, I find myself wary of philosophies that exclude such values and treat them as meaningless.

I don't believe that aesthetics is meaningless. I don't know why you think rationality believes that.

In reality all this article has done is show that philosophy is far from dead; LessWrong has simply chosen to adopt a particularly limiting form of it and decry everything outside that sphere.

Please show me the quote. I don't believe that LessWrong has disavowed anything that you've said you valued.

Replies from: myron_tho
comment by myron_tho · 2012-10-28T23:15:24.834Z · LW(p) · GW(p)

I think that rationality encompasses all of those things entirely and don't understand why you believe differently.

I don't believe it because I am persuaded by arguments against treating consciousness and some features of consciousness as rational (see that "useless" Continental philosophy, as well as related arguments by John Searle, if you'd like a look).

This epistemic condition is inevitable, because you ARE a thinking subject. If you prioritize a different epistemic condition above this one I don't understand how you can go about living your life.

It clearly isn't inevitable if I am a thinking subject who does not accept that everything in reality boils down to formal rationality -- and I do not.

There's no logical reason to give rationality privileged grounds. But I think that people should choose epistemic systems which connect to their own understanding of the way reality works. I think this on a value-level basis, not a logical metaphysical one.

I have no issue with this reasoning (although I obviously disagree with it). The issue arises from bold claims to capital-T Truth status, which are built on flimsy grounds.

I don't believe that aesthetics is meaningless. I don't know why you think rationality believes that.

Because aesthetic enjoyment is non-rational.

Replies from: Vaniver, chaosmosis
comment by Vaniver · 2012-10-28T23:25:14.919Z · LW(p) · GW(p)

Because aesthetic enjoyment is non-rational.

I don't know, the process of recognizing beauty seems pretty rational to me. I may not have introspective access to my modules that calculate beauty, but introspective access and rationality are different things.

Replies from: myron_tho
comment by myron_tho · 2012-10-28T23:33:21.283Z · LW(p) · GW(p)

calculate beauty

This is why these arguments are not taken seriously.

Why should I (or anyone outside this circle) accept that this is a claim to be taken seriously? It sounds like you've found a fine metaphor that you believe can encompass any and all forms of mental activity and thus can "explain" anything put to it.

Freudian psychoanalysis can make the same claim to truth about the mind, of course, so you've offered no rational case for why anyone should accept this as true.

See what happens when you ignore philosophy?

Replies from: Vaniver, chaosmosis
comment by Vaniver · 2012-10-29T00:02:10.954Z · LW(p) · GW(p)

Why should I (or anyone outside this circle) accept that this is a claim to be taken seriously?

Do you think this is a pretty piece of music? Please just listen to it, without being primed by the description, comments, or related videos; it'll only take 2 minutes.

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-29T01:06:15.951Z · LW(p) · GW(p)

I didn't enjoy listening to it on its own terribly much, but I think it'd be OK as a soundtrack for a video or something.

comment by chaosmosis · 2012-10-28T23:58:08.965Z · LW(p) · GW(p)

First, you claim that rationality is exclusive and ignores other legitimate fields of inquiry. Now, you claim that rationality is overly inclusive, trying to incorporate too many things. You're a bit of an idiot.

Replies from: myron_tho
comment by myron_tho · 2012-10-29T00:12:31.180Z · LW(p) · GW(p)

It's almost like they aren't mutually exclusive claims.

You're a bit of an idiot

Show your calculations for this argument.

comment by chaosmosis · 2012-10-28T23:26:50.767Z · LW(p) · GW(p)

I think you're using a different definition of rationality than is common on this site.

I don't believe it because I am persuaded by arguments against treating consciousness and some features of consciousness as rational (see that "useless" Continental philosophy, as well as related arguments by John Searle, if you'd like a look).

I need more detail to be able to evaluate what you are saying here.

It clearly isn't inevitable if I am a thinking subject who does not accept that everything in reality boils down to formal rationality -- and I do not.

I think you might accept it but have hidden flaws within your reasoning process that lead you to misunderstand your own beliefs. I think that if you truly rejected this position then you would be unable to make decisions or understand arguments in aesthetic or ethical or consciousness related domains. I think actually reaching such a rejection would be impossible for a human being, but that some human beings might mislead themselves into believing arguments for that conclusion, believing it selectively, and believing that they believe it fully. This is why I said such a rejection would be a form of abstract nihilism.

I have no issue with this reasoning (although I obviously disagree with it). The issue arises from bold claims to capital-T Truth status, which are built on flimsy grounds.

I think that rationality is capital-T True insofar as it is the best paradigm. It has no ultimate foundation, but the foundation that it does have is intrinsic to the very mode of our existence and our values, and that makes it the best.

Also, I don't understand why you disagree with this reasoning. It seems very similar to what you claim.

Because aesthetic enjoyment is non-rational.

I don't understand what you mean by "non-rational" or why you believe that aesthetics is that. Also, rationality doesn't hold that values are logical truths, but that doesn't mean it holds that values are valueless. Anyone who isn't using rationality extremely badly will recognize that values are valuable. Your thinking is confused.

Replies from: myron_tho
comment by myron_tho · 2012-10-28T23:37:49.775Z · LW(p) · GW(p)

I think you might accept it but have hidden flaws within your reasoning process that lead you to misunderstand your own beliefs.

Prove it.

I think that if you truly rejected this position then you would be unable to make decisions or understand arguments in aesthetic or ethical or consciousness related domains.

This speaks more to the limitations of your ability to think outside your box than it does to problems with my, or anyone else's, thinking. You're so married to the Computer Metaphor that the possibility of thought and experience outside of it is simply inconceivable; of course this leaves you wide open to charges of pseudo-science.

It has no ultimate foundation, but the foundation that it does have is intrinsic to the very mode of our existence and our values, and that makes it the best.

If it is intrinsic to our mode of existence then it does have a foundation, so which is it?

I still find it hilarious (and not in a good way) that you're so insistent on treating your particular notion of values as justification for what is "best"; as if there is no such thing as a historical contingency or accident that might just call that into question on a deep level.

Even Kant figured this out in the 1700s. Score one more for philosophy.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-28T23:49:53.714Z · LW(p) · GW(p)

Prove it.

Well, I can't do so all by myself. You'll need to do some introspection to help me out. I don't feel like you're considering my arguments fairly; you've been combative and demanding and hostile throughout this conversation. This means I might stop wasting my time on you soon.

This speaks more to the limitations of your ability to think outside your box than it does to problems with my, or anyone else's, thinking. You're so married to the Computer Metaphor that the possibility of thought and experience outside of it is simply inconceivable; of course this leaves you wide open to charges of pseudo-science.

What do you think we could use to make arguments, if not logic?

A big part of my argument is that we are limited. I don't have the ability to use things other than logic to make decisions; my decisions never seem to work out when I don't use it. If you do have that ability, then you are a superhero and you should definitely use your powers to the fullest extent.

If it is intrinsic to our mode of existence then it does have a foundation, so which is it?

It has a foundation. It doesn't justify itself in terms of an undeniable logical proof but in terms of a process that cannot be escaped and which is intrinsic to every aspect of human behavior.

I still find it hilarious (and not in a good way) that you're so insistent on treating your particular notion of values as justification for what is "best"; as if there is no such thing as a historical contingency or accident that might just call that into question on a deep level.

"Best" is a brute fact about what my values say. If your values are different, I can't argue with you.

On a side note, I'm pretty convinced that you're an arrogant asshole. You don't seem to be on a quest for truth, you seem to be on a quest to show that you are smarter than me. You came into this conversation claiming that you wanted to teach LessWrongers about the value of epistemic tolerance, but you've mocked me and my arguments throughout this entire discussion. I think you're more about proving to yourself how smart you are than actually figuring out the way the world works and how you should live your life. Fuck off.

Replies from: wedrifid, myron_tho
comment by wedrifid · 2012-10-29T00:50:51.415Z · LW(p) · GW(p)

On a side note, I'm pretty convinced that you're an arrogant asshole. You don't seem to be on a quest for truth, you seem to be on a quest to show that you are smarter than me. You came into this conversation claiming that you wanted to teach LessWrongers about the value of epistemic tolerance, but you've mocked me and my arguments throughout this entire discussion. I think you're more about proving to yourself how smart you are than actually figuring out the way the world works and how you should live your life. Fuck off.

You said this on Less Wrong and got away with it without sanction (as of the time of this post). That says a lot. Specifically, it says you are probably right, and blatantly so! I haven't read the preceding discussion, but if I do and find that I don't agree with your assessment, then I will be shocked and confused.

Replies from: chaosmosis, TimS
comment by chaosmosis · 2012-10-29T01:28:05.666Z · LW(p) · GW(p)

I was:

  1. Testing the limits of LessWrong's tolerance. I was curious under what circumstances I could get away with that kind of language.

  2. Trying to alter the motivational state of the commenter by pointing out that they were being rude and hypocritical. I think the root of the problem is that there's no real incentive for the commenter to change their beliefs, because they didn't seem to be thinking through what I was saying. I also wanted it to be memorable, so that they might think back to this at a later point in time when they're more amenable to the kind of arguments I've been making.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-29T01:40:40.378Z · LW(p) · GW(p)

Also: I think there are times when LessWrong does need greater epistemic tolerance and that the commenter was making it harder for those times to happen. I was trying to signal that I support pragmatism now, so that I would be more likely to be trusted while arguing for epistemic tolerance in other situations.

comment by TimS · 2012-10-29T01:59:07.010Z · LW(p) · GW(p)

I think I don't understand your point here.

Why should this language deserve sanction? It asserts that the counter-party to the debate isn't interested in discussion leading to improved thinking - a grave insult in this community. But simply making the accusation doesn't deserve punishment. Falsely making the accusation deserves substantial rebuke, but given the systematically high variance in the value of true and false accusations, why should we care about the average value of these types of accusations?

Alternatively, I am entirely missing your intended point.

Replies from: wedrifid, Vaniver
comment by wedrifid · 2012-10-29T05:17:28.538Z · LW(p) · GW(p)

Alternatively, I am entirely missing your intended point.

The intended point was the literal one.

If someone can be told they are an asshole and to fuck off, and that is accepted, then they probably really are behaving like an asshole.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-29T05:49:34.697Z · LW(p) · GW(p)

Or they are not a member of the in-group. Both are probably relevant, in this instance.

Imagine if I did that to Yudkowsky.

Replies from: wedrifid
comment by wedrifid · 2012-10-29T06:48:11.212Z · LW(p) · GW(p)

Or they are not a member of the in-group. Both are probably relevant, in this instance.

Imagine if I did that to Yudkowsky.

When Eliezer behaves poorly, criticism of said behavior tends to be well received. People pay a lot of attention to Eliezer when he makes his rationality posts, but they also care a lot more when he does things they don't like, because what he says and does (in this context) matters a lot more.

If in doubt -- either that direct criticism of Yudkowsky can be well received, or that such comments can be upvoted dramatically -- grab Wei_Dai's user comments script, grab mine, and sort by vote. Last time I checked, a couple of the top ten were examples of just that.

Sure, I have never called Yudkowsky an asshole (because he isn't one), and even when I have criticized him I have criticized a specific behavior rather than alleging an innate trait. I have also never told him to "fuck off", although I have given him (sincere) advice to delegate his moderation authority to a SingInst minion who has better social skills and is better equipped to translate his goals into achieved outcomes.

I reject the claim that Eliezer gets anti-criticism privileges.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-29T17:22:33.710Z · LW(p) · GW(p)

My comment was tongue in cheek, just a joke. I'm not planning to curse out Eliezer anytime soon. It was enjoyable from my perspective to imagine a stream of hundreds of downvotes flooding my comment and making this profile unusable.

I don't think that Eliezer is immune to criticism. I do think he gets extra respect and politeness. The reasons for this are tied very closely to his past history of good content, but I bet that being a de facto leader of this site is also somewhat helpful. In quantitative terms I'm not sure whether he receives more or less criticism per unit of content than other people, but I suspect he receives more, because of his status as leader. In qualitative terms, though, leaders generally are treated with more respect, and I don't think that he's an anomaly in that regard: the kind of criticism that he receives is probably generally nicer and better thought out.

Despite that, he also seems more likely to draw the ire of ignorant and especially rude people. He might be receiving more respectful criticism but also more disrespectful criticism, and receiving less criticism with only moderate levels of respect. This is my current belief, now that I've thought about it a bit.

I also believe that he's probably criticized less often by in-group members than other in-group members are. I think that the amount of content he produces makes this tricky to evaluate, though, and this is the conclusion that I'm least certain about.

Replies from: wedrifid
comment by wedrifid · 2012-10-29T18:32:25.402Z · LW(p) · GW(p)

My comment was tongue in cheek, just a joke. I'm not planning to curse out Eliezer anytime soon. It was enjoyable from my perspective to imagine a stream of hundreds of downvotes flooding my comment and making this profile unusable.

Of course, you said "imagine if I did that". And so my reply is that your imagination produced flawed counterfactual predictions of the response, and so conveys an incorrect picture of the actual world prior to the counterfactual modification.

I also believe that he's probably criticized less often by in-group members than other in-group members are.

That is something I might have predicted based on a general understanding of how social groups work, prior to exposure to the actual data stream of Less Wrong comments. However, my actual observations tell me otherwise, and so I would happily bet against you were such a thing to be measured. Eliezer is criticized more often than the median in-group member (for most reasonable interpretations of 'criticism' and 'in-group member').

I intend no particular presumption by this; it's more by way of information: I am one of the most active participants here, and I suspect I pay a more-than-typical amount of attention to what is being criticized by whom, how such criticism is received, and how the interplay of social dynamics and status (seems to) influence which criticisms can be (or are) given, when, and to whom. Mere criticism volume is a comparatively simple thing to keep an account of. This gives me enough confidence in how often Eliezer is criticized that I would welcome the opportunity to bet at even odds that he is criticized more often than the median in-group member. In fact I would even be willing to strengthen my claim to refer to "criticism relative to contribution volume".

comment by Vaniver · 2012-10-29T03:03:36.935Z · LW(p) · GW(p)

Falsely making the accusation deserves substantial rebuke

Exactly. wedrifid's claim appears to be that chaosmosis saying that without being downvoted heavily means that other members of the community have not yet rebuked him, which is evidence that he does not deserve rebuke.

(I don't think the evidence is that strong because the comments are recent and it's a Sunday night, but the evidence will strengthen with the passing of time.)

Another function of wedrifid noting that explicitly is to remind people to downvote if they see something that should be rebuked -- if the community is failing to downvote bad material, that is cause for shock and confusion.

Replies from: wedrifid
comment by wedrifid · 2012-10-29T06:04:23.454Z · LW(p) · GW(p)

Exactly. wedrifid's claim appears to be that chaosmosis saying that without being downvoted heavily means that other members of the community have not yet rebuked him, which is evidence that he does not deserve rebuke.

Certainly.

Another function of wedrifid noting that explicitly is to remind people to downvote if they see something that should be rebuked -- if the community is failing to downvote bad material, that is cause for shock and confusion.

I suppose that connotation does come across, but I wouldn't necessarily want to say that in this context. I would estimate that I am more inclined than average to support the use of such a response. I wouldn't outright encourage the specific language, but it certainly wouldn't bother me, and I would either leave it neutral or possibly upvote.

comment by myron_tho · 2012-10-29T00:02:27.937Z · LW(p) · GW(p)

What do you think we could use to make arguments, if not logic?

Why should consciousness or aesthetics reduce to arguments of any kind? Why should they be amenable to formalization in agreement with a rather bizarre epistemic position?

It doesn't justify itself in terms of an undeniable logical proof but in terms of a process that cannot be escaped and which is intrinsic to every aspect of human behavior.

That is an unfortunately narrow encapsulation of human nature.

Fuck off.

You mad bro?

Maybe you should re-calculate your emotions for a more reasonable outcome.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-29T00:08:27.466Z · LW(p) · GW(p)

Why should consciousness or aesthetics reduce to arguments of any kind? Why should they be amenable to formalization in agreement with a rather bizarre epistemic position?

Allow me to rephrase: what do we use to distinguish between conflicting intuitions, if not meta-intuitions?

Replies from: wedrifid
comment by wedrifid · 2012-10-29T00:47:10.731Z · LW(p) · GW(p)

Allow me to rephrase: what do we use to distinguish between conflicting intuitions, if not meta-intuitions?

Sometimes a coin.