How not to be a Naïve Computationalist
post by diegocaleiro · 2011-04-13T19:45:10.934Z · LW · GW · Legacy · 36 comments
Meta-proposal of which this entry is an instance:
The Shortcut Reading Series is a proposed series of Less Wrong posts, each saying what the minimal readings are, as opposed to the normal curriculum, that one ought to read to grasp most of the state-of-the-art human understanding of a particular topic. Time is finite and there is only so much one person can read, so we need to find the geodesic path to epistemic enlightenment and show it to Less Wrong readers.
Exemplar:
“How not to be a Naïve Computationalist”, the Shortcut Reading Series post in philosophy of mind and language:
This post’s raison d’être is to be a guide to the minimal amount of philosophy of language and mind necessary for someone who ends up thinking the world and the mind are computable (such as Tegmark, Yudkowsky, Hofstadter, Dennett and many of yourselves). The desired ability, which they have achieved and you soon will, is to be able to state reasons, debug opponents and understand different paradigms, as opposed to just thinking that it’s 0s and 1s all the way down and not being able to say why.
This post is not about Continental/Historical philosophy; for that, there are already recommendations in http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/
The reading order is deliberate.
What is sine qua non (absolutely necessary) is in bold, and OR means you only have to read one, the second option being the more awesome and complex.
Language and Mind:
- 37 Ways That Words Can Be Wrong - Yudkowsky
- Darwin's Dangerous Idea, Chapters 3, 5, 11, 12 and 14 - Daniel Dennett
- On Denoting - Bertrand Russell
- On What There Is - Quine
- Two Dogmas of Empiricism - Quine
- Naming and Necessity - Kripke OR Two-Dimensional Semantics - David Chalmers
- “Is Personal Identity What Matters?” - Derek Parfit
- Breakdown of Will, Part Two (don’t read Part Three) - George Ainslie
- Concepts of Consciousness 2003 - Ned Block
- Attitudes De Dicto and De Se - David Lewis - Phil Papers 1
- General Semantics - David Lewis - Phil Papers 1
- The Stuff of Thought, Chapter 3 “Fifty Thousand Innate Concepts” - Steven Pinker
- Beyond Belief - Daniel Dennett, in The Intentional Stance
- The Content and Epistemology of Phenomenal Belief - David Chalmers
- Quining Qualia OR I Am a Strange Loop OR Consciousness Explained - Dan & Doug
- Intentionality - Pierre Jacob - Stanford Encyclopedia of Philosophy
- Philosophy in the Flesh - Lakoff & Johnson - Chapters 3, 4, 12, 21, 24 and 25.
What you cannot find here you will probably find on Google or Library.nu (if anyone has a link to Beyond Belief (EDIT: found it!), post it; it is the only hard-to-find one).
Congratulations, you are now officially free from the Naïve philosophical computationalism that underlies part of the Less Wrong Community. Your computationalism is now wise and well informed.
Feel free now to delve into some interesting computational proposals such as:
- Consciousness as Integrated Information - Giulio Tononi
- What is Thought - Eric Baum
- Good and Real - Gary Drescher
- The Mathematical Universe Hypothesis - Max Tegmark
Dealing with complexity is an inefficient and unnecessary waste of time, attention and mental energy. There is never any justification for things being complex when they could be simple. - Edward de Bono
There are many realms and domains in which the quote above should not be praised. But I think I have all philosophy majors with me when I say that there must be a simpler way to get to the knowledge level we reach upon graduation.
Finally, having wasted substantial amounts of time reading those parts of philosophy that should not be read, and not intending to make the same mistake in other areas, I ask you to publish a selection of readings in your own area of expertise. The Sequences are a major rationality shortcut, and we need more of that kind.
36 comments
Comments sorted by top scores.
comment by Perplexed · 2011-04-14T05:11:50.233Z · LW(p) · GW(p)
I like the irreverent attitude exemplified by this posting. But the posting might have been improved if it had attempted to provide a short characterization of what a Naïve Computationalist believes that a wise and well-informed Computationalist does not. And vice versa.
comment by Perplexed · 2011-04-14T06:05:32.888Z · LW(p) · GW(p)
... having wasted substantial amounts of time reading those parts of philosophy that should not be read, and not intending to make the same mistake in other areas, I ask you to publish a selection of readings in your own area of expertise. The Sequences are a major rationality shortcut, and we need more of that kind.
Wow! Thanks for doing this regarding Computationalism. I don't really have an area of expertise such that I could produce a list like yours, but I can think of some areas where such a list would be very helpful (to me, at least).
How not to be a Naive Consequentialist: The ethical thinking here is a bit ... hmmm, let's say ... parochial, because it has never confronted the best thinking of other schools of ethics (i.e. deontological ethics, virtue ethics, and contractarian ethics). Neither has it really addressed foundational issues within consequentialism itself - issues addressed by people like Sen and Harsanyi. It would be best if we could discuss our own ethical viewpoints in terms that other people can understand.
How not to be a Naive Evolutionist: Apart from some overenthusiasm for the just-so-stories of the less reputable parts of evolutionary psychology, LessWrongers seem to have a fairly good grasp of the philosophical implications of Darwinian evolution. But I have noticed some lack of awareness of some of the recent political/intellectual history of the field, plus a bit of the usual difficulty that outsiders have in separating the headline-grabbing pop science from the real science.
How not to be a Naïve Realist/Reductionist: This one is probably controversial, but what I have in mind here is not to overthrow realism and reductionism, but rather to provide some exposure to the saner criticisms of these philosophical doctrines. What naturalism meant before Quine. What emergence means to Philip Anderson. What is meant by scientific anti-realism and why it isn't a totally insane viewpoint.
How not to be naive about logic, models, and proof theory - particularly as they relate to proving program correctness and program equivalence. The importance of these topics to FAI is obvious. Yet a basic knowledge of the techniques and terminology of this field is sorely lacking in many of us. It is not rocket science.
We could probably also use such reading lists in the fields of machine learning, game theory, and Bayesian statistics. Perhaps also GOFAI. And at least one reading list on practical rationality.
comment by Bongo · 2011-04-13T20:21:49.984Z · LW(p) · GW(p)
The desired ability, which they have achieved and you soon will, is to be able to state reasons, debug opponents and understand different paradigms, as opposed to just thinking that it’s 0s and 1s all the way down and not being able to say why.
I like and agree with the implication that reading these is needed to defend computationalism, but not needed to believe for correct reasons that computationalism is true.
comment by cousin_it · 2011-04-14T09:56:44.204Z · LW(p) · GW(p)
What is naive computationalism? Would you classify Giles's questionnaire or my reply to dfranke as naive? If the answer is "yes" for either of them, could you point out where it goes wrong?
comment by Perplexed · 2011-04-14T05:02:43.971Z · LW(p) · GW(p)
XOR means you only have to read one, the second one being more awesome and complex.
To most people, it does not mean that. I suggest you use OR where you now use XOR.
↑ comment by Thomas · 2011-04-14T07:17:35.776Z · LW(p) · GW(p)
Use the "<==". Since the second implies the first.
I am just a naïve computationalist, though.
↑ comment by diegocaleiro · 2011-04-15T02:49:36.069Z · LW(p) · GW(p)
This here is a good example of how computationalist thought may get lost sometimes, even though it was a joke. It is explanatory: the second does not imply the first; it is more awesome and will give you more information, but not necessarily the same information. It will only be the same for the purposes of not becoming a Naïve computationalist. The implication could get lost in context (not that the XOR couldn't).
↑ comment by Perplexed · 2011-04-16T17:41:56.198Z · LW(p) · GW(p)
Speaking of thoughts getting lost ...
You are using the word XOR incorrectly. It has an accepted meaning - it is not a word that is available for you to attach a private definition to. The actual meaning of a recommendation to "do A XOR B" is "do A or do B but don't do both because whichever one you do second will undo the good effect of whichever one you did first". If the meaning you wish to convey is "do A or do B or do both (though both is not necessary)" then you should use the word OR. At least in English.
Please correct this. For some reason, it offends me far more than would a picture of Mohammed.
↑ comment by Sniffnoy · 2011-04-17T03:55:58.803Z · LW(p) · GW(p)
To expand on that point, I should also point out that, more generally, "do A_1 XOR A_2 ... XOR A_n" means not "do precisely one of A_1 through A_n", but rather "do an odd number of A_1 through A_n".
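A minimal sketch, in Python and purely for illustration (the helper names are made up here), of the distinction: a chain of XORs tests whether an odd number of its arguments hold, while "precisely one" is a stricter condition.

```python
from functools import reduce
from operator import xor

def chained_xor(*choices):
    """True iff an odd number of the choices are True (the parity reading)."""
    return reduce(xor, choices, False)

def exactly_one(*choices):
    """True iff precisely one of the choices is True."""
    return sum(choices) == 1

# Reading all three of A_1, A_2, A_3 satisfies the chained-XOR reading
# (odd count) but fails the "precisely one" reading.
print(chained_xor(True, True, True))  # True
print(exactly_one(True, True, True))  # False
```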
↑ comment by diegocaleiro · 2011-04-18T17:34:39.940Z · LW(p) · GW(p)
Ok, I need then to know what established symbol means: "do precisely one of A_1 through A_n"
↑ comment by Sniffnoy · 2011-04-19T01:13:41.916Z · LW(p) · GW(p)
"Do precisely one of A_1 through A_n". There's nothing wrong with writing things out longhand.
(Except, as Perplexed points out, I don't think that's really what you mean - would it really be such a problem to do more than one?)
↑ comment by diegocaleiro · 2011-04-19T17:22:28.119Z · LW(p) · GW(p)
If the purpose is to be minimal, yes.
http://en.wikipedia.org/wiki/Exclusive_or
"one or the other but not both." From Wikipedia.
I begin to think I was not that wrong......
↑ comment by Sniffnoy · 2011-04-20T18:18:40.484Z · LW(p) · GW(p)
Your use may be technically correct but it is very misleading. If you simply say "do A or B", it's clear that doing one is sufficient so a person who wants to save effort will only do one. Specifying "xor" therefore suggests that there is some additional harm to doing both, beyond nonminimality.
↑ comment by khafra · 2011-04-18T17:59:40.706Z · LW(p) · GW(p)
Do A ∈ {A1, A2, ..., An}?
Although in this case, I don't think there's any harm to come from doing more than one of A1 through An; wouldn't "at least one" work better?
↑ comment by diegocaleiro · 2011-04-18T18:24:49.478Z · LW(p) · GW(p)
I got that usage of 'XOR' from one of Pinker's books, I believe. But given my utilitarianism, I'm postponing my knowledge so that those who suffer Mohammed-level pain stop experiencing it, and using simple 'OR'.
comment by thomblake · 2011-04-13T21:11:03.583Z · LW(p) · GW(p)
But I think I have all philosophy majors with me when I say that there must be a simpler way to get to the knowledge level we reach upon graduation.
Certainly not all, but I'm with you.
Finally, having wasted substantial amounts of time reading those parts of philosophy that should not be read, and not intending to make the same mistake in other areas, I ask you to publish a selection of readings in your own area of expertise. The Sequences are a major rationality shortcut, and we need more of that kind.
Be cheered that you've been through the worst of it - few other fields have so very many "parts that should not be read" yet nonetheless have so many parts that should be read.
comment by wedrifid · 2011-04-14T10:10:56.852Z · LW(p) · GW(p)
My eyes hurt! What on earth happened to the formatting?
↑ comment by Emile · 2011-04-14T12:30:05.073Z · LW(p) · GW(p)
Microsoft Word, probably.
↑ comment by Clippy · 2011-04-14T13:18:40.064Z · LW(p) · GW(p)
Not specifically. Microsoft Word is actually a way to ensure correct formatting -- you can make sure the in-site editor works like you intend by writing your article in Word first and then copying over. I think the formatting deviation here is due to other errors (even if Microsoft Word was also used).
comment by byrnema · 2011-04-13T21:19:56.720Z · LW(p) · GW(p)
I started with "37 Ways words can be Wrong" and didn't find much immediate benefit from the exercise in the way of being less naive about Computationalism.
I scroll down your list and see there's a lot about language. Is such an extensive education in language quite necessary? (Or do I need to keep reading to see?)
↑ comment by diegocaleiro · 2011-04-14T01:12:47.369Z · LW(p) · GW(p)
Most of those readings will tell you nothing about computationalism directly; they will broaden your vision of the world in such a way that your reasoning eventually converges toward that of a better rationalist about issues related to computationalism.
The main reason I put the personal identity text there, for instance, is to cause a transition from frequently thinking that something (like personal identity) will carry over to its closest continuator in a new, slightly different scenario, to a more gradualist way of thinking, in which sometimes things may dissolve along any dimension you try to vary them. In a future in which some folks try to build FAI, this will be of extreme importance when considering the values dimension. For instance, will what we want to protect be preserved if we extrapolate human intelligence? This is my current line of work (any input welcome).
↑ comment by Nisan · 2011-04-14T09:02:53.961Z · LW(p) · GW(p)
will what we want to protect be preserved if we extrapolate human intelligence?
Does this mean you're thinking about uploaded people here? I think that is an important research question.
↑ comment by diegocaleiro · 2011-04-15T02:51:46.551Z · LW(p) · GW(p)
I was thinking about CEV, but yes, the same question applies to uploads (and is not the classic upload issue).
Good that you find it important. I'm going to dedicate some time to that research.
Does anyone have good reasons to say it is not a good research avenue?
↑ comment by boni_bo · 2011-04-16T13:51:12.491Z · LW(p) · GW(p)
What we value as good and fun may increase in volume, because we can discover new spaces with increasing intelligence. Will what we want to protect be preserved if we extrapolate human intelligence? Yes, if this new intelligence is not some kind of mind-blind autistic savant 2.0 who clearly can't preserve high levels of empathy and share the same "computational space". If we are going to live as separate individuals, then cooperation demands some fine-tuned empathic algorithms, so we can share our values with others and respect the qualitative space of others. For example, I may not enjoy dancing or having a homosexual relationship (I'm not a homophobe), but I'm able to extrapolate it from my own values and be motivated to respect its preservation as if it were mine (How? Simulating it. As a highly empathic person, I can say that it hurts to make others miserable. So it works as an intrinsic motivation and goal.)
comment by Nisan · 2011-04-13T22:39:55.648Z · LW(p) · GW(p)
Thank you for this post. Would you please make the font easier to read?
↑ comment by Alicorn · 2011-04-13T23:10:09.363Z · LW(p) · GW(p)
I don't think it is a known thing why some articles come out with weird fonts/sizes. I looked in the editor, and there's a lot of HTML formatting tags before every individual paragraph in this post, but no obvious way to affect them one way or the other in the WYSIWYG. Manually deleting all the formatting tags would involve removing so much text that I'd be afraid to carve away actual content, so I'm leaving it for the original author, who will be more likely to notice something missing.
↑ comment by JGWeissman · 2011-04-13T23:35:11.356Z · LW(p) · GW(p)
It probably comes from cut and pasting from an external rich text editor.
↑ comment by diegocaleiro · 2011-04-14T01:17:18.820Z · LW(p) · GW(p)
It was complicated, but fixed.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) · 2011-04-14T01:45:36.725Z · LW(p) · GW(p)
That happens to me sometimes...if I write a post in Word and then copy-paste, sometimes the last paragraph comes out in a different font than the rest, or the whole of it is in a weird font. I think most of this site is in Arial or something similar, but I usually write in Times, so that might have something to do with it.
↑ comment by [deleted] · 2011-04-14T02:04:47.300Z · LW(p) · GW(p)
When I run into this situation, which is fairly often, I use a text editor in between. I paste the text to the editor, then copy it from the editor. This removes formatting, fonts, etc. It has to be a real text editor, not something that allows formatting.
Specific software such as Word may have the ability to copy text only.
↑ comment by Vladimir_Nesov · 2011-04-17T17:55:29.548Z · LW(p) · GW(p)
I wrote some regexp scripts and removed the extra tags. Send me a message if something like this happens in the future and I don't notice.
comment by chatquitevoit · 2011-07-12T15:12:33.996Z · LW(p) · GW(p)
Question, hopefully one betraying my busyness, not laziness :D ... can you watch the BBC production of Darwin's Dangerous Idea instead of reading it? And if so, which sections correspond?
Thanks loads.