SUGGEST and VOTE: Posts We Want to Read on Less Wrong
post by lukeprog · 2011-02-07T02:51:05.458Z · LW · GW · Legacy · 105 comments
Less Wrong is a large community of very smart people with a wide spectrum of expertise, and I think relatively little of that value has been tapped.
Like my post The Best Textbooks on Every Subject, this is meant to be a community-driven post. The first goal is to identify topics the Less Wrong community would like to read more about. The second goal is to encourage Less Wrongers to write on those topics. (Respecting, of course, the implicit and fuzzy guidelines for what should be posted to Less Wrong.)
One problem is that those with expertise on a subject don't necessarily feel competent to write a front-page post on it. If that's the case, please comment here explaining that you might be able to write one of the requested posts, but you'd like a writing collaborator. We'll try to find you one.
Rules
You may either:
- Post the title of the post you want someone to write for Less Wrong. If the title itself isn't enough to specify the content, include a few sentences of explanation. "How to Learn a Language Quickly" probably needs no elaboration, but "Normative Theory and Coherent Extrapolated Volition" certainly does. Do not post two proposed post titles in the same comment, because that will confuse voting. Please put the title in bold.
or...
- Vote for a post title that has already been suggested, indicating that you would like to read that post, too. Vote with karma ('Vote Up' or 'Vote Down' on the comment that contains the proposed post title).
I will regularly update the list of suggested Less Wrong posts, ranking them in descending order of votes (like this).
The List So Far (updated 02/11/11)
- (35) Conversation Strategies for Spreading Rationality Without Annoying People
- (32) Smart Drugs: Which Ones to Use for What, and Why
- (30) A Survey of Upgrade Paths for the Human Brain
- (29) Trusting Your Doctor: When and how to be skeptical about medical advice and medical consensus
- (25) Rational Homeschool Education
- (25) Field Manual: What to Do If You're Stranded in a Level 1 (Base Human Equivalent) Brain in a pre-Singularity Civilization
- (20) Entrepreneurship
- (20) Detecting And Bridging Inferential Distance For Teachers
- (19) Detecting And Bridging Inferential Distance For Learners
- (18) Teaching Utilizable Rationality Skills by Exemplifying the Application of Rationality
- (13) Open Thread: Offers of Help, Requests for Help
- (13) Open Thread: Math
- (12) How to Learn a Language Quickly
- (12) True Answers for Every Philosophical Question
- (10) The "Reductionism" Sequence in One Lesson
- (10) The "Map and Territory" Sequence in One Lesson
- (10) The "Mysterious Answers to Mysterious Questions" Sequence in One Lesson
- (10) Lecture Notes on Personal Rationality
- (10) The "Joy in the Merely Real" Sequence in One Lesson
105 comments
Comments sorted by top scores.
comment by lukeprog · 2011-02-07T04:40:59.059Z · LW(p) · GW(p)
Conversation Strategies for Spreading Rationality Without Annoying People
Replies from: nazgulnarsil↑ comment by nazgulnarsil · 2011-02-07T18:58:45.646Z · LW(p) · GW(p)
(without resorting to dark arts)
Replies from: RobinZ↑ comment by RobinZ · 2011-02-08T22:27:36.457Z · LW(p) · GW(p)
It occurs to me that resorting to manipulative methods to teach someone methods which will improve their ability to detect said manipulations has problems entirely separate from moral concerns.
Replies from: orthonormal↑ comment by orthonormal · 2011-02-12T06:46:56.098Z · LW(p) · GW(p)
Once you've climbed the ladder, you can discard it.
Replies from: RobinZ↑ comment by RobinZ · 2011-02-12T13:45:13.956Z · LW(p) · GW(p)
Will the person being manipulated discard it?
Replies from: orthonormal↑ comment by orthonormal · 2011-02-12T19:14:18.487Z · LW(p) · GW(p)
If you set it up properly, yes. The moral concerns remain; I'm just saying that one could teach resistance to manipulation in a manipulative fashion (at first undetected), so that the eventual discovery reinforces rather than undermines the lesson.
If it helps, imagine QQ doing this.
Replies from: RobinZ↑ comment by RobinZ · 2011-02-13T04:12:57.306Z · LW(p) · GW(p)
...who?
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2011-02-13T04:58:58.749Z · LW(p) · GW(p)
Presumably Orthonormal is referring to "Quirinus Quirrell," the fictional character from Eliezer Yudkowsky's work of Harry Potter fanfiction, Harry Potter and the Methods of Rationality. Best wishes, the Less Wrong Reference Desk.
Replies from: orthonormal↑ comment by orthonormal · 2011-02-13T23:26:16.541Z · LW(p) · GW(p)
Indeed.
comment by [deleted] · 2011-02-07T20:04:13.184Z · LW(p) · GW(p)
Rational Education
As a mom who can't afford private schools and is horrified by the current state of public education in my country (USA), I'm keenly interested in rational homeschool curriculum ideas--both explicitly teaching rationality itself, and also teaching specific subjects in a rational way. Teaching the skills necessary for self-education might be a third topic.
Replies from: RobinZ↑ comment by RobinZ · 2011-02-08T22:36:36.391Z · LW(p) · GW(p)
I was homeschooled - I should ask my mom about resources she used. It is worth noting that the three of us siblings all became readers, and she provided us with good textbooks to work from. I taught myself algebra and geometry out of a geometry textbook she bought, for instance.
(I think I saw a copy of John Holt's How Children Fail around the house once, to name an author who appeared in a few Rationality Quotes threads, but I really don't know if that was a significant part of my parents' thinking.)
comment by lukeprog · 2011-02-07T04:41:14.242Z · LW(p) · GW(p)
Smart Drugs: Which Ones to Use for What, and Why
Replies from: Skatche, None↑ comment by Skatche · 2011-02-07T16:01:51.277Z · LW(p) · GW(p)
I'd like to take a stab at writing this one, actually, if no one else is dead set on it. Expect it in the discussion section within forty-eight hours.
EDIT: Status as of 11:56AM EST, Feb 9: The first draft is about 90% completed, but I need to leave it aside and run off to class. I will post it this afternoon, and then revise it (aided by your contributions!) over the remainder of the week.
EDIT EDIT: Posted as of 8:10PM EST.
Replies from: lukeprog, Larks
comment by Wei Dai (Wei_Dai) · 2011-02-07T04:41:13.140Z · LW(p) · GW(p)
A Survey of Upgrade Paths for the Human Brain
Replies from: lukeprog↑ comment by lukeprog · 2011-02-07T04:46:40.667Z · LW(p) · GW(p)
Wei_Dai,
We need these four titles as four separate comments. Please post each one separately so they can be voted on separately, and delete this comment, and I will delete my comment here, too.
Also, please put each title in bold.
Thanks!
comment by JenniferRM · 2011-02-07T19:22:38.872Z · LW(p) · GW(p)
Detecting And Bridging Inferential Distance For Teachers
Roughly: generic tutoring skills for situations where a stable curriculum doesn't exist and where what the person being taught actually knows can be patchy or surprising.
Replies from: JenniferRM↑ comment by JenniferRM · 2011-02-08T17:28:52.799Z · LW(p) · GW(p)
Responding to Silas's comment about the learning side of the equation. He wrote:
Don't forget the problem from the other side, too: how to detect and bridge inferential distance for knowledge-havers, i.e., how to find the knowledge-gap and convey the information to them. (That was actually the long-delayed article I'm working on, given my success in teaching others and my difficulty in getting others to convey knowledge to me when the roles are reversed.)
(The use of the term "knowledge haver" rather than "teacher" was deliberate.)
Yes, I used the activity-oriented "learner" over the institutional role of "student" specifically because I was trying to emphasize general life skills.
I think it says something about our culture that there doesn't appear to be a common term for "one who conveys a lesson" that lacks the connotation "teacher" carries of something people do for money. When I suggested an article for "teachers" I used the best non-neologism I could think of. Having thought about this some more, I'm wondering if "mentor" might be a better term than "teacher"?
The trick with mentoring is that it's a long-term process, and is less about delivery of pre-specified lessons and more about delivering supplementary insight into the mentee's ongoing, currently articulated life processes.
Thinking about the terminological issues, it strikes me that these conceptual framing issues have implications for what kinds of learning/teaching are actually possible. Perhaps a lot of the skills here involve having a realistic model of a normal person's willingness and capacity to learn? Maybe you just can't teach/mentor/tutor very well without long-term insight and life-driven discovery of knowledge gaps? Maybe other languages cut the world in better ways? For example, Japanese has senpai and kohai, but those also carry baggage about organizational status hierarchies rather than about transmission of specialist expertise itself.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-08T18:02:19.040Z · LW(p) · GW(p)
I agree that there's no commonly used term for what you want to describe, and "knowledge haver" is just as problematic. Ideally, people will alternate between being a mentor and learner throughout their lives -- the process never ends.
Btw, though my article on this matter is ballooning, the advice for "teachers" amounts to:
a) Actually understand the subject matter yourself, in the sense of having a model that connects to your understanding of everything else. (Obligatory plug: that means Level 2.)
b) Identify the nearest point of common understanding ("nepocu"), and work back to your own understanding from there.
comment by Wei Dai (Wei_Dai) · 2011-02-07T04:57:31.701Z · LW(p) · GW(p)
Field Manual: What to Do If You're Stranded in a Level 1 (Base Human Equivalent) Brain in a pre-Singularity Civilization
comment by JenniferRM · 2011-02-07T19:26:10.152Z · LW(p) · GW(p)
Detecting And Bridging Inferential Distance For Learners
Roughly: How to notice when someone has more levels of expertise than you do in some area and then effectively and ethically acquire their skills/wisdom/knowledge.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-07T19:40:27.086Z · LW(p) · GW(p)
Don't forget the problem from the other side, too: how to detect and bridge inferential distance for knowledge-havers, i.e., how to find the knowledge-gap and convey the information to them. (That was actually the long-delayed article I'm working on, given my success in teaching others and my difficulty in getting others to convey knowledge to me when the roles are reversed.)
EDIT: Nevermind, I didn't read the discussion before saying that.
(The use of the term "knowledge haver" rather than "teacher" was deliberate.)
Replies from: JenniferRM↑ comment by JenniferRM · 2011-02-08T17:33:56.656Z · LW(p) · GW(p)
For reference, I responded here to put the useful conversation in the right part of the tree.
comment by Nick_Roy · 2011-02-07T08:16:37.878Z · LW(p) · GW(p)
Entrepreneurship
Replies from: Nick_Roy, Alexandros↑ comment by Alexandros · 2011-02-07T08:56:26.380Z · LW(p) · GW(p)
duly upvoted
comment by wedrifid · 2011-02-07T11:00:29.901Z · LW(p) · GW(p)
"How to Learn a Language Quickly" probably needs no elaboration
That one doesn't sound bad. I'd like to read a take from a non-Ferriss source.
Replies from: None, komponisto↑ comment by [deleted] · 2011-02-07T14:19:30.839Z · LW(p) · GW(p)
In short: immersion, SRS and cloze deletion. Screw textbooks, classes and any "this isn't proper material for a learner" elitism.
Learning a language takes 3000-10000 hours with the best techniques (length depending only on how closely related it is to one you already know), half that for decent basic fluency, about 2-4 weeks of intense practice for pub-level conversations. There's no free lunch, but it can be pretty tasty.
Techniques:
1) There is no Immersion like Immersion and Khatzumoto is its prophet. (Slightly kidding, but he's my favorite advocate of the approach and fun to read. And he is absolutely right.)
2) What's cloze deletion? Anki FAQ. Why does it matter? It gives you lots of context around unknown pieces, making them stick better. Also, it's fun.
3) Anki is the best SRS; see the site for an explanation of how to use it. At first, you make cards "word -> translation". Then "easy sentence -> translation". Then "easy sentence with cloze-deleted gap" -> "full sentence". Try adding more context, like surrounding sentences in a conversation, audio and so on. Always go "target language -> translation" or "target language -> target language". (Contrasting with Khatz' advice, I'd recommend staying with translations and bilingual material for a long time, until you can actually feel how sucky the translation is.)
4) If you like talking more than reading, copy Benny. Otherwise just consume as described.
This might seem a bit Japanese-centric because a) I study it and b) it has the best learning community evar, but this stuff applies to all languages equally. Some esoteric choices (say, dead languages) require some additional tricks to fix specific issues, but essentially it's all the same.
If someone'd like more details, especially for some specific problem, technique or language, just ask. I've been studying languages for about 4-5 years as a main hobby with differing intensity now and have tried pretty much everything that's out there in some form or another. But basically, there are no shortcuts. Do what's fun, imitate relentlessly, use an SRS so you don't forget everything again.
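A minimal sketch of the scheduling machinery behind the SRS advice above, assuming the classic SM-2 algorithm (SuperMemo 2) from which Anki-style tools descend. The function and the toy review loop here are illustrative, not any real tool's API; the card string, though, uses Anki's actual cloze-deletion syntax.
```python
# Minimal sketch of the SM-2 spaced-repetition algorithm (SuperMemo 2),
# the ancestor of the scheduling used by Anki-style SRS tools.
# Hypothetical names; not any real tool's API.

def sm2_review(interval: int, repetitions: int, easiness: float, quality: int):
    """Return (next_interval_days, repetitions, easiness) after one review.

    quality: 0-5 self-grade of recall (5 = perfect, below 3 = failed).
    """
    if quality < 3:
        # Failed recall: relearn from the start, but keep the easiness factor.
        return 1, 0, easiness
    if repetitions == 0:
        interval = 1          # first successful review: see it again tomorrow
    elif repetitions == 1:
        interval = 6          # second success: wait roughly a week
    else:
        interval = round(interval * easiness)  # then grow geometrically
    # Adjust easiness by how hard the recall felt; floor at 1.3.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    return interval, repetitions + 1, max(easiness, 1.3)

# A cloze-deletion card in Anki's syntax: the {{c1::...}} span is hidden
# at review time, so the surrounding sentence supplies the context.
card = "Kare wa {{c1::ringo}} o tabeta. (He ate an {{c1::apple}}.)"

# Simulated review history: intervals stretch out as recall keeps holding.
interval, reps, ef = 0, 0, 2.5
for grade in [5, 4, 5, 3]:
    interval, reps, ef = sm2_review(interval, reps, ef, grade)
    print(f"next review in {interval} day(s), easiness {ef:.2f}")
```
The design point to notice: intervals grow geometrically for as long as recall holds, and a single failed recall resets the card without erasing how easy it has historically been.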
Replies from: tenshiko↑ comment by tenshiko · 2011-02-07T14:52:14.681Z · LW(p) · GW(p)
Since when has Japanese had the best learning community evar? It may be very friendly online, but in my face-to-face experiences public courses have fallen painfully short - I've been studying independently for only a year and a half and talk circles around AP students. Although they do still have an edge on me in such fields as "ordering meals in restaurants" and "presenting business cards", they really have no functional knowledge of the language at all.
↑ comment by komponisto · 2011-02-09T02:35:36.248Z · LW(p) · GW(p)
Here is my recommended method, in three complex but well-defined steps:
1) Learn the grammar of the language using an old-fashioned (pre-1960) textbook.
2) Access a large corpus of data (text and speech in the language).
3) Practice using the language with people who know it, and receive feedback.
comment by Wei Dai (Wei_Dai) · 2011-02-07T04:57:13.794Z · LW(p) · GW(p)
True Answers for Every Philosophical Question
Replies from: SilasBarta, lukeprog, orthonormal, endoself↑ comment by SilasBarta · 2011-02-07T19:42:10.213Z · LW(p) · GW(p)
I don't want true answers to those questions; I want confusion-extinguishing ones.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-02-07T19:55:09.706Z · LW(p) · GW(p)
Are you saying there is no such thing as true and false in philosophy (only confusing and confusion-extinguishing), or that given the choice between a true but confusing answer and a false but confusion-extinguishing answer, you'd choose the latter?
Replies from: SilasBarta, SilasBarta↑ comment by SilasBarta · 2011-02-08T18:20:58.402Z · LW(p) · GW(p)
Maybe I started sounding a little thick-headed to you, as I have in the past, so let me try to rephrase my criticism more substantively.
For the class of questions you're referring to, I believe that as you gain more and more knowledge, and are able to better refine what you're asking for in light of what you (and future self-modifications) want, it will turn out that the thing you're actually looking for is better described as "confusion extinguishment" rather than "truth".
This is because, at a universal-enough level of knowledge, "truth" becomes ill-defined, and what you really want is an understandable mapping from yourself to reality. In our current state, with a specific ontology and language assumed, we can take an arbitrary utterance and classify it as true or false (edit: or unknown or meaningless). But as that ontology adjusts to account for new knowledge, there is no natural grounding from which to judge statements, and so you "cut out the middle" and search directly for the mapping from an encoding to useful predictions about reality, in which the encoding is only true or false relative to a model (or "decompressor").
(Similarly, whether I'm lying to you depends on whether you are aware of the encoding I'm using, and whether I'm aware of this awareness. If the truth is "yes", but you already know I'll say "no" if I mean "yes", it is not lying for me to say "no". Likewise, it is lying if I predicate my answer on a coinflip [when you're not asking about a coin flip] -- even if the coinflip results in giving me the correct answer. Entanglement, not truth, is the key concept here.)
Therefore, in the limit of infinite knowledge, the goal you will be seeking will look more like "confusion extinguishment" than "truth".
Replies from: komponisto, Wei_Dai↑ comment by komponisto · 2011-02-08T21:22:24.022Z · LW(p) · GW(p)
it will turn out that the thing you're actually looking for is better described as "confusion extinguishment" rather than "truth".
This is because, at a universal-enough level of knowledge, "truth" becomes ill-defined, and what you really want is an understandable mapping from yourself to reality
Rather than "truth" being ill-defined, I would rather want to say that the problem is simply that an answer of the form "true" or "false" will typically convey fewer bits of information than an answer that would be described as "confusion-extinguishing"; the latter would usually involve carving up your hypothesis-space more finely and directing your probability-flow more efficiently toward smaller regions of the space.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-09T17:29:51.829Z · LW(p) · GW(p)
Fair enough: I think it can be rephrased as a problem about declining helpfulness of "true/false" answers as your knowledge expands and becomes more well-grounded.
↑ comment by Wei Dai (Wei_Dai) · 2011-02-08T23:44:12.797Z · LW(p) · GW(p)
I'm afraid there's too big of an inferential gap between us, and I'm not getting much out of your comment. As an example of one confusion I have, when you say:
This is because, at a universal-enough level of knowledge, "truth" becomes ill-defined
you seem to be assuming a specific theory of truth, which I'm not familiar with. Perhaps you can refer me to it, or consider expanding your comment into a post?
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-09T00:14:31.756Z · LW(p) · GW(p)
I thought I just explained it in the same paragraph and in the parenthetical. Did you read those? If so, which claim do you find implausible or irrelevant to the issue?
The purpose of my remarks following the part you quoted was to clarify what I meant, so I'm not sure what to do when you cut that explanation off and plead incomprehension.
I'll say it one more time in a different way: You make certain assumptions, both in the background, and in your language, when you claim that "100 angels can dance on the head of a pin". As those assumptions turn out false, they lose importance, and you are forced to ask a different question with different assumptions, until you're no longer answering anything like e.g. "Do humans have free will?" or about angels -- both your terms, and your criteria for deciding when you have an acceptable answer, have changed so as to render the original question irrelevant and meaningless.
(Edit: So once you've learned enough, you no longer care if "Do humans have free will?" is "true", or even what such a thing means. You know why you asked about the phenomenon you had in mind with the question, thus "unasking" the question.)
I looked at the list of theories of truth you linked, and they don't seem to address (or be robust against) the kind of situation we're talking about here, in which the very assumptions behind claims are undergoing rapid change, and necessitate changes to the language in which you express claims. The pragmatic (#2) sounds closest to what I'm judging answers to philosophical questions by, though.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-02-09T01:11:28.615Z · LW(p) · GW(p)
Thanks, that's actually much clearer to me.
You know why you asked about the phenomenon you had in mind with the question, thus "unasking" the question.
But can't that knowledge be expressed as a truth in some language, even if not the one that I used when I first asked the question? To put it another way, if I'm to be given confusion extinguishing answers, I still want them to be true answers, because surely there are false answers that will also extinguish my confusion (since I'm human and flawed).
I'm worried about prematurely identifying the thing we want with heuristics for obtaining that thing. I think we are tempted to do this when we want to clearly express what we want, and we don't understand it, but we do understand the heuristics.
Do you understand my worry, and if so, do you think it applies here?
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-09T16:13:06.891Z · LW(p) · GW(p)
I think I understand your worry: you think there's a truth thing separate from the heuristic I gave, and that the latter is just a loose approximation that we should not use as a replacement for the former.
I differ in that I think it's the reverse: truth always "cashes out" as a useful self-to-reality model, and this becomes clearer as your model gets more accurate. Rather than just a heuristic, it is ultimately what you want when you say you are seeking the truth. And any judgment that you have reached the truth will fall back on the question of whether you have a useful self-to-reality model.
To put it another way, what if the model you were given performs perfectly? Would you have any worry that, "okay, sure, this is able to accurately capture the dynamics of all phenomena I am capable of observing ... but what if it's just tricking me? This might not all be really true." I would say at that point, you have your priorities reversed: if something fails at being "truth" but can perform that well, this "non-truth" is no longer something you should care about.
↑ comment by SilasBarta · 2011-02-07T20:01:08.280Z · LW(p) · GW(p)
I'm saying that the "confusion-extinguishing" heuristic is a better one for identifying good answers to philosophical questions, as judged by me, and probably as judged by you as well.
Also that, given the topic matter, truth may be undecidable for some questions (owing to the process by which philosophers arrived at them), in which case you'd want the confusion-extinguishing answer anyway.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-02-07T20:42:49.919Z · LW(p) · GW(p)
"confusion-extinguishing" heuristic is a better one
Better than what? Better than "it seems true to me"? But I didn't ask for "Answers That Seem True".
"Confusion-extinguishing" may be the best heuristic I have now for arriving at the truth, but if someone else has come up with better heuristics, I want them to write about the answers they arrived at using those heuristics. I think I was right to identify what I actually want, which is truth, and not answers satisfying a particular heuristic.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-07T20:47:07.544Z · LW(p) · GW(p)
Do you want to know whether "100 angels can dance on the head of a pin" is true, or do you want the confusion that generated that question to be extinguished?
(It's true, by the way.)
↑ comment by lukeprog · 2011-02-07T05:09:39.831Z · LW(p) · GW(p)
Do you think this is possible right now? Would this be a joke post that you want to read, or something?
Replies from: beriukay↑ comment by beriukay · 2011-02-07T11:40:07.945Z · LW(p) · GW(p)
I hope it isn't a joke. I can see great use for a deconstruction of the many philosophical questions and failed philosophies, and most importantly, for some kind of status report on more modern thought.
We've all heard of Hume, Kant, and Descartes, to name a few. But their ideas were formed long before the Scientific Revolution, which I arbitrarily deem to be the publication of On the Origin of Species. It would be nice to point people arguing old-school deontology, for example, to Wei Dai's chapter: True Answers About Why Good Will Alone Is Insufficient.
Replies from: Perplexed, Larks↑ comment by Perplexed · 2011-02-07T23:40:16.636Z · LW(p) · GW(p)
In some ways I like this idea, but in some ways I don't think it would work. Suppose, for example, that I produce a post entitled "The real reason why philosophical realism sucks". The post consists of 20 lines or so of aphorisms, each a link to a more complete philosophical argument. Cool, potentially informative, and very likely useful as a reference. But how would you discuss a posting like that in the comments?
↑ comment by Larks · 2011-02-07T17:06:11.914Z · LW(p) · GW(p)
Suppose acting out of concern for the morality of my future selves was moral.
For a reductio, assume moral motive was sufficient for moral action. Suppose you self-modified into a paperclipper who believed it was moral to make paperclips. Now, post-modification, you could be moral by making paperclips. Recognising this, your motive in self-modifying is to help your future self to act morally. Hence, by our Kantian assumption, the self-modification was moral. Hence it is moral to become a paperclipper!
↑ comment by orthonormal · 2011-02-12T06:49:17.533Z · LW(p) · GW(p)
Full content of the actual post:
"I'm not sure."
comment by Risto_Saarelma · 2011-02-08T11:10:15.898Z · LW(p) · GW(p)
Why Cryonics, Uploading, and Destructive Teleportation Do Not Kill You
This was asked for in the IRC channel. I don't think anyone came up with a good and accessible single-link refutation.
ETA: Changed the clumsy cutesy title according to the suggestion below.
ETA 2: David Chalmers' singularity paper has a reasonably good overview on the subject, but it's mixed up with a bunch of other stuff.
Replies from: lukeprog↑ comment by lukeprog · 2011-02-08T13:10:41.283Z · LW(p) · GW(p)
I've added this as: "Why Cryonics, Uploading, and Destructive Teleportation Do Not Kill You".
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2011-02-08T13:22:50.716Z · LW(p) · GW(p)
Yes, much better, thanks.
comment by Perplexed · 2011-02-07T23:26:25.205Z · LW(p) · GW(p)
The Arrow of Time
Gary Drescher dissolves this old mystery in one chapter of "Good and Real". Amazing. I must have read a dozen pop science books that discuss this problem, analyze some proposed solutions, and then leave it as a mystery. Drescher crushes it.
This may not fit in one posting, but it might well fit in a sequence of four or so.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-07T23:39:06.989Z · LW(p) · GW(p)
Believe it or not, I actually started an article on this around "17 October 2009" (per the date stamp) and never finished it. (I actually had the more ambitious idea of summarizing every chapter in one article, but figured Chapter 3 would be enough.) Might as well post what I have (formatting and links don't carry over; I've corrected the worst issues) ...
Here I attempt to summarize the points laid out in Gary Drescher's Good and Real: Demystifying Paradoxes from Physics to Ethics (discussed previously on Less Wrong), chapter 3, which explores the apparent flow of time and gives a reductionist account of it. To [...] What follows is a restating of the essential points and the arguments behind them in my own words, which I hope to make faithful to the text. It's long, but a lot shorter than reading the chapter, a lot cheaper than buying the book, and a lot less subjunctively self-defeating than pirating it.
The focus of the chapter is to solve three interrelated paradoxes. If the laws of physics are time-symmetric:
1) Why does entropy increase in only one direction?
2) Why do we perceive a directional flow of time?
3) Why do we remember the past but not the future?
Starting from the first: why does entropy -- the total disorder in the universe -- increase asymmetrically? To answer, start with a simple case: the billiard ball simulation, where balls have a velocity and position and elastically bounce off each other as per the standard equations predicated on the (time-symmetric) conservation of linear momentum and kinetic energy. For a good example of entropy's increase, let's initialize it with a non-uniformity: there will be a few large, fast balls, and many small, slow balls.
What happens? Well, as time goes by, they bounce off each other, and the larger balls transfer their momentum to balls with less. We see the standard increase in entropy as time increases. So if you were to watch a video of the simulation in action, there would be telltale signs of which is the positive and which is the negative direction: in the positive direction, large balls would plow through groups of smaller balls, leaving a "wake" as they increase the smaller balls' speeds. But if we watch it in reverse, going back to the start, entropy, of course, decreases: highly ordered wakes spontaneously form just before the large balls go into them.
Hence, the asymmetry: entropy increases in only one direction.
The mystery dissolves when you consider what happens when you continue to view the simulation backwards, and proceed through the initial time, onward to t= -1, -2, -3, ... . You see the exact same thing happen going in the direction of negative time from t=0. So, we see our confusion: entropy does not increase in just the positive direction: it increases as you move away from zero, even if that direction isn't positive.
So, we need to reframe our understanding: instead of thinking in terms of positive and negative time directions, we should think in terms of "pastward" and "futureward" directions. Pastward means in the direction of the initial state, and futureward means away from it. Both the sequences t= 1, 2, 3, ... and t= -1, -2, -3, ... go into the future. (Note the parallel here to the reframing of "up" and "down" once your model of the earth goes from flat to round: "down" no longer means a specific vector, but the vector from where you are to the center of the earth. So you change your model of "down" and "up" to "centerward" and "anticenterward" [my terms, not Drescher's], respectively.)
Okay, that gets us a correct statement of the conditions under which entropy increases, but still doesn't say why entropy increases in only the futureward direction. For that, we need to identify what the positive-time futureward direction and the negative-time futureward direction have in common. For one thing, the balls become correlated. Previously (pastwardly), knowing a ball's state did not allow you to infer much about the other balls' states, as the velocities were set independently of one another. But the accumulation of collisions causes the balls to become correlated -- in effect, to share information with each other. [Rephrase to discuss elimination of gradients/exchange of information of all parts of system?...]
Note that the entropy does not need to increase uniformly: this model still permits local islands of lower entropy in the futureward direction, as long as the total entropy still increases. Consider the "wakes" left by the large balls that were mentioned above. In that case, the large balls will "plow" right through the small balls and leave a (low entropy) wake. (Even as they do this, the large balls transfer momentum to the smaller balls and increase total entropy.) The wakes allow you to identify time's direction: a wake is always located where the large ball was in an immediately pastward state. This relationship also implies that the wake contains a "record" of sorts, giving physical form, in the current time-slice, to information about a pastward state.
This process is similar to what goes on in the brain. Just as wakes are islands of low entropy containing information about pastward states, so too is your brain an island of low entropy containing information about pastward states. (Life forms are already known to be dissipative systems that maintain an island of low entropy at the cost of a counterbalancing increase elsewhere.) [...]
So it's not that "gee, we notice time goes forward, and we notice that entropy happens to always increase". Rather, the increase of entropy determines what we will identify as the future, since any time slice will only contain versions of ourselves with memories of pastward states.
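To make the summarized setup concrete, here is a toy version written for this summary; it is not Drescher's code, and it collapses the billiard table to random pairwise 1-D elastic collisions, which keeps the dynamics time-symmetric while skipping the geometry. Evolving away from t=0 in either direction, a crude entropy proxy rises both ways.
```python
# A 1-D toy version of the chapter's billiard-ball experiment (illustrative
# only; Drescher's discussion is qualitative and not tied to this code).
# Elastic collisions are time-symmetric, yet a crude entropy measure rises
# whether we evolve away from t=0 in the positive OR the negative direction.
import math
import random

def elastic(m1, v1, m2, v2):
    """1-D elastic collision: conserves momentum and kinetic energy."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def entropy(masses, vels, bins=20):
    """Shannon entropy of the kinetic-energy histogram: a crude proxy."""
    energies = [0.5 * m * v * v for m, v in zip(masses, vels)]
    hi = max(energies) or 1.0
    counts = [0] * bins
    for e in energies:
        counts[min(int(bins * e / hi), bins - 1)] += 1
    n = len(energies)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

random.seed(0)
# The non-uniform t=0 state: a few large fast balls, many small slow ones.
masses = [10.0] * 5 + [1.0] * 95
v0 = [random.uniform(3, 4) for _ in range(5)] + \
     [random.uniform(-0.1, 0.1) for _ in range(95)]

for label, vels in [("futureward (t>0)", list(v0)),
                    ("futureward (t<0)", [-v for v in v0])]:
    # Random pairwise collisions stand in for the geometry of a real table.
    for step in range(2000):
        i, j = random.sample(range(len(masses)), 2)
        vels[i], vels[j] = elastic(masses[i], vels[i], masses[j], vels[j])
    print(label, "entropy:", round(entropy(masses, vels), 3),
          "(initial:", round(entropy(masses, v0), 3), ")")
```
Reversing every velocity at t=0 and running the same dynamics is exactly what "proceeding to t = -1, -2, -3, ..." means here, so the printout shows entropy increasing futureward in both time directions.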
Replies from: timtyler, lukeprog, Perplexed↑ comment by timtyler · 2011-02-08T00:48:40.871Z · LW(p) · GW(p)
Hawking did this analysis in the first edition of A Brief History of Time - though he made a complete mess of it - and concluded that time will start going backwards when the universe stops expanding!
I remember that back when I read this at university, I thought: Boltzmann will be turning in his grave. I also remember immodestly thinking: here's a smart, famous scientist, and even spotty teenage me could see, in about two seconds, what a ridiculous argument he was making.
Replies from: LukeStebbing↑ comment by Luke Stebbing (LukeStebbing) · 2011-02-08T05:05:15.416Z · LW(p) · GW(p)
When I re-read A Brief History of Time in college, I remember bemusedly noticing that Hawking's argument would be stronger if you reversed its conclusion.
A note to myself from 2009 claims that Hawking later dropped that argument. Can anyone substantiate that?
Replies from: timtyler↑ comment by lukeprog · 2011-02-08T13:06:22.431Z · LW(p) · GW(p)
BTW, Sean Carroll just wrote an entire popular-level book on this subject.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-08T15:29:44.866Z · LW(p) · GW(p)
Yes, I actually read a large portion of that book ("From Eternity to Here"?) whilst still in the bookstore. It provided great exposition of several difficult concepts, but ultimately I was unimpressed, since Carroll would frequently present a problem in thermodynamics, and I would be thinking, "Yeah, so what about the Barbour/Drescher solution to this?" and he wouldn't address it or say anything that would undermine it.
↑ comment by Perplexed · 2011-02-07T23:50:25.600Z · LW(p) · GW(p)
Cool. Except that one or the other of us didn't quite understand Drescher. Because my understanding was that he considered and rejected the idea that the arrow of perceived time is the same as the order of increased entropy. I thought he said that it is the inter-particle correlations that matter for subjective time - not entropy as such. But perhaps I misunderstood.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-02-08T00:07:54.687Z · LW(p) · GW(p)
I'm glad you bring this up, I've been interested in a discussion on this.
Drescher makes extensive use of the generalized concept of a "wake": in the ball case, a wake is where you can identify which direction is "pastward", i.e., the direction of minimal inter-particle entanglement. Any mechanism that allows such an identification can be thought of as a generalization of the "wake" that happens in the setup.
One such wake is the formation of memories (including memories in a brain), which, like the literal wake, exploit regularities of the environment to "know" the pastward direction, and (also like the wake) necessarily involve localized decrease but global increase of entropy. (edit: original was reversed)
So yes, I agree that Drescher is saying that the interparticle correlations are what determine the subjective feeling of time -- but he's also saying that the subjective feeling (memory formation) necessarily involves a local decrease of entropy and counterbalancing increase somewhere else.
Replies from: Perplexed↑ comment by Perplexed · 2011-02-08T00:31:11.540Z · LW(p) · GW(p)
I'm glad you bring this up, I've been interested in a discussion on this.
Unfortunately, I'm probably not the ideal person to carry out this discussion with you. I got my copy of the book through interlibrary-loan and it is due back tomorrow. :-(
comment by Vladimir_Nesov · 2011-02-07T11:19:30.499Z · LW(p) · GW(p)
Lecture Notes on Personal Rationality
(Not "in one lesson" summaries, but self-contained treatment of the topic, incorporating material from the Sequences probably, written from scratch by another author, as a presentation appropriate for teaching a course.)
comment by lukeprog · 2011-02-07T04:41:40.648Z · LW(p) · GW(p)
The "Reductionism" Sequence in One Lesson
Replies from: SilasBarta
comment by Risto_Saarelma · 2011-02-08T12:44:00.201Z · LW(p) · GW(p)
The cognitive processes of people doing science and engineering
There's a bunch of research about what seems to be going on in the heads of small children who are learning to read or count, and a lot of it seems to be used in attempts to make them learn better. Ask what you should expect to see happening in the heads of university students successfully learning mathematical physics, or of trained scientists doing their work, and there seems to be next to nothing. Math and science education is cognitively very demanding, yet seems mostly uninterested in the cognitive strategies students should try to develop to master the subject material.
Human cognition at this level might be too complex to get a handle on with any reasonable amount of work, but that doesn't quite explain the sink-or-swim apathy that seems to be the common attitude towards getting students to understand advanced math.
comment by ata · 2011-02-07T04:55:06.596Z · LW(p) · GW(p)
I'm not sure about the "__ in One Lesson" posts — I think it would be a good project to complete the sequence indexes that don't already have post summaries, but the sequences themselves are pretty information-dense; how would you condense them without losing a lot of their value?
Would they be targeted at people who have already read the full sequence and want a refresher/index, or at people who haven't read them yet, as an introduction?
Replies from: lukeprog↑ comment by lukeprog · 2011-02-07T04:59:14.503Z · LW(p) · GW(p)
It would indeed be hard to compress those sequences - and impossible for other sequences, such as those on meta-ethics and quantum physics. But I think it could be done. Some information would have to be lost, but that is okay: it's still there in the original sequence.
The goal would be to lower the barrier to entry for Less Wrong. Right now the entrance exam is "Go read the sequences," which is a command to read more words than are in The Lord of the Rings. That's insane. We need a better way to welcome newbies into the site.
comment by XFrequentist · 2011-02-08T21:05:16.032Z · LW(p) · GW(p)
How to incorporate Spaced-Repetition Systems (SRS) into your self-study program.
Replies from: gwern, XFrequentist↑ comment by gwern · 2011-02-09T00:40:17.918Z · LW(p) · GW(p)
http://www.gwern.net/Mnemosyne.html might be interesting/helpful for writing that?
Replies from: XFrequentist↑ comment by XFrequentist · 2011-02-09T18:16:22.112Z · LW(p) · GW(p)
Nice article, thanks!
↑ comment by XFrequentist · 2011-02-08T21:09:25.745Z · LW(p) · GW(p)
This might seem trivial, but I've personally never gotten up the activation energy to actually learn how to use SRS effectively, despite being convinced that I would benefit from doing so.
A short, "Do It Like This"-type post would be most helpful!
comment by Risto_Saarelma · 2011-02-08T11:50:04.495Z · LW(p) · GW(p)
A survey of systems theory approaches and applications
I've been meaning to look into various general theories about systems and processes, but the field seems pretty obscure and ill-defined. Category theory seems to have been popping up in relation to this since the 70s, but I don't know if this stuff has been successfully applied to modelling any real-world phenomena. The late Robin Milner was working on some sort of process formalism stuff, but what I tried to read of that was extremely formalism-heavy and very light on the motivation. Baez's Rosetta paper tries to unify physical processes and computations with a category theoretical formalism.
One basic theme seems to be looking for a formalism that deals with processes instead of static objects. Process philosophy sounds like it should be relevant.
It seems obvious that better tools for understanding complex processes would be nice, but given that systems theory has been a thing since at least the mid-20th century and seems to remain pretty obscure and confusing despite people having struggled with plenty of complex systems in between, it looks like it might not be a terribly handy or powerful tool.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2011-03-04T08:11:31.822Z · LW(p) · GW(p)
Note to self: John Baez seems to be at this again.
comment by CronoDAS · 2011-02-08T02:50:44.409Z · LW(p) · GW(p)
How to Argue with Religious People, Conspiracy Theorists, and Other People Who Believe Crazy Things
Replies from: ruhe47, orthonormal↑ comment by orthonormal · 2011-02-12T06:57:30.941Z · LW(p) · GW(p)
Obvious prerequisite: replace "How" with "Whether" or "When".
comment by gwern · 2011-02-07T04:28:12.922Z · LW(p) · GW(p)
(1) Smart Drugs: Which Ones to Use for What, and Why
Out of curiosity, would you be interested in something like http://www.gwern.net/Drug%20heuristics ?
(Also, shouldn't you have posted each of those topics as a comment to be voted on or not?)
Replies from: wedrifid, lukeprog↑ comment by lukeprog · 2011-02-07T04:43:57.178Z · LW(p) · GW(p)
Oops, yes, thanks. I've commented with the titles I provided.
I suggest you leave a comment with a proposed post title for the Drug Heuristics thing you wrote, and see how many up-votes it gets!
Replies from: gwern↑ comment by gwern · 2011-02-09T00:32:33.945Z · LW(p) · GW(p)
I suggest you leave a comment with a proposed post title for the Drug Heuristics thing you wrote, and see how many up-votes it gets!
I don't have any catchy titles for it; 'How an evolutionist takes drugs'? 'Evolution's Excellent Encyclopedia of Enhancements'? 'Nick Bostrom's Favorite Nootropics'? 'Nootropics and You and Your Ancestors'? 'Heuristics & Huperzines'? They're all so silly.
(And there are already a lot of upvotes on my first comment, so I guess I'll let it stand.)
Replies from: lukeprog
comment by sixes_and_sevens · 2011-02-08T09:35:45.707Z · LW(p) · GW(p)
Less Wrong Survey Redux
comment by Perplexed · 2011-02-07T23:18:07.293Z · LW(p) · GW(p)
What topics would you like to see more of on Less Wrong
Whoops, we already did that one recently.
comment by Wei Dai (Wei_Dai) · 2011-02-07T04:57:24.810Z · LW(p) · GW(p)
FAI Design for Dummies
Replies from: lukeprog↑ comment by lukeprog · 2011-02-07T05:10:13.359Z · LW(p) · GW(p)
Could you explain more what you want, here? 'For Dummies' books are usually on fields for which there is a lot of well-accepted knowledge, but that's not the case with FAI.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-02-07T19:18:24.066Z · LW(p) · GW(p)
Sorry, I originally had all my requests grouped together, which perhaps made it clearer that they all were made tongue-in-cheek.
comment by lukeprog · 2011-02-12T04:56:38.268Z · LW(p) · GW(p)
Updated again.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-02-12T06:39:09.919Z · LW(p) · GW(p)
Should we be crossing off ones that have already been done?
Replies from: lukeprog↑ comment by lukeprog · 2011-02-12T14:59:11.926Z · LW(p) · GW(p)
I'll do that. Has one of them been done?
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-02-13T00:36:19.677Z · LW(p) · GW(p)
Well, there was http://lesswrong.com/r/discussion/lw/45u/a_rationalists_guide_to_psychoactive_drugs/, but I guess that was a bit less focused than "Smart Drugs: Which Ones to Use for What, and Why" was intended to be. (When I posted the grandparent I hadn't noticed the distinction.)