Open Thread, September 15-30, 2012
post by OpenThreadGuy · 2012-09-15T04:41:03.869Z · LW · GW · Legacy · 204 comments
If it's worth saying, but not worth its own post, even in Discussion, it goes here.
comment by sixes_and_sevens · 2012-09-16T03:57:08.385Z · LW(p) · GW(p)
Just a meta-comment for admins. The "Sequence Reruns" tag in the discussion section is now so common relative to the other tags, it's forced all the others in the tag cloud in the sidebar to the same relative size. That seems to be defeating the point a bit.
Replies from: FiftyTwo
comment by [deleted] · 2012-09-17T09:41:26.399Z · LW(p) · GW(p)
While I didn't think much of the discussion in the recent creepy thread, I'm very much enjoying a series on a related subject written by Yvain.
- The First Meditation on Privilege (A beggar and a tourist in India)
- The Second Meditation on Privilege
- The Third Meditation on Privilege (Sex and spiders)
- The Fourth Meditation On Creepiness (Current standards and why they don't work)
- The Fifth Meditation on Creepiness (True Love)
- The Sixth Meditation on Superweapons
- The Seventh Meditation on The War On Applause Lights
- The Eighth Meditation on Superweapons and Bingo
- The Ninth Meta-tation on Meta
↑ comment by Emile · 2012-09-17T16:18:15.746Z · LW(p) · GW(p)
I strongly recommend the whole blog - that guy should post on LessWrong or something!
Those posts are followed by:
- The Sixth Meditation on Superweapons
- The Seventh Meditation on The War On Applause Lights
↑ comment by [deleted] · 2012-09-18T11:09:18.732Z · LW(p) · GW(p)
I really liked the series. He should make a discussion post on LW linking to these with some commentary. If he doesn't, I think I will. What he shouldn't do is make a neutered special needs padded "safe for LessWrong" version.
Replies from: J_Taylor, NancyLebovitz
↑ comment by J_Taylor · 2012-09-18T22:43:59.593Z · LW(p) · GW(p)
I think it would be better if some of the emotional appeals and personal elements were removed.
What he shouldn't do is make a neutered special needs padded "safe for LessWrong" version.
Maybe he should make a steroid-injected high-tier cutting-edge "too controversial for LiveJournal" version.
↑ comment by NancyLebovitz · 2012-09-22T03:57:45.831Z · LW(p) · GW(p)
If it's safe enough for LiveJournal, I expect it to be safe enough for Less Wrong.
He does a nice Eliezerish job of slowly easing people into ideas that they otherwise might not agree with.
↑ comment by beoShaffer · 2012-09-18T05:33:02.616Z · LW(p) · GW(p)
Replies from: J_Taylor
↑ comment by SilasBarta · 2012-09-20T17:24:35.778Z · LW(p) · GW(p)
I don't think the example with the beggars in India provided the same insight to me that it did to Yvain. Mainly because tourists -- consistently, articulably -- don't want anyone to ask them for money, while women (I think?) do want (some) men to approach them with romantic overtures, and ostensibly filter them by certain characteristics.
Tourist: don't ever ask me for money. So don't do things that make it harder for me to turn you down for money and then ask for money.
Women[1]: don't ask me for romantic interaction unless I like you (or will like you), and if you get a false positive you're a f***ing terrorist who did a thousand things wrong that I would have found charming if I liked you.
In other words, once you accept that some approaches of a certain type are desired, you have to accept that some will make that type of approach without it being desired, and therefore not regard such instances as atrocities.
The tourist doesn't have this problem: s/he doesn't want begging at all. If there were a tourist who actually wanted to be panhandled, but only by "awesome enough" people, and gave such people money, I would expect that they'd have the insight to recognize that, "Well, some people will panhandle me without being awesome enough, and that doesn't mean they violated any rules. Here's a clear list of things that I regard as awesome, and here's what I do to indicate that you should really stop revealing awesomeness and go away."
[1] of the type who are most vocal in the creepiness threads
Edit: I spoke too soon. Yvain addresses the above in the second meditation, but in the (more insightful) comparison to how "some tourists want their fortunes told". And indeed, some people want to be telemarketed to, and some want to be spammed. So what exactly makes some advances wanted and others not? I tried to address that issue here a while back, but all I got was resentment at the comparison to salesmen and telemarketers (oh, and a "you don't have the right to respond to my arguments here"). Go fig.
↑ comment by A1987dM (army1987) · 2012-09-18T17:41:06.608Z · LW(p) · GW(p)
I found them very interesting, though some of his statements and implications are naive or disingenuous. (As for Hanlon's razor, I assign a higher prior for the former and I'm not sure which way the evidence points overall. EDIT: after reading the ninth post of the series, it was definitely the former -- or that guy deserves an Academy Award.)
comment by gwern · 2012-09-17T03:10:57.273Z · LW(p) · GW(p)
Today I learned gwern.net has passed 1 million page-views. :)
comment by satt · 2012-09-21T00:33:34.359Z · LW(p) · GW(p)
Activist and Less Wrong user Aaron Swartz has been charged with 13 felonies for downloading millions of academic articles from JSTOR.
comment by SilasBarta · 2012-09-17T04:55:39.315Z · LW(p) · GW(p)
I'm trying to find the existing research on the topic Paul Graham discusses in this article (regarding the relative merits of programming languages in footnote 3 and surrounding text) and which EY touches on here (regarding Turing tarpits).
Basically, within the realm of Turing-complete languages, there are significant differences in how easy it is to write a program that implements specific functionality. That is, if you want to write a program that takes a bunch of integers and returns the sum of their squares, it's possible to do it in machine code, assembly, brainfsck, BASIC, C, Java, Python, or Lisp, but it's much easier (and more concise, intuitive, etc.) in some than in others.
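To make the gap concrete, here is the sum-of-squares task twice in Python -- once idiomatically, once with the explicit bookkeeping a lower-level language would force on you (a toy sketch, not a benchmark):

```python
def sum_of_squares(xs):
    # idiomatic Python: the whole task is a single expression
    return sum(x * x for x in xs)

def sum_of_squares_manual(xs):
    # the same task written the way assembly or old BASIC forces you to:
    # explicit counter, accumulator, and loop bookkeeping
    total = 0
    i = 0
    while i < len(xs):
        total += xs[i] * xs[i]
        i += 1
    return total

assert sum_of_squares([1, 2, 3]) == sum_of_squares_manual([1, 2, 3]) == 14
```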
What's more, Graham speculates there's a ranking of the languages in which programmers too comfortable in one of the languages "don't get" the usefulness of the features in languages above it. So BASIC-addicts might not appreciate what they can do with recursion or structured programming (i.e. by abolishing go-to statements); C-addicts might not appreciate what they can do with functions as first class objects, and Python-addicts might not appreciate what they can do with Lisp macros.
I'm interested in research that formalizes (or deconstructs) this intuition and attempts to come up with a procedure for comparing programming languages in this respect. The concept is similar to "expressive power", but that's not exactly the same thing, because they can all express the same programs, but some can span a broader array of programs using fewer symbols (due to how they combine and accumulate meaning faster).
Also, the theory of Kolmogorov complexity holds that "when compressing data, choice of language doesn't matter except to the extent that it adds a data-independent constant equal to the length of a program that converts between the two languages"; however, this still allows that some programs (before the converter) are necessarily longer, and I've heard of results that some Turing-complete formalisms require exponential blow-up in program size over e.g. C. (This is the case with Wolfram's tiny Turing-complete language in A New Kind of Science.)
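For reference, the invariance theorem behind that quote: for any two universal languages $A$ and $B$ there is a constant $c_{A,B}$, independent of the data $x$, such that

$$K_A(x) \;\le\; K_B(x) + c_{A,B} \qquad \text{for all } x,$$

where $c_{A,B}$ need be no larger than the length of an interpreter for $B$ written in $A$. Nothing bounds how large that constant is, which is consistent with particular programs being dramatically longer before you pay for the converter.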
tl;dr: I want to know what research is out there (or how to find it) regarding how to rigorously evaluate programming languages for relative ease and brevity of writing programs, and in what senses Python is better than e.g. assembler.
Replies from: Viliam_Bur, Risto_Saarelma, eurg, Pentashagon, mstevens
↑ comment by Viliam_Bur · 2012-09-24T15:27:47.507Z · LW(p) · GW(p)
Graham speculates there's a ranking of the languages in which programmers too comfortable in one of the languages "don't get" the usefulness of the features in languages above it. So BASIC-addicts might not appreciate what they can do with recursion or structured programming (i.e. by abolishing go-to statements); C-addicts might not appreciate what they can do with functions as first class objects, and Python-addicts might not appreciate what they can do with Lisp macros.
If "enlightening people about better programming languages" ever becomes a higher priority than "enlightening people about superior status of X language users", I think a good strategy would be to explain those possible insights in a simplest possible form, without asking people to learn the guru's favorite programming language first.
For example, to show people the benefits of the recursion, I would have to find a nice example where recursion is the best way to solve the given problem; but also the problem should not feel like an artificial problem created merely for the sake of demonstrating usefulness of recursion.
I can use recursion to calculate the factorial of N... but I can use a for loop to achieve the same effect (without risking stack overflow). Actually, if my math happens to be limited to 64-bit integers, I could just use a precomputed table of the first few results, because the factorials of greater numbers would overflow anyway. This is why using factorial as an example of the usefulness of recursion is not very convincing. A good example would be recursively parsing and then evaluating a mathematical expression. -- The idea is that even if some concept is powerful, your demonstration of it may be wrong. But providing no demonstration, or a very esoteric demonstration, is also wrong.
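A minimal sketch of those three factorial variants (Python, for illustration):

```python
import math

def fact_recursive(n):
    # the textbook demo of recursion -- but each call costs a stack frame
    return 1 if n <= 1 else n * fact_recursive(n - 1)

def fact_loop(n):
    # the same result from a plain loop, with no stack-overflow risk
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# signed 64-bit integers overflow at 21!, so a 21-entry table already
# covers every representable case
FACT_TABLE = [math.factorial(n) for n in range(21)]

assert fact_recursive(10) == fact_loop(10) == FACT_TABLE[10] == 3628800
```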
Alternatively, we could hold annual competitions between proponents of different programming languages. For example, they would have to program a Minesweeper game, and would be rated by (a) the speed of finishing the product, and (b) the elegance of their code.
At this point, proponents of the X language may object that coding Minesweeper is not a fair way of comparing languages, because their language is better for some other tasks. But this argument cuts both ways. If my goal is to write a Minesweeper game, then perhaps Paul Graham's (or anyone else's) opinion about the best programming language is not relevant to me, here and now. Perhaps call me again when my ambitions change. Or tell me in advance what kinds of projects I should do in the X language, and what kinds of projects will rather easily work in any language.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-09-24T16:38:51.066Z · LW(p) · GW(p)
Or tell me in advance what kinds of projects I should do in the X language, and what kinds of projects will rather easily work in any language.
This.
↑ comment by Risto_Saarelma · 2012-09-18T13:06:17.590Z · LW(p) · GW(p)
Have you dug around the publications of Alan Kay's Viewpoints Research Institute? They're trying to really push the envelope with the expressive power of highly customized programming languages.
ETA: A LtU discussion thread about a recent VPRI progress report.
↑ comment by eurg · 2012-09-17T11:37:20.160Z · LW(p) · GW(p)
I haven't heard of any studies in that direction, although a few people try to find something, like "how long are the programs, comparatively" etc., similar to this quickly googled IEEE paper.
I assume that because
- programming languages are used by humans, and
- most of a programming language's quality is based upon its actual effects on humans in the target group, and
- for real work we value long-term effects
such a study is unfeasible, i.e. too much effort, too expensive. And probably not that much related to programming language features (as most popular languages converge to some degree, especially on the most important axes). Also, "fewer symbols/lexemes/special-purpose-constructs" is an advantage only under certain conditions, meaning: the question asked may very well already determine the answer.
Replies from: None
↑ comment by [deleted] · 2012-09-17T20:19:43.225Z · LW(p) · GW(p)
That's the short version. The full paper is here. I found it while looking for a similar comparison that I remembered seeing mentioned several times when I had been interested in Common Lisp and it turned out to be a follow-up to that. Oh, and those things actually looked at time spent programming, so they didn't measure only silly things like program length.
Replies from: SilasBarta, eurg↑ comment by SilasBarta · 2012-09-18T03:05:35.945Z · LW(p) · GW(p)
Why is program length a silly thing?
Replies from: None, Morendil
↑ comment by Morendil · 2012-09-18T23:42:18.352Z · LW(p) · GW(p)
It's silly when you're measuring it in "lines of code", because "line" is a somewhat arbitrary construct, for which "chunks of text delimited by newlines" is a worse approximation than most people think. (Quick proof: in many languages, stripping out all the newlines yields an equivalent program, so that all programs are effectively one-liners.)
Replies from: SilasBarta
↑ comment by SilasBarta · 2012-09-18T23:47:08.996Z · LW(p) · GW(p)
Then it's a good thing I didn't measure it that way, or use that term in this entire thread! Whenever I did refer to measures of program length, it was with constructions such as:
some can span a broader array of programs using fewer symbols (due to how they combine and accumulate meaning faster)
↑ comment by Pentashagon · 2012-09-25T19:04:51.755Z · LW(p) · GW(p)
Do you mean only programming languages or programming languages plus the commonly available standard libraries? Some programming languages are very simple and powerful (LISP, Haskell), and some provide a large standard library and other tools to make it easy and straightforward to get things done in specific problem spaces (MATLAB, VB).
The most powerful and concise language I can imagine would be the language of set theory with a magic automated solver: define the domain of a problem space and a predicate for solutions, and the result is the set of solutions. A standard library built on such a language would consist mostly of useful definitions for converting real-world problems into mathematics and back again.
I think most programming languages try to approximate this to some degree, but the programmer is necessary to fill in the algorithmic steps between the problem space and the solution space. The fewer explicit steps the programmer has to specify to write a function, the more concise and powerful the language. The fewer definitions and functions necessary to convert real world input into a mathematical model and back to real world output, the more concise and powerful the standard library is.
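In miniature, a set comprehension already has that shape -- declare a domain and a predicate, read off the solutions -- workable here only because the domain is small and finite (toy Python sketch):

```python
# "define the domain and a predicate for solutions; the result is the set of solutions"
domain = range(-100, 101)
solutions = {x for x in domain if x * x - 5 * x + 6 == 0}
print(solutions)  # {2, 3}
```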
Replies from: SilasBarta
↑ comment by SilasBarta · 2012-09-25T22:54:37.358Z · LW(p) · GW(p)
Graham addresses the point you're making about the difference between the language being powerful vs. it having a large standard library. As he says in the link, by his metric two languages are "at the same level" if they only differ by one of them lacking a library function that the other has. His test for showing that language A is "above" B is the question: "To do a task in B, do you have to write an interpreter for A (or a language on its level)?" For example, adding recursion to BASIC requires going beyond writing one more library function.
Replies from: Pentashagon
↑ comment by Pentashagon · 2012-10-10T16:12:25.283Z · LW(p) · GW(p)
After thinking about this for (a long) while, I think the most powerful languages would then be the ones that are internally extensible and allow modification of their own syntax and semantics. For instance BASIC would become a far more powerful language than C or LISP if it had a function for altering the interpreter itself. Certain versions of BASIC effectively had this by allowing functions to be written in machine language and then executed. Theoretically, the machine language snippets could extend the interpreter to add structured programming features or first class functions. The same could potentially be done with C by just including the Tiny C Compiler in the source (as both the compiled functions and C strings containing the source) and then reflexively recompiling the compiler to add features.
What is most interesting to me is that making a language that can modify itself requires a complete definition of its execution environment in order to be safely extensible. The most powerful languages would have to fully and formally define their semantics as well as their syntax, and make both accessible within the language, so that extension of the syntax could safely extend the semantics to match. In other words, BASIC with assembly language is not enough if you don't know everything about the hardware and the interpreter you start with. From my CS student days I seem to recall reading that few programming languages have rigorously defined semantics ("This feature is implementation specific" and "This behavior is undefined" appear far too often in most specifications). The closest thing I can find is the Gödel machine, but that's defined directly in terms of a Turing machine, afaict.
comment by NancyLebovitz · 2012-09-15T16:42:04.970Z · LW(p) · GW(p)
Learning to lose when you aren't Harry Potter -- an American studies ping pong in China.
comment by David_Gerard · 2012-09-15T06:10:41.668Z · LW(p) · GW(p)
The new Richard Carrier book, Proving History, is fantastic. Basically it's an introduction to Bayesian thinking for people who think in words. You'll enjoy it.
comment by Epiphany · 2012-09-26T05:25:31.525Z · LW(p) · GW(p)
I think this is worth its own post, but in light of my last discussion catching fire and burning to the ground, I have decided to request a critique on this one before posting it in Discussion:
Cryonics Moral Dilemma
Since joining LessWrong, I've been thinking about cryo a lot, and have encountered a dilemma:
According to GiveWell, "We estimate that giving a few thousand dollars to AMF likely saves a person's life." (They do malaria bed nets if you're not familiar).
Cryo costs tens of thousands of dollars, and it's not guaranteed to save even one life.
I don't see how I would ever justify signing up, myself, unless I show that I'm capable of making a large enough difference in the world that rescuing my difference-making abilities justifies the risk and cost.
This also means "Reddit, help me find some peace I'm dying young" is a cute puppy dog cause. :/
Does anyone relate? What are your thoughts?
please critique the proposed discussion post
Replies from: Nisan, drethelin, Mitchell_Porter, shminux
↑ comment by Nisan · 2012-09-26T09:42:06.927Z · LW(p) · GW(p)
This idea has been covered on Less Wrong before. I'll spend the next minute looking up some links.
EDIT: Years saved: Cryonics vs VillageReach
Against Cryonics & For Cost-Effective Charity
There's already discussion about cryonics and charitable giving in the Reddit, help me find some peace I'm dying young thread.
There is a discussion thread in Normal Cryonics about charity vs. cryonics. See in particular this comment.
↑ comment by Epiphany · 2012-09-28T21:55:43.386Z · LW(p) · GW(p)
Wow okay. I didn't expect to find such good arguments. I am still not adjusted to the intelligence level here. Well, different new discussion idea then.
↑ comment by drethelin · 2012-09-28T06:36:23.972Z · LW(p) · GW(p)
For me the argument is the same as for why I don't live as a pauper and give all my wealth to charity: Selfishness. I know that I can feed someone for a long time on the money I spend on a trip somewhere, and I still prefer to take the trip. I will spend a lot more money on myself than I will on a friend, and a lot more on a friend than on a stranger.
Note: I am not signed up for Cryonics.
↑ comment by Mitchell_Porter · 2012-09-26T07:37:19.648Z · LW(p) · GW(p)
If cryonics works, then money spent on cryonics is much more of an investment than money spent on conventional charity. Several million people die every month. Malaria nets can only stop a small fraction of that, no matter how many are made, but cryonics can stop almost all of it - if it works. Anything done in support of cryonics in its fledgling form will help it to scale up.
The future won't revive you because it needs you to solve the Y3K problem, but we also don't save children from dying in order that they can go back to work the next day. Cryonics is a way to stop a life from being cut off, with the side effect that the cryonaut wakes up as a mere human in a transhuman world. If it's a friendly place, they'll have a chance to grow into their new world as an equal and a participant.
↑ comment by Shmi (shminux) · 2012-09-26T05:58:03.260Z · LW(p) · GW(p)
Grats, you are catching on :)
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-26T06:35:57.902Z · LW(p) · GW(p)
(: Thanks, Shminux.
I have finally gotten the ass-kicking I needed. Though not especially in my elitism thread, it was spread out... Wedrifid showed me arguments good enough to corner mine. Kindly provided a wonderfully devastating critique of my poll. Gwern's website shows that he's so well-read that I felt like an idiot. Eliezer's "The Magnitude of His Own Folly" depicted a deep acknowledgement of the terrible nature of reality that I found moving because it made him neither paranoid nor unambitious - I relate to this but I haven't seen anyone like that before. You always seem to be there to say something snide, making my overconfidence think twice, while Morendil typed me up a refreshing batch of sanity.
These are exciting.
I haven't felt so much respect and faith in humanity for a long time.
I was getting apathetic because of that lack.
Now my self-confidence is right about where it should be.
I decided to commit to reading the major sequences, and I'm considering reading them all. I previously did lots of things like learning about logical fallacies and razing my cached thoughts years ago, so these aren't as dense in new information as they'd be otherwise, but I'm learning to communicate with you guys and I'm enjoying Eliezer's brilliance.
comment by [deleted] · 2012-09-20T09:12:18.787Z · LW(p) · GW(p)
http://www.reddit.com/r/estimation/
Why didn't someone tell me about this earlier?
comment by lukeprog · 2012-09-22T06:06:45.360Z · LW(p) · GW(p)
Geoff Anders just showed me this PowerPoint prepared by the U.S. Air Force's Center for Strategy and Technology, the same group that produced this bombastic 'future of the air force' video.
Slide 18 makes a point I often make when introducing people to the topic: the military's policy assumption is that humans will always be in the loop, but in reality there will be constant pressure to pull humans out of the loop (see e.g. Arkin's military-funded Governing Lethal Behavior in Autonomous Robots). The slide concludes: "In fact, exponential technological change is outpacing the ethical programming of unmanned technology." Which is not far from the way I put it in Facing the Singularity: "AI safety research is in a race against AI capabilities research. Right now, AI capabilities research is winning, and in fact is pulling ahead. Humanity is pushing harder on AI capabilities research than on AI safety research."
comment by [deleted] · 2012-09-20T05:45:58.396Z · LW(p) · GW(p)
Almost a Century Ahead of The New York Times
Many of the influential thinkers, prestigious publications, and important articles of that bygone era are almost totally unknown today, even to many specialists, and the vacuum produced by that loss of historical knowledge has often been filled with the implied histories of modern Hollywood movies and television shows, some of which are occasionally not totally accurate or realistic. Indeed, a casual perusal of the major writings of the past often seems somewhat akin to entering a science fictional alternate-reality, in which America took a different turn in the 1920s than we know it actually did. Except that in this case, the alternate-reality we are exploring is the true one, and it is our assumed understanding of the past which turns out to be mostly fictitious.
I don't recall any direct modern evidence that Denisovans were small-brained, though it isn't an out-there assumption to make, but otherwise Unz makes a good point. As Robin Hanson pointed out in Why Read Old Thinkers:
Cynicism often seems this way to me. Finding deep insight in 350 year old sayings by de La Rochefoucauld discourages me, as it suggests either that I will not be able to make much progress on those topics, or that too few will listen for progress to result. Am I just relearning what hundreds have already relearned century after century, but were just not able to pass on?
Having different assumptions, and the related motivated cognition, means different kinds of evidence will be emphasised and others ignored. Sometimes the science of the past is more like the science of a foreign country than something obsolete that we moved on from. We notice the past getting it wrong, and we kind of gloat about this in our narratives of history. But the fact is, sometimes we get stuff wrong that the past got right. And I'm not talking about them getting it right for the wrong reasons; sometimes they did good work that we dismiss.
Replies from: Vaniver
↑ comment by Vaniver · 2012-09-21T00:43:53.377Z · LW(p) · GW(p)
So, Cochran and Harpending have been posting to their blog about genetic noise and parental age. Given modern data, this is actually a rather important result with a lot of wide-ranging implications. Turns out, people thought it was significant 50 years ago, and it's mostly lain dormant since then.
Replies from: gwern
↑ comment by gwern · 2012-09-21T02:16:27.329Z · LW(p) · GW(p)
In genetics, it seems to be the rule that speculation far outpaces what can actually be known. Countless times I read these papers and they go 'as speculated by X 50 years ago...' (where X is usually Darwin or Fisher). I understand there's some question as to whether Mendel's peas even showed the laws he wanted them to show! Which would indeed exemplify the theory outpacing the practice.
comment by Matt_Simpson · 2012-09-17T02:04:51.106Z · LW(p) · GW(p)
Evidence is building that high-intensity interval training, e.g. Tabata sprints, is more effective at physical conditioning than low-intensity endurance techniques. In terms of weightlifting, "low-rep, high-weight" workouts seem to be better than "high-rep, low-weight" workouts.*
I wonder if something analogous is true for mental training. E.g., will you improve mathematical ability faster by grinding through a bunch of relatively easy problems, or by spending a shorter amount of time mentally exhausting yourself on problems that push your limits? Anyone know of any solid evidence?
My experience seems to reflect the latter being more effective. I spent a lot of time my last year or two of undergrad grinding through a bunch of relatively easy calculus problems in order to finish up my degrees in a reasonable amount of time. In my second year of grad school, I took a measure theoretic probability & statistics sequence that was the opposite - a small number of problems, but each one was a struggle. It was rare that I could finish more than 25% of the problems the first time I attempted them. Unsurprisingly, I felt like I improved much more in mathematical ability after taking that sequence than I improved after my undergrad calculus grind. The effect seems stronger than this though - I felt like the measure theory sequence improved my ability to do difficult yet standard calculus problems more than the calculus grind ever did even though I wasn't actually doing those types of problems during the measure theory sequence. The effect was probably mediated through improving my general mathematical/logical reasoning abilities. Now these are just my impressions - untrustworthy for the whole gamut of reasons - plus even if we take them at face value there's a ton of confounders. Nonetheless, it's Bayesian evidence. Anyone else have a similar experience?
* I'm not an expert here and could very easily be wrong. If you have evidence one way or the other to share, please allow me (and others) to update.
Replies from: gwern, Vaniver, eurg
↑ comment by gwern · 2012-09-17T02:59:55.781Z · LW(p) · GW(p)
How about 'deliberate practice'? I'm fairly sure that it implies that you're working on a problem that challenges you and pushes your limits.
Replies from: Matt_Simpson, beoShaffer
↑ comment by Matt_Simpson · 2012-09-18T18:39:30.250Z · LW(p) · GW(p)
I remember an unconference on deliberate practice from the July minicamp now. IIRC the speaker suggested something similar to beoShaffer's comment.
Replies from: beoShaffer
↑ comment by beoShaffer · 2012-09-18T22:13:30.770Z · LW(p) · GW(p)
I forgot to mention that I was basing my info on a keynote speaker who I suspect may have done the minicamp unconference. Was the speaker a female psychology professor who made numerous movie references?
Replies from: Matt_Simpson
↑ comment by Matt_Simpson · 2012-09-19T01:19:08.119Z · LW(p) · GW(p)
Nope, it was a male. I think it was Mark E, known around LW as Mark E
(I don't recall how to say/spell his last name, I just remember it being somewhat complicated and that it's also part of his LW name)
Replies from: beoShaffer
↑ comment by beoShaffer · 2012-09-19T04:35:50.032Z · LW(p) · GW(p)
Nevermind then.
↑ comment by beoShaffer · 2012-09-17T03:17:09.276Z · LW(p) · GW(p)
From what I understand, deliberate practice would generally favor the small number of hard problems, especially for building overall mathematical competence/your ability to tackle hard problems. However, doing the easy problems in a challenging way, like trying to do them as fast as possible while still maintaining a high standard of accuracy, would also lead to improvement, particularly for your ability to do that specific type of problem quickly and accurately.
↑ comment by Vaniver · 2012-09-17T13:34:18.581Z · LW(p) · GW(p)
I wonder if something analogous is true for mental training. E.g., will you improve mathematical ability faster by grinding through a bunch of relatively easy problems, or by spending a shorter amount of time mentally exhausting yourself on problems that push your limits? Anyone know of any solid evidence?
What's the basis behind HIIT? If I remember correctly, it's that the high intensity activity kicks your metabolism up a notch, continuing to burn calories / seem active for a significant period after the training is officially complete. Is there a similar mechanism for learning and memory?
There's solid evidence that spaced repetition -- like in the Saxon method -- is demonstrably better than doing something once and moving on with little review. In general, it seems like practice is a very important part of mathematics ability.
There are also time-based effects for learning things before going to sleep -- but I'm not sure how practical using those would be.
↑ comment by eurg · 2012-09-17T11:45:11.630Z · LW(p) · GW(p)
Off-Topic Nit-Picking:
Evidence is building that High intensity interval training, e.g. Tabata sprints, is more effective at physical conditioning than low intensity endurance techniques.
"physical conditioning" is a very general term. For instance: Is evidence building that Tabata sprints are more effective for preparing for a 100k ultra-marathon?
Of course competitive runners do some sort of interval training, and -- if information on The Internet (reddit) is to be believed -- runners do not train the full distance. And if basic health and looks are your goal, running is probably not the most time-efficient (or even effective) way of doing it. But this "endurance is all wrong" meme is overshooting a bit...
comment by Multiheaded · 2012-09-16T19:07:17.110Z · LW(p) · GW(p)
Good news for EY: Akinator can recognize him and unequivocally believes that he's 'famous'. Bad news for EY: Akinator also believes that he uses 'green energy' to 'power up'. I'm not sure if that refers to an environmentally clean power source or literally green supervillain rays of death, like those of the Necrons in WH40k. Probably the latter. In any case, it hardly improves SIAI's public image :(
Replies from: MixedNuts, Alicorn, Kaj_Sotala, pleeppleep, None
↑ comment by Alicorn · 2012-09-16T19:35:38.487Z · LW(p) · GW(p)
I tried and didn't get Eliezer. This is possibly because Akinator doesn't believe that any actual famous people are personally known to the people playing the game? I guess it would have had an even harder time if I'd played back when I would've had to say "yes" to "does your character live with you?".
Edit: Told it it was wrong and hit continue; still got it wrong. When I told it it was wrong again it wanted me to pick off a list, and admittedly Eliezer was on the list!
↑ comment by Kaj_Sotala · 2012-09-16T20:29:48.984Z · LW(p) · GW(p)
I got Eliezer (the page's second guess) despite saying "no" to the green energy question.
Replies from: komponisto
↑ comment by komponisto · 2012-09-16T21:06:27.820Z · LW(p) · GW(p)
If you click on "Game Report" at the end, it will tell you which answer it expected for each question, given who the person is. (It is capable of guessing the correct person even with a few "unexpected" answers.)
↑ comment by pleeppleep · 2012-09-18T13:05:04.780Z · LW(p) · GW(p)
A year ago, it specifically got Harry James Potter Evans Verres from Methods of Rationality.
comment by Matt_Caulfield · 2012-09-15T15:27:45.264Z · LW(p) · GW(p)
Kind of a stupid question:
It's a truism in the efficient charity community that when giving to charity, we should find the most efficient group and give it our entire charity budget; the common practice of spreading donations among groups is suboptimal. However, in investing it's considered a good idea to diversify. But it seems that giving to charity and investing are essentially the same activity: we are trying to get the highest return possible, the only difference is who gets it. So why is diversification a good idea for one and not the other?
Replies from: wedrifid, GuySrinivasan, benelliott, Anatoly_Vorobey
↑ comment by wedrifid · 2012-09-15T19:38:54.288Z · LW(p) · GW(p)
It's a truism in the efficient charity community that when giving to charity, we should find the most efficient group and give it our entire charity budget; the common practice of spreading donations among groups is suboptimal. However, in investing it's considered a good idea to diversify. But it seems that giving to charity and investing are essentially the same activity: we are trying to get the highest return possible, the only difference is who gets it. So why is diversification a good idea for one and not the other?
If you are attempting to maximise expected returns from your personal investment you would not diversify (except within resources that have identical expected returns). However, with personal investments you have some degree of risk aversion. That is, you don't value money linearly all the way from 0 to $10,000,000, and so splitting the investment between multiple stocks gives higher expected utility even though the expected returns in $ will be slightly lower.
This differs when it comes to charitable giving because it is assumed that your personal donations aren't sufficient to change the marginal utility significantly. Personally owning $10,000 rather than $0 is much more useful than owning $20,000 instead of $10,000 but after you give $10,000 to The Society For Cute Puppies And Mosquito Nets the value of giving another $10,000 to TSFCPaMN has probably barely changed at all. Diversifying becomes important again when you have enough financial power to change the margin all on your own.
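A toy numerical version of the above (assuming log utility for personal wealth and a constant dollars-per-life rate for the charity, purely for illustration):

```python
import math

def expected(value, split):
    # two independent bets; each doubles its stake or returns nothing (p = 1/2)
    return sum(0.25 * value(a * split[0] + b * split[1])
               for a in (0, 2) for b in (0, 2))

def wealth_utility(w):
    return math.log(w + 1)    # concave: risk-averse about your own money

def lives_saved(dollars):
    return dollars / 2500     # linear: your gift doesn't move the charity's margin

for split in [(10000, 0), (5000, 5000)]:
    print(split, expected(wealth_utility, split), expected(lives_saved, split))
# log utility prefers the 50/50 split; expected lives saved are identical either way
```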
↑ comment by SarahSrinivasan (GuySrinivasan) · 2012-09-15T16:02:09.782Z · LW(p) · GW(p)
You've pinpointed it: the only difference is who gets it. When investing, diversification as the receiver of the return is useful because you'd rather gain slightly less than often lose everything. When ... living, diversification as the receiver of the return is useful for the same reason.
When investing, you'd like your buyers to diversify... but there's only one buyer, so that buyer needs to diversify. But when giving charitably, the world would like its buyers to diversify, and there are lots of buyers. Assuming its buyers are sufficiently independent, the world gets enough diversification just because its buyers make different decisions. So as long as sufficiently many people make different charitable giving decisions than you, feel free to buy only what you think are the most efficient charities.
The world doesn't care how much you help it, the world only cares how much it gets helped overall.
Replies from: rocurley, Matt_Caulfield
↑ comment by rocurley · 2012-09-16T02:40:44.178Z · LW(p) · GW(p)
I'm not sure if the arguments for diversification in investments actually apply to charity. You want to diversify your investments because you're risk averse. I would not, for example, bet $1000 on a coin flip; losing $1000 is more painful to me than gaining $1000 is pleasurable. In other words, your utility is not linear in money in your bank account.
However, for charity, I think it makes perfect sense to have utility linear in money donated. If you value saving two lives twice as much as saving one, and the cost per life saved is constant, then you should value each dollar given to charity as much as the last. Given that, you shouldn't really care about variance; you can focus on expected returns. As such, I don't think you should diversify charity donations at any scale, personal or worldwide; just donate to the most efficient charity, and then when that charity becomes less efficient, donate to whichever charity is most efficient next.
↑ comment by Matt_Caulfield · 2012-09-15T16:22:40.826Z · LW(p) · GW(p)
Now that I have read your answer, it seems obvious in retrospect. Very nice, thanks!
↑ comment by benelliott · 2012-09-18T22:52:17.490Z · LW(p) · GW(p)
The difference is very simple.
Is it better to have $100,000, or a 30% chance of $1,000,000 and a 70% chance of being homeless? Obviously the former.
Is it better to save 1 life, or have a 30% chance of saving 10 lives (3 lives in expectation) and a 70% chance of doing nothing? Obviously the latter.
↑ comment by Anatoly_Vorobey · 2012-09-16T10:52:49.483Z · LW(p) · GW(p)
It seems true that when investing, you're trying to get the highest return possible, in terms of a single value measured in currency.
I've never understood why it should also necessarily be true with charity. It seems often to be an unexamined assumption, and may be reinforced by using terminology like "utilons" that appears to be begging the question.
Someone who donates both to the mosquito nets effort in Africa, and to the society which helps stray dogs and cats in Michigan, is not necessarily being irrational. They just may be perceiving the two benefits to lie on incomparable axes. They may care about helping Africans and helping stray dogs simultaneously, in different ways that are not exchangeable. The familiar objection is: "Sure they are exchangeable; everything is exchangeable into utilons; if you don't see a clear rate of exchange for your own preferences, that just means you still ought to estimate one given your imperfect knowledge, and act on it". But I don't see why that should be true.
Certainly most of our spending is done on axes that are incomparable to one another. We have needs along those axes that we do not normally consolidate to one "most efficient" axis, even after the minimal requirements are met. Investing is the odd activity out here - and one of the reasons is precisely that we don't care much which of the companies we invest in brings us profit. It seems odd that charity should so unequivocally stand along with investing as an exception.
If charitable giving is not an exceptional way for us to spend money, the idea of a single currency becomes difficult to support, because if charity must be so streamlined, why not all other activity? In other words, sure, you can criticize someone for helping stray dogs by saying their money could be saving lives in Africa instead; but is that very different from criticizing them for buying a large color TV, when their money could be saving lives in Africa instead?
Replies from: Kindly, pengvado
↑ comment by Kindly · 2012-09-16T14:54:30.768Z · LW(p) · GW(p)
Charity falls in the same category as investing to the extent that you care about the effectiveness of the different charities (as opposed to feeling good about yourself, for example). Here's why.
For the sake of simplicity, suppose that you have $2000 to give to charity, and $1000 can either save a child in Africa or a dog in Michigan. For now, we assume that you care about the number of children and dogs saved.
If the charities currently have enough money to save 999 dogs and 999 children, then preferring an even split to a $2000/$0 split means preferring 1000 dogs and 1000 children saved, to 1001 dogs and 999 children. Nothing wrong with this, by itself.
However, we aren't precisely certain about these numbers; and if the charities have enough money to save 1000 dogs and 998 children, then preferring an even split to a $0/$2000 split reveals exactly the opposite preference. This is a problem.
In general, as long as our uncertainty about how much the charities are doing is much greater than the impact of our own donations, a similar thing happens. It's easy to have enough information to prefer a $2000/$0 split or a $0/$2000 split above all: for instance, if you think it's best to have 1000 children and 1000 dogs saved, and currently there's money to save about 500+/-100 children and 1500+/-100 dogs, you should definitely donate all your money to the children charity.
But having enough information to definitely prefer an even split is nearly impossible. The best we can do is consider the case where the difference is probably small, so you shouldn't really care one way or the other. This is probably rare, though, and even that doesn't argue in favor of an even split.
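A quick numerical check of that last example (the quadratic "ideal totals" preference and the normal uncertainty are assumptions made only for illustration):

```python
import random

random.seed(0)
TRIALS = 100_000
# uncertain baselines, drawn once and shared so the options are compared fairly
BASELINES = [(random.gauss(1500, 100), random.gauss(500, 100)) for _ in range(TRIALS)]

def utility(dogs, children):
    # assumed preference: the ideal is 1000 dogs and 1000 children saved
    return -((dogs - 1000) ** 2 + (children - 1000) ** 2)

def expected_utility(extra_dogs, extra_children):
    return sum(utility(d + extra_dogs, c + extra_children)
               for d, c in BASELINES) / TRIALS

# $2000 to give at $1000 per dog or per child: the three possible splits
for extra in [(2, 0), (1, 1), (0, 2)]:
    print(extra, round(expected_utility(*extra)))
# the corner allocation (0, 2) wins decisively; under uncertainty this large,
# no nearby belief state makes the even split (1, 1) come out on top
```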
Now, obviously our assumption that we only care about the totals is unrealistic. But then, the usual argument runs, maybe we should figure out how much we care about the totals, and efficiently distribute that portion of our money (most likely, only to one charity). After that, the remaining money can go to making yourself feel good about yourself, or to signaling that you care about dogs, or whatever.
Notably, the argument says nothing about color TVs, because you're certain of the exact impact: you definitely go from 0 color TVs to 1. If we had that much information about the impact of charity, maybe even splits would more often be a good idea.
↑ comment by pengvado · 2012-09-16T13:15:16.083Z · LW(p) · GW(p)
Someone who donates both to the mosquito nets effort in Africa, and to the society which helps stray dogs and cats in Michigan, is not necessarily being irrational. They just may be perceiving the two benefits to lie on incomparable axes.
If you decide that donating 1$ to mosquito nets and 1$ to stray dogs is better than 2$ to one or 2$ to the other, then you have in fact performed a comparison between those three actions. If the type of good generated by mosquito nets is one axis and the type of good generated by saving stray dogs is another, then the scalar-valuedness of utility isn't about the axes, it's about comparing any given point in that 2-D space with any other point.
The alternative to being able to compare things isn't some decision process other than comparison. The alternative is to not have preferences about the state of the world at all; to say that there is no such thing as a "right thing to do" in a given circumstance.
Why not all other activity?
Expected utility does apply to all activity.
Replies from: Anatoly_Vorobey
↑ comment by Anatoly_Vorobey · 2012-09-16T14:44:07.661Z · LW(p) · GW(p)
then... it's about comparing any given point in that 2-D space with any other point.
Granted, preferring one particular 2D point to another may be read as running a scalar-valued comparison function on the 2D space (such a reading is not without problems, e.g. because real people's preferences may not be transitive, but let's ignore those details). However, from the existence of such a function it does not follow that "we should find the most efficient group and give it our entire charity budget" - this being the claim the universality of which I was contesting.
Replies from: pengvado, Vladimir_Nesov
↑ comment by pengvado · 2012-09-16T16:17:01.835Z · LW(p) · GW(p)
From the existence of such a function it does not follow that "we should find the most efficient group and give it our entire charity budget"
Agreed. To derive that you would also need a smoothness constraint on said function, so that it can be locally approximated as linear; and you need to be donating only a small fraction of the charity's total budget, so as to stay within the domain of said local approximation.
I assert that the smoothness property is true of sane humans' altruistic preferences, but that's not something you can derive a priori, and a sufficiently perverse preference could disagree.
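In symbols: if the total donation $D$ is small enough that the first-order approximation

$$U(x + d) \;\approx\; U(x) + \nabla U(x) \cdot d$$

holds, then choosing an allocation with $d_i \ge 0$ and $\sum_i d_i = D$ means maximizing a linear function over a simplex, and the maximum sits at a vertex: everything goes to the one charity with the largest $\partial U / \partial x_i$.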
Replies from: Anatoly_Vorobey
↑ comment by Anatoly_Vorobey · 2012-09-16T20:36:47.170Z · LW(p) · GW(p)
To derive that you would also need a smoothness constraint on said function, so that it can be locally approximated as linear;
You're solving essentially a global optimization problem; what use is (the existence of) a local linear approximation? If the utility function happens to be the eminently smooth f(x,y)=xy, then under the constraint of x+y=const the optimal solution is going to be an even split. It's possible to argue that this particular utility function is perverse and unnatural, but smoothness isn't one of its problems.
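(Explicitly: substituting $y = c - x$ gives

$$\frac{d}{dx}\,\bigl[x(c - x)\bigr] \;=\; c - 2x \;=\; 0 \;\Rightarrow\; x = y = \tfrac{c}{2},$$

so the maximum of $xy$ on the budget line is exactly the even split.)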
You don't even need contrived examples to show that utility functions need not attain their maxima along one axis. My other point was that charity may not be easily distinguishable from other types of spending[1], and our normal utility functions definitely don't have that behavior. We do not, among different types of things we require/enjoy, pick out the "most efficient" one and maximize it alone.
[1] As another example of that thesis, consider the sequence: I buy myself a T-shirt - I buy my child a T-shirt - I pool funds with other parents to buy T-shirts for kids in my child's kindergarten, including for those whose parents are too poor to afford it - I donate to a similar effort in a neighbouring kindergarten - I donate to a charity buying T-shirts for African kids.
Replies from: DanielLC
↑ comment by DanielLC · 2012-09-16T21:53:59.318Z · LW(p) · GW(p)
then under the constraint of x+y=const the optimal solution is going to be an even split.
Yeah, but unless you actually end up at that point, that's hardly relevant. If people donated rationally, we would always be at that point, but people don't, and we aren't.
and our normal utility functions definitely don't have that behavior.
We're normally only dealing with one person. If you play videogames, you quickly get to the point where you don't want to play nearly as much anymore, so you do something else. If you save someone's life, there's still another guy that needs saving, and another guy after that, etc. You can donate enough that the charity becomes less efficient, but you have to be rich and the charity has to be small.
Also, consider: If you wanted a shirt, and I bought you one, you'd stop wanting a shirt and spend your money on something else, just like if you bought the shirt. If you wanted to donate $100 to X charity, and I told you that I already did, would you respond the same?
Replies from: Anatoly_Vorobey
↑ comment by Anatoly_Vorobey · 2012-09-17T10:32:41.755Z · LW(p) · GW(p)
Yeah, but unless you actually end up at that point, that's hardly relevant. If people donated rationally, we would always be at that point, but people don't, and we aren't.
I don't understand how what you just said relates to my example. To recap, I meant my example, where the maximum is at the even split, to refute the claim that any smooth utility function will obtain its maximum along one "most efficient" axis. The whole argument is only about the rational behavior.
We're normally only dealing with one person. If you play videogames, you quickly get to the point where you don't want to play anymore nearly as much, so you do something else. If you save someone's life, there's still another guy that needs saving, and another guy after that, etc.
While this is true, and does point at an interesting difference between charity and many other behaviors, it can't isolate charity, not by a long shot. There are many, many things we do that we stop doing not because of satiety or exhaustion, but because of other priorities.
To give the first example that comes to mind, a personal one: I'm learning piano and I also play table tennis. I enjoy both activities immensely and would like to do either of them a lot more (but can't because of other commitments). There's no question of satiety or exhaustion at the level I currently invest in either. I could stop doing one of them and use that time for the other, but I explicitly don't want to do that and consider that an inferior outcome. I don't think this preference of mine is either irrational or very unusual.
consider: If you wanted a shirt... If you wanted to donate $100 to X charity...
Here's a closer "personal spending" analogy to charity: I commit to putting aside $500 every month for a future downpayment on a house (a goal far in the future). A family friend gives me an unexpected present of $500, putting it right into the fund. Am I likely to forego my own deduction this month and use it for other fun things? Depends on the kind of person I am, but probably not.
Replies from: cousin_it, DanielLC
↑ comment by cousin_it · 2012-09-17T15:54:00.854Z · LW(p) · GW(p)
Kindly's comment gets it right. It's not about satiety. If you're a consequentialist and care about the total amounts of money donated to each charity, rather than about how much you donated, then the decision in favor of the equal split must be very sensitive to the donations of others like you. That's the relevant difference between selfish spending and charity.
↑ comment by DanielLC · 2012-09-18T00:23:22.065Z · LW(p) · GW(p)
To recap, I meant my example, where the maximum is at the even split, to refute the claim that any smooth utility function will obtain its maximum along one "most efficient" axis.
You only control a tiny portion of the money that gets donated to charity. If there's currently an equal amount of money donated to each charity, the ideal action would be to donate equally to each. If the difference between the amounts exceeds the amount you donated, which is more likely the case, you donate to the one that there's been less donated to. For example, if one charity has one million dollars in donations and the other has two million, and you donate a hundred thousand over your life, you should donate all of it to the charity that has a million.
There's no question of satiety or exhaustion at the level I currently invest in either.
I doubt that. You might still have fun doing each more, but not as much. If you chose to learn the piano before, but now choose to play tennis, something must have changed. If nothing changed, yet you make a different decision, you're acting irrationally.
Here's a closer "personal spending" analogy to charity:
Why is that analogy closer? It looks like it's in far mode instead of near mode, and the result is more controlled by what's pretty than what makes you happy. For example, if you got a $500 a month raise, you likely wouldn't save it all for the downpayment, even though there's no reason to treat it differently. If you got a $500 a month pay cut, you almost certainly wouldn't stop saving.
↑ comment by Vladimir_Nesov · 2012-09-20T11:05:50.845Z · LW(p) · GW(p)
I've made a post with an analysis of this situation.
comment by cousin_it · 2012-09-18T14:36:46.756Z · LW(p) · GW(p)
Reposting a comment I made on Yvain's livejournal:
There's a standard argument about "efficient charity" that says you should concentrate all your donations on one charity, because presumably you have preferences over the total amounts of money donated to each charity (not just your own donations), so choosing something like a 50/50 split would be too sensitive to other people's donations.
I just realized that the argument applies with equal force to politics. If you're not using "beliefs as attire" but actually care about politics, your participation in politics should be 100% extremist. That's troubling.
Replies from: None, Desrtopa, Emile, Douglas_Knight, Vladimir_Nesov, shminux
↑ comment by [deleted] · 2012-09-18T14:51:40.370Z · LW(p) · GW(p)
You might be an extreme centrist. Or an extreme pragmatic. Not all extremists are the "take some idea to its (il)logical conclusion and start blowing things up" type.
Replies from: Manfred
↑ comment by Manfred · 2012-09-29T05:35:40.381Z · LW(p) · GW(p)
I believe the point is that while your personal beliefs may lie at any point in some high-dimensional space, if you're getting involved in politics in some anonymous way you should throw all your support behind the single "best" group, even if, like in two-party politics, that means supporting a group you have significant differences with. Non-anonymity (nonymity) changes things, leading to behavior like lobbying multiple parties.
I don't really find it that disturbing, but it does get a little weird when you remember how bad humans are at separating acts from mental states.
↑ comment by Desrtopa · 2012-09-18T16:20:55.722Z · LW(p) · GW(p)
Impact of charitable donations is, at least within the amounts that most people can give, directly proportional to the size of the donations. It's not at all clear, however, that extremist participation in politics produces a greater impact in the desired direction than casual participation.
I think that in some cases, it probably does, whereas in others it does not.
↑ comment by Emile · 2012-09-18T15:14:24.060Z · LW(p) · GW(p)
It probably depends of the decision process you're trying to influence:
If you're voting for a candidate, you don't have any incentive to vote in a way more extreme than your preferences - with more than two candidates, you can have strategic voting, which often pushes the opposite way, i.e. voting for a candidate you like less but who has a better chance of making it.
If a bureaucrat is trying to maximize utility by examining people's stated preferences, then you can have an incentive to claim extreme preferences for the reasons Yvain gives.
Informal discussions of what social norms should be look more like the second case.
Elected politicians have to deal with both systems: on one hand, they want to take a moderate position to get the maximum number of voters (median voter etc.); on the other hand, once elected, they have an incentive to claim to be more extreme when negotiating in their constituents' interest.
↑ comment by Douglas_Knight · 2012-09-21T01:54:10.399Z · LW(p) · GW(p)
Could you spell out what you mean by extremist, and how the analogous argument goes?
If there are three candidates, then yes, you should give all your support to one candidate, even if you hate one and don't distinguish between the other two.
But that hardly makes you an extremist. I don't see any reason that this kind of argument says you should support the same party in every election, or for every seat in a particular election, or that you should support that party's position on every issue. Even if you are an extremist and, say, want to pull the country leftward on all issues, it's not obvious whether equal amounts of support (say, money) to a small far-left party will be more effective than to a center-left party. Similarly, if your participation in politics is conversation with people, it's not obvious that always arguing left-wing positions is the most effective way to draw people to the left. It may be that demonstrating a willingness to compromise and consider details may make you more convincing. In fact, I do think the answer is that the main power individuals have in arguing about politics is to shift the Overton window; but I think that is a completely different reason than the charity argument.
And then I looked up your comment on LJ, and the comment it replies to, and I strongly disagree with your comment. This has nothing to do with the charity argument. Whether this argument is correct is a different matter. I think the Overton window is a different phenomenon. I think the argument to take extreme positions to negotiate compromises better applies to politicians than to ordinary people. But their actions are not marginal and so this is clearly different from the charity argument.
Replies from: cousin_it↑ comment by cousin_it · 2012-09-21T08:39:11.572Z · LW(p) · GW(p)
I agree with everything in your comment. "Extremist" was a bad choice of word, maybe "single-minded" would be better. What I meant was, for example, if success at convincing people on any given political issue is linearly proportional to effort, you should spend all your effort arguing just one issue. More generally, if we look at all the causes in the world where the resulting utility to you depends on aggregated actions of many people and doesn't include a term for your personal contribution, the argument says you should support only one such cause.
Replies from: Wei_Dai, Douglas_Knight↑ comment by Wei Dai (Wei_Dai) · 2012-09-22T01:26:32.025Z · LW(p) · GW(p)
What I meant was, for example, if success at convincing people on any given political issue is linearly proportional to effort, you should spend all your effort arguing just one issue.
But this isn't at all likely. For one thing you probably have a limited number of family and friends who highly trust your opinions, so your effectiveness (i.e., derivative of success) at convincing people on any given political issue will start out high and quickly take a dive as you spend more time on that issue.
Replies from: wedrifid↑ comment by wedrifid · 2012-09-22T06:56:16.312Z · LW(p) · GW(p)
But this isn't at all likely. For one thing you probably have a limited number of family and friends who highly trust your opinions, so your effectiveness (i.e., derivative of success) at convincing people on any given political issue will start out high and quickly take a dive as you spend more time on that issue.
I'm inclined to agree. A variant of the strategy would be to spend a lot of time arguing for other positions that are carefully selected to agree with and expand eloquently on the predicted opinions of the persuasion targets.
↑ comment by Douglas_Knight · 2012-09-21T17:44:07.579Z · LW(p) · GW(p)
Yes, that is the charity argument. Yes, you should not give money both to a local candidate and to a national candidate simultaneously.
But the political environment changes so much from election to election, it is not clear you should give money to the same candidate or the same single-issue group every cycle.
Moreover, the personal environment changes much more rapidly, and I do not agree with the hypothesis that success at convincing people depends linearly on effort. In particular, changing the subject to the more important issue is rarely worth the opportunity cost and may well have the wrong effect on opinion. If effort toward the less important issue is going to wear out your ability to exert effort for the more important issue an hour from now, then effort may be somewhat fungible. But effort is nowhere near as fungible as money, the topic of the charity argument.
↑ comment by Vladimir_Nesov · 2012-09-21T09:29:05.191Z · LW(p) · GW(p)
The value of information about which political side is more marginally valuable makes unbiased discussion a cause that's potentially more valuable than advocacy for any of the political sides, and charities are on the same scene for similar reasons. So the rule is not "focus on a single element out of each class of activities"; the choice isn't limited to any given class of activities. Applied to politics, the rule can only be stated as, "If advocacy of political positions is the most marginally valuable thing you can do, focus on a single side."
Replies from: cousin_it↑ comment by Shmi (shminux) · 2012-09-18T18:11:05.828Z · LW(p) · GW(p)
There's a standard argument about "efficient charity" that says you should concentrate all your donations on one charity
If this argument were universal, it would be rational to invest in a single stock, and the saying about putting all your eggs in one basket would not exist.
Replies from: cousin_it, benelliott↑ comment by cousin_it · 2012-09-19T08:19:29.057Z · LW(p) · GW(p)
Sorry, can you explain why it also applies to investing? For reference, here's an expanded version of the argument.
Say you have decided to donate $500 to charity A and $500 to charity B. Then you learn that someone else has decided to reallocate their $500 from charity A to charity B. If you're a consequentialist and have preferences over the total donations to each charity, rather than the warm fuzzies you get from splitting your own donations 50/50, you will reallocate your $500 from charity B to charity A. Note that the conclusion doesn't depend on your risk aversion, only on the fact that you considered your original decision optimal before you learned the new info. That means your original decision for the 50/50 split relied on an implausible coincidence and was very sensitive to other people's reallocations in both directions, so in most cases you should allocate all your money to one charity, as long as your donations aren't too large compared to the donations of other people.
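Here's a minimal numeric sketch of that argument (the concave log utility over totals is invented purely for illustration; nothing hinges on its exact shape):

```python
# Sketch: a consequentialist donor with preferences over the *total*
# donations each charity receives, facing one other donor.
import math

def utility(total_a, total_b):
    # Hypothetical diminishing returns at the scale of total donations.
    return math.log(1 + total_a) + math.log(1 + total_b)

def my_best_allocation(others_a, others_b, budget=1000, step=100):
    # Search over how much of my budget goes to charity A.
    return max(range(0, budget + 1, step),
               key=lambda a: utility(others_a + a, others_b + budget - a))

print(my_best_allocation(500, 500))  # -> 500: the 50/50 split is optimal...
print(my_best_allocation(0, 1000))   # -> 1000: ...until the other donor moves
```

The 50/50 personal split is optimal only for one exact configuration of everyone else's donations; as soon as the other donor reallocates, your optimum jumps to a corner, which is the sense in which the original split relied on a coincidence.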
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-19T14:57:08.155Z · LW(p) · GW(p)
Sorry, I must be misunderstanding the argument. Why would you shift your donations from B to A if someone else donates to B?
Replies from: cousin_it↑ comment by cousin_it · 2012-09-19T15:13:32.451Z · LW(p) · GW(p)
Let's say for simplicity that there's only one other guy and he splits his donations $500/$500. If you prefer to donate $500/$500 rather than say $0/$1000, that means you like world #1, where charity A and charity B each get $1000, more than you like world #2, where charity A gets $500 and charity B gets $1500. Now let's say the other guy reallocates to $0/$1000. If you stay at $500/$500, the end result is world #2. If you reallocate to $1000/$0, the end result is world #1. Since you prefer world #1 to world #2, you should prefer reallocating to staying. Or am I missing something?
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-19T16:54:27.661Z · LW(p) · GW(p)
OK, so "preferences over the total amounts of money donated to each charity" mean that you ignore any information you can glean from knowing that "the other guy reallocates to $0/$1000", right? Like betting against the market by periodically re-balancing your portfolio mix? Or donating to a less-successful political party when the balance of power shifts away from your liking? If so, how does it imply that "your participation in politics should be 100% extremist"?
Replies from: cousin_it↑ comment by cousin_it · 2012-09-19T18:13:35.083Z · LW(p) · GW(p)
you ignore any information you can glean from knowing that "the other guy reallocates to $0/$1000"
Good point, but if your utility function over possible worlds is allowed to depend on the total sums donated to each charity and additionally on some aggregate information about other people's decisions ("the market" or "balance of power"), I think the argument still goes through, as long as the number of people is large enough that your aggregate information can't be perceptibly influenced by a single person's decision.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-19T20:30:26.249Z · LW(p) · GW(p)
I think the argument still goes through, as long as the number of people is large enough
This sounds suspiciously like trying to defend your existing position in the face of a new argument, rather than an honest attempt at evaluating the new evidence from scratch. And we haven't gotten to your conclusions about politics yet.
Replies from: cousin_it↑ comment by benelliott · 2012-09-18T21:00:45.983Z · LW(p) · GW(p)
The key difference is risk aversion. People are (quite rightly, in my opinion) very risk-averse with their own money; almost nobody would be happy to trade all their possessions for a 51% shot at twice as much, mostly because doubling your possessions doesn't improve your life as much as losing them worsens it.
On the other hand, with altruistic causes, helping two people really does do exactly twice as much good as helping one, so there is no reason to be risk averse, and you should put all your resources on the bet with the highest expected pay-off, regardless of the possibility that it might all amount to nothing if you don't get lucky.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-18T21:26:13.829Z · LW(p) · GW(p)
The key difference is risk aversion.
Right, and there is risk in everything. A charity might fold, or end up consisting of crooks, or its cause might actually be harmful, or the estimate of buying malaria nets being more useful than supporting SI might turn out to be wrong. Hence the diversification.
Politics is even worse: you can never be sure which policy is better, or whether, when carried to its extreme, it will turn out to be harmful.
This is where cousin_it's naive argument for radicalization falls flat.
Replies from: benelliott↑ comment by benelliott · 2012-09-18T22:48:32.963Z · LW(p) · GW(p)
Right, and there is risk in everything.
This doesn't matter. Whether you should be risk-averse doesn't depend on how much risk there is; it depends on whether your pay-offs suffer diminishing returns. This is a mathematical equivalence (if your pay-offs have accelerating returns, you should be risk-seeking).
I think you don't understand risk aversion. Consider a simple toy problem: investment A has a 90% chance of doubling your money and a 10% chance of losing all of it; investment B has a 90% chance of increasing your money by half and a 10% chance of losing all of it. Suppose you have $100,000, enough for a comfortable lifestyle. If you invest it all in A, you have a 90% chance of a much more comfortable lifestyle, but a 10% chance of being out on the street, which is pretty bad. Investing equal amounts in both reduces your average wealth from $180,000 to $157,500, but increases your chance of having enough money to live on from 90% to 99%, which is more important.
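A quick sanity check of those numbers (a minimal sketch; it assumes, as the 99% figure implies, that the two bets fail independently):

```python
# A: 90% chance of 2x your stake, else 0.  B: 90% chance of 1.5x, else 0.
p, wealth = 0.9, 100_000

ev_all_in_a = p * 2.0 * wealth                                 # 180,000
ev_split    = p * 2.0 * (wealth / 2) + p * 1.5 * (wealth / 2)  # 157,500

ruin_all_in_a = 1 - p         # 0.10: one bet, one failure mode
ruin_split    = (1 - p) ** 2  # 0.01: both bets must fail (independence assumed)

print(ev_all_in_a, ev_split, ruin_all_in_a, ruin_split)
```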
If they are instead charities, and we substitute 1 life saved for every $1,000 of return, then diversifying just reduces the expected number of people you save. It also increases your chance of saving someone, but this doesn't really matter compared to saving more people in the average case.
Look at it this way: in personal wealth, the difference between some money and no money is huge, while the difference between some money and twice as much money is vastly less significant. In charity, the difference between some lives saved and twice as many lives saved is exactly as significant as the difference between some lives saved and no lives saved.
I'm not explaining this very well, because I'm a crap explainer; here's the relevant Wikipedia page.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-18T23:14:53.995Z · LW(p) · GW(p)
In charity, the difference between some lives saved and twice as many lives saved is exactly as significant as the difference between some lives saved and no lives saved.
Note that even those wealthy guys who are not in danger of living on the street diversify, lest they lose a large chunk of their investment. Similarly, if you assign a large disutility to non-optimal charity, your utility losses from a failed one will not be in any way compensated by your other charities performing well. Again, in politics, which is the real question (charity is just an unfortunate analogy), the stakes are even higher, so picking an extreme position is even less justified.
Replies from: benelliott↑ comment by benelliott · 2012-09-18T23:43:58.022Z · LW(p) · GW(p)
Again, in politics, which is the real question (charity is just an unfortunate analogy), the stakes are even higher, so picking an extreme position is even less justified.
I'm not talking about the politics case; there are other problems with cousin_it's argument. I'm arguing with your 'refutation' of the non-diversifying principle.
Note that even those wealthy guys who are not in danger of living on the street diversify, lest they lose a large chunk of their investment.
They may not be in danger of homelessness, but there are still diminishing returns. The difference between $1m and $2m is more important than the difference between $2m and $3m. Notice the operative word 'large' in your sentence. If those guys were just betting amounts on the scale of $10, sufficiently small that the curve becomes basically linear, then they wouldn't diversify (if they were smart).
The situation with charity is somewhat similar: your donation is as small on the scale of the whole problem being fixed, and of the whole amount being donated, as $10 is for a rich investment banker. The diminishing returns that exist have no perceptible effect at the scale of individual donations.
Politics, if you insist on talking about it, is the same. Your personal influence has no effect on the marginal utilities, it is far too small.
Similarly, if you assign a large disutility to non-optimal charity, your utility losses from a failed one will not be in any way compensated by your other charities performing well.
Yes, if you donate to make yourself feel good (as opposed to helping people) and having all your money go to waste makes you feel exceptionally bad, then you should diversify. If you donate to help people, then you shouldn't assign an exceptionally large disutility to non-optimal charity; you should assign utility precisely proportional to the number of lives you can save.
comment by [deleted] · 2012-09-18T07:08:51.956Z · LW(p) · GW(p)
How many threads and discussions have we had about LessWrong readers joining or participating in online classes? How many about forming study and reading groups for such classes or textbooks? Please help me complete this list. I'm interested in getting a quick idea of what works and what doesn't in trying to educate parts of the community as a whole. I think it is relevant to the problem of our subculture not updating, as well as to some other efforts I'm currently working on.
Online Classes
- Learn Power Searching with Google
- Let's all learn stats!
- Stanford Intro to AI course to be taught for free online
- Free Online Stanford Courses: AI and Machine Learning
Study groups
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-09-18T09:42:54.762Z · LW(p) · GW(p)
I'm surprised no-one's thrown together an online course aggregator yet.
Replies from: None↑ comment by [deleted] · 2012-09-18T10:36:32.687Z · LW(p) · GW(p)
You mean something like Class Central?
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-09-18T10:39:32.919Z · LW(p) · GW(p)
Yes. It's a shame nothing like that already exists.
comment by Jabberslythe · 2012-09-16T05:30:45.678Z · LW(p) · GW(p)
Can anyone recommend any books on signalling or on feminism that might appeal to a LWer?
Replies from: mstevens, badger↑ comment by mstevens · 2012-09-17T10:28:52.700Z · LW(p) · GW(p)
I too am interested in books on feminism that might appeal to a LWer.
I tried researching this, but I found one easy failure mode: books based on the assumption that everyone knows what feminism is about and agrees 100%, where what the reader wants is a list of people who were feminists at various times in history.
The angle I was more interested in was "feminists believe X because Y", or "you, foolish person who is not a feminist, should be a feminist because Z".
Replies from: coffeespoons↑ comment by coffeespoons · 2012-09-19T12:11:31.698Z · LW(p) · GW(p)
Also interested in books that might appeal to a LWer. This Lukeprog post and comments from a while ago might be of interest.
↑ comment by badger · 2012-09-16T13:54:06.581Z · LW(p) · GW(p)
On signalling:
- Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction by William Flesch attempts to explain what makes certain plots, or fiction at all, enjoyable through a signalling lens.
- The Mating Mind and Spent by Geoffrey Miller delve into how signalling concerns pushed us evolutionarily into who we are, and then how our evolved tendencies are disconnected from modern consumer culture.
- Codes of the Underworld: How Criminals Communicate by Diego Gambetta
↑ comment by NancyLebovitz · 2012-09-16T14:27:49.365Z · LW(p) · GW(p)
Your amazon links aren't working.
Does Comeuppance give any clues about why Martin's Game of Thrones series is so wildly popular? It's a world with plenty of power maneuvering but little or no justice so far.
Replies from: badger↑ comment by badger · 2012-09-16T15:11:39.664Z · LW(p) · GW(p)
Thanks about the links.
Flesch would argue it's largely because there is so little justice in the books. We're interested in tracking others through stories to see who deserves punishment. We remain emotionally involved to see what happens to them, and more injustices mean even more reason to keep tabs on what they're doing. Anticipation of justice is more satisfying than seeing justice itself.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-09-16T15:28:09.600Z · LW(p) · GW(p)
I have to admit there's a character I want to see smashed flat. In normal fiction he would be, and I can't count on it happening with Martin, but it still feels like a reason to read book six, when and if it comes out.
Does Flesch get into the difference between punishment and challenge? Either can be a strong hook for readers.
comment by Jayson_Virissimo · 2012-09-15T07:45:42.318Z · LW(p) · GW(p)
I've enrolled in 3 Coursera classes as a kind of warm-up for (possibly) going back to school to study computer science (if my start-up succeeds before then). They are:
- Introduction to Mathematical Thinking by Keith Devlin
- Learn to Program: The Fundamentals by Jennifer Campbell and Paul Gries
- Introduction to Logic by Michael Genesereth
Reply to this comment or PM me if you are interested in collaborating.
Replies from: datadataeverywhere, somervta, khafra, tgb↑ comment by datadataeverywhere · 2012-09-16T03:14:38.871Z · LW(p) · GW(p)
...going back to school to study computer science (if my start-up succeeds before then).
That's amusing. Usually I would say the value of the founder being present is much higher for a successful company than for one that has failed. I would actually expect my freedom to pursue other avenues to diminish as my success in my current avenue grows.
Do you mean that your start-up, if successful, will pretty much run itself? Or that if it hasn't succeeded /yet/, then you will feel obligated to stay and keep working on it?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-09-17T10:11:43.249Z · LW(p) · GW(p)
Schooling (for me) is as much consumption as investment. I'm merely saying that if my income significantly increases, then I will engage in more consumption (getting a degree in computer science, going on a pilgrimage in Europe, etc...). Is this really so strange?
Replies from: datadataeverywhere, eurg↑ comment by datadataeverywhere · 2012-09-18T02:04:12.628Z · LW(p) · GW(p)
A little. As your income increases, I expect your consumption to become more expensive in monetary terms, but as your business grows I expect the value of your time to increase and for your consumption patterns to become less expensive in terms of time. College is very expensive in terms of time.
I'm not saying this is a bad choice, but it is one that surprises me. I'm still interested in the answers to my questions. Do you intend to sell your start-up, have it run itself, or abandon it? It seems like those options cover the gamut (I might consider requiring < 40 hours a week of your time to be "running itself"; if you're quite dedicated, you could probably fit being a full-time student in even with the start-up taking 40+ hours of your time, making that an alternative option).
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-09-18T03:44:33.534Z · LW(p) · GW(p)
A little. As your income increases, I expect your consumption to become more expensive in monetary terms, but as your business grows I expect the value of your time to increase and for your consumption patterns to become less expensive in terms of time. College is very expensive in terms of time.
Ah, I think I see the source of your confusion. If my start-up succeeds, then I plan to increase the time I spend doing it and schooling, since I currently work a full-time job and work on my start-up part-time.
The relevant options are full-time work/part-time entrepreneur or part-time school/full-time entrepreneur.
Replies from: datadataeverywhere↑ comment by datadataeverywhere · 2012-09-18T04:00:32.496Z · LW(p) · GW(p)
Indeed, I misinterpreted you in multiple ways. My model went something like "Jayson_Virissimo is currently working 60-80 hours a week on his start-up. Once it exceeds ramen-profitability, he intends to scale back his efforts to become a full-time student." How very foolish of me!
↑ comment by somervta · 2012-09-29T05:10:21.808Z · LW(p) · GW(p)
I'm thinking of doing all three, though I might skip 2 since I'm already doing things along those lines.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-09-29T07:02:48.252Z · LW(p) · GW(p)
So far, 1 and 3 have lots of overlap. I'm not sure if that is good or bad yet.
↑ comment by khafra · 2012-09-25T12:09:41.260Z · LW(p) · GW(p)
I'm doing mathematical thinking as well. I have a two-person local study group; how large is the LW study group looking so far?
Replies from: Jayson_Virissimo, Curiouskid↑ comment by Jayson_Virissimo · 2012-09-25T12:24:45.363Z · LW(p) · GW(p)
I'm doing mathematical thinking as well. I have a two-person local study group; how large is the LW study group looking so far?
So far, it contains Curiouskid, khafra, and Jayson_Virissimo.
↑ comment by Curiouskid · 2012-09-26T17:19:10.780Z · LW(p) · GW(p)
Should we set up a facebook group for this? How do you guys plan on communicating?
↑ comment by tgb · 2012-09-15T13:22:21.252Z · LW(p) · GW(p)
I've been taking their Quantum Mechanics and Quantum Computation course, which is about to end. I would recommend it to anyone interested. The course doesn't do a good job emphasizing the linear algebra prereqs (basically the only prereqs of importance), and it sounded like a lot of people got frustrated with that early on. A college course in linear algebra definitely suffices, and you could even learn enough on your own if you're ready to dedicate some time to it.
comment by [deleted] · 2012-09-28T11:59:30.114Z · LW(p) · GW(p)
Luck egalitarianism
Luck egalitarianism is a view about distributive justice espoused by a variety of egalitarian and other political philosophers. According to this view, justice demands that variations in how well off people are should be wholly attributable to the responsible choices people make and not to differences in their unchosen circumstances. This expresses the intuition that it is a bad thing for some people to be worse off than others through no fault of their own.
Luck egalitarians therefore distinguish between outcomes that are the result of brute luck (e.g. misfortunes in genetic makeup, or being struck by a bolt of lightning) and those that are the consequence of conscious options (such as career choice or fair gambles). Luck egalitarianism is intended as a fundamental normative idea that might guide our thinking about justice rather than as an immediate policy prescription. The idea has its origin in John Rawls's thought that distributive shares should not be influenced by arbitrary factors. Luck egalitarians disagree among themselves about the proper way to measure how well off people are (for instance, whether we should measure material wealth, psychological happiness or some other factor) and the related issue of how to assess the value of their resources.
This does not seem coherent. The responsible choices people make are always the result of unchosen circumstances. The genes you are born with, the circumstances of the development of your brain, your upbringing, the decisions of your past and perhaps very different self, which information you don't know you don't have: all of these are unchosen, and there is no decision-making beyond the laws of physics.
I can understand wanting to indulge our risk aversion, reducing some of the highs people can achieve with gambles to mitigate some of the lows. I would probably support this; I'm not comfortable with people's lives being endangered without their explicit consent, for example, and it is bad fun theory to have people lock themselves into unrecoverable positions.
Some might nitpick here that you can mitigate the lows without reducing the highs, but this is plainly false for humans, since egalitarian feelings are just how our monkey brains tell us to punish those we subjectively deem too high status, or perhaps "if she is brought down, I or my tribe will climb up". Demands for equality are not primarily about material poverty but about social equality. Only if we eliminated the social aspect of achievement could we keep some of the high of achievement without making the less skilled or talented or lucky feel bad. I am not ready to do this. I would much prefer the alternative solution that, after we ensure the high-status would not abuse their positions, we dial down our feelings about equality. Or, better yet, that we upgrade them, changing them so they are tied to worth: high-status people doing bad things would be swiftly brought down, but we would be less envious for selfish reasons.
Kneeling before a good king is a good feeling. If it were eliminated from our minds, I say we would be impoverished.
Replies from: yli↑ comment by yli · 2012-09-28T14:07:54.122Z · LW(p) · GW(p)
This does not seem coherent. The responsible choices people make are always the result of unchosen circumstances. The genes you are born with, the circumstances of the development of your brain, your upbringing, the decisions of your past and perhaps very different self, which information you don't know you don't have: all of these are unchosen, and there is no decision-making beyond the laws of physics.
Well, we already know that (even in a deterministic world) there's a meaningful way to say that someone could have done something, despite being made of mindless parts obeying only the laws of physics. I think the notion of responsible choice is probably similar.
comment by knb · 2012-09-27T12:22:49.051Z · LW(p) · GW(p)
On LW we talk occasionally about "life-hacks" (simple but non-obvious ways of solving common problems/becoming more productive, etc.) However, these are often considered too off-topic for LW. I distinctly remember reading a long list of life-hack ideas on some website, and a lot of them seemed very promising, but I apparently never bookmarked it.
Is there any good place on the Net to find the more effective hacks? It seems like there are a number of easy-to-implement ideas out there that would help a lot of people, but they are not concentrated in any one place.
Lifehacker.com should be that place, but it is run as a blog, with lots of new, trivial ideas floating around. The hacks with the biggest payoffs were probably written up years ago, and are now buried in the archives. The new ones on the front page seem pretty narrow and trivial. (If there is some way of actually finding the powerful, broad-appeal ideas on lifehacker.com, please point it out; I couldn't find it.)
comment by gwern · 2012-09-19T19:38:27.755Z · LW(p) · GW(p)
I've finished transcribing the classic sociology paper "The Iron Law Of Evaluation And Other Metallic Rules"; LWers or libertarians may enjoy it.
comment by Jabberslythe · 2012-09-16T23:46:49.766Z · LW(p) · GW(p)
I posted earlier on the advantages of incorporating audiobooks into your study methods. One of the main problems I described was the poor selection of audiobooks, particularly for higher-level subjects. I've recently found a way around this that makes using audiobooks even more of an obvious decision for me: I've started using text-to-speech conversion to make audiobooks from ebooks. The inspiration was from wedrifid.
Here is a sample of the best TTS voice I have been able to find. This method produces surprisingly high-quality audiobooks with surprisingly little effort.
I can get through around 2 - 5 books a day by listening to audiobooks.
Replies from: yli↑ comment by yli · 2012-09-18T23:10:10.773Z · LW(p) · GW(p)
I've been doing this since November last year and recommend it.
My list of fully listened books has 109 entries now. I've found that an important factor in whether a book works well in text-to-speech form is how much of it you can miss and still understand what's going on; in other words, how dense it is. Genre-wise, narrative or journalistic nonfiction and memoirs make especially good listening; most popular nonfiction works decently; history and fiction are pretty hard; and scholarly and technical writing is pretty much impossible.
A lot of writing on the internet, like blog posts, works well too. I have some scripts for scraping websites, converting them into an intermediate ebook form and then into text-to-speech audiobooks. If I encounter an interesting blog with large archives this is usually how I consume it nowadays.
There's also obviously the issue of comprehension, which I'd say is definitely lower when listening than when reading. But 1) literally at least 95% of the stuff that I've listened to I never would have read; it would either have sat on my to-read list forever* or I wouldn't have thought it was worth the time and effort in the first place; 2) you can view this as a way to discover stuff that's worth deeper study, like skimming; 3) it takes less mental effort than reading; and 4) there are a lot of times when you can't read but can still listen, so it's not a tradeoff. There are also some interesting ways in which texts are more memorable when listening, because parts of the text get associated with the place you were in and/or the activity you were doing when you were listening to that part.
Compared to traditional audiobooks, there's the disadvantage that fiction seems to be harder to make sense of in text-to-speech form, but other than that, you get all the benefits of traditional audiobooks plus it's faster** and you can listen to anything.
* Whereas during the last year I've been getting to experience the new and pleasant phenomenon of actually getting to strike off entries from my to-read list pretty often.
** You can speed up the text-to-speech, and while you can also speed up traditional audiobooks, you can speed up the text-to-speech more because it's always the same voice instead of a different one for every book so you can get used to listening to it at higher and higher speeds - I currently do 344, 472 and 612 WPM for normal, fast, and extra-fast-but-still-comprehensible respectively (these numbers have been stable for about the past six months).
Replies from: Jabberslythe↑ comment by Jabberslythe · 2012-09-19T00:20:21.171Z · LW(p) · GW(p)
I don't have much of a problem listening to scholarly stuff; what problems did you have with it? Most of the books I've listened to recently probably count as scholarly. I don't think that my comprehension is lower than when I read conventionally; actually, I need to test that, though.
What method do you use for converting blogs? Have you found a way of converting a whole blog at once or do you convert articles individually?
Replies from: yli↑ comment by yli · 2012-09-19T07:54:29.933Z · LW(p) · GW(p)
Well, I had in mind how at one time or another I tried to listen to Inside Jokes, Folly of Fools, In Gods we Trust, Godel's Theorem: An Incomplete Guide to Its Use and Abuse, The Origin of Consciousness in the Breakdown of the Bicameral Mind and The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It, and quit each of them because it wasn't working: I was missing too much and wasn't enjoying it. Maybe "scholarly" isn't the best word I could have chosen to describe them, and maybe I was just doing it wrong and should have gone slower and concentrated better.
The result of a converted blog is this. I just have to write a new parser for every new blog, which usually takes maybe 15 minutes, and the rest is automated.
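For anyone wondering what such a pipeline might look like, here is a minimal sketch. The blog URLs, the CSS class, and the choice of espeak as the TTS engine are all illustrative assumptions; the actual scripts presumably differ:

```python
# Minimal sketch of a blog-to-audiobook pipeline: fetch posts, strip
# them down to plain text, then hand the text to a TTS engine.
# Requires: pip install requests beautifulsoup4, plus the espeak CLI.
import subprocess
import requests
from bs4 import BeautifulSoup

POST_URLS = [  # hypothetical archive; a real scraper would crawl these
    "http://example-blog.com/post-1",
    "http://example-blog.com/post-2",
]

for i, url in enumerate(POST_URLS):
    html = requests.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    # This selector is the per-blog "parser" part that has to be rewritten
    # for each new blog; "entry-content" is just a common default.
    body = soup.find("div", class_="entry-content")
    text = body.get_text(separator="\n") if body else ""
    txt_path = "post_%d.txt" % i
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(text)
    # espeak reads the text file and writes a WAV; -s is speed in words
    # per minute (the default is 175).
    subprocess.run(["espeak", "-f", txt_path, "-s", "344",
                    "-w", "post_%d.wav" % i], check=True)
```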
Replies from: Jabberslythe↑ comment by Jabberslythe · 2012-09-22T03:40:14.413Z · LW(p) · GW(p)
Wow, those are some awesome books, thanks. I listened to The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It at 3.8x speed and I think I understood it fine.
I'd love to be able to convert blogs... Can't find any service to do it for me.
comment by mstevens · 2012-09-19T15:32:12.043Z · LW(p) · GW(p)
Random idea: LW would benefit from some good feminism articles. Someone with more money than me (CFAR) should incentivise creation of them with a suitable prize.
Replies from: gwern, Manfred↑ comment by gwern · 2012-09-19T19:37:39.272Z · LW(p) · GW(p)
But on what? Question choice is hard. (My own suggestion would be something like: hold a contest on finding and summarizing all existing criticism of Baumeister's Is There Anything Good About Men?, and if there is no good criticism, come up with one's own; but that's just because I regard it as one of the more evidence-backed paradigms.)
Replies from: NancyLebovitz, mstevens↑ comment by NancyLebovitz · 2012-09-22T04:07:52.102Z · LW(p) · GW(p)
We could start with an overview of different kinds of feminism.
↑ comment by Manfred · 2012-09-29T05:29:41.338Z · LW(p) · GW(p)
Intrinsic motivation, pplz :D. Better than a prize might be paying someone to convince and help other people to want to write the article. Or simply doing that yourself, if you want to make it happen.
comment by David_Gerard · 2012-09-18T12:50:03.971Z · LW(p) · GW(p)
The inventor of the "Out of Africa" hypothesis says it's probably quite a bit more complicated than that. Transcription of an edge.org talk.
Replies from: Vaniver↑ comment by Vaniver · 2012-09-19T00:05:56.159Z · LW(p) · GW(p)
Primary takeaway: simple, visible theories are rarely completely correct, especially when they're formulated in response to primitive data.
Commentary: "species" is a wrong idea, and much of the start is uninteresting discussion of whether or not Neanderthals were a "separate species." Taboo "species", and everything becomes clear: they had a separate evolutionary history from the strain of Homo sapiens that originated in Africa, but they could and did interbreed with that strain, and so current humans have genes from at least one of the African, Neanderthal, and Denisovan varieties.
Replies from: endoselfcomment by mstevens · 2012-09-16T12:54:59.029Z · LW(p) · GW(p)
The Spirit Catches Lia Lee, RIP
"First published in 1997, Anne Fadiman's book The Spirit Catches You and You Fall Down, a chronicle of a Hmong refugee family's interactions with the American medical system in the face of a child's devastating illness, has become highly recommended, if not required, reading for many medical students and health care professionals, over the past 15 years quietly changing how young doctors approach patients from different cultures. On August 31, with little publicity, Lia Lee, the young girl who inspired the book, after living most of her life in a persistent vegetative state, quietly died [NYT obit]."
I thought this was an interesting article, I'd be interested to see what LW thought, especially anyone who's actually read the book.
Replies from: Nonecomment by David_Gerard · 2012-09-25T13:05:02.942Z · LW(p) · GW(p)
comment by NancyLebovitz · 2012-09-24T15:08:01.783Z · LW(p) · GW(p)
Movement toward taking a statistical approach to the quality of evidence from fingerprint similarity
Before you read that article, what was your opinion of fingerprint evidence? [pollid:74]
My first exposure to the idea that fingerprint evidence might not be all that good was in L. Neil Smith's The Probability Broach, in which the viewpoint character, a policeman, wonders whether all fingerprints really are unique, and also wonders whether the government might disappear people who had identical fingerprints.
(Surprisingly to me, the Salon article mentions that identical snowflakes have been found.)
My second exposure was an article in Lingua Franca, which brought up the more plausible issue that fingerprints from crime scenes are likely to be partial and/or fuzzy.
Replies from: saturn, Oscar_Cunningham, None↑ comment by Oscar_Cunningham · 2012-09-24T17:13:29.707Z · LW(p) · GW(p)
The article doesn't actually contain any data saying that fingerprints are unreliable. If I had to guess, I'd say that a (non-partial) fingerprint match has an odds ratio of around 10^7, a hefty 23 bits of info. Is there any data to contradict that? Or is this just the "but there's still a chance, right?" fallacy?
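(For the conversion: bits of evidence are just the base-2 logarithm of the odds ratio.)

```python
import math
print(math.log2(1e7))  # ~23.25 bits for an odds ratio of 10^7
```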
comment by NancyLebovitz · 2012-09-22T04:11:24.838Z · LW(p) · GW(p)
Gabriel Kolko on the New Deal
If this is accurate, history is more complicated and less dramatic than usually thought, as is commonly the case.
Replies from: Unnamed↑ comment by Unnamed · 2012-09-22T05:49:56.596Z · LW(p) · GW(p)
The article is very limited as a history of the Great Depression - it says very little about why the economy got worse and better and does not include key words like "deflation", "gold standard", "Federal Reserve", or "monetary".
One of the first things that Roosevelt did as President was to take the US off the gold standard (to put a stop to the deflation), and that was probably the most important thing that a country could do to deal with the Depression. See, for example, this graph, this section of a Wikipedia article, or this (much longer) Economic History Association article.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-09-22T13:31:53.424Z · LW(p) · GW(p)
Still, if Hoover's policies weren't all that different from Roosevelt's, and most of Roosevelt's didn't make all that much difference, that's quite different from the usual account.
Also, is it true, as claimed in the article, that WW2 got the US out of the Great Depression? This has never seemed plausible to me (get richer by doing a huge amount of non-productive work?), and wars don't usually seem to be great for the economy.
Replies from: Unnamed, tut, bogus↑ comment by Unnamed · 2012-09-22T19:31:18.156Z · LW(p) · GW(p)
The article is overstating the similarities between Roosevelt's policies and Hoover's. The Wikipedia article which I linked covers it pretty well. Some of the main components of Roosevelt's policies were:
- Ending the gold standard
- Regulating the banking/finance industry, including creating the FDIC (which put an end to bank runs) and the SEC
- Creating a safety net (e.g., Social Security and the precursor to food stamps)
- Expanding & creating more public works / job creation programs (Hoover had some, FDR employed as much as 7% of the workforce)
- Labor market regulations, including prohibiting child labor, creating a minimum wage, and establishing a 44-hour workweek
You could divide the consequences of policies into three categories: getting the country out of the Depression (improving the economy, or making it worse), getting the country through the Depression (coping with the bad economy), and making lasting changes to the government and society. Most of these policies did more for the latter two.
For getting out of the Depression, the most important things the government could do were 1) monetary policy, and 2) increasing total government spending and debt. Getting off the gold standard was the key step for monetary policy (there were also various missteps by the Federal Reserve, including many in the late 1920s and one in 1937). Hoover increased total government spending and debt somewhat and then stopped; FDR kept them relatively flat until WW2 broke out and then increased them massively.
↑ comment by bogus · 2012-09-22T14:06:51.331Z · LW(p) · GW(p)
Also, is it true, as claimed in the article, that WW2 got the US out of the Great Depression? This has never seemed plausible to me (get richer by doing a huge amount of non-productive work?), and wars don't usually seem to be great for the economy.
It's sort-of plausible, because the GD was due to a severe shortfall in overall nominal expenditure with respect to the prevailing level of prices and wages. The original cause of this shortfall was a series of mistakes in monetary policy; however, increased deficit spending in WW2 could have made up for it.
comment by Vaniver · 2012-09-19T21:26:38.608Z · LW(p) · GW(p)
In honor of our swanky new poll tech, help me determine which post I should write next. (By next, I mean "in parallel with a chapter-by-chapter review of Causality and my rationalist MLP fanfic," so no guarantees which gets done first.)
The main candidates are:
Rereading An Intuitive Explanation of Bayes' Theorem, I was struck by how uninformative the introduction was. Why do you want to learn Bayes? Because it's cool! It seems like a post explaining what mindset / worldview would find Bayes useful might be a good complement to that.
About a year ago JenniferRM suggested I give an overview of Operations Research as a field. (I'd call it the field of industrial rationality, and suspect that it would be a fertile place for high school / college age LWers to set their sights.)
It felt like this comment on Value of Information could be expanded to a full post, but it steadily moved down my queue of "things I want to write."
Focus any extra writing effort on your review of Causality.
Focus any extra writing effort on your MLP fanfic.
Something else. (Feel free to suggest something in a reply comment.)
[pollid:19]
Replies from: dbaupp, MixedNutscomment by Ritalin · 2012-12-18T10:25:51.727Z · LW(p) · GW(p)
"Religious issues" in hardware and software
Apparently, it's not just politics that is the mind-killer. When it comes to one's tool of choice, one can get as irrationally fanatical as it gets. From the Jargon Dictionary
“What is the best operating system (or editor, language, architecture, shell, mail reader, news reader)?”, “What about that Heinlein guy, eh?”, “What should we add to the new Jargon File?”
Great holy wars of the past have included ITS vs.: Unix, Unix vs.: VMS, BSD Unix vs.: System V, C vs.: Pascal, C vs.: FORTRAN, KDE vs. GNOME, vim vs. elvis, Linux vs. [Free|Net|Open]BSD. Hardy perennials include EMACS vs.: vi, my personal computer vs.: everyone else's personal computer, ad nauseam.
The characteristic that distinguishes holy wars from normal technical disputes is that in a holy war most of the participants spend their time trying to pass off personal value choices and cultural attachments as objective technical evaluations. This happens precisely because in a true holy war, the actual substantive differences between the sides are relatively minor.
A bigot: a person who is religiously attached to a particular computer, language, operating system, editor, or other tool (see religious issues). Usually found with a specifier; thus, Cray bigot, ITS bigot, APL bigot, VMS bigot, Berkeley bigot. Real bigots can be distinguished from mere partisans or zealots by the fact that they refuse to learn alternatives even when the march of time and/or technology is threatening to obsolete the favored tool. It is truly said “You can tell a bigot, but you can't tell him much.”
So, why's that? We're talking highly educated, intelligent, creative people here. People whom programming keeps honest, who have learned to dissect and take apart every notion and every argument into its simplest components, in order to be able to properly communicate with mindless machines. Maybe I'm being naïve here, but I'd have thought fanaticism, in these circles, would be extinct.
So why isn't it? Why does the sanity waterline seem so low, and how does one raise it?
(Also, compare with console wars)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-12-19T15:19:44.127Z · LW(p) · GW(p)
My impression is that holy wars about software and hardware aren't as common as they used to be. Is this correct?
comment by chaosmosis · 2012-09-30T23:55:42.375Z · LW(p) · GW(p)
Apparently, hammocks are really good to sleep on.
comment by David_Gerard · 2012-09-26T07:35:40.423Z · LW(p) · GW(p)
Alvin Plantinga is renowned as one of the finest philosophers and theologians that the theist world has to offer. Maarten Boudry (the guy who did the Sokaling I note a couple of posts down) absolutely obliterates the painfully awful bloviations on science and evolution to be found in Plantinga's latest book. PDF, p21 on. If you enjoy LessWrong, you will enjoy this example of a good philosopher skewering bad philosophy.
Related: I have finally written up the recent history of the phrase "sophisticated theology" as used by New Atheists and their fans. Needs more from people who were there at the time, though Jerry Coyne thought I did OK.
comment by ArisKatsaris · 2012-09-25T10:22:02.866Z · LW(p) · GW(p)
Just a few days ago it occurred to me that reductionism is the inevitable result of strictly local causality -- or putting it differently, any universe with a speed-limit in causality (in our universe this seems to be c) must by necessity be a reductionist universe.
The conclusion seems obvious but the connection between the two had never before occurred to me.
Replies from: pengvado↑ comment by pengvado · 2012-09-25T16:12:34.490Z · LW(p) · GW(p)
How so? In a local nonreductionist universe, wouldn't a fundamentally complicated thing just have to have zero width?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-09-25T18:35:52.560Z · LW(p) · GW(p)
Ah, I guess I was thinking of a different type of non-reductionism, where e.g. an organism gained an extra non-reductionist property "life" which caused all its parts to behave differently than if they lacked that property...
comment by [deleted] · 2012-09-24T17:27:02.373Z · LW(p) · GW(p)
Maybe this has been discussed here, but I wanted to see what you guys think of the surprise test paradox.
The paradox goes like this: it is impossible to give a surprise test.
Say a teacher tells her class on Monday that there will be a surprise test this week. The test cannot be on Friday, because when no test has been given by end-of-class Thursday, everyone will know that the test is on Friday, and so it won't be a surprise. The test cannot be on Thursday: having established that the test cannot be on Friday, if no test has been given by end-of-class on Wednesday, it will be obvious that the test is on Thursday, so no surprise. The test cannot be on Wednesday, nor on Tuesday, nor on Monday, for similar reasons. So no surprise test can be given.
Similarly, it is impossible to threaten someone thusly: "I'll get you when you least expect it!"
This is a paradox because revenge and surprise tests are, of course, perfectly possible. What's going on here?
Replies from: Manfred, ArisKatsaris↑ comment by Manfred · 2012-09-29T05:11:23.592Z · LW(p) · GW(p)
Mixed Nash equilibrium is going on here :P If you made this into a little game, with the students paying a cost to cram but getting a benefit if they crammed the night before the test, and the teacher wanting the minimum number of students to cram, you could figure out what the actual ideal strategies would be, and the teacher would indeed have a mixed strategy.
What the non-probabilistic (that is, deductive) reasoning really shows is that there is no way to always surprise a student. If you make things probabilistic, that means you're only claiming to surprise your students the most you can. This problem is weird because it demonstrates how in order to be surprising overall, sometimes you actually do have to choose bad options (at least when you're playing against perfect reasoners :D )! It's only by sometimes actually choosing Friday that the teacher can ever get students to not cram before class Thursday - the times the quiz is on Friday are sacrificed in order to be surprising the other times.
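As a toy version of that probabilistic point (a sketch assuming the teacher simply picks the day uniformly at random, and counting a student as "unsurprised" only when elimination has narrowed the test down to a single remaining day):

```python
# Toy model: the teacher picks one of 5 days uniformly at random. A
# student who updates by elimination is certain (hence unsurprised)
# only when a single candidate day remains, i.e. on Friday.
import random

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def surprised(test_day_index):
    # Before each day, the student only knows the test hasn't happened
    # yet; they are surprised unless exactly one day remains.
    remaining = len(DAYS) - test_day_index
    return remaining > 1

trials = 100_000
hits = sum(surprised(random.randrange(len(DAYS))) for _ in range(trials))
print(hits / trials)  # ~0.8: surprised four weeks out of five
```

The uniform strategy isn't necessarily the equilibrium of the cram-cost game described above, but it shows the structure: the weeks where the test lands on Friday are exactly the unsurprising ones, and they buy surprise the other 80% of the time.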
Replies from: None↑ comment by ArisKatsaris · 2012-09-25T10:56:52.683Z · LW(p) · GW(p)
It's an amusing paradox and I don't know what the standard solution to it is, but I resolve it easily in my mind by tabooing the word 'surprise' and replacing it with "suddenly obtaining certain knowledge of its date". Then it becomes more of a silly game of words.
Assuming the test is certain to take place (it's the law!), the students can't be surprised on Friday, but on Thursday they will either be "surprised" to find the test on Thursday or "surprised" to learn it must be on Friday. The Thursday (or Wednesday or Tuesday or Monday) surprise will therefore be genuine, though no student can be surprised on Friday itself. They can be "surprised" on Thursday about either a Thursday or a Friday test, though.
The day-by-day progression is a distraction, btw. The paradox can be replaced by having five cups, a ball under one of them, lifting them one by one in whatever order. If you've lifted three empty cups, you'll then either be "surprised" to locate the ball under the cup you lift next, or "surprised" to locate it definitively under the cup you haven't lifted. Both of these will be a surprise, a surprise occurring at the next-to-last cup.
Replies from: None, Kindly↑ comment by [deleted] · 2012-09-25T14:04:23.332Z · LW(p) · GW(p)
It's an amusing paradox and I don't know what the standard solution to it is
The 'no Friday, but any other day is fine' thing is the closest we have to a standard solution. The taboo game is a good idea, but it could also easily be misleading. The taboo solution works or doesn't work depending on what you replace 'surprise' with, so you have to argue for your replacement. (EDIT: this strikes me now as a general and serious problem with the taboo game.)
Here's an argument against your replacement. If I told you that there would be a test next year, on this day, at exactly 2:00, you would hardly call this a surprise, even though you'd just gained 'sudden and certain knowledge of its date'. On that definition, it would be impossible to not give a surprise test. This can't be what 'surprise' is supposed to mean, and even if it is, the paradox still makes impossible another kind of surprise test, one which teachers often take themselves to be able to announce.
A surprise test should probably be understood as a test given in such a way that you do not know, the night before (when you would study), that the test will be the next day. This is what the paradox makes problematic. The students can't be surprised, on Thursday, to locate the test on either Friday or Thursday, because they know the test won't be on Friday (since then they'd know about it the night before). So they know the test must be on Thursday (which they would have guessed the night before, so no surprise there either).
The paradox can be replaced by having five cups, a ball under one of them, lifting them one by one in whatever order.
You can reproduce the paradox with five cups, yes. But the conditions have to be narrower than you say: the cups have to be lifted in a specific order known to the lifter beforehand. The lifter has to be told that he will not know, before he at any stage lifts a cup to find the ball, whether or not the ball will be under that cup. So the player will know that the ball cannot be under the last cup (since then he would know before he lifts it that it is there), and given that, it cannot be under the second-to-last cup either, and so on.
↑ comment by Kindly · 2012-09-25T12:05:58.397Z · LW(p) · GW(p)
Furthermore, if you take the "information content" approach to surprise, then you would be more surprised by a test on Monday than by a test on Thursday; but this is made up for by the fact that on Monday, Tuesday, and Wednesday, you were very slightly surprised that there wasn't a test. Total surprise is conserved.
comment by James_Miller · 2012-09-15T16:44:55.439Z · LW(p) · GW(p)
My 7-year-old son likes computer programming and I suspect has a lot of innate aptitude for it. We have worked out a system that for every X minutes of learning he does with me he gets X minutes of computer gaming time. What kind of learning exercises could help him be a better programmer when he becomes an adult? Should I focus on him doing lots of coding or, for example, would he be better served by learning additional math?
For those of you who are adult computer programmers (and please identify yourself as such in your response) what, if anything, could you have done at a young age that you think would have caused you to now be a better programmer?
Replies from: None, eurg, fubarobfusco, Risto_Saarelma, Anatoly_Vorobey, lsparrish↑ comment by [deleted] · 2012-09-15T21:19:47.091Z · LW(p) · GW(p)
I'm a hobbyist computer programmer considering a career in it.
When I was 6, I met a friend who was into star-trek and science and such. We used to talk about science stuff and dig up "dinosaurs" and attempt to build spaceships. I think a lot of my intellectual personality came from being socialized by him. The rest came from my dad, who used to teach me things about electricity and physics and microeconomics (expected value and whatnot).
I learned to program when someone introduced it to me and I realized I could make video games (I was 18). I absorbed a lot of knowledge quickly, and didn't get much done. I would find some little problem, and then go and absorb all the relevant knowledge I could to get it exactly right. Even though I didn't accomplish much, I now know a lot about computer science, which is helpful. Having some thing I was trying to do put a powerful drive behind my learning, even if I didn't actually act in a strategic or effective way.
My dad occasionally told me the importance of finishing the last project before beginning the next, but I don't think it properly transferred. I still have lots of trouble shipping.
One thing that bit me a lot was regressing into deep tech wizardry instead of focusing on the end product. Architecture and tech are a lot more interesting than the mere surface of the product, but what matters is getting the thing done. The tech side is the dark side. Lots of interesting insight on this at prog21.dadgum.com (a blog).
I wish I'd encountered Paul Graham's essays a lot earlier. They have a lot of good advice about programming and growing up in general. I caught a lot of my ambition from there. The wider hacker news community is great too. Really inspiring and helpful community.
Overall (TL;DR): Teach him to ship. This is important. Start with trivially small things if you have to, but make sure stuff gets to version 0.1 (releasable) at least. Do what you can to encourage friendships with other smart kids who are into science. Find something he's interested in and wants to build, or the rest of it will be directionless. Focus on the creative output, not on the tech. Get him reading interesting and inspiring battle-wisdom like Paul Graham and prog21. Don't make him hate it.
Replies from: datadataeverywhere↑ comment by datadataeverywhere · 2012-09-16T03:08:03.951Z · LW(p) · GW(p)
Of all my flaws, I currently consider my bias to thought (and study, research, etc.) over action my greatest. I suspect that LessWrong attracts many (a disproportionate number of) such people.
↑ comment by eurg · 2012-09-17T12:28:38.976Z · LW(p) · GW(p)
I am a software developer, and have looked over many similar questions. To summarize: there are enormous individual differences in how one can become a better programmer, and even more so in the opinions on it. It is not even easy to agree on what basic skills should be there at the "end" (i.e., the beginning, after your first two years of real experience), much less on how to get those skills.
That said, most commonplace advice is valid here:
- there is with high probability no really significant innate aptitude for programming (intelligence has a carry-over into many domains, however) (edit: it seems I am wrong about that; see "Has 'Not everyone can be a programmer' been studied?" at Programmers.StackExchange)
- don't expect too much skill carry-over: mathematical thinking and programming are related, but different activities
- being able to concentrate for some time is good (a.k.a.: conscientiousness helps)
- as a kid, liking the task probably also helps
- playing the trumpet - not that much (though it can be fun)
On skill-set:
- The practical coding skills are always important. (Except if you want to earn more than a dev.)
- Algorithmic knowledge is rarely helpful, but you will stumble at least once if you don't have it (but nobody will notice).
- Mathematics is necessary only in very small areas.
What should I have done? Training self-discipline would have helped (if something like that is possible). Knowing/Having somebody who is actually significantly better than me, and pushing me, would have helped (fuckarounditis on "learning" is too easy to do).
My opinion: in professional software development there is very little time to broaden your knowledge -- it goes into obscure platform and domain knowledge quite quickly, and jobs where you do what is actually interesting are in very short supply. However, exactly this broader knowledge is what can keep you afloat even if you happen to be a neurotic, unreliable, slow worker*. So: let him do what is most interesting, but gently guide him to do it just a bit better, and a little more diversely, than he would otherwise. About interest: many people say kids usually like somewhat quick results, and things that crawl. Maybe?
Preparatory learning exercises: Sorry, no opinion or idea here. I always liked to take things apart, and sometimes also to put them together (as in Lego Technic models, or -- almost always unsuccessful -- attempts to construct my own). Actually, I think constructing Lego models (or of course anything that is "more real") is great. But then there is the initial caveat about individual differences.
*: I might exaggerate a bit, but not too much.
↑ comment by fubarobfusco · 2012-09-15T18:02:54.301Z · LW(p) · GW(p)
One thing I have observed about people who are better programmers than me is that many play musical instruments. (I don't.) I have no idea what sort of causal structure is involved here.
Learn off-sequence math: logic puzzles, geometric constructions, number theory (primes, modular arithmetic) -- topics that are not represented in the standard school math sequence. (See the small sketch after these suggestions.)
Learn something of electronics, digital logic, etc.
Use different languages: maybe Python and Pygame for animations and games, and HTML and JavaScript for web pages. Don't allow the idea to develop that there is one "normal" way for code to look, e.g. "normal languages have curly braces; ones that don't are gratuitously weird."
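As a small sketch of the number-theory suggestion above (the examples and numbers are my own arbitrary picks, assuming Python as the language), this sort of "off-sequence" math fits in a few lines:

```python
# A naive primality test plus "clock arithmetic" -- two off-sequence
# math toys that a kid can run, tweak, and rerun immediately.
def is_prime(n):
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

print([n for n in range(2, 30) if is_prime(n)])  # primes below 30
print((9 + 5) % 12)  # 9 o'clock plus 5 hours on a 12-hour clock -> 2
```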
↑ comment by Risto_Saarelma · 2012-09-17T13:36:03.661Z · LW(p) · GW(p)
I'm an adult professional programmer. The gaming machine of choice when I was a kid was C-64, not NES, so I knew programming your own stuff was a thing at elementary school age. The obvious interesting thing with the computer was game programming. The (generally correct back then) received wisdom also was that any serious game programming effort with the 8-bit home computers involved assembly languages, but I wasn't able to find any manuals or software I could understand to teach myself assembly, so I never ended up learning it.
A few years later the PC was the top gaming box, and assembly had given way to C (and Turbo Pascal, which was a big thing on DOS back then for some reason). I managed to learn C after a false start or two, and happily went off coding many silly things with it that I never finished. So there's not much of a lesson here: not learning assembly at age 10 didn't seem to slow me down much later, and it also turned out that I never did need to learn to program 8-bit microcomputers right down to the metal to make games. Still, I'm sure it would've been interesting to learn.
The big thing that made me learn programming was the intrinsic drive to want to program computer games. Any sort of achievement-endorsing authority figures were generally entirely indifferent to this.
The hard things in programming for me are also ones which I suspect are inherently boring. Working with legacy codebases, working on uninteresting features that are needed for business reasons, finishing the countless little things like documentation, ports and packaging so that something can be released, and in general going from the 80 % solution to the 100 % one. I guess I could echo nyan_sandwich's admonition to teach shipping here.
I did waste some time writing unnecessarily bad C++ before I read Stroustrup's book on the language, so I guess hitting the books early could help. I also crashed, burned, and never recovered enough to have any degree of academic success with university-level maths after basically coasting all the way through high school, but as far as I can tell this hasn't had much of any impact on my programming skill. It did keep me from ending up in any kind of grad school, which I had thought I would do when going to university.
So I guess my first choice of an experiment for 10-year-old me would be to hook him up to the present-day internet and teach him to download lots of textbooks on programming. I might worry a bit about a parental authority figure pushing the stuff extremely hard, since that might kill off the intrinsic drive which has been pretty important for me, but having some sort of mentor who knows the stuff and who you can bounce things off might also have sped things up a lot back then.
I do miss never learning how to make tracker music and never getting into the demoscene as a teenager. Tracker music is kinda programmy and could exercise the hypothetical music-and-programming skill connection. The demoscene is a competitive programmatic-art scene that will teach someone a lot about how to make a computer do tricky things if they get into it. It seems to be pretty much a European phenomenon, for some reason, and also a dying one, since fewer people who know about computers nowadays consider making them display intricate moving colors and sounds to be an impressive achievement.
Should I focus on him doing lots of coding or, for example, would he be better served by learning additional math?
IMO, definitely lots of coding. Lots and lots of coding. There's a big aspect of craftsmanship in being a good programmer, and at least for now, solid craftsmanship and weak theory seems to beat solid theory and weak craftsmanship most of the time.
↑ comment by Anatoly_Vorobey · 2012-09-16T11:41:57.425Z · LW(p) · GW(p)
I'm a computer programmer who started at age 12. At that time, by far the coolest thing about programming for me was being able to write my own game. I was also motivated by math and algorithms, but more weakly, at least initially.
I would recommend balancing two things. First, settle on a simple framework that lets you do simple graphics in a simple language with minimum boilerplate. Maybe pygame (never tried it), or processing.js (good because it works right there in your browser), or some learning-oriented language (when I was a kid, Logo was popular, though I quickly moved on to BASIC; is there an analog of Logo for the 2010s?). Show him how he can draw something with three lines of code, get him fascinated, move the goalposts, iterate.
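For illustration, here is a minimal sketch of that first few-lines taste (assuming Pygame; the window size, colors, and shape are arbitrary placeholders, not recommendations):

```python
# Minimal Pygame sketch: open a window and draw one red circle.
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))             # a small window
screen.fill((0, 0, 0))                                   # black background
pygame.draw.circle(screen, (255, 0, 0), (200, 150), 50)  # red circle, radius 50
pygame.display.flip()                                    # show the frame

# Keep the window open until it is closed.
while pygame.event.wait().type != pygame.QUIT:
    pass
pygame.quit()
```

The point is the feedback loop: change a number, rerun, and watch the picture change.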
Second, more math, but I would especially advise math puzzles of all kinds: tricks with numbers, geometric constructions, logical puzzles. It's great if you can find material of this kind for your son that he'll want to consume on his own, and not just because of the time-sharing scheme. Martin Gardner's books may be useful, though probably a bit later on; and other material of that kind.
Note that I haven't answered your last question, because while the things I listed were helpful to me, I did do them, and I don't know what other, different things would've made me a better programmer. I don't even know if starting at age 7 would've done so; I suspect it's true, but I don't have a solid argument.
↑ comment by lsparrish · 2012-09-22T17:55:29.675Z · LW(p) · GW(p)
You might try getting him a shell account and having him learn to use tmux (or screen) and vim (or emacs).
comment by Slackson · 2012-09-15T06:13:09.872Z · LW(p) · GW(p)
I'm thinking of starting a meetup group in Auckland, NZ, but would like to gauge the interest in such a group first. I know I'm supposed to just plan a meetup, but from a search of the site it looks like Auckland groups have failed to get momentum in the past, so I'd like to arrange for a time and place such that at least a couple of people can definitely attend.
Reply if you're interested, and then we'll sort something out.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-09-15T09:33:57.052Z · LW(p) · GW(p)
I think gauging interest for meetups is usually presented as a discussion post so that it's more likely to be noticed, but I'm not sure I'm right about that. Anyone else notice a pattern?
Replies from: Slackson↑ comment by Slackson · 2012-09-15T09:56:22.823Z · LW(p) · GW(p)
Thanks. I thought that might be the case, but I wasn't sure either. If no one has expressed interest here by Sunday 6pm local time I'll make a discussion post, and if I don't get any replies to that in a day or two then I'll just add a meetup.
comment by lsparrish · 2012-09-28T23:43:25.050Z · LW(p) · GW(p)
Most economists seem to agree that the costs of voting outweigh the benefits. I've been considering whether it might be worth trying to use the threat of voting to get people to do something costly (from their perspective) which serves my values. Here is how I would picture it working:
- I commit to vote for a candidate if my conditions are not met; contrariwise, I commit to abstain if they are.
- People who dislike that candidate will want me to abstain rather than casting my vote.
- Require something which seems non-selfish so as to avoid sacred value dissonance.
So for example, I could say: "I will vote for Obama iff I don't get 10 commitments to switch to using Linux instead of Windows."
I could announce this on Facebook or something, then track responses in a Google Docs spreadsheet and vote accordingly.
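A minimal sketch of the decision rule itself (assuming, hypothetically, that the spreadsheet responses get exported as a plain text file with one name per line; the filename and threshold are placeholders):

```python
# Tally public commitments and apply the stated rule:
# vote iff fewer than THRESHOLD people committed to switching.
THRESHOLD = 10

def decide(commitments_file="commitments.txt"):
    with open(commitments_file) as f:
        names = [line.strip() for line in f if line.strip()]
    if len(names) >= THRESHOLD:
        return "abstain"             # conditions met: the threat is withdrawn
    return "vote for the candidate"  # conditions unmet: carry out the commitment

print(decide())
```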
Good idea?
comment by Incorrect · 2012-09-15T06:37:27.088Z · LW(p) · GW(p)
How does ambient decision theory work with PA, which has a single standard model?
It looks for statements of the form Myself()=C => Universe()=U
(Myself()=C) and (Universe()=U) should each have no free variables. This means that within a single model, their values are constant. Thus such an implication establishes no relationship between your action and the universe's utility; it is simply a boolean function of those two constant values.
What am I missing?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-09-15T10:04:08.488Z · LW(p) · GW(p)
The problem is that the agent doesn't know what Myself() evaluates to, so it's not capable of finding an explicitly specified function whose domain is a one-point set with single element Myself() and whose value on that element is Universe(). This function exists, but the agent can't construct it in an explicit enough form to use in decision-making. Let's work with the graph of this function, which can be seen as a subset of NxN and includes a single point (Myself(), Universe()).
Instead, the agent works with an extension of that function to the domain that includes all possible actions, and not just the actual one. The graph of this extension includes a point (A, U) for each statement of the form [Myself()=C => Universe()=U] that the agent managed to prove, where A and U are explicit constants. This graph, if collected for all possible actions, is guaranteed to contain the impossible-to-locate point (Myself(), Universe()), but also contains other points. The bigger graph can then be used as a tool for the study of the elusive (Myself(), Universe()), as the graph is in a known relationship with that point, and unlike that point it's available in a sufficiently explicit form (so you can take its argmax and actually act on it).
(Finding other methods of studying (Myself(), Universe()) seems to be an important problem.)
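For concreteness, here is a deliberately toy sketch of the shape of this procedure. The "prover" is faked by evaluating a known payoff table, standing in for a search for proofs of [Myself()=A => Universe()=U]; the actions and payoffs are illustrative assumptions, not part of the theory:

```python
# Toy "ambient" decision procedure: collect a point (A, U) for each
# possible action, then act on the argmax of the resulting graph.
ACTIONS = ["one-box", "two-box"]
PAYOFFS = {"one-box": 1_000_000, "two-box": 1_000}  # stand-in world model

def prove_utility(action):
    # Placeholder for proving [Myself()=action => Universe()=U];
    # here it just looks up the modeled outcome.
    return PAYOFFS[action]

def decide():
    graph = [(a, prove_utility(a)) for a in ACTIONS]  # the extended graph
    best_action, _ = max(graph, key=lambda point: point[1])
    return best_action

print(decide())  # -> "one-box"
```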
Replies from: Incorrect↑ comment by Incorrect · 2012-09-15T17:14:30.400Z · LW(p) · GW(p)
I think I have a better understanding now.
For every statement S and for every action A except the one Myself() actually returns, PA will contain a theorem of the form (Myself()=A) => S, because a falsehood implies anything. Unless Myself() doesn't halt, in which case the value of Myself() can be undecidable in PA and Myself's theorem prover won't find anything, consistent with the fact that Myself() doesn't halt.
I will assume Myself() is also filtering theorems by making sure Universe() has some minimum utility in the consequent.
If Myself() halts, then if the first theorem it finds has a false consequent, PA would be inconsistent (because Myself() will return A, proving the antecedent true, and hence the consequent true). I guess if that were going to happen, then Myself() would be undecidable in PA.
If Myself() halts and the first theorem it finds has a true consequent then all is good with the world and we successfully made a good decision.
Whether or not ambient decision theory works on a particular problem seems to depend on the ordering of theorems it looks at. I don't see any reason to expect this ordering to be favorable.
comment by listic · 2012-09-28T23:39:23.271Z · LW(p) · GW(p)
After reading the comments on a certain work of fiction published here, I was surprised to discover a strong negative reaction to the protagonist's smoking. (Personally, I found this part a nice touch as far as its literary qualities go.) Thus, for my fiction-writing purposes, I would like to know:
What is the general stance on drug usage in fiction?
Smoking marijuana
Using psychedelic substances
Using unidentified or fictional substances of a supposedly psychoactive nature, with their full effects or social status left unmentioned.
↑ comment by fubarobfusco · 2012-09-29T18:27:29.605Z · LW(p) · GW(p)
I do not think there is a "general stance" on drug usage. Further, tobacco is not a general drug (there's no such thing) but a specific well-known drug with a specific history. I would not expect reactions to tobacco smoking in fiction to be reflections of general stances on drug use.
What I read in the reaction to Hanna's smoking is specifically that, today and to some readers, smoking seems ① self-destructive and ② odious; and furthermore ③ signals unsympathetic or disfavored character tropes.
First, because we know so well today that tobacco smoking causes such a wide range of deadly diseases, an educated character in present-day or near-future fiction can be assumed to know this, too. She has willingly taken up a habit which causes cancer, emphysema, heart disease, etc. Many readers will have had relatives who died due to smoking-related diseases. So smoking may be seen as a symbol of a character's self-destructiveness. (An example that comes to mind is John Constantine in the Swamp Thing and Hellblazer comics.)
Second, smoking in public is often seen as odious — or, at least, insensitive to others' well-being. People from social backgrounds where smoking is uncommon may, due purely to a selection effect, associate smoking with people who do not care that they are being unpleasant to others. So smoking may be a symbol of a character's rudeness, hostility, or willful repulsiveness — a choice to "gross out" others to assert personal space, for instance.
(Also, many readers may have sensitivity to smoke, asthma, allergies, or simply a strong dislike for being around smoking or people who smell of smoke. So they may read a character smoking and think, "I would not want to be around her," and through mind projection fallacy conclude "she is unpleasant to be around.")
Third, smoking may be culturally associated with various sorts of characters (or real-world people) whom the readers disfavor — beatnik-type hipsters; rednecks; French philosophers; street criminals; anxious New York literary types; good ol' boys; the self-consciously retro sort of conservative; or for that matter the characters of Ayn Rand. Writers often intend a character to fit particular tropes, and the tropes that smoking brings to mind may be ones the reader finds unsympathetic.
I would not expect most of the above to apply to most other sorts of drug use.
comment by Epiphany · 2012-09-28T21:51:04.884Z · LW(p) · GW(p)
New post idea:
"Female Test Subject - Convince Me To Get Cryo"
(Offering myself for experimentation, of course.)
What do you think? Should I post it? [pollid:110]
Replies from: lsparrish, listic↑ comment by listic · 2012-09-28T23:45:23.316Z · LW(p) · GW(p)
What's the point?
Replies from: Epiphany↑ comment by Epiphany · 2012-09-29T00:42:49.514Z · LW(p) · GW(p)
To practice on me before something happens to your female family members and you've got to convince them...
Friendly hint: you just implied my life isn't worth saving. I am not easily offended and I'm not hurt, so that's just FYI.
Replies from: saturn↑ comment by saturn · 2012-09-29T01:17:46.146Z · LW(p) · GW(p)
To practice on me before something happens to your female family members and you've got to convince them...
Are you such a Platonically ideal female that we can generalize from you to other females, who may have expressed no interest in cryonics?
Friendly hint: you just implied my life isn't worth saving. I am not easily offended and I'm not hurt, so that's just FYI.
If you see it that way, it sounds like you're already very nearly convinced.
Replies from: Epiphany, Alicorn↑ comment by Epiphany · 2012-09-29T02:02:48.691Z · LW(p) · GW(p)
Are you such a Platonically ideal female that we can generalize from you to other females, who may have expressed no interest in cryonics?
Of course not; that's an assumed "no". I guess what you're really asking is "What is the point of seeing whether we can convince you to sign up for cryo?" Sometimes case studies are helpful for figuring out what's going on. Study results are more practically useful, but let's not forget how we develop the questions for a study: by observing life. If you've ever felt uncomfortable about persuading someone of something, or probing into their motivations, you can see why being invited to do so would be an opportunity to try things you normally wouldn't, and to explore my objections in ways you might normally keep off-limits.
Even if most of my objections are different from the ones other people have, discovering even a few new objections and coming up with even a few new arguments that work on others would be worthwhile if you intend to convince other people in the future, no?
If you see it that way, it sounds like you're already very nearly convinced.
Alicorn is right. It's not that I am convinced or not convinced; it's that I'm capable of interpreting it the way you might have meant it. For the record, where I'm at right now is that I'm not convinced it's a good way to save my life (being the only way does not make it a good way), and I'm not 100% convinced that it's better than donating to a life-saving charity.
Replies from: saturn↑ comment by saturn · 2012-09-29T02:36:40.071Z · LW(p) · GW(p)
I'm trying to say that I think you might already be a pretty extreme outlier in your opinion of cryonics, based on a few clues I noticed in your comment, so your reactions may not generalize much. The median reaction to cryonics seems to be disgust and anger, rather than just not being convinced. I'm sort of on the fence about it myself, although I will try to refute bad cryonics-related arguments when I see them, so on object-level grounds I can't really say whether convincing you or learning how to convince people in general is a good idea or not.
Replies from: Epiphany↑ comment by Epiphany · 2012-09-29T03:44:46.302Z · LW(p) · GW(p)
Disgust and anger, that's interesting. I wonder if that might be due to them feeling it's unfair that some people might survive when everyone else has died, or seeing it as some kind of insult to their religion, like trying to evade hell (with the implication that you won't be motivated enough to avoid sinning, for instance). If that's the case, you're probably right that my current reaction is different from the ones that others would have.

My initial reaction was pretty similar, though. My introduction to cryo was in a cartoon as a child: the bad guys were freezing themselves and using the blood of children to live forever. I felt it was terrifying and horribly unfair that the bad guys could live forever, and creepy that there were so many frozen dead bodies. I didn't think about getting it myself until I met someone who had signed up. My reaction was "Oh, you can actually do that? I had no idea." It felt weird because it seemed strange to believe that freezing yourself is going to save your life (I didn't think technology was that far along yet), but I'm OK with entertaining weird ideas, so I was pretty neutral. I thought about whether I should do it, but I wasn't in a financial position to take on new bills at the time, so I stored that knowledge for later.

Then, when I joined LessWrong, I began seeing mentions of cryo all over. I had the strong sense that it would be wrong to spend so much on a small chance of saving my own life when others are currently dying, but that was countered pretty decently by one of the posts linked to above. Now I'm discovering cached religious thoughts (I thought I removed them all; these are so insidious!) and am wondering if I will wake up as some sort of miserable medical Frankenstein.
I can't tell you whether it's worth it to convince me or learn to convince people, either. I'm not even sure it's worth signing up, after all. (:
↑ comment by Alicorn · 2012-09-29T01:19:53.275Z · LW(p) · GW(p)
Friendly hint: you just implied my life isn't worth saving. I am not easily offended and I'm not hurt, so that's just FYI.
If you see it that way, it sounds like you're already very nearly convinced.
She could know that you see it that way without seeing it that way herself. If I knew someone who believed that I would definitely go to hell unless I converted to their religion, and they didn't seem to care if I did that or not, I might characterize that as them not thinking my soul was worth saving.
Replies from: saturn↑ comment by saturn · 2012-09-29T02:14:54.684Z · LW(p) · GW(p)
Yeah, that's true. But still, if "they don't think my soul is worth saving" is more salient to you than, for instance, "I'm glad I won't have to deal with their proselytizing," it suggests that you take the idea of souls and hell at least a little bit seriously.
To give a more straightforward example, imagine a police officer asking someone whether they have any contraband. The person replies, "No, officer, I don't have any weed in my pocket." How would that affect your belief about what's in their pocket?
comment by AandNot-A · 2012-09-19T02:39:28.617Z · LW(p) · GW(p)
On HPMoR: Harry seems to be very manipulative, but almost in a textbook kind of way. I take it Eliezer got this from somewhere, but I cannot figure out where. I'd love to read more about this; could it have come from "The Strategy of Conflict"?
Cheers
Replies from: Nornagest↑ comment by Nornagest · 2012-09-19T02:52:53.079Z · LW(p) · GW(p)
I haven't read Strategy of Conflict, but I have read Robert Cialdini's book Influence: The Psychology of Persuasion, which Harry name-drops a couple of times and uses several techniques from. I'd guess that that's some of what you're seeing.
For future reference, though, it's considered polite to confine free-floating HPMoR discussion to the Methods threads, the most recent of which appears to be here. There have been a few Methods-related threads since, but all with narrower scope.
Replies from: AandNot-A
comment by [deleted] · 2012-09-15T14:52:16.829Z · LW(p) · GW(p)
A talk on The Importance of Mathematics by Timothy Gowers