Note how all the exodus is to places where people own their particular space and have substantial control over what's happening there. Personal blogs, tumblrs, etc. Not, say, subreddits or a new shinier group blog.
Posting on LW involves a sink-or-swim feeling: will it be liked or disliked? Upvoted or downvoted? Will it draw many comments, tepid comments, no comments? In addition, you feel that your post stakes a claim on everybody's attention, so you inevitably imagine it being compared to other people's posts. After all, when you read the Discussion page, you frequently go "meh, could've done without that one", so you imagine other people thinking the same about your post, and that pre-discourages you. Finally, a few years' worth of status games and signalling in the comments have bred, to some degree, a culture of ruthlessness and sea-lawyering.
So, these are the three worries: fretting about reactions; fretting about being compared with other posts; fretting about mean or exhausting comments. One way to deal with them is to move to an ostensibly less demanding environment. So you post to Discussion, but then everyone starts doing that, Main languishes, and the problem recurs on Discussion. So you post to open threads, but then Discussion languishes, open threads balloon and become unpleasant to scan, and the problem recurs, to a lesser degree, on them too. But if you go off to a tumblr or a personal blog or your Facebook: the 2nd problem disappears; the 3rd is manageable through blocking or social pressure from the owner (you); the 1st remains but is much less acute because there are no downvotes.
It's useless to say "just don't fret, post on LW anyway". The useful questions are "why didn't this happen in the first 4-5 years of the site?" and "assuming we want this reversed, how?" For the first question: as the site was growing, the enthusiasm for this exciting community and the desire to count your voice among its voices overrode those feelings of discomfort. But after a few years things changed. Many regulars established lateral links. The site feels settled in, with an established pecking order of sorts (like the top karma lists; these were always a bad idea, but they just didn't matter much at first). There's no longer a feeling of "what I'll post will help make LW into what it'll be". And there's a huge established backlog that feels formidable to build on, especially since nobody's read it all. So the motivation lessened while the dis-motivation stayed as it was.
How to fix this? I think platformizing LW might work well. Everybody prefers their own space, so give everybody their own space on the common platform. Every user gets a personal blog (e.g. vaniver.lesswrong.com) on the same platform (reddit code under the hood). The global list of users is the same. Everybody gets to pick their reading list (tumblr-style) and have their custom view of new posts. There's also RSS for reading from outside of course. Blog owners are able to ban users from their particular blog, or disallow downvotes.
Then bring back Main as a special blog to which anyone can nominate a post from a personal blog, and up/downvotes determine pecking order, with temporal damping (HN style). Would also be cool to have a Links view to which everyone can nominate links from other rationality blogs and LWers can discuss.
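For what it's worth, "temporal damping (HN style)" is usually described by a score formula along these lines. This is the commonly cited approximation of Hacker News's ranking, not something specified in the proposal above, so treat the exact numbers as an assumption:

```python
def rank_score(votes: int, age_hours: float, gravity: float = 1.8) -> float:
    # Net votes buy rank, but the score decays polynomially with age,
    # so fresh posts get a window at the top before sinking.
    return (votes - 1) / (age_hours + 2) ** gravity
```

So a day-old post needs many more votes than a fresh one to hold the same spot, which would keep Main from being frozen by old highly-voted posts.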
(I realize that this would require nontrivial programming work, and have a good understanding of how much of it would be required. That isn't an insurmountable challenge.)
It's a bad idea. Don't do it. You'll be turned off by all the low-level grudgery and it'll distract you from the real content.
Most of the time, you'll know if you found a solid proof or not. Those times you're not sure, just post a question on math.stackexchange, they're super-helpful.
The fifth axiom is the only one which requires some effort to understand. Intuitively, it states that parallel lines do not intersect.
No. This is bad and you should feel bad. Parallel lines do not intersect, and the fifth postulate has nothing to do with it. What do you imagine the definition of "parallel lines" is?
Parallel lines do not intersect by definition, in any geometry, Euclidean or non-Euclidean. The parallels postulate talks about something completely different.
I just want to note here that Johnstone's book is amazing and I'm grateful to you for introducing me to it.
"Herzelia, Israel" should now be "Tel Aviv, Israel", as the location has changed. The link to the FB event stays the same. Thanks!
The entire point of that whole battle is to encourage Harry to commit his hidden resources (Lesath under the Cloak). The whole brawl is basically a show put on for Harry's benefit. Since Quirrell controls the time of Harry's coming to the scene, he could easily take out Snape himself and move him out of the way earlier. He didn't need to bring Sprout or manipulate others to come.
Since Quirrell neglected to ask Harry in Parseltongue whether he still has hidden resources Quirrell doesn't know about, it's still just about possible that Cedric Diggory, Time-Turned, is following them under the second Cloak. I hope he does.
Seems a bit strange that Quirrell didn't ask Harry to confirm in Parseltongue that Harry didn't have any contingency measures beyond those Quirrell already knows about (Lesath under the cloak). In the last chapter's discussion there were theories that Cedric Diggory might be around, time-turned with Harry. Even if not, why wouldn't Q make sure H doesn't have anyone else around to help or set up any other measures? Harry's promise "shall call for no help" isn't enough, if things are already set in motion for someone to help him.
Tel Aviv has had regularly scheduled meetings for a while now, please add it to the regular list.
If you're offended by any word in any language, it’s probably because your parents were unfit to raise a child. They were too stupid. They should have been neutered. Because all it is is a sound you can make with your mouth. It’s not a weakness that you have naturally. When you come out of that pink ugly hole onto this planet, you're nothing but a gooey, shrinking, wrinkled ball of weakness. That’s all you are: you're weak, you're nothing but weak, and your parents look at that, and they think: “Not weak enough. We can make this thing even weaker by training it to react poorly to different sounds that you can make with your mouth.”
-- Doug Stanhope
I do not see how this suggestion could be positively refuted. It enjoys a status well known in academic circles and doubtless elsewhere,—that of the Remotely Conceivable Alternative, contrary to the obvious implication of the facts, incapable of proof or disproof.
-- Denys L. Page (1908-1978), History and the Homeric Iliad (Berkeley: University of California Press, 1966), p. 57
[retracted]
There is no such thing as "the shortest program for which the halting property is uncomputable". That property is trivially computable for every individual program. What's uncomputable is always the halting problem for an infinity of programs decided by one common algorithm.
It is also easy to make up artificial languages in which Kolmogorov complexity is computable for an infinite subset of all possible strings.
You were probably thinking of something else: that there exists a constant L, which depends on the language and a proof system T, such that it's not possible to prove in T that any string has Kolmogorov complexity larger than L. That is true. In particular, this means that there's a limit to lower bounds we can establish, although we don't know what that limit is.
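To spell out the first point with a toy sketch of my own (not something from the parent comment): for any single, fixed program P, "does P halt?" is one yes/no fact, so one of the two constant functions below is a correct halting decider for it. We may have no idea which one, but computability only requires that some correct algorithm exists:

```python
def decider_yes(program_source: str) -> bool:
    # Correct halting decider for P if P in fact halts.
    return True

def decider_no(program_source: str) -> bool:
    # Correct halting decider for P if P in fact runs forever.
    return False

# For any one fixed program, one of these two trivial algorithms computes
# the halting answer. What's uncomputable is a single algorithm that is
# correct for every program at once.
```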
(I doubt this is original)
To help yourself do something regularly on a computer, put a direct link on your bookmarks bar.
Example: you want to keep a diary. Write it in a Google doc document and put the direct link to it in your bookmarks bar, so that one click is all it takes to open it. Not in your bookmarks somewhere else, not in a shortcut on the desktop (your browser is open all the time anyway), not a separate fancy diary-keeping software, but just one click in a place that's always in front of you. If you're like me, that'll help a lot.
Perhaps counterintuitively, the difference in results is huge even between "one click in the bookmarks bar" and "one click to open a folder in the bookmarks bar plus one click on the right link in that folder".
P.S. Alerts are also good, but this method helps where alerts aren't. Say you want to train yourself to write a quick review of every book once you finish it. There's no way to set an alert to go off when you turn the last page on a Kindle, but you can put a link on your bookmarks bar.
Is there a causal link between being relatively lonely and isolated during school years and (higher chance of) ending up a more intelligent, less shallow, more successful adult?
Imagine that you have a pre-school child who has socialization problems: the kid finds it difficult to do anything in a group of other kids, to make friends, and so on, but cognitively the kid's fine. If nothing changes, the kid is looking at being shunned or mocked as weird throughout school. You work hard on overcoming the social issues: maybe you go with the kid to a therapist, you arrange play-dates, you play-act social scenarios with them, and so on.
Then your friend comes up to have a heart-to-heart talk with you. Look, your friend says. You were a nerd at school. I was a nerd at school. We each had one or two friends at best and never hung out with popular kids. We were never part of any crowd. Instead we read books under our desks during lessons and read SF novels during the breaks and read science encyclopedias during dinner at home, and started programming at 10, and and and. Now you're working so hard to give your kid a full social life. You barely had any, are you sure now you'd rather you had it otherwise? Let me be frank. You have a smart kid. It's normal for a smart kid to be kind of lonely throughout school, and never hang out with lots of other kids, and read books instead. It builds substance. Having a lousy social life is not the failure scenario. The failure scenario is to have a very full and happy school experience and end up a ditzy adolescent. You should worry about that much much more, and distribute your efforts accordingly.
Is your friend completely asinine, or do they have a point?
Hmm, I would disagree. If you have a metaphysical claim, then arguments for or against this claim are not normally epistemological; they're just arguments.
Think of epistemology as "being meta about knowledge, all the time, and nothing else".
What does it mean to know something? How can we know something? What's the difference between "knowing" a definition and "knowing" a theorem? Are there statements such that to know them true, you need no input from the outside world at all? (Kant's analytic vs synthetic distinction). Is 2+2=4 one such? If you know something is true, but it turns out later it was false, did you actually "know" it? (many millions of words have been written on this question alone).
Now, take some metaphysical claim, and let's take an especially grand one, say "God is infinite and omnipresent" or something. You could argue for or against that claim without ever going into epistemology. You could maybe argue that the idea of God as absolute perfection more or less requires Him to be present everywhere, in the smallest atom and the remotest star, at all times because otherwise it would be short of perfection, or something like this. Or you could say that if God is present everywhere, that's the same as if He was present nowhere, because presence manifests by the difference between presence and absence.
But of course if you are a modern person and especially one inclined to scientific thinking, you would likely respond to all this "Hey, what does it even mean to say all this or for me to argue this? How would I know if God is omnipresent or not omnipresent, what would change in the world for me to perceive it? Without some sort of epistemological underpinning to this claim, what's the difference between it and a string of empty words?"
And then you would be proceeding in the tradition started by Descartes, who arguably moved the center of philosophical thinking from metaphysics to epistemology in what's called the "epistemological turn", later boosted in the 20th century by the "linguistic turn" (attributed among others to Wittgenstein).
Metaphysics: X, amirite? Epistemological turn: What does it even mean to know X? Linguistic turn: What does it even mean to say X?
"Ontology" is firmly dedicated to "exist or doesn't exist". Metaphysics is more broadly "what's the world like?" and includes ontology as a central subfield.
Whether there is free will is a metaphysical question, but not, I think, an ontological one (at least not necessarily). "Free will" is not a thing or a category or a property, it's a claim that in some broad aspects the world is like this and not like that.
Whether such things as desires or intentions exist or are made-up fictions is an ontological question.
Metaphysics: what's out there? Epistemology: how do I learn about it? Ethics: what should I do with it?
Basically, think of any questions that are of the form "what's there in the world", "what is the world made of", and now take away actual science. What's left is metaphysics. "Is the world real or a figment of my imagination?", "is there such a thing as a soul?", "is there such a thing as the color blue, as opposed to objects that are blue or not blue?", "is there life after death?", "are there higher beings?", "can infinity exist?", etc. etc.
Note that "metaphysical" also tends to be used as a feel-good word, meaning something like "nobly philosophical, concerned with questions of a higher nature than the everyday and the mundane".
Yes, I would.
(Have two small children, haven't needed to).
I didn't downvote you, but you're wrong. I don't particularly want XiXiDu to go away, and I haven't felt offended by his posts. It's simply an option that he should consider, given his description of his health problems. If I wanted him to leave, I would have urged him to leave, not suggested that he consider the option.
There's no "fight". You've been a very aggressive and mean-spirited critic of LW/MIRI/EY for a few years. Doesn't mean that there's a fight. Doesn't mean anyone "wins" if, say, you shut up and go away.
Your suggestion is not constructive, because coming up with retorts to mean-spirited past posts and endorsing them would be a poor use of MIRI's time, and would only add to drama rather than reduce it. Here's what you should do instead:
First, consider just going away. It may be best for your physical and mental health to stay away from LW and LW-related topics. Delete your old posts, forget you ever cared about this stuff, take up some other hobbies, etc. If you feel you can't, presumably because you think these issues are really important, read on.
Come up with a generously-sized kindly-worded update that negates the meanness and stick it on top of your relevant past posts. E.g. if I were in your position I would write something like "I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I wrote it all in a kinder spirit".
Continue participating on LW as you desire, trying your best to be kind and not get into drama. Plenty of people manage to be skeptical of MIRI/EY and criticize them here without being you. (If you're not sure you can do this well, ask some regular(s) to help you out. Precommit that if people you asked PM you about your future LW comment or blog post saying you're being an asshole, you'll believe them and mend it.)
Accept that some people will continue to hate/dislike/hold a grudge against you. Issue private apologies to them if you feel you should, but don't do it publicly (because drama). If that doesn't help, accept and move on.
The list you mention is not very strong evidence that "people are born with their sexuality". It's a list of correlations of varying quality and effect size that is subject to strong publication bias. More importantly, all of these correlations are perfectly compatible with the possibility that genetic/prenatal factors only partially influence one's sexual orientation rather than completely determine it.
Please read the section on twin studies that opens the Wiki page you referenced. The epidemiological twin studies are probably the strongest evidence we currently have, and they suggest that genetic factors play a role but do not determine sexual orientation.
Probably a lot of different things, for example: revulsion at some of the traditional gender roles and behaviors. Negative emotions about their sexual organs. Intense erotic pleasure while imagining themselves the opposite sex. Anxiety due to not feeling what they think the person of their sex is supposed to feel.
Why do you think such a meme would spread or originate, if not due to its truth value?
Memes that provide an explanation of one's behavior in terms of one's identity are insanely powerful. They spread because they lead you from "I don't understand why I'm like this" to "I understand why I'm like this", and the latter feeling is something we all lust for.
The truth value is not especially important to the initial spread of an attractive identity-meme. Consider that "people are born gay" is almost a dogma in the LGBT community and liberal circles, although the available scientific understanding sharply contradicts it. Or recall that the 19th century saw a very potent meme in which gay people self-identified as "the third sex", "a female psyche in a male body". It seems that many gay people in the 19th century really felt very strongly that they have a "female psyche" or a "female soul", similarly to how today many biological-X transgender people feel very strongly that they have a "non-X brain".
Hmm, "delusional" is a bit underspecified.
How about "There's no solid evidence for a gender bit in the brain. While many or most transgender people feel something, explaining that feeling as "I'm an X brain trapped in a non-X body" is essentially a memetic phenomenon. Additionally, genderqueer and non-binary persons are typically participants in a memetic fad."
I think that's what I believe; summarizing this as "trans people are delusional" seems harsh and uncharitable to me, but I can see how someone might say that's exactly what it is. If you think now that the above is obviously wrong, I'm very interested in arguments/evidence.
And Then I Thought I Was a Fish. After a bad LSD trip, a 20-something student can't sleep for two weeks, and as a result of the drug, the sleep deprivation, or both, he goes into psychosis. After a few months in this state, including a brief institutionalization that he ends by learning to fake recovery, he spontaneously wakes up sane one day. This is a first-hand account of the entire story, written 10 years after the fact. The author has very specific memories of what it felt like to be insane, which he shares convincingly. He tells us about his friends, roommates, crushes. He interviews people who interacted with him at the time and reports their impressions.
I couldn't put this book down. It's very well-written and utterly fascinating.
I'm studying very basic Lie group theory by working through John Stillwell's Naive Lie Theory. The end goal in this direction is to acquire the basics of modern differential geometry. If I make it to the end of this book, I've got Janich's Vector Analysis (differential manifolds, differential forms, Stokes' theorem in the modern setting, de Rham cohomology) and Loomis & Sternberg's Advanced Calculus (all this and more, starting from basic linear algebra and multivariable calculus in a principled way). I haven't decided yet which of them I'll try to work through, or whether both.
Independently of this, I would like to refresh probability and acquire statistics in a mathematically rigorous way. I tried Wasserman's All of Statistics, which is sometimes recommended, but it's too dry and unmotivating for me. I like the look of David Williams's Weighing the Odds, which seems to be both suitably rigorous and full of illuminating explanations, but I haven't really tried reading it yet.
However, the point I wanted to make is that you would have been able to come up with some mechanism, described by graph 2, that could account for the data.
Thanks. I should have realized that, and I think I did at some point but later lost track of this. With this understood properly I can't think of any counterexample, and I feel more confident now that this is true, but I'm still not sure whether it ought to be obvious.
After the MH17 flight was shot down in the skies of eastern Ukraine, I spent an insane amount of time poring over various bits and pieces of evidence that pointed to how it happened and who was responsible. Soon after the crash evidence started appearing that on that day a Russian-made BUK missile launcher was driven through nearby towns controlled by the separatists, and the SBU, the Ukrainian counterpart of the FBI, published intercepts of phone conversations between rebels that indicate the missile launcher was brought across the border from Russia the night before and sent by one of the rebel commanders to the exact location where the launch was suspected to have come from. However, much of this evidence was scattered, inconclusive, or quickly claimed to have been faked by the SBU, and at the same time other versions of what may have happened appeared and multiplied.
I found bits and pieces of local evidence from July 17th before the crash (e.g. locals tweeting they saw a BUK driving through their town or talking about hearing the launch), located and double-checked detective work already done by others (e.g. geolocation of key photographs/videos showing the BUK launcher), and noticed that several people working independently from different bits of evidence converged, without knowing of each other, on a particular location southeast of the town of Torez as the launch site. I compiled it all together into a long compelling blog post (in Russian) filled with evidence and careful examination of each piece of evidence, how it could or could not have been faked, and how this all ties together into an overwhelmingly likely version. The post and its followups saw >3000 comments from people debating various pieces of evidence and debunking alternative stories, with subthreads going into expert discussions on weather conditions, correctness of geolocation efforts, JPEG compression artefacts, debunking of conspiracy theories based on an unlikely Youtube date-tagging bug, and much else. Journalists travelled to the suspected launch site and found burned ground and suspicious-looking plastic pieces (but no smoking gun). An independent Russian TV channel invited me to appear on a panel (I did so by Skype) to communicate the evidence.
I wrote at length on that quotation in a follow-up comment in that thread.
[cont'd from the parent comment]
1: when you first introduce "God's Table", it's really hard to understand what's going on, and the chief reason may be that you don't explicitly explain to the reader that the rows in the table are individuals for which you record the data. Again, this is something that's probably crystal clear to you, but I, the reader, at this point haven't seen any EXAMPLE of what you mean by "dataset", anything beyond vague talk of "observed variables". The rows in your table are only identified by "Id" of 1,2,3,4, which doesn't tell me anything useful, it can be just automatic numbering of rows. And since you have two variables of interest, A and Y, and four rows in the table, it's REALLY easy to just mistake the table for one which lists all possible combinations of values of A and Y and then adds more data dependent on those in the hypothetical columns you add. This interpretation of the table doesn't really make sense (the set of values is wrong for one thing) and it probably never occurred to you, but it took me a while to dig into your article and try to make sense of it to see that it doesn't make sense. It should be crystal clear to the reader on the first reading just what this table represents, and it just isn't. Again, if only you had a simple example running through the article! If only you wrote, I dunno, "John", "Mary", "Kim" and "Steven" as the row IDs to make it clear they were actual people, and "treatment" and "effect" (or better yet, "got better") instead of A and Y. It would have been so much easier to understand your meaning.
(then there're the little things: why oh why do you write a=0 and a=1 in the conditioned variables instead of A=0, A=1? It just makes the reader wonder unhappily if there's some lower-case 'a' they missed before in the exposition. "A" and "a" are normally two different variable names in math, even if they're not usually used together! Again, I need to spend a bunch of time to puzzle out that you really mean A=0 and A=1. And why A and Y anyway? If you call them treatment and effect, why not T and E? Why not make it easier on us? The little things)
2: You start a section with "What do we mean by Causality?" Then you NEVER ANSWER THIS. I reread this section three times just to make sure. Nope. You introduce counterfactual variables, God's table and then say "The goal of causal inference is to learn about God’s Table using information from the observed table (in combination with a priori causal knowledge)". I thought the goal of causal inference was to answer questions such as "what caused what". Sure, the link between the two may be very simple but it's a crucial piece of the puzzle and you explain more simple things elsewhere. You just explained what a "counterfactual" is to a person who possibly never heard the term before, so you can probably spare another two sentences (better yet, two sentences and an example!) on explaining just how we define causation via counterfactuals. If you title a section with the question "What do we mean by causality?" don't leave me guessing, ANSWER IT.
3: First you give a reasonably generic example of confounding and finish with "This is called confounding". Then later you call sex a confounder and helpfully add "We will give a definition of confounding in Part 2 of this sequence." OK, but what was the thing before then?
4: The A->L->Y thing. God, that's so confusing. At this point, again because there've been no examples, I have no clue what A,L,Y could conceivably stand for. Then you keep talking about "random functions" which to me are functions with random values; a "random function with a deterministic component dependent only on A" sounds like nonsense. "Probabilistic function" would be fine. "Random variable" would be fine if you assume basic knowledge in probability. "Random function" is just confusing but I took it to mean "probabilistic function".
Then you say "No matter how many people you sample, you cannot tell the graphs apart, because any joint distribution of L, A and Y that is consistent with graph 1, could also have been generated by graph 2".
Why?
It's obvious to you? It isn't obvious to me. I understand sort of vaguely why it might be true, and it certainly looks like a good demonstration of why joint distribution isn't enough, if it's true. Why is it true? The way you write seems like it should be the most obvious thing in the world to any reader. Well, it's not. Maybe if you had a running EXAMPLE (that probably sounds like a broken record by now)...
So I'm trying to puzzle this out. What if the values for A, L, Y are binary, and in graph 2, L->A and L->Y always just copy values from L to those other two deterministically; while in graph 1, let's say that A is always 1, L is randomly chosen to be 0 or 1 (so that its dependence on A is vacuous), while Y is a copy of L. Then the joint distribution generated by graph 1 will be, in order ALY, 100 or 111 with equal probability, and it cannot be generated by graph 2, because in any distribution generated by graph 2, A=Y in all samples.
Does that make sense, or did I miss something obvious? If I'm right, the example is wrong, and if I'm wrong, perhaps the example is not as crystal clear as it could have been, since it let me argue myself into such a mistake?
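If it helps, here's a quick simulation of the two mechanism sets I described, with graph 1 taken as the chain A -> L -> Y and graph 2 as the fork L -> A, L -> Y. The graph structures and variable roles are my reading of the article, so treat them as an assumption:

```python
import random

def sample_graph1():
    # Graph 1 mechanisms from my counterexample: A is constant 1,
    # L ignores its parent A and is a fair coin flip, Y copies L.
    a = 1
    l = random.randint(0, 1)
    y = l
    return (a, l, y)

def sample_graph2_copies():
    # Graph 2 mechanisms from my counterexample: L is a fair coin flip,
    # and both A and Y deterministically copy L.
    l = random.randint(0, 1)
    return (l, l, l)

random.seed(0)
dist1 = {sample_graph1() for _ in range(1000)}         # support under graph 1
dist2 = {sample_graph2_copies() for _ in range(1000)}  # support under graph 2 (copies)
```

With these particular mechanisms, graph 1's support is {100, 111}, while the copy version of graph 2 only ever produces samples with A=Y, as claimed; whether graph 2 is allowed mechanisms other than copies is exactly the part I'm unsure about.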
OK, how should I finish this. You may have gotten the impression from my previous comment that I was looking for a more rounded-off philosophical discussion of the relevant issues, but that's really not the case. My problem was not that you didn't spend 10 paragraphs on summarizing what we might mean by causality and what other approaches there are. It's fine to have a practical approach that goes straight to discussing data and what we do with it. The problem is that your article isn't readable by someone who isn't already familiar with the field. I feel that most of the problems could be solved by a combination of: a) a VERY careful rereading of the article from the p.o.v. of someone who's a subject-matter expert, but is completely ignorant of epidemiology, causal inference or any but the most basic notions in probability, and merciless rewriting/expanding of the text to make everything lucid for that hypothetical reader; b) adding a simple example that would run through the text and have every new definition or claim tested on it.
I feel that I owe you a longer explanation of what I mean, especially in light of a longish comment you wrote and then retracted (I thought it was fine, but never mind). I invoke Crocker's Rules for myself, too.
I took another look the other day at your introduction paragraph, to better understand what was bugging me so much about it. Meanwhile you edited it down to this:
Whenever someone tells you about a new framework for describing empirical research, the first question you should ask yourself is whether the new framework allows you to correctly represent the important aspects of phenomena you are studying.
So here's the thing. The vagueness of this "empirical research" kept bugging me, and suddenly it dawned on me to try and see if this is a thing in epidemiology, which you said the article was originally rooted in. And turns out it is. Epidemiology journals, books etc. talk all the time about "empirical research" and "empirical research methods" etc. etc. As far as I could understand it - and I trust you'll correct me if I'm wrong - these refer to studies that collect relevant data about large numbers of people "in the wild" - perhaps cohorts, perhaps simply populations or subpopulations - and try to infer all kinds of stuff from the data. And this is contrasted with, for example, studying one particular person in a clinic, or studying how diseases spread on a molecular level, or other things one could do.
So, suddenly it's all very much clearer! I understand now what you were saying about complicated models etc. - you have a large dataset collected from a large population, you can do all kinds of complicated statistical/machine learning'y stuff with it, try to fit it to any number of increasingly complicated models and such, and you feel that while doing so, people often get lost in the complicated models, and you think causal inference is nice because it never loses track of things people on the ground actually want to know about the data, which tends to be stuff like "what causes what" or "what happens when I do this". Did I get this right?
OK, now consider this - I'm a computer programmer with a strong math background and strong interest in other natural sciences. I seem to be in the intended audience for your article - I deal with large datasets all the time and I'm very keen to understand causal inference better. When I see the phrase "empirical research", it doesn't tell me any of that stuff I just wrote. The closest phrase to this that I have in my normal vocabulary is "empirical sciences" which is really all natural sciences besides math. The only reasonable guess I have for "empirical research" is "things one finds out by actually studying something in the real world, and not just looking at the ceiling and thinking hard". So for example all of experimental physics comes under this notion of "empirical research", all chemistry done in a lab is "empirical research". All the OTHER kinds of epidemiological research that are NOT "empirical research" according to the way the phrase is used in epidemiology, I would still consider "empirical research" under this wide notion. Since the notion is so vague, I have no idea what kinds of "models" or "frameworks" to handle it you might possibly be thinking about. And it puzzles me that you even speak about something so wide and vague, and also it isn't clear what kinds of relevance it might have for causal inference anyway.
By now I've spent probably 30x more time talking about that sentence than you spent writing it, but it's a symptom of a larger problem. You have a crisp and clear picture in your head: empirical research, that thing where you take N humans, record answers to K questions of interest for every one of them, and then try to see what this data can reveal. But when you put it into words, you fail to read those words with the simulated mind of your intended audience. Your intended audience in this case is not epidemiologists. They will not see the crisp and clear picture you get when you invoke the phrase "empirical research". It's on you to understand that clearly - that may be the most important thing you have to work on as an explainer of things to a non-specialist audience. But in your text this happens again and again. AND, to compound the problem, you DON'T use examples to make clear what you're talking about.
As a result, I spent upwards of 2 hours trying to understand just what two words in one sentence in your article mean. This is clearly suboptimal!
Now, I'll try to finish this already overlong comment quickly and point out several more examples of the same sort of problem in the article.
[in the next comment since this one is getting rejected as too long]
I'm sorry that my reaction is relatively harsh. This is poorly written and very difficult to understand for someone who doesn't already know these basic topics.
It is possible to create arbitrarily complicated mathematical structures to describe empirical research. If the logic is done correctly, these structures are all completely valid, but they are only useful if the mathematical objects correctly represent the things in the real world that we want to learn about. Whenever someone tells you about a new framework which has been found to be mathematically valid, the first question you should ask yourself is whether the new framework allows you to correctly represent the important aspects of phenomena you are studying.
This is the first paragraph in your entire post, the one where you want to hook the reader and make them care about the subject, and you spend it on some sort of abstract verbiage that is not really relevant to the rest of the post. Imagine that your reader is someone who cannot name any "complicated mathematical structures" if you ask them to. What the hell are you talking about? What structures? What does "correctly represent" mean? What "new framework", what kind of framework are you talking about? Is "Riemannian manifolds" a "new framework"? Is "QFT" a "new framework"? And if it's so important to "ask yourself... whether allows... to correctly represent...", how do you answer this question? Maybe give an example of a "new framework" that DOESN'T allow this?
When we are interested in causal questions, the phenomenon we are studying is called "the data generating mechanism". The data generating mechanism is the causal force of nature that assigns value to variables. Questions about the data generating mechanism include “Which variable has its value assigned first?”, “What variables from the past are taken into consideration when nature assigns the value of a variable?” and “What is the causal effect of treatment?”.
What does this MEAN? What kind of an entity is this "mechanism" supposed to be, hypothetical in your mind or physical? What does "assign" mean, what are "variables"? This fundamental (to your article) paragraph is so vague that it borders on mystical!
We can never observe the data generating mechanism. Instead, we observe something different, which we call “The joint distribution of observed variables”. The joint distribution is created when the data generating mechanism assigns value to variables in individuals. All questions about whether observed variables are correlated or independent, and about how strongly they are correlated, are questions about the joint distribution.
You just lost all readers who don't know what a joint distribution is, including those who learned it at one time at school, remember that there is such a thing, but forgot what it is precisely. Additionally, in normal language you don't "observe variables", you MEASURE something. "Observe" means "stare hard" if you don't know the jargon. If someone knows the jargon, why do they need your basic explanation?
Look, I know this stuff well enough that there's nothing new for me in your article, and I still find it vexing to read. Why can't you give an example, or five? Anything whatsoever... maybe a meteor in space flies close to a planet and gets captured by its gravitational field, or maybe your car goes off into a ditch because it slips on an icy road. Talk about how intuitively we perceive a cause-and-effect relationship and our physical theories back this up, but when we measure things, we just find a bunch of atoms (bodies, etc.) moving in space along some trajectory (or trajectories, for many samples); the data we measure doesn't inherently include any cause-and-effect. Spend a few sentences on a particular example to make it sink in that data is just a bunch of numbers, and that we need to do extra, difficult work to find "what causes what" - and in particular to understand just what it even means to say that. Then you can segue into counterfactuals, show how they apply to one or two of the earlier examples, etc.
The journal article is available at the usual place: use the scientific articles field and search by the DOI.
Here's an interesting application of elementary probability theory.
Syria recently held an election, in the midst of a civil war. Dr. Bashar Hafez al-Assad wins post of President of Syria with sweeping majority of votes at 88.7%.
The elections were a sham. The vote counts are completely fraudulent. And you can learn this just from the results page linked above, without knowing anything about Syria or its internal politics. How?
The results are too accurate.
"11,634,412 valid ballots, Assad wins with 10,319,723 votes at 88.7%". That's not 88.7%, that's 88.699996%. Or in other words, that's 88.7% of 11,634,412, which is 10,319,723.444, rounded to a whole person.
The same is true of all the other percentages in this election. One of the results even has a rounding error: 4.3% cast for Al-Nouri is 11,634,412 * 0.043 = 500,279.716 votes, which is rounded down to 500,279 votes in the results instead of the closer 500,280. As a result, the totals of all the alternatives (three candidates + invalid ballots) differ from the total number of valid ballots by 1 (442,108 + 10,319,723 + 500,279 + 372,301 = 11,634,411, not 11,634,412). If they were rounding correctly, their fake numbers would've looked better. Either way, it's evident that someone took the total vote count, calculated the percentages, and rounded.
(Why is this an application of elementary probability theory? Because you can calculate the probability of such an exact percentage of votes occurring by chance.)
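The check is easy to reproduce. Here's a short sketch in Python (the vote totals are from the linked results page; the tolerance logic and the window estimate are my own back-of-the-envelope additions):

```python
# Reported totals from the Syrian election results page.
total_valid = 11_634_412
reported = {
    "al-Assad": (10_319_723, 0.887),  # candidate: (vote count, reported share)
    "al-Nouri": (500_279, 0.043),
}

for name, (votes, share) in reported.items():
    implied = total_valid * share
    print(f"{name}: {share:.1%} of {total_valid:,} = {implied:.3f}, reported {votes:,}")
    # Each reported count is (up to a rounding slip) exactly total * share,
    # i.e. the percentage was fixed first and the count derived from it.
    assert abs(votes - implied) < 1

# How suspicious is this? A genuine count consistent with a share reported
# to one decimal place could be any of roughly total * 0.001 values:
window = total_valid * 0.001  # about 11,634 possible counts per candidate
```

With genuine counts, landing exactly on the derived product has probability of very roughly 1 in 11,634 per candidate, which is why every candidate matching at once is damning.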
(To the best of my knowledge, this was first noted in this Russian-language Facebook post. Recently there was an identical case with a sham referendum in a Ukrainian province controlled by separatists, which is what got people interested in looking at vote counts.)
Michael Swanwick's The Iron Dragon's Daughter. This is fantasy for adults: complex, flawed characters; a world rich in detail; a multitude of characters who live and do things for their own sake rather than to advance a plot point or help the hero. Utter disregard for the conventions and cliches of the genre. A hero who is an anti-Mary Sue. Endless inventiveness on the author's part.
To my taste, this novel is what books like The Kingkiller Chronicles promise, but then utterly fail to deliver. But if you're a fan of Rothfuss, try Swanwick anyway, and you might get a fuller and richer taste of what you like.
I've also read a science fiction novel by the same author, Stations of the Tide, which won a Nebula in 1991. It's also very good. In it, a nameless bureaucrat of the interplanetary government is pursuing a self-declared magician (who's suspected of smuggling restricted technology) across the surface of a planet where half the surface is about to get flooded for many years, and a great migration of the populace is imminent. One of the themes is unfriendly AI - the Earth with its entire population had suffered a horrible fate in the world of this novel, which is discussed and explored in one of the episodes, although it's not a major plot device.
The CS graduates from top schools disproportionately end up in Silicon Valley, where salaries are much higher than in other places, as is the cost of living. Mechanical engineering doesn't have a very large Mecca of its own.
(this post might have been better as an Open Thread comment)
The article was written with an assumption that the reader would have been exposed to the basic arguments in favor of volunteering ahead of time, which accounts for the imbalance.
Then you should definitely mention that, so the reader knows to expect the one-sidedness upfront.
There's no contradiction here; if philanthropic opportunity A is better than philanthropic opportunity B, then convincing people to take opportunity A rather than B is net positive, even though there's a negative effect.
Yes, but you advising people to donate to a nonprofit and someone "fundraising for a nonprofit" are essentially the same activity. You do it because it can be net positive, but then you criticize someone else doing the same because "it can hurt other nonprofits", without mentioning the net-positive side in that case.
P.S. Since I probably came off as curmudgeonly, I just wanted to mention that I think Cogito Mentoring is a promising endeavor and some of your articles have been great; don't take my brisk criticism of this one as hostile or peeved.
This article is disappointing: it gives a very strong impression that you wrote the bottom line first. As a result, this looks not so much as an article about "How valuable is volunteering?", but rather a collection of arguments designed to steer a reader from volunteering towards effective altruism.
Since that is essentially what you're doing, you should be more open about that, and then the article would be more effective as well. The way it looks now, I imagine reading it as a high school student and immediately thinking: "this guy isn't really interested in finding out the value of volunteering. He has his own agenda to sell."
The logical structure of the article makes it clear that you're eliminating options one by one:
- Want to volunteer to help people who can pay? Bad idea.
- Still want to do that? Another reason it's a bad idea.
- What's left, helping people in need? No, cash is better.
- Still want to help? Don't, you'll be hurting others!
- Want to spend your time volunteering? Don't, your time is too valuable, donate cash instead.
- Want to volunteer? It's costly for the nonprofit to train you, don't be so selfish.
- Bummer, what should I do then? Glad you asked! There's this thing called effective altruism...
You never try to see why volunteering may be more valuable than donating. There's no attempt to understand, much less steelman, the opposing side. Here, off the top of my head:
When volunteering, a person sees the fruits of their labor immediately; when donating money, there's uncertainty about whether it really goes to the stated goal or is squandered through incompetence, inefficiency, or fraud. Organizations that track and rank charities only partially solve this, since you need to trust them too, and their ability to understand the charities may be limited. Depending on how people estimate the degree of uncertainty, they may rationally prefer volunteering.
Volunteering may be a way to train oneself effectively to help others, by using social pleasure and cohesiveness. Some people are just not motivated enough by sending a check and imagining the rest; you can tell them to "separate utilons and hedons" all you want, but if the actual result is that they'll stop donating, it may be better to volunteer.
Visible volunteering work is much more effective at drawing others to charitable causes than hired workforce performing the same work.
Finally, your particular arguments are sometimes poor, as is typical for arguments chosen for a precommitted bottom line. For example:
But after a certain point, people aren't willing to give more money to charity. By getting people to give to one nonprofit, you can make them reluctant to give to other nonprofits, reducing their funding.
An unconvincing zero-sum assumption (who says we're anywhere close to the "certain point"?). Also, hello: by urging people to consider e.g. GiveWell's recommendations, this is exactly what you're doing!
training and supervising volunteers often costs a nonprofit a lot of money,
If it's not net helpful to the nonprofit, they will not accept the volunteers.
1) yes 2) no, and I'll read through Nielsen's post, thanks. I've been postponing the task of actually reading Pearl's book.
I still haven't found a readable meta-overview of causation. What I would love to be able to read is a 3-10 pages article that answers these questions: what is causation, why our intuitive feeling that "A causes B" is straightforward to understand is naive (some examples), why nevertheless "A causes B" is fundamental and should be studied, what disciplines are interested in answering that question, what are the main approaches (short descriptions with simple lucid examples), which of them are orthogonal/in conflict/cooperate with each other, example of how a rigorous definition of causality is useful in some other problem, major challenges in the field.
Before I'm able to digest such a summary (or ultimately construct it in my own head from other longer sources if I'm unable to find it), I remain confused by just about every theoretical discussion of causation - without at least a vague understanding of what's known, what's unknown, what's important and what's mainstream everything sounds a little sectarian.
I also read it on your recommendation (I think - I don't remember clearly) and I really, really liked it. The near-future science is overwhelmingly convincing, in a good way. What's funny is that I thought the characters were pretty shallow and the hero's constantly peppy attitude unbelievable and somewhat grating; usually the quality of the characters and their development is a must for me - their shallowness ruins any book. Somehow that didn't happen here. There was just so much juicy, mind-opening, fascinating, engaging sciency stuff that it kept me on the edge of my chair. I'm really glad I read this book - thanks!
Very apposite, thanks!
A lot of your impressions seem to go something like "the workshop was useful because it made me think about X, and that's more important than specific answers/techniques it gave for X". Lately, I've been noticing more and more examples of this around me. A particular book would offer a frankly poor argument in favor of Y, but I'd still recommend it to my friends because reading it makes you think about Y and reach your own conclusion. An online community centered around boosting Z in your life may be somewhat cultish and prone to pseudo-scientific explanations about why more Z is awesome, but it's still worth reading their FAQs because you didn't even think of Z as something that might be adjusted.
This is one of my favorite hammers now, and it finds nails everywhere. So much advice turns out to be helpful indirectly, because it makes you reflect carefully on its domain. The actual direct value of the advice may be almost irrelevant, be it good or bad: the indirect contribution is much greater anyway.
I read in my native language without subvocalizing, and in English with subvocalizing. I can make an effort and read w/o subvocalization in English, but then I get an unpleasant feeling that I'm reading in a very shallow way, understanding and retaining much less than usual. I don't know for certain that this feeling is actually correct, but the evidence leans that way.
Some of your fake numbers fall out of the common practice of shoehorning a partial order into the number line. Suppose you have some quality Foo along which you can compare things, in a somewhat well-defined manner, at least in some cases. For example, Foo = desirable: X is definitely more desirable to me than Y; or Foo = complex: software project A is definitely more complex than software project B. It's not necessarily the case that any X and Y are comparable. It's then tempting to invent a numerical notion of Foo-ness and assign numerical values of Foo-ness to all things in such a way that your intuitive Foo-wise comparisons hold. The values turn out to be essentially arbitrary on their own; only their relative order matters.
(In mathematical terms, you have a finite (in practice) partially ordered set which you can always order-embed into the integers; if the set is dynamically growing, it's even more convenient to order-embed it into the reals so you can always find intermediate values between existing ones).
After this process, you end up with a linear order, so any X and Y are always comparable. It's easy to forget that this may not have been the case in your intuition when you started out, because these new comparisons do not contradict any older comparisons that you held. If you had no firm opinion on comparative utility of eating ice-cream versus solving a crossword, it doesn't seem a huge travesty that both activities now get a specific utility value and one of them outranks the other.
The advantages of this process are that Foo-ness is now a specific thing that can be estimated, used in calculations, reasoned about more flexibly, etc. The disadvantages, as you describe, are that the numbers are "fake", they're really just psychologically convenient markers in a linear order; and the enforced linearity may mislead you into oversimplifying the phenomenon and failing to investigate why the real partial order is the way it is.
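To make the embedding concrete, here's a toy sketch in Python (the activities and the particular comparisons are invented for illustration): a topological sort of the partial order produces integer Foo-scores that preserve every original comparison, while also manufacturing a verdict between previously incomparable items.

```python
from graphlib import TopologicalSorter

# A toy partial order on activities by Foo = desirable.
# Each key maps to the activities that are *definitely less* desirable than it.
# Note what's missing: ice_cream vs crossword was never compared.
less_desirable_than = {
    "vacation": {"ice_cream", "crossword"},
    "ice_cream": {"walk"},
    "crossword": {"walk"},
    "walk": set(),
}

# A topological order lists each item after everything below it;
# enumerating it assigns a numeric Foo-score consistent with the partial order.
order = list(TopologicalSorter(less_desirable_than).static_order())
foo_score = {item: i for i, item in enumerate(order)}

# Every original comparison survives the embedding...
assert foo_score["walk"] < foo_score["ice_cream"] < foo_score["vacation"]
assert foo_score["walk"] < foo_score["crossword"] < foo_score["vacation"]
# ...but the embedding also forces a comparison the intuition never made:
assert foo_score["ice_cream"] != foo_score["crossword"]
```

Which of ice_cream and crossword ends up higher is an accident of the sort order - exactly the sense in which the resulting numbers are "fake".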
The answer is "probably not". Cormen is too comprehensive and dry for self-study; it's best used as the textbook to back an algorithms course or as a reference to consult later on.
A very good book is Skiena, The Algorithm Design Manual. I usually recommend it to people who want to brush up on their algorithms before programming interviews, but I think it's accessible enough to a novice as well. Its strengths are an intelligent selection of topics and an emphasis on teaching how to select an algorithm in a real-life situation.
Out of curiosity, did you consider sending this comment via PM, and if so, what made you decide to post it publicly?
Sure, if that's the way you like it, but for me that just doesn't work. Occam's Razor is a principle that is supposed to help me think better here and now; to decide that its justification rests on whether, say, the Universe is discrete at the Planck scale - when this choice has to be compatible with QM and Newtonian mechanics at larger scales and therefore changes nothing in practical terms in my life here and now - seems absurd. To me, that's clear evidence that this is no justification at all.
I don't understand the question. I thought that's what we were talking about. Am I missing something?
To be more explicit: setting up a UTM with random bits on the input tape is a natural-seeming way of getting the probability distribution over programs (2^-length(p)) that goes into the Solomonoff prior. But as I'm trying to say in the comment you replied to, I don't think it's really natural at all. And of course SI doesn't need this particular distribution in order to be effective at its job.
You can. But is there any reason to think that this models well the inherent complexity of programs? Do we ever execute programs by choosing randomly bits that constitute them? We have algorithms that utilize randomness, to be sure, but UTMs are not, normally, such algorithms. I appreciate that "choosing program bits randomly" is a simple, short, succinct way of getting to 2^-length(p), but I don't see a reason to think it's natural in any interesting sense.
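For what it's worth, the construction itself is mechanical. Here's a Monte Carlo sketch (the 3-bit "program" is invented for illustration; no claim that this makes the distribution natural - that's exactly what's in dispute): feeding uniformly random bits to a prefix-free machine gives each program p probability 2^-length(p) of being the one executed.

```python
import random

random.seed(42)  # for reproducibility

def prefix_hit_rate(program_bits, trials=200_000):
    """Estimate the chance that a uniformly random bit tape begins with
    the given program, i.e. that the machine would end up executing it."""
    hits = 0
    for _ in range(trials):
        if all(random.getrandbits(1) == b for b in program_bits):
            hits += 1
    return hits / trials

p = (1, 0, 1)  # a hypothetical 3-bit program
estimate = prefix_hit_rate(p)
# The estimate should hover around 2 ** -len(p) = 0.125.
assert abs(estimate - 2 ** -len(p)) < 0.01
```

This is just the mechanical reading of 2^-length(p); whether "random bits on the input tape" is a natural model of program complexity is the real question.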