Murder, suicide, and Catholicism don't mix. It's supposed to be a challenging opera for a culture that truly believes in the religious moral compass. You empathize with Tosca and her decision to damn herself. The guy she kills is rather evil.
I'm not sure I follow your first notion, but I don't doubt that rationality is still marginally profitable. I suppose you could couch my concerns as whether there is a critical point in rationality profit: at some point, does becoming more rational cause more loss in our value system than gain? If so, do we toss out rationality or do we toss out our values?
And if it's the latter, how do you continue to interact with those who didn't follow in your footsteps? Create a (self-defeating) religion?
That's close, but the object of concern isn't religious artwork but instead states of mind that are highly irrational but still compelling. Many (most?) people do a great deal of reasoning with their emotions, but rationality (justifiably) demonizes it.
Can you truly say you can communicate well with someone who is weighing suicide and eternal damnation against the guilt of killing the man responsible for the death of your significant other? It's probably a situation that a rationalist would avoid and definitely a state of mind far different from one a rationalist would take.
So how do you communicate with a person who empathizes with it and relates those conundrums to personal tragedies? I feel rather incapable of communicating with a deeply religious person because we simply appreciate (rightfully or wrongfully) completely different aspects of the things we talk about. Even when we agree on something actionable, our conceptions of that action are non-overlapping. (As a disclaimer, I lost contact with a significant other in this way. It's painful, and it motivates some of the thoughts here, but I don't think it's influencing my judgement so much that it differs greatly from the beliefs I held before her.)
In particular, the entire situation is not so different from Eliezer's Three Worlds Collide narrative, if you want to tie it to LW canon material. Value systems can in part define admissible methods of cognition, and that can manifest itself as an inability to communicate.
What were the solutions suggested? Annihilation, utility function smoothing, rebellion and excommunication?
I feel like this is close to the heart of a lot of concerns here: really it's a restatement of the Friendly AI problem, no?
The back door seems to always be that rationality is "winning" and therefore if you find yourself getting caught up in an unpleasant loop, you stop and reexamine. So we should just be on the lookout for what's happy and joyful and right—
But I fear there's a Catch-22 there in that the more on the lookout you are, the further you wander from a place where you can really experience these things.
I want to disagree that "post-Enlightenment civilization [is] a historical bubble" because I think civilization today is at least partially stable (maybe less so in the US than elsewhere). I, of course, can't be too certain without some wildly dictatorial world-policy experiments, but curing diseases and supporting general human rights seem like positive "superhuman" steps that could stably exist.
A loss of empathy with "regular people". My friend, for instance, loves the opera Tosca where the ultimate plight and trial comes down to the lead soprano, Tosca, committing suicide despite certain damnation.
The rational mind (of the temperature often suggested here) might have a difficult time mirroring that sort of conundrum; yet the opera has been used to talk about and explore the topics of depression and sacrifice for just over a century now.
So if you take part of your job to be an educator of those still under the compulsion of strange mythology, you probably will have a hard time communicating with them if you sever all connection to that mythology.
I agree! That's at least part of why my concern is pedagogical. Unless, that is, your plan is more to just run for the stars and kill everyone who didn't come along.
I'm sorry, as I'm reading it that sounds rather vague. Gelman's work stems largely from the fact that there is no central theory of political action. Group behavior is some kind of sum of individual behaviors, but with only aggregate measurements you cannot discern the individual causes. This leads to a tendency to never see zero effect sizes, for instance.
I think this is an important direction to push discourse on Rationality toward. I wanted to write a spiritually similar post myself.
The theory is that we know our minds are fundamentally local optimizers. Within the hypothesis space we are capable of considering, we are extremely good exploitative maximizers, but, as always, it's difficult to know how much to err on the side of explorative optimization.
I think you can couch creativity and revolution in terms like that, and if our final goal is to find something to optimize and then do it, it's important to note that randomized techniques might be a necessary component.
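To make that concrete, here's a minimal sketch of the exploit/explore trade-off as a two-armed bandit; the payout probabilities and the epsilon parameter are invented purely for illustration:

```python
import random

def epsilon_greedy(arms, pulls=1000, epsilon=0.1):
    """Balance exploiting the best-known option against exploring others.

    `arms` is a list of true payout probabilities, unknown to the agent.
    """
    counts = [0] * len(arms)
    totals = [0.0] * len(arms)
    reward = 0.0
    for _ in range(pulls):
        if random.random() < epsilon:  # explore: try a random arm
            i = random.randrange(len(arms))
        else:                          # exploit: best empirical mean so far
            means = [t / c if c else float("inf") for t, c in zip(totals, counts)]
            i = means.index(max(means))
        r = 1.0 if random.random() < arms[i] else 0.0
        counts[i] += 1
        totals[i] += r
        reward += r
    return reward

# With epsilon=0 the agent can lock onto whichever arm got lucky first;
# a little randomness keeps the better arm discoverable.
print(epsilon_greedy([0.3, 0.7], epsilon=0.0))
print(epsilon_greedy([0.3, 0.7], epsilon=0.1))
```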
This is made explicit in removing connections from the graph. The more "obviously" "wrong" connections you sever, the more powerful the graph becomes. This is potentially harmful, though, since, like assigning probability 0 to some outcome, once you sever a connection you lose the machinery to reason about it. If your "obvious" belief proves incorrect, you've backed yourself into a room with no escape. Therefore, test your assumptions.
This is actually a huge component of Pearl's methods since his belief is that the very mechanism of adding causal reasoning to probability is to include "counterfactual" statements that encode causation into these graphs. Without counterfactuals, you're sunk. With them, you have a whole new set of concerns but are also made more powerful.
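As a minimal sketch of that severing operation, in the spirit of Pearl's do-operator (the rain/sprinkler graph is the standard textbook toy, and the dict-of-parents representation is my own):

```python
# A causal graph as a dict: node -> set of direct parents.
graph = {
    "rain":      set(),
    "sprinkler": {"rain"},       # we turn the sprinkler off when it rains
    "wet_grass": {"rain", "sprinkler"},
}

def do(graph, node):
    """Pearl-style intervention: sever all edges INTO `node`.

    After do(sprinkler), rain no longer tells us anything about the
    sprinkler; we set it ourselves, which is what makes the remaining
    rain -> wet_grass path causal rather than confounded.
    """
    g = {n: set(parents) for n, parents in graph.items()}
    g[node] = set()
    return g

print(do(graph, "sprinkler"))
```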
It's also really, really important to dispute the claim that "one could split a data set using basically any possible variable". While this is true in principle, Pearl made/confirmed some great discoveries with his causal networks which helped to show that certain sets of conditioning variables will, when selected together, actively mislead you. Moreover, without using counterfactual information encoded in a causal graph, you cannot discover which variables these are.
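One such misleading conditioning set is the classic collider case, which a small simulation can show; the variables and selection threshold here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# x and y are independent causes of a common effect z (a collider).
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x + y + rng.normal(scale=0.1, size=n)

print(np.corrcoef(x, y)[0, 1])            # ~0: truly independent
sel = z > 1.0                             # "controlling for" z by selection
print(np.corrcoef(x[sel], y[sel])[0, 1])  # strongly negative: induced bias
```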
Finally, I'd just like to suggest that picking a good hypothesis and coming to understand a system are undoubtedly the hardest parts of building knowledge, involving creativity, risk, and some of the most developed probabilistic arguments. Actually making comparisons between competing hypotheses, such that you can end up with a good model and know what "should be important", is the tough part, fraught with the possibility of failure.
If lecture notes contain as much relevant information as a book, then you should be able to, given a set of notes, write a terse but comprehensible textbook. If you're genuinely able to get that much out of notes, then yes, that definitely works for you.
The concern is instead if reading a textbook only conveys a sparse, unconvincing, and context-free set of notes (which is my general impression of most lecture notes I've seen).
Both depend heavily on the quality of notes, textbook, subject, and the learning style you use, but I think it's a lot of people's experience that lecture notes alone convey only a cursory understanding of a topic. Practically enough sometimes, test-taking enough surely, but never too many steps toward mastery.
Gelman's text is very specifically targeted at the kinds of problems he enjoys in sociology and politics, though. If you're interested in solving problems in that field or ones like it (highly complex unobservable mechanisms, a large number of potential causes and covariates, sensible multiple groupings of observations, etc.) then his book is great. If you're looking at problems more like those in physics, then it won't help you at all and you're better off reading Jaynes'.
(Also recommended over Gelman's Applied Regression and Modeling if the above condition holds.)
This is in general one of the advantages of Bayesian statistics: you can split the difference between aggregated and fully separated data with techniques that automatically include partial pooling and information sharing between the various levels of the analysis. (See pretty much anything written by Andrew Gelman, but Bayesian Data Analysis is a great book to cover Gelman's whole perspective.)
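As a numerical sketch of what partial pooling does, here's the normal-normal shrinkage formula on invented group data (the means, sizes, and hyperparameters are all my own toy choices, not Gelman's examples):

```python
import numpy as np

# Per-group means with very different sample sizes.
group_means = np.array([0.9, 0.2, 0.5])
group_sizes = np.array([3, 200, 30])
sigma = 1.0         # assumed within-group standard deviation
mu, tau = 0.5, 0.2  # assumed population mean and between-group sd

# Normal-normal partial pooling: each group's estimate is shrunk toward
# the population mean, with shrinkage strongest for small groups.
se2 = sigma**2 / group_sizes
weight = tau**2 / (tau**2 + se2)
pooled = weight * group_means + (1 - weight) * mu
print(pooled)  # the n=3 group moves most; the n=200 group barely moves
```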
The short of it, having read a few of Pearl's papers and taken a lecture with him, is that you build causal networks including every variable you think of and then use physical assumptions to eliminate some edges from the fully connected (assumption-free) graph.
With this partially connected causal graph, Pearl identifies a number of structures which allow you to estimate correlations where all identified confounding variables are corrected for (which can be interpreted as causation under the assumptions of your graph).
Oftentimes it seems like these methods only serve to show you just how bad a situation "estimating causation" actually is, but it's possible to design experiments (or get lucky, or make strong assumptions) so as to turn them into useful tools.
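As a toy version of the kind of correction those structures license, here's a back-door adjustment on simulated data (all the probabilities are invented; the true effect is 0.3 by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# z confounds both treatment x and outcome y.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.2 + 0.6 * z)            # z makes treatment more likely
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)  # z also raises the outcome

naive = y[x == 1].mean() - y[x == 0].mean()   # confounded contrast

# Back-door adjustment: P(y|do(x)) = sum_z P(y|x,z) P(z)
adjusted = 0.0
for zv in (0, 1):
    pz = (z == zv).mean()
    adjusted += (y[(x == 1) & (z == zv)].mean()
                 - y[(x == 0) & (z == zv)].mean()) * pz

print(naive, adjusted)  # naive overstates the true effect of 0.3
```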
That makes sense if you're only evaluating complete strangers. In other words, your uncertainty about the population-inferred trustworthiness of a person is pretty high, and so, instead, the mere (Occam-factor-style) complexity of their statement is the overruling component of your decision.
In the stated case, this isn't a totally random stranger. I feel quite justified in having a less-than-uninformative prior about trusting IRC ghosts. In this case, my rationally acquired prejudice overrules any inference about the truth of even somewhat ordinary tales.
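Spelling that out as a toy Bayes update (every number here is invented purely to show how the prior can dominate):

```python
# Prior that the speaker is telling the truth, plus likelihoods of
# hearing this particular tale from an honest vs. dishonest speaker.
def posterior_truth(prior_honest, p_tale_if_true=0.9, p_tale_if_false=0.3):
    p_tale = p_tale_if_true * prior_honest + p_tale_if_false * (1 - prior_honest)
    return p_tale_if_true * prior_honest / p_tale

print(posterior_truth(0.5))   # an ordinary tale from a stranger: believable
print(posterior_truth(0.05))  # the same tale from an IRC ghost: still doubtful
```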
Egad, true. I jumped on seeing the juxtaposed question mark and then messed up the English grammar.
Perhaps it's not about the ad hominem.
"Rationality is whatever wins."
If it's not a winning strategy, you're not doing it right. If it is a winning strategy, overall and over as long a term as you can plan, then it's rationality. It doesn't matter what the person thinks: whether they'd call themselves a rationalist or not.
Picometer nitpick, for accuracy:
你有鼻子 ("you have a nose"), as phrased, is not a question and thus ironically even more bewildering, not that someone who couldn't understand the utterance would be able to determine that. To phrase it as a question you need a different form; one of
你有鼻子吗? ("do you have a nose?"), 你有没有鼻子? ("do you have a nose or not?"), or 你有鼻子,对不对? ("you have a nose, right?")
would work. The first is a simple question. The second leaves a bit more credence to the possibility that you don't have a nose. The third is probably trying to imply that if you don't agree then you're foolish.
That's certainly sensible, and in But There's Still a Chance Eliezer gives examples where this seems strong. In the above example, it depends a whole lot on how much belief you have in people (or, rather, in lines of IRC chat).
I think then that your strength as a rationalist comes in balancing that uncertainty against your prior trust in people. At which point, instead of predicting the negative, I'd seek more information.
Doesn't any model contain the possibility, however slight, of seeing the unexpected? Sure, this didn't fit with your model perfectly — and as I read the story and placed myself in your supposed mental state while trying to understand the situation, I felt a great deal of similar surprise — but jumping to the conclusion that someone was just totally fabricating deserves to be weighed against other explanations for this deviation from your model.
Your model states that pretty much under all circumstances an ambulance is going to pick up a patient. This is true to my knowledge as well, but what happens if the friend didn't report to you that once the ambulance arrived, he called it off and refused to be transported? Or perhaps, at the same time his chest pains were being judged as not so severe, the ambulance got another call that a massive car pileup required their immediate presence.
Your strength as a rationalist must not be the rejection of things unlikely in your model but instead the act of providing appropriate levels of concern. Perhaps the best response is something along the lines of "Sounds like a pretty strange occurrence. Are you sure your friend told you everything?" Now we're starting to judge our level of confidence in the new information being valid.
Which is honestly a pretty difficult model to shake as well. So much of every bit of information you build your world with comes from other people that I think it pretty reasonable to trust with some amount of abandon.
I like to think Einstein's confidence came instead from his belief that Relativity suitably explained the KL divergence between the experiments of 1905 and the physics theory of 1905. He was not necessarily in full possession of whatever evidence was required to narrow the hypothesis space down to Relativity (which is a bit of a misformulation, I feel, since this space still contains a number of other theories both equally and more powerful than Physics+Relativity). Instead, he possessed enough that, in his own mental Metropolis jumping, he stumbled across Relativity (possibly the next closest convenient point climbing from the prior of Physics to the posterior including the new evidence of the time) and sat there.
His comment just reflected a belief that new experiments were unlikely yet to include information beyond what he had already used. In some sense, their resolution was not yet strong enough to pinpoint something more precise than Relativity.
Not to knock Einstein, of course. Just because you have new evidence drawing you to a different posterior hypothesis doesn't mean that the update is going to be easy. That's perhaps where the philosophy of Bayes runs into the computational limitations of today.