The effective altruism movement, and the 80,000 Hours project in particular, seem to be stellar implementations of this line of thinking.
Also seconding the doubts about refraining from saving puppies - at the very least, extending compassion to other clusters in mindspace not too far from our own seems necessary for consistency. It may not be the -most- cost-effective cause, but that's no reason to dismiss it as a mere personal interest.
Really liked this one. One thing that bugs me is the recurring theme of "you can't do anything short of the unreasonably high standard of Nature". This clashes with "Where Recursive Justification Hits Bottom", and with how most good science and practically all good engineering actually gets done. I trust that later posts address this in some way, and the point I'm touching on is somewhat covered in the rest of the collection, but it could stand to be pointed out more clearly here.
It's true that Nature doesn't care about your excuses. No matter your justification for cutting corners, either you did it right or you didn't. Win or lose. But it's not as if your reasoning for leaving some black boxes unopened doesn't matter. In practice, with limited time, limited information, and limited reasoning power, you have to choose your battles to get anything done. You may be taken by surprise by traps you ignored, and they will not care to hear your explanations about research optimization - which is why you have to make an honest and thorough risk assessment, to minimize the actual chance of that happening while still getting somewhere. As in, you know, doing your actual best, not some obligatory "best". It may very well still not suffice, but it is your actual best.
The other lessons seem spot on.
I know I'm way behind with this comment, but still: this point of view makes sense on one level - saving additional people is always(?) virtuous, and you don't hit a ceiling of utility. But, and this is a big one, this is a very simplistic model of virtue calculus, and the things it neglects turn out to have a huge and dangerous impact.
Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
First case in point: may a surgeon harvest organs from a healthy, innocent bystander to save the lives of five people in dire need of those organs? Assume the organs match and there is no other donor - an unfortunately plausible scenario. According to this view, we must say the surgeon not only may but should, since they are damned as a murderer either way, so better to stack up the lower number of bodies. I hope I don't need to explain how this goes south. This teaches us that there must be some distinction between taking a negative action and avoiding a (net) positive one.
Another case: suppose I'm in a position to save lives on a daily basis, e.g. as an ER doctor. If a life not saved is a life lost, then every hour that I rest, or, you know, have fun, is another dead body on my scoreboard. The same goes for anyone doing their best to save lives in any way other than the single optimal one, the one with the maximal expected number of lives saved. And that one optimal route, if we're never allowed to rest, leads to burnout very quickly and loses lives in the long run. So we must find (immediately!) the One Best Way, or be doomed to be perpetual mass murderers.
As Zach Weinersmith (and probably others) once said, "the deep lesson to learn from opportunity cost is that we're all living every second of our lives suboptimally". We're not very efficient accuracy engines, and most likely not physically able to carry out any particular plan to perfection (or even close), so almost all of the time we'll get things at least somewhat wrong. So we'll definitely be mass murderers by way of failing to save lives, but... Then... Aren't we better off dead? And then are lives lost really that bad...?
And you can't really patch this neatly. You can't say that it's only murder if you know how to save them, because then the ethical thing would be to be very stupid and unable to determine how to save anyone. This is also related to a problem I have with the Rationalist Scoreboard of log(p) that Laplace runs at the Great Spreadsheet Above.
And even if you try to fix this by allowing that we maintain ourselves so as to save more lives in the long run, 1) we don't know exactly how much maintenance that should be, and 2) our best attempt at it will end with everyone being miserable, just trying to maximize lives rather than actually living them, since pain/harm is typically much easier to produce, and more intense, than pleasure.
And, of course, all of this is before we consider human biases and social dynamics. If we condemn the millionaire who saves lives inefficiently, we're probably drawing attention away from the many others who don't even do that. Since on this avenue it's much easier to attract criticism than to earn praise (and praise, in the broad sense, is a strong motivation for people to try to be virtuous), many people would see this and give up altogether.
The list goes on, but my rant can only go so long, and I hope some of the holes in this approach are now more apparent.
Actually, Brennan's idea is common knowledge in physics - energy is defined as the generator of time translation, both in GR and in QFT, so there is nothing new here.
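To spell out the standard textbook version (a sketch from memory, not anything in the post): Noether's theorem ties time-translation symmetry to energy conservation, and in quantum mechanics the Hamiltonian is precisely what generates time evolution:

$$ \frac{\partial L}{\partial t} = 0 \;\Longrightarrow\; E \equiv \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L \ \text{is conserved}, \qquad |\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle. $$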
Great observation. One inaccuracy: velocity in special relativity isn't quite analogous to acceleration in GR, since we can locally measure acceleration, and therefore tell whether we're accelerating or the rest of the universe is. That is, unless you count spacetime itself as part of "the rest of the universe", in which case it's best to say so explicitly or avoid the issue more decisively. The actual equivalence is between accelerating and staying at rest (or at constant velocity) in a gravitational field.
Another interesting point: this kind of "character of law" reasoning in the absence of experimental possibilities is the MO of theoretical high-energy physics, and many scientists are trained in ways to make progress under exactly these conditions. Most aren't doing as well as Einstein did, but arguably things have gotten much harder to reason through at these levels of physics.
Cool story, great insights, but I gotta say: huge planning fallacy on Jeffreyssai's part. He gives rigid deadlines for breakthroughs without any actual experience with them or careful consideration of their internal mechanisms, and when the past examples are few and very diverse.
I do agree that speed is important, but maybe let's show some humility about things that humans are apparently hard-wired to be bad at.
If there were something else there instead of quantum mechanics, then the world would look strange and unusual.
If there were something else instead of quantum mechanics, it would still be what there is and would still add up to normality.
About a few of the listed violations by the collapse postulate: collapse wouldn't be the only phenomenon with a preferred reference frame of simultaneity - the CMB also has one. Maybe a little less fundamental, but nonetheless a seemingly general property of our universe. This next part I'm less sure about, but locality suggests that Nature also has a preferred basis for wavefunctions, namely the position basis (as opposed to, say, momentum). As for "acausal" - since nothing here states that the future affects the past, I assume it's a rehash of the special-relativity violation. Not that I'm a fan of collapse, but we shouldn't double-count the evidence.
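To gesture at why locality picks out position (a hand-wavy sketch of the usual decoherence argument as I understand it, not anything from the post): interactions couple fields at the same point in space, schematically

$$ H_{\text{int}} = \int d^3x \; \phi(x)\,\rho(x), $$

so the environment effectively "measures" systems in a position-like basis, and decoherence tends to select pointer states that are approximately localized in position.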
Also, to quote you: models that are surprised by facts do not gain points for it - and neither does Mr. Nohr when he fails to imagine the parallel world that actually is.
Just one quick note: this formulation of Bayes' theorem implicitly assumes that the A_j are not only mutually exclusive but also cover the entire theory space we consider - the probability of their union is 1.
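For concreteness, here is the formulation I take the post to be using (my notation), with the partition assumption made explicit:

$$ P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)}, \qquad A_j \cap A_k = \emptyset \;(j \neq k), \quad \sum_j P(A_j) = 1. $$

Expanding the denominator as $\sum_j P(B \mid A_j)\,P(A_j)$ is exactly the step that fails if the A_j don't exhaust the space.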
I know I'm really late with this, but what do you count as "studying science"? Making a career of it? Does being an engineer count (I guess it does)? Or is getting (an amount of knowledge equivalent to) a B.Sc. enough? Maybe even less than that - learning cool nuggets of science as a hobby? I think this should be better defined. If only a career counts, I'm afraid the main inhibitor is not interest but fear for career prospects. Most often, when I hear people's reasons not to pursue a career in science, it's that they don't think they'll make a good living out of it, or that it's hard and they don't think they'll make it. If it's the hobbyist population you're worried about, I think it's pretty decent, after factoring in access to prerequisite knowledge, free time, and upbringing - though there is a LOT of room for improvement on that front. Those who genuinely don't find science interesting seem to think that way mostly because of bad teacher experiences or the social stigma around "nerds", as far as I've seen.
Then "Gomboc righting itself when on a flat surface" will have an inherent 100% probability. This doesn't refute the example.
Three things bother me here, and they're all about which questions are being asked.
-
The "tree falling in a forest" questions isn't, as far as I've encountered it outside of this blog, about the definition of sound. Rather, it's about whether or not reality behaves the same when you do not observe it, an issue that you casually dismissed, without any proof, evidence, or even argument. There are ways to settle this dispute partially, though they are not entirely empirical due to the nature of the conundrum.
-
Ignoring the question of free will, ill-defined as it may be, is merely -pretending to be wise-. You're basically saying you now know not to ask these questions, without explaining why (at least not here). If there are any convincing arguments that settle a well-defined notion of free will, I welcome them.
-
Last but not least, I'm bothered by the choice of a question to settle all arguments - just write down the mental processes that lead to the argument? Why stop there? Why not map the specific clusters of neurons and synapses activating the argument and reinforced by it? And having written down this stack of processes, can you perform neurosurgery that will stop this pattern of thinking (but not unrelated ones)? In Science, there may be such a thing as "being done", but this isn't it. Not by a long shot.
There's a clarification to be made about the bottom line here. You were right to say that you shouldn't be expected to believe that the big, elaborate argument violates known laws of physics when no specific step has been shown to do so - but this doesn't mean no such step exists. It may be that the arguer (and everyone else, for that matter) doesn't understand a subtlety that lets the mechanism coexist with the laws of Nature. This happened with the EPR experiment, which was initially thought to violate causality until it was understood that there's a distinction between causality and locality. It's also the case in many engineering breakthroughs. The chance of this, while small, is by no means negligible, since it has happened numerous times in the past. All that said, the bottom line is that the burden of proof still lies with whoever claims a breakthrough or a violation of physics.
No. 6 - I point again to logic and formal math, where you can never define a term by extension, because sensory perceptions aren't reliable enough to give the needed certainty of Truths. There you have to start from some undefined elementary terms and work your way up. Other than that, though, this rule of thumb seems quite trustworthy.
No. 29 - that's just inaccurate. As you said, there are more and less typical examples of a cluster. Hinduism is a typical example, so we stop there. But if a case is a borderline member of a cluster, you will need to run it by the definition to know for sure. And sometimes this will be more reliable or feasible than checking the desired query directly. Whether atheism is a religion will then depend on the definition of religion, which in turn SHOULD depend on the purpose of the categorization.
No. 30 - maybe I have a use for "animals that look like fish". "Belonging together" is not such a trivial matter, and there is sometimes serious merit in reclustering. But it's still the list-maker's responsibility to show that the list has value.
I feel the need to address the Python vs. modern art thing too - if you compare the extensional list of art against the intensional definition, you'll see that modern art passes as art (at least sometimes) while Python definitely doesn't. Works of modern art involve some work, are intended to inspire aesthetic emotions, and often do so in some of the people experiencing them. Python, elegant tool though it is, was (probably) not designed with the primary intention of producing emotions, but with the intention of being a convenient tool for coding.
Also, there is a legitimate quest in finding the "right definition" of a word, as in: what concept does it represent? Even if no class in reality corresponds to it (e.g. God), the existence of the word means some people treat it as a meaningful concept. If enough people use the same word with enough gravitas, and you want to talk with them about it, you will need to understand what their common ground for the idea is - even if, as with free will, you arrive at the conclusion that there is no common ground to speak of. Not as interesting as carving reality, perhaps, but if you are at all interested in what other humans think, it has merit.
Well, I think that if you are to be true to the message here, you should go even if the students and professors themselves are not above the norm, since a culture of addressing the original purpose directly would have merit in its own right. Unless you believe this expenditure of time isn't worthwhile without the bundled social benefits of having a degree.
As for the PhD level, I think that by then the teaching part is nearly gone, and the service the institution provides is mostly a productive environment and the tools to conduct research.
On a different note, calling a ball a spheroid isn't really tabooing it, it's just a synonym.
While the general argument is valid, I'm not sure about these accusations that traditional rationality is made up of socially derived rules. There were many mathematicians and scientists before Bayes was born, and they derived their beliefs from logic and evidence, not social norms - take Galileo as an extreme and famous example. Is there any evidence behind these unflattering descriptions of traditional rationalists?
This "if" embodies the decrease of risk from being part of a crowd. In a protest of 5000, 20 may be pulled in, but the leader is much more likely to be one of them than any one person in the crowd.
I agree with the benefits of narrowness, but let's not forget there is a (big) drawback here: science and math are, at their core, built around generalizations. If you only ever study the single apple, or any number of apples individually, and never take the step of generalizing to all apples - or at least to all apples on a given farm - you have zero predictive power. The same goes for Rationality, by the way: what good is talking about biases and Bayesianism if I can only apply it to Frank from down the street?
I'm arrogantly confident that you agree with me on this to some extent, Eliezer, and just weren't careful with your phrasing. But I think this is more than semantic nitpicking - there is a real, hard trade-off at play between sticking to concrete, specific examples, about which we can have all the knowledge we want, and applying ideas to as many problems as possible, to gain more predictive power and understanding of the Laws of Reality. I think a more careful formulation is "do not generalize irresponsibly". Don't abandon the specific examples - they anchor you to reality and to details - but do try to find patterns and commonalities where they appear, and pinpoint them in precise, well-defined terms that exclude some subspaces of results.
Eliezer also mentions it here, saying that if you're willing to lie to someone, you should be willing to slash their tires or lobotomize them. But I want to point out the Fallacy of Gray: there are different degrees of lying, and different degrees of its implications. I might hide the truth from my teacher about my friend cheating on a test (trying to stop the friend is a different discussion, but I would), but I wouldn't go as far as outright violence to protect the secret.
That would seem to make sense, but in practice you don't see too many people who set out to be liars and had it not pan out. Unless we count criminals who received harsh punishment - but that's a whole other story, one part of it being that they often end up imprisoned again. Overall, the percentage of ex-convicts among honest folk doesn't seem to be that high.
I think honest people usually start out honest, since honesty is a culturally valued quality, and thereby never get much practice at lying. People who lie regularly usually get more skilled at (or constantly caught in) more benign lies, and don't raise the stakes to prison-grade material right off the bat.
I have reservations about the corollary that only winning against the strongest advocate of an idea carries ANY weight toward discrediting that idea.
For one, there could always be a better arguer. If there is a better advocate of the intelligence explosion than Eliezer - unlikely as that may seem - who just won't go public and keeps to private circles, would winning against Eliezer then mean nothing? Taken a step further, if it's likely that such a proponent will ever exist, does that invalidate all present and past efforts?
For another, the quality of an arguer can only be judged after the fact. So to have any standing on any idea, one would have to win against every single advocate of the opposing view. Has anyone here tried that on, say, theism?
I think it's more accurate to say that winning an argument against sub-optimal advocates of an idea doesn't give enough basis to reliably discredit it. Indeed, since on complicated issues there is often no single advocate who can present all the arguments favoring a position, one cannot completely discredit an idea even after defeating the champion of its advocates. This framing seems more Bayesian, too, as it doesn't deal in probabilities of 0 or 1.
I agree with this one. Without probabilities of 0 and 1, it's not merely that some proofs of theorems need to be revised, it's that probability theory simply doesn't work anymore, as its very axioms fall apart.
I can give a statement that is absolutely certain, e.g. "x is true given that x is true". It doesn't teach me much about real life experiences, but it is infinitely certain. Likewise with probability 0. Please note that the probability is assigned to the territory here, not the map.
The fact that I can't encounter these probabilities in real life has to do with my limits of sampling reality and interpreting it, being a flimsy brain, rather than the limits of probability theory.
You may not want to believe that probability theory contains 0 and 1, but like many other cases, Math doesn't care about your beliefs.
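To make the axiom point concrete (the standard Kolmogorov axioms, stated from memory - nothing here is from the post): probability theory demands

$$ P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \ \text{for mutually disjoint } A_i, $$

so 1 appears in the axioms themselves, and my example above is just the conditional-probability identity $P(A \mid A) = P(A \cap A)/P(A) = 1$ for any $A$ with $P(A) > 0$.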
Please correct me if I'm wrong, but even in Judaism the (widely accepted) lesson is to improve as an individual, even if the overall trend is decline. Put another way: the individual should try to diminish the generational degradation of virtue as much as possible. And the penance comes inevitably because we will inevitably sin SOME, being imperfect humans. Even so, a very real danger remains of taking this penance as a goal in its own right and forgetting that we primarily need to improve. All that said, I enthusiastically committed to "Tsuyoku Naritai", and to being as Science rather than as Torah :)
Never have I been so confused by anachronisms in methods of reasoning. The characters can't explain counting or equal quantities, but can explain the scientific method, fitness metrics, advanced demagoguery, etc. Trial-and-error procedures can lead you down quite a few wrong paths if you don't understand statistics and causal relations, and it would be interesting to see how that would shape the argument's development.