Comments
Disasters and miracles follow similar rules. Charles Babbage, in his Ninth Bridgewater Treatise of 1837, considered the nature of miracles (which, as a computer scientist, he viewed as pre-determined but rarely-called subroutines) and urged us "to look upon miracles not as deviations from the laws assigned by the Almighty for the government of matter and of mind; but as the exact fulfilment of much more extensive laws than those we suppose to exist." It's that question of characteristic scale.
George Dyson, comment on Taleb's "The Fourth Quadrant".
But the thought is one thing, the deed is another, and another yet is the image of the deed. The wheel of causality does not roll between them.
Nietzsche, Thus Spake Zarathustra.
Social justice is about culture, not just legal rights.
I feel like general stupidity does exist, in the same way that general intelligence does? Not sure what you like about this quote. The idea that biases are diverse, maybe?
I think there's a joke to the effect that if you're bad in life, then when you die God will send you to New Jersey. I don't know anything about translations of earlier versions of the Bible, but I kind of hope it's possible for us to interpret the Gehenna comparison as parallel to that.
"Oh, but I only detest the mouth of the lion, where its fangs are kept; I do not detest the ear of the lion, nor its tail."
But the ear is how he found your brother, and when he leapt on your sister, the tail kept him straight.
Tycho of Penny Arcade, on the importance of systems thinking.
I couldn't help but be distracted, sorry.
A consequence of this observation is that we should expect Marxists, who believe the free market doesn't work, to lie much more often than capitalists, who think it does. Empirically, however, Democrats seem to lie much less than Republicans (see, e.g., a recent NY Times report on PolitiFact checking of the Presidential candidates), even though Republicans have much more faith in the free market.
This is an extremely terrible proxy for the question you're interested in.
No, there are a lot more constraints, like material resources, or time, or even luck.
The truth comes as conqueror only because we have lost the art of receiving it as guest.
I am somewhat uncertain about whether people who are kidnapped and tortured are in control of their happiness. I know there are at least a few people who've been in those situations or similar ones, like the Holocaust, who report that they retained some control over their own thoughts and perspective and this was a source of comfort and strength to them. I think it is possible that people who are tortured are in control of their own happiness, but they generally tend to make the choice to break.
One example that comes up in discussions on this is medical depression, which I have. From introspection, it feels like it is both true that I have control over my happiness and that it is not true that I have control over my happiness. I can recall occasions on which I have consciously chosen to lie in bed and be unhappy, and I can also recall occasions on which I have consciously chosen to uproot myself from misery. However, there are also occasions where I've attempted to do this but failed. I think the answer to our dilemma lies in compatibilism: we are in control in the sense that what happens inside our heads matters, but not in the sense that we can transcend our physical limitations and become omnipotent.
Also, it was listed as an instrumental rationality quote.
All of that said, I downvoted the original comment. While I think it is a defensible point of view, I want rationality quotes that are insightful and compelling, not ones that regurgitate conventional wisdom which some people will automatically believe while others will not.
The actual developments of society during this period were determined, not by a battle of conflicting ideals, but by the contrast between an existing state of affairs and that one ideal of a possible future society which the socialists alone held up before the public. Very few of the other programs which offered themselves provided genuine alternatives. Most of them were mere compromises or half-way houses between the more extreme types of socialism and the existing order. All that was needed to make almost any socialist proposal appear reasonable to these "judicious" minds who were constitutionally convinced that the truth must always lie in the middle between the extremes, was for someone to advocate a sufficiently more extreme proposal. There seemed to exist only one direction in which we could move, and the only question seemed to be how fast and how far the movement should proceed.
F. A. Hayek, The Intellectuals and Socialism.
The warning against the golden mean fallacy is useful but standard; what I like best about this quote is that it brought to my attention the importance of constructive imagination in political reform. I think this implies we'll get more and better thinking at the margins of policy if there are many different views about what policy's grand goals ought to be.
Yup.
Either way, fullspeed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.
Averages don't work that way because you did the math wrong: you should have stopped! I understand the point that you're trying to make with this post, but there are many cases in which uncertainty really does mean you should stop and think, or hedge your bets, rather than go full speed ahead. It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.
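To make that concrete, here's a toy expected-value sketch (all payoffs are hypothetical, chosen purely for illustration): averaging two actions does not average their outcomes, and when checking is cheap, stopping to gather information can beat either full-speed option.

```python
# Toy decision problem: you're unsure whether "forward" or "backward" is the
# right direction. Payoffs are hypothetical: (value if forward is right,
# value if backward is right). Going fast the wrong way is assumed to be only
# mildly costly because you discover the mistake quickly; going slow delays
# that discovery; checking first costs a little time either way.
p_forward_is_right = 0.5

payoffs = {
    "full speed forward":  (10, -2),
    "full speed backward": (-2, 10),
    "half speed forward":  (4, -4),
    "stop and check":      (7, 7),
}

def expected_value(payoff, p):
    if_forward_right, if_backward_right = payoff
    return p * if_forward_right + (1 - p) * if_backward_right

for action, payoff in payoffs.items():
    print(f"{action:20s} EV = {expected_value(payoff, p_forward_is_right):+.1f}")
```

With these made-up numbers, either full-speed option (EV 4) beats the half-speed compromise (EV 0), which is the original post's point, while a cheap check (EV 7) beats them all, which is my point; the ranking flips as the cost of checking grows, and that cost is the real judgment call.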
It seems to me that we should be very liberal in this regard: biases which remain in the AI's model of SO+UO are likely to be minor biases (as major biases will have been stated by humans as things to avoid). These are biases so small that we're probably not aware of them. Compared with the possibility of losing something human-crucial that we didn't think to state explicitly, I'd say the case is strong for erring on the side of increased complexity/more biases and preferences allowed. Essentially, we're unlikely to have missed some bias we'd really care about eliminating, but very likely to have missed some preference we'd really miss if it were gone.
You frame the issue as though the cost of being liberal is that we'll have more biases preventing us from achieving our preferences, but I think this understates the difficulty. Precisely because it's difficult to distinguish biases from preferences, accidentally preserving unnecessary biases is equivalent to being liberal and unnecessarily adding entirely new values to human beings. We're not merely faced with biases that would function as instrumental difficulties to achieving our goals, but with direct end-point changes to those goals.
I like rationality quotes, so whatever happens I hope that stays alive in some form. Maybe it could move to /r/slatestarcodex.
Same. I like my arguments modular. I say this despite liking EA a lot.
The key to avoiding rivalries is to introduce a new pole, which mediates your relationship to the antagonist. For me this pole is often Scripture. I renounce my claim to be thoroughly aligned with the pole of Scripture and refocus my attention on it, using it to mediate my relationship with the antagonistic party. Alternatively, I focus on a non-aggressive third party. You may notice that this same pattern is observed in the UK parliamentary system of the House of Commons, for instance. MPs don’t directly address each other: all of their interactions are mediated by and addressed to a non-aggressive, non-partisan third party – the Speaker. This serves to dampen antagonisms and decrease the tendency to fall into rivalry. In a conversation where such a ‘Speaker’ figure is lacking, you need mentally to establish and situate yourself relative to one. For me, the peaceful lurker or eavesdropper, Christ, or the Scripture can all serve in such a role. As I engage directly with this peaceful party and my relationship with the aggressive party becomes mediated by this party, I find it so much easier to retain my calm.
There's a sequence about how the scientific method is less powerful than Bayesian reasoning that you should probably read.
Maybe hubris means not knowing the capabilities of one's tools.
Edit: I've just realized that in that sense, underestimating the capabilities of one's tools and refusing to try would also be a sin. If you believe that Fate itself is opposed to any attempt by men to fly, that's more arrogant a belief than thinking Fate is indifferent. I like this implication.
After a couple months more thought, I still feel as though there should be some more general sense in which simplicity is better. Maybe because it's easier to find simple explanations that approximately match complex truths than to find complex explanations that approximately match simple truths, so even when you're dealing with a domain filled with complex phenomena it's better to use simplicity. On the other hand, perhaps the notion that approximations matter or can be meaningfully compared across domains of different complexity is begging the question somehow.
The simple view is that medicine exists to fight death and disease, and that is, of course, its most basic task. Death is the enemy. But the enemy has superior forces. Eventually, it wins. And in a war that you cannot win, you don't want a general who fights to the point of total annihilation. You don't want Custer. You want Robert E. Lee, someone who knows how to fight for territory that can be won and how to surrender it when it can't, someone who understands that the damage is greatest if all you do is battle to the bitter end.
Most often, these days, medicine seems to supply neither Custers nor Lees. We are increasingly the generals who march the soldiers onward, saying all the while, "You let me know when you want to stop." All-out treatment, we tell the incurably ill, is a train you can get off at any time--just say when. But for most patients and their families we are asking too much. They remain riven by doubt and fear and desperation; some are deluded by a fantasy of what medical science can achieve. Our responsibility, in medicine, is to deal with human beings as they are. People only die once. They have no experience to draw on. They need doctors and nurses who are willing to have the hard discussions and say what they have seen, who will help people prepare for what is to come--and escape a warehoused oblivion that few really want.
Atul Gawande, Being Mortal: Medicine and What Matters in the End.
“I’ve never been certain whether the moral of the Icarus story should only be, as is generally accepted, ‘don’t try to fly too high,’ or whether it might also be thought of as ‘forget the wax and feathers, and do a better job on the wings.’”
At root, our work suggests that creativity in science appears to be a nearly universal phenomenon of two extremes. At one extreme is conventionality and at the other is novelty. Curiously, notable advances in science appear most closely linked not with efforts along one boundary or the other but with efforts that reach toward both frontiers.
Mukherjee et al., Atypical Combinations and Scientific Impact.
I'm really, really liking the "everything correlates with everything" observation.
Would you object to behavioral nudges a la Thaler?
I think this depends almost entirely on how often you expect the busybodies to be wrong when they override people's judgement.
I don't think classifying adult humans in the same category as infants, imbeciles, and domestic animals is always an unreasonable decision. I refer to myself with this sentiment as well.
The world of to-day attaches a large importance to mental independence, or thinking for oneself; yet the manner in which these things are cultivated is very partial. In some matters we are, perhaps too independent (for we need to think socially as well as to act socially); but in other matters we are not independent enough; we are hardly independent at all. For we always interpret mental independence as being independence of old things. But if the mind is to stand in a real loneliness and liberty, and judge mere time and mere circumstances, and all the wasting things of this world, if the mind is really a strong and emancipated judge of things unbribed and unbrowbeaten, it must assert its superiority, not merely to old things, but to new things.
It must foresee the old age of things still in a strenuous infancy. It must stand by the tombstone of the babe unborn. It must treat the twentieth century as it treats the twelfth, as something which by its own nature has already had an end. A free man must not only be free from the past; a free man must be free from the future. He must be ready to face the rising and increasing thing, and to judge it by immortal tests. It is a very poor mark of courage, in comparison, that we are ready to strike at ancient wrongs. Our courage shall be tested by whether we are ready to strike at youthful and full-blooded wrongs; wrongs that have all their life before them, wrongs that are as sanguine as the sunrise, and as fresh as the flowers.
G.K. Chesterton, http://platitudesundone.blogspot.com/2015/09/the-world-of-to-day-attaches-large.html
Is there any way to do these things without paying such a large price? Could you just lurk around campus or something? Only half-joking here.
be sure to first consider the most useful version of grad that you could reliably make for yourself... and then decide whether or not to do it.
Planning fallacy is going to eat you alive if you use this technique.
I appreciate this comment for many reasons, but mostly because it throws into prominence the role of different values underlying comparisons like the top post's.
I wish I had the kind of serene acceptance of other people that you seem to have, but I do not. I am inclined to blame people for not making time to research economic, social, and political policy options, since these things are so important. You're right that it takes time to learn the details of which policies are good and which are not, but there are many other factors besides knowledge that are relevant to sustained disagreement. For example, admitting you were wrong once you realize it isn't a matter of time investment; it's essentially a matter of integrity. Most people lack the humility to do this, however. A mindset that values pretending to be right over actually figuring out how to help others is repulsive to me, yet it is a mindset I feel most people possess. I strongly wish I believed otherwise; it's very unpleasant to half-despise so many people, but it's what my view of the facts suggests.
After giving myself some time to think about this, I think you are right and my argument was flawed. On the other hand, I still think there's a sense in which simplicity in explanations is superior to complexity, even though I can't produce any good arguments for that idea.
I am suggesting that we move too quickly to the view that rationalism is always an assault on the romantic soul, that it is a symptom of anxiety about our own madly passionate natures, or that it is a flight from love. Instead, rationalism may have its adaptive side, one that seeks to reinforce the ego structures needed to experience the passionate intensity of human emotions. It is possible to see rationalism not as an escape from romanticism, not as a defensive maneuver to protect the self from the excesses of desire, but instead as an effort to master, to fully experience, our passionate natures.
-- Anne C. Dailey, in her paper Liberalism's Ambivalence.
I believe in articulate discussion (in monologue or dialogue) of how one solves problems, of why one goofed that one, of what gaps or deformations exist in one's knowledge and of what could be done about it. I shall defend this belief against two quite distinct objections. One objection says: "it's impossible to verbalize; problems are solved by intuitive acts of insight and these cannot be articulated." The other objection says: "it's bad to verbalize; remember the centipede who was paralyzed when the toad asked which leg came after which."
J.S. Bruner tells us (in his book Towards a Theory of Instruction) that he finds words and diagrams "impotent" in getting a child to ride a bicycle. But while his evidence shows (at best) that some words and diagrams are impotent, he suggests the conclusion that all words and diagrams are impotent. The interesting conjecture is this: the impotence of words and diagrams used by Bruner is explicable by Bruner's cultural origins; the vocabulary and conceptual framework of classical psychology is simply inadequate for the description of such dynamic processes as riding a bicycle. To push the rhetoric further, I suspect that if Bruner tried to write a program to make an IBM 360 drive a radio controlled motorcycle, he would have to conclude (for the sake of consistency) that the order code of the 360 was impotent for this task. Now, in our laboratory we have studied how people balance bicycles and more complicated devices such as unicycles and circus balls. There is nothing complex or mysterious or undescribable about these processes. We can describe them in a non-impotent way provided that a suitable descriptive system has been set up in advance. Key components of the descriptive system rest on concepts like: the idea of a "first order" or "linear" theory in which control variables can be assumed to act independently; or the idea of feedback.
A fundamental problem for the theory of mathematical education is to identify and name the concepts needed to enable the beginner to discuss mathematical thinking in a clear articulate way.
-- Seymour Papert, distinguished mathematician, educator, computer scientist, and AI researcher, in his 1971 essay "Teaching Children to be Mathematicians vs. Teaching about Mathematics".
That is a limitation of looking at this community specifically, but the general sense of the question can also be approached by looking at communities for specific activities that have strong norms of rationality.
I think most of the time rationality is not helpful for applied goals because doing something well usually requires domain specific knowledge that's acquired through experience, and yet experience alone is almost always sufficient for success. In cases where the advice of rationality and experience conflict, oftentimes experience wins even if it should not, because the surrounding social context is built by and for the irrational majority. If you make the same mistake everyone else makes you are in little danger, but if you make a unique mistake you are in trouble.
Rationality is most useful when you're trying to find truths that no one else has found before. Unfortunately, this is extremely difficult to do even with ideal reasoning processes. Rationality does offer some marginal advantage in truth seeking, but because useful novel truths are so rare, most often the costs outweigh the benefits. Once a good idea is discovered, oftentimes irrational people are simply able to copy whoever invented the idea, without having to bear all the risk involved with the process of the idea's creation. And then, when you consider that perfect rationality is beyond mortal reach, the situation begins to look even worse. You need a strategy that lets you make better use of truth than other people can, in addition to the ability to find truth more easily, if you want to have a decent chance to translate skill in rationality into life victories.
I have no idea which one you are talking about.
Literally all photos of Zizek look like this.
People I expect to be acceptably rigorous:
Sam Harris (atheistic morality & philosophy): .58, 7 books in 12 years.
lol
Of course, this should probably be true for both people in the conversation.
I was editing my comment at the time you replied, you presumably will want to replace this comment with a different one.
Let me start over.
Randomness is maximally complex, in the sense that a truly random output cannot easily be predicted or efficiently described. Simplicity is minimally complex, in that a simple process is easy to describe and its output easy to predict. Sometimes, part of the complexity of a complex explanation will be the result of "exploited" randomness. Randomness cannot be exploited for long, however; after all, it isn't randomness if it is predictable. Thus a neural net might overfit its data only to fail at out-of-sample predictions, or a human brain might see faces in the clouds. If we want to avoid this, we should favor simple explanations over complex explanations, all else being equal. Simplicity's advantage is that it minimizes our vulnerability to random noise.
The reason complexity is more vulnerable to random noise is that a complex explanation has more moving parts, and is consequently more flexible and more sensitive to random changes in its input, while a simple explanation leans on a few large, robust concepts. Seen this way, the fact that complex explanations are easier than simple ones to use when rationalizing failed theories is not a mere accident of human psychology; it emerges naturally from the general superiority of simple explanations.
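Here's a minimal sketch of that claim on simulated data (nothing here comes from the discussion above; the setup is entirely made up): a needlessly flexible polynomial bends toward the noise in a small training sample and typically ends up with worse out-of-sample error than a simple linear fit, even though both are modeling the same simple underlying law.

```python
import numpy as np

# Simple truth (y = 2x) plus noise; a complex model "exploits" the noise.
rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = 2 * x + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = make_data(20)     # small training sample
x_test, y_test = make_data(1000)     # held-out data the models never see

for degree in (1, 9):                # simple fit vs needlessly complex fit
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-9 fit scores better on its own training points and (usually) worse on the held-out ones; the extra degrees of freedom were spent describing noise, which by definition doesn't carry over.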
His real attitudes weren't exactly modern, but some of the things he said are intended to be interpreted symbolically, interacting with the abstract idea of Woman rather than with all women as a group of human beings. In that sense, he might be interpreted as criticizing their culturally specific gender role more than their sex-imposed characteristics. He probably wasn't all that interested in distinguishing between those, because he viewed people who are controlled by their culture as contemptible anyway. I think that lack of interest in understanding or sympathizing with (apparent) weakness is a common flaw of his work. Fundamental attribution error, basically. Similarly, he only rarely praises those who try to cultivate strength in others, which is unfortunate if he really despises weakness so much. I think he might have cut himself off from empathy due to feeling as though it overwhelmed him; some of his writings on Schopenhauer hint at this.
In my opinion, someone who views women's behavior within 19th-century gender roles as admirable is in a way more misogynistic than someone who views it as ugly and broken. Had he shown sympathy or understanding in addition to his contempt, though, or been more willing to distinguish between a person's internal states and their external behavior, the balance of his attitudes would have been far better calibrated.
It's also worth keeping in mind that using caveats and qualifiers wasn't Nietzsche's rhetorical style and arguably would have ruined his impact. He sometimes deliberately exaggerates and is inflammatory; he is writing to people's hearts as much as their minds, since one of his main beliefs is that people have broken value systems. Overall, I think he's misogynist, but I don't think he's as extreme a misogynist as he is sometimes perceived. A product of his times, who only partially transcended them. If he saw the way women tend to behave today in Western countries, I like to think he'd be much happier with them.
Also https://en.wikipedia.org/wiki/Friedrich_Nietzsche%27s_views_on_women has some things to say.
I don't think we're talking in different frameworks really, I think my choice of words was just dumb/misinformed/sloppy/incorrect. If I had originally stated "randomness and simplicity are opposites" and then pointed out that randomness is a type of noise, (I think it is perhaps even the average of all possible noisy biases, because all biases should cancel?) would that have been a reasonable argument, judged in your paradigm?
I agree with the sentiment that there are cases where people are lazy about problem solving, asserting essentially that the solution is that the problem ought to spontaneously solve itself. So this quote is a useful approximation. The following is just a nitpick.
Empirically, are there not cases of broad-based, semi-spontaneous, decentralized collective action that have solved problems? I think they're rare, but real, especially as you get closer to the micro level. This matters even at the macro level, because good macro depends on micro: thinking institutionally would not work unless decentralized individuals acted in certain useful and/or predictable ways, ways that make institutional action a possibility in the first place, like being willing to cooperate sometimes. And formal institutions are really just a special case of something more general; things which are not institutions can nonetheless exploit many of the same mechanisms that institutions do. A sports team can behave somewhat institutionally, and so can a church, or a community, or even a nation. Even without enforcement mechanisms this is somewhat true - for example, miraculously enough, a non-negligible percentage of the population is willing to vote in elections, even without good individual incentives for their marginal vote.
You can't get to the outside. No matter what perspective you are indirectly looking from, you are still ultimately looking from your own perspective. (True objectivity is an illusion - it amounts to you imagining you have stepped outside of yourself.) This means that, for any given phenomenon you observe, you are going to have to encode that phenomenon into your own internal modeling language first to understand it, and you will therefore perceive some lower bound on complexity for the expression of that phenomenon. But that complexity, while it seems intrinsic to the phenomenon, is in fact intrinsic to your relationship to the phenomenon, and your ability to encode it into your own internal modeling language. It's a magic trick played on us by our own cognitive limitations.
I think my objection stands regardless of whether there is one subjective reality or one objective reality. The important aspect of my objection is the "oneness", not the objectivity, I believe. Earlier, you said:
depending on which primitives and rules we have selected... Occam's razor will suggest different models... Minimizing complexity in each modeling language lends a different bias toward certain models and against other models, but those biases can be varied or even reversed by changing the language that was selected. There is consequently nothing mathematically special about simplicity that lends an increased probability of correctness to simpler models.
But since we are already, inevitably, embedded within a certain subjective modelling language, we are already committed to the strengths and weaknesses of that language. The further away from our primitives we get, the worse a compromise we end up making, since some of the ways in which we diverge from our primitives will be "wrong", making sacrifices that do not pay off. At best we break even; the walk we take away from our primitives is therefore a biased random walk, and it will drift towards worse results.
There might also be a sense in which the worst we can do is break even, but I'm pretty sure that way madness lies. Defining yourself to be correct doesn't count for correctness, in my book of arbitrary values. A less subjective argument for this view: insofar as primitives are difficult to change, when you think you've changed a primitive it's somewhat likely that what you've actually done is increase your internal inconsistency (and thus, coincidentally, violated the axioms of NFL).
Whether you call the primitives "objective" or "subjective" is besides the point. What's important is that they're there at all.
Can you elaborate on why you think it's a boundary, not an opposite? I still feel like it's an opposite. My impression, from self-study, is that randomness in information theory means the best way to describe, e.g., a sequence of coin flips is to copy the sequence exactly; there is no algorithm or heuristic that allows you to describe the random information more efficiently, the way "all heads" or "heads, tails, heads, tails, etc." do. That sort of efficient description of information seems identical to simplicity to me. If randomness is defined as the absence of simplicity...
I guess maybe all of this is compatible with an upper bound understanding, though. What is there that distinguishes the upper bound understanding from my "opposites" understanding, that goes your way?
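For what it's worth, here's a crude illustration of that intuition, using zlib compression as a stand-in for description length (an assumption on my part; compressed size is only a loose upper bound on Kolmogorov complexity, which isn't computable at all): patterned sequences of "coin flips" collapse to almost nothing, while a random sequence can't be squeezed much below one bit per flip, i.e., its shortest available description is essentially the sequence itself.

```python
import random
import zlib

# Compare how well three 10,000-flip sequences compress. Compressed size is
# used here as a rough, practical proxy for "length of the best description".
random.seed(0)
n = 10_000

sequences = {
    "all heads":   "H" * n,
    "alternating": "HT" * (n // 2),
    "random":      "".join(random.choice("HT") for _ in range(n)),
}

for name, seq in sequences.items():
    compressed = len(zlib.compress(seq.encode(), level=9))
    print(f"{name:12s} {len(seq)} chars -> {compressed} bytes compressed")
```

The two patterned sequences shrink to a tiny fraction of their length (their generating rule is short), while the random one can't drop much below n/8 bytes, about one bit per flip: no clever encoding recovers the efficiency.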
This is more along the lines of what I was thinking. Most instances of complexity that seem like they're good are in practice going to be versions of overfitting to noise. Or, perhaps stated more concisely and powerfully: noise and simplicity are opposites (in the information-entropy sense), so if we dislike noise we should like simplicity. Does this seem like a reasonable perspective?
You're speaking as though complexity is measuring the relationship between a language and the phenomena, or the map and a territory. But I'm pretty sure complexity is actually an objective and language-independent idea, represented in its pure form in Solomonoff Induction. Complexity is a property that's observed in the world via senses or data input mechanisms, not just something within the mind. The ease of expressing a certain statement might change depending on the language you're using, but the statement's absolute complexity remains the same no matter what. You don't have to measure everything within the terms of one particular language; you can go outside the particulars and generalize.
I think this is relevant: https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)
The approach of the final authors mentioned on the page seems especially interesting to me. I'm also interested to note that their result agrees with Jaynes'. Universalizability seems to be important to all the most productive approaches there.
Or: arguing that the complexity ordering is the one that produces the "true" probabilities is a reframing of the question of whether the simplicity formulation is truth-indicative.
If the approach that says simplicity is truth-indicative is self-consistent, that's at least something. I'm reminded of the LW sequence that talks about toxic vs healthy epistemic loops.
If I encounter a working hypothesis, there is no need to search for a simpler form of it.
This seems likely to encourage overfitted hypotheses. I guess the alternative would be wasting effort on searching for simplicity that doesn't exist, though. Now I am confused again, although in a healthier and more abstract way than originally. I'm looking for where the problem in anti-simplicity arguments lies rather than taking them seriously, which is easier to live with.
Honestly, I'm starting to feel as though the easiest approach to disproving the author's argument would be to deny his assertion that simple processes in Nature are relatively uncommon. Off the top of my head: argument one is replicators; argument two is that simpler processes are smaller, so more of them fit into the universe than complex ones would; argument three is that the universe seems to run on math (this might beg the question a bit, although I don't think so, since it's kind of amazing that anything more meta than perfect atomist replication can lead to valid inference - again the connection to universalizability surfaces); argument four is an attempt, inspired by Descartes, to undeniably avoid begging the question: if nothing else, we have access to at least one form of Nature unfiltered by our perceptions of simplicity - the perceptions themselves - which, via anthropic-type induction arguments, we should assume-more-than-not to be of more or less average representativeness. (Current epistemic status: playing with ideas very nonrigorously, wild and free.)