Rationality Quotes June 2014
post by Tyrrell_McAllister · 2014-06-01T20:32:02.500Z · LW · GW · Legacy · 283 comments
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
283 comments
Comments sorted by top scores.
comment by lmm · 2014-06-03T11:44:56.641Z · LW(p) · GW(p)
"I just don't have enough data to make a decision."
"Yes, you do. What you don't have is enough data for you not to have to make one"
http://old.onefte.com/2011/03/08/you-have-a-decision-to-make/
Replies from: Gurkenglas↑ comment by Gurkenglas · 2014-06-03T22:14:51.178Z · LW(p) · GW(p)
These two merely disagree on the meaning of the word "decision", not on the nature of the situation; one should pick a different scenario to make the point about how choosing not to choose doesn't quite work.
Replies from: AndHisHorse, RobinZ↑ comment by AndHisHorse · 2014-06-03T23:35:56.721Z · LW(p) · GW(p)
I think that they both agree that "decision" here means "choice to embark on a course of action other than the null action", where the null action may be simply waiting for more data. Where they disagree is the relative costs of the null action versus a member of a set of poorly known actions; it seems that the second speaker is trying to remind the first that the null action carries a cost, whether in opportunity or otherwise.
↑ comment by RobinZ · 2014-06-09T18:24:16.868Z · LW(p) · GW(p)
You post a link to "Disputing Definitions" as if there is no such thing as a wrong definition. In this case, the first speaker's definition of "decision" is wrong - it does not accurately distinguish between vanadium and palladium - and the second speaker is pointing this out.
comment by James_Miller · 2014-06-01T21:20:25.688Z · LW(p) · GW(p)
Replies from: Thomas"Do what you love" / "Follow your passion" is dangerous and destructive career advice. We tend to hear it from (a) Highly successful people who (b) Have become successful doing what they love. The problem is that we do NOT hear from people who have failed to become successful by doing what they love. Particularly pernicious problem in tournament-style fields with a few big winners & lots of losers: media, athletics, startups. Better career advice may be "Do what contributes" -- focus on the beneficial value created for other people vs just one's own ego. People who contribute the most are often the most satisfied with what they do -- and in fields with high renumeration, make the most $. Perhaps difficult advice since requires focus on others vs oneself -- perhaps bad fit with endemic narcissism in modern culture? Requires delayed gratification -- may toil for many years to get the payoff of contributing value to the world, vs short-term happiness.
↑ comment by Thomas · 2014-06-02T11:40:51.215Z · LW(p) · GW(p)
It looks like ANY advice from highly successful people might be dangerous, since they are only a small minority of those who tried the same things, most of them much less successfully.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-06-02T18:09:43.733Z · LW(p) · GW(p)
On the other hand, honest advice from highly successful people at least gives some indication of what you need to do to be successful, even if it doesn't give a good idea of the odds.
Replies from: Torello, advael↑ comment by Torello · 2014-06-02T23:42:20.223Z · LW(p) · GW(p)
In reply to both Nancy and Thomas:
"For Taleb, then, the question why someone was a success in the financial marketplace was vexing. Taleb could do the arithmetic in his head. Suppose that there were ten thousand investment managers out there, which is not an outlandish number, and that every year half of them, entirely by chance, made money and half of them, entirely by chance, lost money. And suppose that every year the losers were tossed out, and the game replayed with those who remained. At the end of five years, there would be three hundred and thirteen people who had made money in every one of those years, and after ten years there would be nine people who had made money every single year in a row, all out of pure luck. Niederhoffer, like Buffett and Soros, was a brilliant man. He had a Ph.D. in economics from the University of Chicago. He had pioneered the idea that through close mathematical analysis of patterns in the market an investor could identify profitable anomalies. But who was to say that he wasn’t one of those lucky nine? And who was to say that in the eleventh year Niederhoffer would be one of the unlucky ones, who suddenly lost it all, who suddenly, as they say on Wall Street, “blew up”?
-- Malcolm Gladwell
A magician named Derren Brown made a whole program about horse racing to illustrate the point of the above story. It's kinda interesting, but wastes more time than reading the story above.
https://www.youtube.com/watch?v=9R5OWh7luL4
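The arithmetic in the quoted passage is easy to verify; below is a minimal sketch in Python of the pure-luck filter Gladwell describes, using only the numbers from the quote:

```python
# Pure-luck survivorship filter from the Taleb/Gladwell story:
# 10,000 managers, and each year half of them make money purely by
# chance while the losers are dropped from the game.
managers = 10_000

for year in (5, 10):
    survivors = managers * 0.5 ** year
    print(f"After {year} years: ~{survivors:.1f} managers with a perfect record")

# After 5 years: ~312.5 managers with a perfect record
# After 10 years: ~9.8 managers with a perfect record
```

Rounding gives the quote's figures of roughly three hundred and thirteen and nine survivors, with no skill anywhere in the model.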
Replies from: private_messaging↑ comment by private_messaging · 2014-06-04T08:23:47.200Z · LW(p) · GW(p)
It's rather surprising, then, that people who succeed many times in a row tend to employ eigenvalues rather than "igon values". Why could that be?
Of course, the role of luck in the market is huge. But among the sequential winners, you will tend to find people who win a bit more than 50% of the time. There are many opportunities to do something very stupid (e.g. invest in hydrinos) and lose.
Replies from: NancyLebovitz, EHeller↑ comment by NancyLebovitz · 2014-06-05T13:59:49.264Z · LW(p) · GW(p)
I think you're making a joke, but I'm not sure what the joke is.
Replies from: David_Gerard, private_messaging↑ comment by David_Gerard · 2014-06-05T14:48:19.956Z · LW(p) · GW(p)
Presumably that people who know what they're talking about are less likely to make obvious errors in jargon. And that Gladwell was the person who made that particular error in jargon, in the very book that's being quoted, and that this is an example of Gladwell's glibness and lack of deeper knowledge of things he's talking about in this case and in general.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2014-06-05T14:56:00.819Z · LW(p) · GW(p)
It's sort of an odd example, though, since Gladwell himself consistently succeeds.
ETA: Given the downvote, maybe I should clarify that I don't mean that Gladwell succeeds at being the kind of writer that I would want to read. I mean that he consistently succeeds at writing best-selling books. The "igon-value" thing was cringe-inducing, but it plausibly hasn't done any significant harm to his sales. Apparently, you can be careless in that way and still succeed fantastically, again and again, with his target audience.
So, in that sense, he's not a good example for the claim that "people who succeed many times in a row tend to employ eigenvalues rather than 'igon values'." (Though, of course, his existence doesn't disprove the claim, either.)
Replies from: David_Gerard↑ comment by David_Gerard · 2014-06-06T13:36:06.529Z · LW(p) · GW(p)
Nerds fear getting Malcolm Gladwell book for Christmas (The Daily Mash)
↑ comment by private_messaging · 2014-06-05T15:29:56.031Z · LW(p) · GW(p)
The point is that the consecutively successful investment managers tend to have more clue than the unsuccessful ones. Of course, luck plays a huge role, but over ten years, if we assume that the super-skilled have a success rate of 0.6 and the low-skilled a success rate of 0.4, there's a 57-fold difference in 'survival'.
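The 57-fold figure can be checked directly; a quick sketch under the stated assumptions:

```python
# If the super-skilled have a 0.6 chance of a winning year and the
# low-skilled a 0.4 chance, the odds of a ten-year perfect streak
# differ by a factor of (0.6 / 0.4) ** 10.
p_skilled, p_unskilled, years = 0.6, 0.4, 10

ratio = (p_skilled / p_unskilled) ** years
print(f"Relative 'survival' odds over {years} years: {ratio:.1f}x")
# Relative 'survival' odds over 10 years: 57.7x
```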
Replies from: Lumifer↑ comment by EHeller · 2014-06-05T16:44:19.361Z · LW(p) · GW(p)
Of course, the role of luck in the market is huge. But among the sequential winners, you will tend to find people who win a bit more than 50% of the time. There are many opportunities to do something very stupid (e.g. invest in hydrinos) and lose.
Of course, the record is full of people who make money for years, only to explode later on (look at John Meriwether's career up until Long Term Capital Management blew up so spectacularly). Were these people always just lucky? We can never know. I take Taleb's point (filtered through Gladwell) to be that the base of people actively trading is so large that we should expect people who are successful for years (possibly even whole careers) even if all that is operating is luck, so past performance is no guarantee of future gains. I don't entirely agree, but it's decent advice to keep in mind. Especially bad for the home investor is constantly chasing the funds that most recently made money (for some definition of recent) - generally the fees went up on the backs of the improved performance, so as performance falls back to the market value the investors will make less (lost to the higher fees).
Also an irony worth pointing out - Randy Mills of Blacklight Power/Hydrino fame has tried to create credibility for his company by pointing to the savvy investors who did get on board.
Replies from: private_messaging↑ comment by private_messaging · 2014-06-06T09:25:46.374Z · LW(p) · GW(p)
Of course, the record is full of people who make money for years, only to explode later on (look at John Meriwether's career up until Long Term Capital Management blew up so spectacularly). Were these people always just lucky? We can never know.
If the best win at a rate of 60% and the worst at a rate of 40%, plenty of the best will explode later on.
Also an irony worth pointing out - Randy Mills of Blacklight Power/Hydrino fame has tried to create credibility for his company by pointing to the savvy investors who did get on board.
The point is that knowing some physics protects you from the likes of Randy Mills. So you do a bit better than mere luck. Also, if you're a so-called savvy investor and you invest in some crap like hydrinos, you may even get gains (if you sell after others jump onto the bandwagon just because you did).
↑ comment by advael · 2014-06-27T20:33:17.097Z · LW(p) · GW(p)
Not necessarily. Honest advice from successful people gives some indication of what those successful people honestly believe to be the keys to their success. The assumption that people who are good at succeeding in a given sphere are also good at accurately identifying the factors that lead to their success may have some merit, but I'd argue it's far from a given.
It's not just a problem of not knowing how many other people failed with the same algorithm; they may also have various biases which prevent them from identifying and characterizing their own algorithm accurately, even if they have succeeded at implementing it.
comment by johnlawrenceaspden · 2014-06-09T23:46:13.975Z · LW(p) · GW(p)
“The root of all superstition is that men observe when a thing hits, but not when it misses"
-- Francis Bacon
https://www.goodreads.com/quotes/5741-the-root-of-all-superstition-is-that-men-observe-when
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-06-10T08:33:27.770Z · LW(p) · GW(p)
The quote is true to Bacon's thought, and its expression much improved in the repetition. Here is the nearest to it I can find in Bacon's works on Gutenberg:
For this purpose, let us consider the false appearances that are imposed upon us by the general nature of the mind, beholding them in an example or two; as first, in that instance which is the root of all superstition, namely, that to the nature of the mind of all men it is consonant for the affirmative or active to affect more than the negative or privative. So that a few times hitting or presence countervails ofttimes failing or absence, as was well answered by Diagoras to him that showed him in Neptune's temple the great number of pictures of such as had escaped shipwreck, and had paid their vows to Neptune, saying, "Advise now, you that think it folly to invocate Neptune in tempest." "Yea, but," saith Diagoras, "where are they painted that are drowned?"
Francis Bacon, "The Advancement of Learning"
comment by Risto_Saarelma · 2014-06-06T16:27:40.771Z · LW(p) · GW(p)
Replies from: DanielLC
But in some form or another, a lot of people believe that there are only easy truths and impossible truths left. They tend not to believe in hard truths that can be solved with technology.
Pretty much all fundamentalists think this way. Take religious fundamentalism, for example. There are lots of easy truths that even kids know. And then there are the mysteries of God, which can’t be explained. In between—the zone of hard truths—is heresy. Environmental fundamentalism works the same way. The easy truth is that we must protect the environment. Beyond that, Mother Nature knows best, and she cannot be questioned. There’s even a market version of this, too. The value of things is set by the market. Even a child can look up stock prices. Prices are easy truths. But those truths must be accepted, not questioned. The market knows far more than you could ever know. Even Einstein couldn’t outguess God, Nature, or Market.
↑ comment by DanielLC · 2014-06-06T19:28:54.129Z · LW(p) · GW(p)
As somewhat of a libertarian, I tend to fall into that last group. I have to keep reminding myself that if nobody could outguess the market, then there'd be no money in trying to outguess the market, so only fools would enter it, and it would be easy to outguess.
Replies from: Lumifer, Salemicus, Richard_Kennaway, elharo↑ comment by Lumifer · 2014-06-06T19:40:53.668Z · LW(p) · GW(p)
I have to keep reminding myself that if nobody could outguess the market, then there'd be no money in trying to outguess the market, so only fools would enter it, and it would be easy to outguess.
There is the old joke about a student and a professor of economics walking on campus. The student notices a $20 bill lying on the sidewalk and starts to pick it up when the professor stops him. "Don't bother," the professor says, "it's fake. If it were real someone already would have picked it up".
Replies from: Nornagest↑ comment by Salemicus · 2014-06-09T10:14:00.177Z · LW(p) · GW(p)
But it's an equilibrium, right? Lumifer's joke may be funny, but as an empirical matter, you don't see a lot of $20 bills lying on the ground. There's no easy pickings to be had in that manner. So the only people who can "outguess" the market (and I think that framing is seriously misleading, but let's put that aside for now) are individuals and organizations with hard-to-reproduce advantages in doing so - in the same way that Microsoft is profitable, but it doesn't follow that just anyone can make a profit through an arbitrage of buying developer time and selling software.
Replies from: Lumifer, Jiro, DanArmak, DanielLC↑ comment by Lumifer · 2014-06-09T15:21:37.986Z · LW(p) · GW(p)
But it's an equilibrium, right?
No, why would it be?
Equilibrium is a convenient mapping tool that lets you assume away a lot of difficult issues. Reality is not in equilibrium.
Replies from: DanielLC↑ comment by DanielLC · 2014-06-09T20:50:42.760Z · LW(p) · GW(p)
Because when it's easy to outguess the market, the people who are good at it get richer and invest more money in it until it gets hard again.
It's not in perfect equilibrium constantly. I've heard of someone working out some new method that made it easy, which took off over the course of a few years until enough people used it that outguessing the market was hard again.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-09T21:12:14.898Z · LW(p) · GW(p)
Because when it's easy to outguess the market, the people who are good at it get richer and invest more money in it until it gets hard again.
This is an extremely impoverished framework for thinking about financial markets.
Let's introduce uncertainty. Can Alice outguess the market? Um, I don't know. And you don't know. And Alice doesn't know. All people involved can have opinions and guesses, but no one knows.
Okay then, so let's move into the realm of random variables and probability distributions. Say, Alice has come up with strategy Z. What's the expected return of implementing strategy Z? Well, it's a probability distribution conditional on a great many things. We have to make estimates, likely not very precise estimates.
Alice, of course, can empirically test her strategy Z. But there is a catch -- testing strategies can be costly. It can be costly in terms of real money, opportunity costs, time, etc.
Moreover, the world is not stationary so even if strategy Z made money this year whether it will make money next year is still a random variable, the distribution parameters of which you can estimate only so well.
Replies from: DanielLC↑ comment by DanielLC · 2014-06-09T21:26:40.181Z · LW(p) · GW(p)
This is an extremely impoverished framework for thinking about financial markets.
It's good enough.
Knowing about things like risk will tell you about the costs and benefits with higher precision. It will explain somewhat why there's lots of people involved in a market, and not just a couple of people that control the entire thing and work out the prices using other methods.
All that uncertainty makes the market difficult to predict. But all you really need to know is that regardless of how easy or hard it is to guess how a business will do, the market will ensure that you're competing with other people who are really good at that sort of thing, and outguessing them is hard.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-10T00:49:21.132Z · LW(p) · GW(p)
It's good enough.
No, I don't think so.
But all you really need to know is that regardless of how easy or hard it is to guess how a business will do, the market will ensure that you're competing with other people who are really good
This can be applied to anything from looking for a job to dating.
So, no, that's not all you really need to know.
Replies from: DanielLC↑ comment by DanielLC · 2014-06-10T03:16:35.459Z · LW(p) · GW(p)
You wouldn't expect to be able to do job X better than a professional if you don't have any training, would you?
Also, economists say the same about the job market. If you don't have any particular advantage for any given job, you can't easily beat the market and make more money by picking a high-paying job. If a job made more money without some kind of cost attached, people would keep going into it until it stopped working.
I guess there is more to the market. It's something that scales well, so doing it on a small scale is especially bad. It takes exactly as much work to buy $100 in stocks as $10,000. If you're dealing with tiny companies where someone trying to make trades on that scale would mess around with the price of the stock, that won't apply, but in general trying to make money on small investments would be like playing poker against someone who normally plays high stakes. They're the ones good enough to make huge amounts of money. The market won't support many of them, so they must be good.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-10T03:45:00.615Z · LW(p) · GW(p)
I have a weird feeling that a bunch of people on LW have decided that there's nothing to be done in financial markets (except invest in index funds), fully committed to this belief, and actively resist any attempts to think about it... :-/
Replies from: James_Miller, EHeller↑ comment by James_Miller · 2014-06-10T05:14:49.107Z · LW(p) · GW(p)
Isn't this optimal? The case for index funds by ordinary investors is extremely strong, and if there exists good evidence to the contrary it will be of the form that is almost certainly beyond the ability of most LW people to properly evaluate.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-10T18:30:57.769Z · LW(p) · GW(p)
Isn't this optimal?
Is it? Which specific index funds are you talking about and how do you define optimality here?
good evidence to the contrary it will be of the form that is almost certainly beyond the ability of most LW people to properly evaluate.
So, it's completely fine for most LW people to evaluate the chances of a Singularity, details of AI design, or the MWI of quantum mechanics, but real-life financial markets, noooo, they are way too complicated? X-D
Replies from: James_Miller↑ comment by James_Miller · 2014-06-10T19:58:30.007Z · LW(p) · GW(p)
Low cost, broad based index funds.
So, it's completely fine for most LW people to evaluate the chances of a Singularity, details of AI design, or the MWI of quantum mechanics, but real-life financial markets, noooo, they are way too complicated? X-D
Good reply, but there are different types of complexity and looking at financial market data isn't a type of complexity LW tends to deal with.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-10T20:27:11.301Z · LW(p) · GW(p)
Low cost, broad based index funds.
That's still very VERY non-specific.
Let's take our friends Alice and Bob. They come to you and ask you where should they invest their pennies. You tell them "low cost broad based index funds". They blink at you and say "Could you please give us the names of the funds?"
And I still have no idea what do you mean by "optimal".
there are different types of complexity and looking at financial market data isn't a type of complexity LW tends to deal with.
That is true as a matter of empirical observation. But the real question is about capability: can LW types deal with the financial-markets type of complexity? Why or why not?
Replies from: James_Miller↑ comment by James_Miller · 2014-06-11T04:23:15.262Z · LW(p) · GW(p)
Most Americans invest in mutual funds via their firm's pension plan and have limited choices. I have index funds with Vanguard and Fidelity on the S&P 500.
can LW types deal with the financial-markets type of complexity? Why or why not?
Even for those who could, it wouldn't be worth the time cost for those of us who don't work in finance since you would likely conclude after lengthy study that yes, one should just buy index funds.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-11T04:48:01.664Z · LW(p) · GW(p)
I have index funds with Vanguard and Fidelity on the S&P 500.
So, in what sense is having a long-only portfolio of large-cap US equities optimal?
since you would likely conclude after lengthy study that yes, one should just buy index funds.
How do you know? Isn't that rather blatantly begging the question..?
Replies from: James_Miller↑ comment by James_Miller · 2014-06-11T05:38:31.263Z · LW(p) · GW(p)
How do you know? Isn't that rather blatantly begging the question..?
I have a PhD in economics from the University of Chicago.
So, in what sense is having a long-only portfolio of large-cap US equities optimal?
The S&P 500 is effectively international since big U.S. companies do lots of business in foreign countries. For diversification reasons you might also want to own bonds and invest some in smaller cap stocks.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-11T14:35:12.570Z · LW(p) · GW(p)
I have a PhD in economics from the University of Chicago.
First, appeal to authority is a classic fallacy.
Second, if you're doing the Ghostbusters bit, live up to your billing. Instead of vaguely regurgitating HuffPo-level platitudes, formulate a claim, provide the necessary tight definitions, outline the reasoning why your claim is true, provide links to empirical data supporting your position.
I suspect we have differences in two areas: the credibility of the EMH, and the approach to the problem of asset allocation.
Let's keep the EMH debate out of this thread -- it's a beast of its own -- but even under EMH the asset allocation issue is far from trivial. In fact, it's quite complicated. However this complexity is NOT a good reason to just give up and point to a suboptimal solution which does have the twin advantages of being (a) simple; and (b) not the worst; but is NOT "best for everyone" which is what it's sold as.
Replies from: None, James_Miller, TheAncientGeek↑ comment by [deleted] · 2014-06-11T14:59:24.482Z · LW(p) · GW(p)
First, appeal to authority is a classic fallacy.
Logical fallacies are still (possibly weak) evidence.
It's rude to ask someone how they came to believe something, and then dismiss their experience out of hand.
formulate a claim, provide the necessary tight definitions, outline the reasoning why your claim is true, provide links to empirical data supporting your position.
Step up your own game.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-11T15:02:21.858Z · LW(p) · GW(p)
It's rude to ask someone how they came to believe something, and then dismiss their experience out of hand.
I am not asking about personal experience -- "how did you find your path to Jesus" kind of thing. I am asking to provide supporting evidence and arguments for a claim about empirical reality. "I have a PhD" is neither supporting evidence nor an argument.
Step up your own game.
My claim is negative: there is NO investment optimal for everyone; optimality is hard to define and even harder to estimate; equity index funds are just an asset class, one among many; etc.
I see the advice "you should just invest in index funds" as similar to advice "you should just eat whole grains". Yes, it's progress if your baseline is coke and twinkies. Yes, it's not the worst thing you can do. No, it's not nearly an adequate answer to the question of what should you eat.
Replies from: None↑ comment by [deleted] · 2014-06-11T15:14:32.702Z · LW(p) · GW(p)
"I have a PhD" is neither supporting evidence nor an argument.
You're simply wrong. It is evidence.
My claim is negative: there is NO investment optimal for everyone; optimality is hard to define and even harder to estimate; equity index funds are just an asset class, one among many; etc.
This comes nowhere near the standard you've tried to impose on your interlocutor.
Replies from: Lumifer, Jiro↑ comment by Lumifer · 2014-06-11T15:27:51.439Z · LW(p) · GW(p)
You're simply wrong.
Obviously, I disagree.
This comes nowhere near the standard you've tried to impose on your interlocutor.
That's because I don't go around telling people that the problem of investment allocation is solved and all you need to do is invest in index funds.
Replies from: Cyan↑ comment by Jiro · 2014-06-11T18:17:32.698Z · LW(p) · GW(p)
You are either willfully or autistically not parsing English as an English speaker would normally intend it. "Is not evidence" normally means "is not good evidence". The speaker does not have to insert the word "good" for it to have that meaning.
Replies from: None↑ comment by [deleted] · 2014-06-11T18:28:34.752Z · LW(p) · GW(p)
I'm sorry, but are you projecting? I've outlined how much evidence I ascribe to this situation, and Lumifer has been clear that he ascribes much less. This isn't a debate over omitted modifiers.
Replies from: Jiro↑ comment by Jiro · 2014-06-11T21:07:42.568Z · LW(p) · GW(p)
Either you think that "I have a PhD" is evidence but not good evidence, in which case you are indeed complaining about the omitted modifier, or else you think that "I have a PhD" is good evidence, which is a claim I find astonishing.
Furthermore, you just got finished saying that logical fallacies are (possibly weak) evidence, as if being weak evidence would be relevant, and you linked to a post which says that evidence that is not good is still evidence. These support the interpretation that you were talking about PhDs being evidence at all, not about PhDs being good evidence.
↑ comment by James_Miller · 2014-06-11T14:53:21.213Z · LW(p) · GW(p)
First, appeal to authority is a classic fallacy.
No, it depends on the authority. Being rational means giving appropriate weight to the opinions of other people and these peoples' education has some impact on the optimal weights. Also, you did ask the personal question "How do you know?" and I interpreted this as your wondering how I, James Miller, acquired my knowledge of financial markets.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-11T14:57:29.245Z · LW(p) · GW(p)
you did ask the personal question "How do you know?" and I interpreted this as your wondering how I, James Miller, acquired my knowledge of financial markets.
A bit of miscommunication, then, my question referred to the quote directly preceding it which is
since you would likely conclude after lengthy study that yes, one should just buy index funds
I meant "How do you know that I would likely conclude after lengthy study that yes, one should just buy index funds?"
Replies from: James_Miller↑ comment by James_Miller · 2014-06-11T15:16:58.163Z · LW(p) · GW(p)
If you read LW you are likely the kind of person who, after massive study, would agree with economists on microeconomic issues on which most economists agree, because microeconomics is really math and logical reasoning applied to human behavior, and economists are, relative even to the LW population, good at these things.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-11T15:35:51.465Z · LW(p) · GW(p)
If you read LW you are likely the kind of person who, after massive study, would agree with economists on microeconomic issues
Let me provide a data point for you: I have studied this issue sufficiently well. I have NOT come to the conclusions which you expect.
microeconomics is really math and logical reasoning applied to human behavior
Yes, but badly applied :-D Economics is only starting to realize that actual live humans are not Homo economicus and that equilibrium models of systems with omniscient fully rational agents driven solely by the desire to have more money are not much like the real world.
Once economists leave the rarefied atmosphere of DSGE models and such and have to deal with the reality-provided empirical data, they can hardly agree on anything. A recent case in point -- the Piketty book.
↑ comment by TheAncientGeek · 2014-06-11T16:10:50.932Z · LW(p) · GW(p)
The fallacy is appeal to inappropriate authority....
Replies from: Cyan↑ comment by Cyan · 2014-06-11T16:13:02.153Z · LW(p) · GW(p)
No it isn't. For support I appeal to Wikipedia.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-06-11T16:21:45.138Z · LW(p) · GW(p)
Could I suggest that you actually read the article? Authorities aren't necessarily correct, and so it would be a fallacy to appeal to an authority as necessarily correct... (but what, for a Bayesian, would be necessarily correct?) ...even so, "authorities can be correct in their field of reasoning" (and are more likely to be than non-authorities... to state, in theory, what everyone does in practice).
Replies from: Cyan↑ comment by EHeller · 2014-06-10T04:23:58.036Z · LW(p) · GW(p)
I tend to think that the current markets are efficient enough that putting my money in index funds is about the best I can do from a time/opportunity cost perspective.
The professionals I know working for hedge funds do routinely find small inefficiencies, but in order to make them profitable enough to be worth the time investment, they generally have to exploit quantities of leverage I don't have access to as an individual.
If you enjoy poring over the market looking for details to exploit, then it can be a use of leisure time, I guess. I pore over enough data at work that spending free time poring over more in order to achieve a fairly small gain just doesn't seem worth it.
↑ comment by Jiro · 2014-06-09T14:55:46.133Z · LW(p) · GW(p)
Whether it's worth picking up a $20 bill depends on
- The chance that you are the first person to notice it and pick it up, if it's an actual $20 bill
- The ratio of real $20 bills to fake ones
- The gain in finding a real $20 bill, compared to the loss in picking up a fake one.
The odds for #2 and #3 are pretty high compared to the odds of similar activities when playing the market. The odds of #1 vary depending on how well travelled the place is but are generally a lot higher than for whether you're the first person to notice an opportunity in the market.
Of course, #1 is also affected by how many people use this entire chain of reasoning and conclude it's not worth picking up the bill, but the other factors are so important that this hardly matters; a toy calculation below makes this concrete.
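To make the three factors concrete, here is a toy expected-value calculation; every number below is invented purely for illustration and is not taken from the comment:

```python
# Hypothetical numbers only, to show how the three factors combine.
p_first_to_notice = 0.9   # factor 1: chance nobody beat you to the bill
p_real = 0.95             # factor 2: chance the bill is genuine
gain_real = 20.0          # payoff of a real $20 bill
loss_fake = 0.5           # factor 3: small cost of stooping for a fake

expected_value = p_first_to_notice * (p_real * gain_real - (1 - p_real) * loss_fake)
print(f"Expected value of bending down: ${expected_value:.2f}")
# Roughly $17 with these made-up numbers: comfortably worth the effort,
# which is how the sidewalk differs from the market.
```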
Replies from: Desrtopa↑ comment by DanArmak · 2014-06-12T15:04:35.675Z · LW(p) · GW(p)
On the contrary, there are very many profitable software companies of all sizes. Writing software is a huge market that has grown very quickly and still provides large profit margins to many companies.
You might make an argument that Microsoft's real advantage is the customer lock-in they achieve through control of a huge installed base of software and files. Even there, there are many software companies in the same position. It's hard to reproduce the advantage of having a large share of a large market. But that doesn't necessarily make it unprofitable to acquire even a small share of the market.
Replies from: Salemicus↑ comment by Salemicus · 2014-06-12T20:20:15.991Z · LW(p) · GW(p)
I think you misunderstand my point. Of course there are many profitable software companies (I work for one of them!), in the same way that there are also many banks, hedge funds, etc. But all of these have hard-to-reproduce advantages ("moats" in the lingo). The reason Microsoft (or any other software company) is able to buy developer time and sell software at a profit is because they have social and organisational capital, because they have synergy between that capital and their intellectual property rights, because they have customer relationships, etc etc. It is not an arbitrage and it's not true that just anyone can do it. Microsoft themselves are in fact a fine example of this; throwing resources into the fight against Google has not proven successful.
↑ comment by Richard_Kennaway · 2014-06-10T10:36:30.657Z · LW(p) · GW(p)
A lot of the thread descending from here is covered by the whole of the quoted essay. I'd quote more, but I don't want to make it easy for people to just read another quote.
comment by NancyLebovitz · 2014-06-02T18:08:16.199Z · LW(p) · GW(p)
Three Bayesians walk into a bar: a) what's the probability that this is a joke? b) what's the probability that one of the three is a Rabbi? c) given that one of the three is a Rabbi, what's the probability that this is a joke?
--Sorry, no cite. I got this from someone who said they'd been seeing it on twitter.
Replies from: soreff↑ comment by soreff · 2014-06-03T04:36:45.997Z · LW(p) · GW(p)
And what is the probability that one of them is a Prior?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-06-03T12:10:54.458Z · LW(p) · GW(p)
Maximally uninformative.
comment by NancyLebovitz · 2014-06-08T18:47:11.733Z · LW(p) · GW(p)
Let your aim be to come at truth, not to conquer your opponent. So you never shall be at a loss in losing the argument, and gaining a new discovery.
Arthur Martine, quoted by Daniel Dennett
Replies from: raisin
comment by [deleted] · 2014-06-02T11:46:14.537Z · LW(p) · GW(p)
DON’T RESPOND TO IDIOTS.
...
Arguing with idiots has the game-theoretic structure of a dollar-auction. Whoever gets in the last argument wins. Add to this the asymmetry that someone with low epistemic standards can make up some nonsense argument in five minutes, while it takes you an hour to prove that it is nonsense. At which point the other guy will make up some new nonsense.
edit: If you decide to reply, please read the original comment on SSC for context.
Replies from: None, shminux, Jiro, Lumifer, army1987↑ comment by [deleted] · 2014-06-02T17:02:51.009Z · LW(p) · GW(p)
Though I am glad not everyone followed this advice with regards to me, when I was (more of) an idiot. I owe those patient, sympathetic, tolerant people a great deal.
Replies from: RobinZ↑ comment by RobinZ · 2014-06-09T17:20:40.897Z · LW(p) · GW(p)
I would also like to note that I have learned a number of interesting things by (a) spending an hour researching idiotic claims and (b) reading carefully thought out refutations of idiocy - like how they're called "federal forts" because the statutes of the states in which they were built include explicitly ceding the land upon which they were built to the federal government.
↑ comment by Shmi (shminux) · 2014-06-02T20:25:39.568Z · LW(p) · GW(p)
Having been on both sides... how do you know when you are the idiot?
Replies from: Brillyant, None↑ comment by Jiro · 2014-06-02T14:41:17.525Z · LW(p) · GW(p)
Maybe so, but this also assumes that you're good at determining who's an idiot. Many people are not, but think they are. So you need to consider that if you make a policy of "don't argue with idiots" widespread, it will be adopted by people with imperfect idiot-detectors. (And I'm pretty sure that many common LW positions would be considered idiocy in the larger world.)
Consider also that "don't argue with idiots" has much of the same superficial appeal as "allow the government to censor idiots". The ACLU defends Nazis for a reason, even though they're pretty obviously idiots: any measures taken against idiots will be taken against everyone else, too.
Replies from: None, Lumifer, blacktrance, None↑ comment by [deleted] · 2014-06-04T17:50:14.642Z · LW(p) · GW(p)
(And I'm pretty sure that many common LW positions would be considered idiocy in the larger world.)
Having come from there, the general perception is not that LW-ers are idiots, but that our positions are the kind of deluded crackpot nonsense smart people make up to believe in. Of course, that's largely for the more abstruse stuff, as people in the outside world will either grudgingly admit the uses of Bayesian reasoning and debiasing or just fail to understand what they are.
A large part of the problem is that all the lessons of Traditional Rationality teach us to guard against actually arriving at conclusions before amassing what I think one Sequence post called "mountains of evidence". The strength and stridency with which LW believes and believes in certain things fail a "smell test" for overconfidence, even though the really smelly things (like, for example, cryonics) are usually actively debated on LW itself (I recall reading in this year's survey that the mean LW-er believes cryonics has a 14% chance of working, which is lower than people with less rationality training estimate).
So in contradistinction to Traditional Rationality (as practiced by almost everyone with a remotely scientific education), we are largely defined (as was noted in the survey) by our dedication to Bayesian reasoning, and our willingness to take ideas seriously, and thus come to probabilistic-but-confident conclusions while the rest of the world sits on its hands waiting for further information. Well, that and our rabid naturalism on philosophical topics.
Replies from: DanArmak, RobinZ↑ comment by DanArmak · 2014-06-05T20:05:45.662Z · LW(p) · GW(p)
large part of the problem is that all the lessons of Traditional Rationality teach us to guard against actually arriving at conclusions before amassing what I think one Sequence post called "mountains of evidence".
Except for scientific research, which will happily accept p < 0.05 to publish the most improbable claims.
Replies from: lmm↑ comment by lmm · 2014-06-18T23:12:42.524Z · LW(p) · GW(p)
No, "real science" requires more evidence than that - 5 sigma in HEP. p < 0.05 is the preserve of "soft science".
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-06-19T16:11:36.266Z · LW(p) · GW(p)
And even with more than 5 sigma people will be like ‘we probably screwed up somewhere’ when the claim is sufficiently improbable, see e.g. the last paragraph before the acknowledgements in arXiv:1109.4897v1.
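For concreteness, a small sketch (assuming SciPy is available) of how the two evidence thresholds mentioned above compare as two-sided p-values:

```python
from scipy.stats import norm

# Two-sided p-values for the thresholds discussed above.
for sigma in (1.96, 5):
    p = 2 * norm.sf(sigma)   # survival function = 1 - CDF
    print(f"{sigma} sigma -> p ≈ {p:.2g}")

# 1.96 sigma -> p ≈ 0.05
# 5 sigma -> p ≈ 5.7e-07
```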
↑ comment by RobinZ · 2014-06-09T20:45:01.947Z · LW(p) · GW(p)
Having come from there, the general perception is not that LW-ers are idiots, but that our positions are the kind of deluded crackpot nonsense smart people make up to believe in. Of course, that's largely for the more abstruse stuff, as people in the outside world will either grudgingly admit the uses of Bayesian reasoning and debiasing or just fail to understand what they are.
There's also a tendency to be doctrinaire among LW-ers that people may be reacting to - an obvious manifestation of this is our use of local jargon and reverential capitalization of "the Sequences" as if these words and posts have significance beyond the way they illuminate some good ideas. Those are social markers of deluded crackpots, I think.
Replies from: None↑ comment by [deleted] · 2014-06-09T21:21:12.958Z · LW(p) · GW(p)
Yes, very definitely so. The other thing that makes LW seem... a little bit silly sometimes is the degree of bullet swallowing in the LW canon.
For instance, just today I spent a short while on the internet reading some good old-fashioned "mind porn" in the form of Yves Couder's experiments with hydrodynamics that replicate many aspects of quantum mechanics. This is really developing into quite a nice little subfield, direct physical experiments can be and are done, and it has everything you could want as a reductive explanation of quantum mechanics. Plus, it's actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones.
But if you swallowed your bullet, you'll never discover it yourself. In fact, if you swallow bullets in general, I find it kind of difficult to imagine how you could function as a researcher, given that a large component of research consists of inventing new models to absorb probability mass that currently has nowhere better to go than a known-wrong model.
Replies from: EHeller, The_Duck, gwern, jbay, RobinZ↑ comment by EHeller · 2014-06-10T01:05:39.301Z · LW(p) · GW(p)
Yves Couder's experiments are neat, but the underlying 'quantum' interpretation is basically just Bohm's interpretation. The water acts as a pilot wave, and the silicone oil drops act as Bohmian particles. It's very cool that we can find a classical pilot-wave system, but it's not pointing in a new interpretational direction.
Personally, I would love Bohm, but for the problem that it generalizes so poorly to quantum field theories. It's a beautiful, real-feeling interpretation.
Edit: Also neat - the best physical analogue to a black hole that I know of is water emptying down a bathtub drain faster than the speed of sound in the fluid. Many years ago, Unruh was doing some neat experiments with some poor grad student, but I don't know if they ever published anything.
↑ comment by The_Duck · 2014-06-10T00:50:57.274Z · LW(p) · GW(p)
Plus, it's actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones.
Note that because of Bell's theorem, any classical system is going to have real trouble emulating all of quantum mechanics; entanglement is going to trip it up. I know you said "replicate many aspects of quantum mechanics," but it's probably important to emphasize that this sort of thing is not going to lead to a classical model underlying all of QM.
↑ comment by gwern · 2014-07-01T22:00:38.179Z · LW(p) · GW(p)
In fact, if you swallow bullets in general, I find it kind of difficult to imagine how you could function as a researcher
How could you function? Well, a quote from last year put it nicely:
"Within the philosophy of science, the view that new discoveries constitute a break with tradition was challenged by Polanyi, who argued that discoveries may be made by the sheer power of believing more strongly than anyone else in current theories, rather than going beyond the paradigm. For example, the theory of Brownian motion which Einstein produced in 1905, may be seen as a literal articulation of the kinetic theory of gases at the time. As Polanyi said:
'Discoveries made by the surprising configuration of existing theories might in fact be likened to the feat of a Columbus whose genius lay in taking literally and as a guide to action that the earth was round, which his contemporaries held vaguely and as a mere matter for speculation.'"
↑ comment by jbay · 2014-06-10T00:18:17.845Z · LW(p) · GW(p)
I think that a 'reductive' explanation of quantum mechanics might not be as appealing as it seems to you.
Those fluid mechanics experiments are brilliant, and I'm deeply impressed by them for coming up with them, let alone putting it into practice! However, I don't find it especially convincing as a model of subatomic reality. Just like the case with early 20th-century analog computers, with a little ingenuity it's almost always possible to build a (classical) mechanism that will obey the same math as almost any desired system.
Definitely, to the point that it can replicate all observed features of quantum mechanics, the fluid dynamics model can't be discarded as a hypothesis. But it has a very very large Occam's Razor penalty to pay. In order to explain the same evidence as current QM, it has to postulate a pseudo-classical physics layer underneath, which is actually substantially more complicated than QM itself, which postulates basically just a couple equations and some fields.
Remember that classical mechanics, and most especially fluid dynamics, are themselves derived from the laws of QM acting over billions of particles. The fact that those 'emergent' laws can, in turn, emulate QM does imply that QM could, at heart, resemble the behaviour of a fluid mechanic system... but that requires postulating a new set of fundamental fields and particles, which in turn form the basis of QM, and give exactly the same predictions as the current simple model that assumes QM is fundamental. Being classical is neither a point in its favour nor against it, unless you think that there is a causal reason why the reductive layer below QM should resemble the approximate emergent behaviour of many particles acting together within QM.
If we're going to assume that QM is not fundamental, then there is actually an infinite spectrum of reductive systems that could make up the lower layer. The fluid mechanics model is one that you are highlighting here, but there is no reason to privilege it over any other hypothesis (such as a computer simulation) since they all provide the same predictions (the same ones that quantum mechanics does). The only difference between each hypothesis is the Occam penalty they pay as an explanation.
I agree that, as a general best practice, we should assign a small probability to the hypothesis that QM is not fundamental, and that probability can be divided up among all the possible theories we could invent that would predict the same behaviour. However, to be practical and efficient with my brain matter, I will choose to believe the one theory that has vastly more probability mass, and I don't think that should be put down as bullet swallowing.
Is QM not simple enough for you, that it needs to be reduced further? If so, the reduction had better be much simpler than QM itself.
↑ comment by RobinZ · 2014-06-09T21:39:30.834Z · LW(p) · GW(p)
I don't think I understand the relevance of your example, but I agree on the bullet-swallowing point, especially as I am an inveterate bullet-dodger.
(That said, the experiments sound awesome! Any particular place you'd recommend to start reading?)
Replies from: None↑ comment by [deleted] · 2014-06-09T21:59:09.496Z · LW(p) · GW(p)
There don't seem to be many popularizations. This looks fun and as far as I can tell is neither lying nor bullshitting us. This is an actual published paper, for those with the maths to really check.
I don't think I understand the relevance of your example, but I agree on the bullet-swallowing point, especially as I am an inveterate bullet-dodger.
I think I should phrase this properly by dropping into the language of the Lord High Prophet of Bayes, E.T. Jaynes: it is often optimal to believe in some model with some probability based on the fixed, finite quantity of evidence we have available on which to condition, but this is suboptimal compared to something like Solomonoff Induction that can dovetail over all possible theories. We are allocating probability based on fixed evidence to a fixed set of hypotheses (those we understand well enough to evaluate them).
For instance, given all available evidence, if you haven't heard of sub-quantum physics even at the mind-porn level, believing quantum physics to be the real physics is completely rational, except in one respect. I don't understand algorithmic information theory well enough to quantify how much probability should be allocated to "sub-Solomonoff Loss", to the possibility that we have failed to consider some explanation superior to the one we have, despite our current best explanations adequately soaking up the available evidence as narrowed, built-up probability mass, but plainly some probability should be allocated there.
Why, particularly in the case of quantum physics? Because we've known damn well for decades that it's an incomplete theory! If it cannot be unified with the other best-supported theory in the same domain (General Relativity), then it is incomplete. Period. Reality does not contradict itself: the river of evidence flowing into General Relativity and the river of evidence flowing into quantum mechanics cannot collide and run against each-other unless we idiot humans have approximated two different perspectives (cosmic scale and micro-scale) on one underlying reality using incompatible theories. This is always and only our fault, and if we want to deal with that fault, we need to be able to quantify it.
Replies from: RobinZ, EHeller, RobinZ↑ comment by RobinZ · 2014-06-10T02:48:04.510Z · LW(p) · GW(p)
Coincidentally, I was actually heading out to meet my dad (a physics Ph.D.), and I mentioned the paper and blog post to him to get his reaction. He asked me to send him a link, but he also pointed me at Feynman's lecture on electrostatic analogs, which is based on one of those simple ideas that invites bullet-swallowing: The same equations have the same solutions.
This is one of those ideas that I get irrationally excited about, honestly. The first thing I thought of when you described these hydrodynamic experiments was the use of similitude in experimental modeling, which is a special case of the same idea: after you work out the equations that you would need to solve to calculate (for example) the flow of air around a wing, instead of doing a lot of intractable mathematics, you rewrite the equations in terms of dimensionless parameters like the Reynolds number and put a scale model of the wing in a wind tunnel. If you adjust the velocity, pressure, &c. correctly in your scale model, you can make the equations that you would need to solve for the scale model exactly the same as the equations for the full-sized wing ... and so, when you measure a number on the scale model, you can use that number the same way that you would use the solution to your equations, and get the number for the real wing. You can do this because the same equations have the same solutions.
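A minimal sketch of the similitude idea, with hypothetical numbers chosen only for illustration:

```python
# Matching the Reynolds number Re = V * L / nu between a full-scale wing
# and a 1:10 scale model in the same fluid (hypothetical numbers).
nu_air = 1.5e-5               # kinematic viscosity of air, m^2/s
V_full, L_full = 50.0, 2.0    # full-scale speed (m/s) and chord (m)
L_model = L_full / 10         # 1:10 scale model

Re_full = V_full * L_full / nu_air
V_model = Re_full * nu_air / L_model   # model speed needed to match Re
print(f"Re = {Re_full:.2e}, required model test speed = {V_model:.0f} m/s")
# Re = 6.67e+06, required model test speed = 500 m/s
# (In practice one also adjusts pressure or density, as noted above,
# since 500 m/s is not a practical wind-tunnel speed.)
```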
For that matter, one of the stories my dad wrote on his blog about his Ph.D. research mentions a conversation in which another physicist pointed out a possible source of interesting complexity in gravitational waves by metaphor to electromagnetic waves - a metaphor whose validity came from the same equations having the same solutions.
I have to say, though, that my dad does not get excited about this kind of thing, and he explained to me why in a way which parallels Feynman's remark at the end of the lecture: these physical models, these analog computations, are approximate. Feynman talks about these similarities being used to design photomultiplier tubes, but explains - in a lecture delivered before 1964, mind - that "[f]or the most accurate work, it is better to determine the fields by numerical methods, using the large electronic computing machines." And at the end of section 4.7 of the paper you linked to:
From the value of alpha, it seems that the electrostatic force is about two orders of magnitude weaker than the mechanical force between resonant bubbles. This suggests one limitation of the bouncing-droplet experiment as a model of quantum mechanics, namely that spherically-symmetric resonant solutions are not a good model for the electron.
On the basis of these factors, I think I would fully endorse Brady and Anderson's conclusions in the paper: that these experiments have potential as pedagogical tools, illuminating some of the confusing aspects of quantum mechanics - such as the way multiple particles interacting produce a waveform that is nevertheless defined by a single amplitude and phase at every point. By contrast, when the blogger you link to says:
What are the quantum parallels for the effective external forces in these hydrodynamic quantum analogs, i.e. gravity and the vibrations of the table? Not all particles carry electric charge, or weak or color charge. But they are all effected by gravity. Is their a connection here to gravity? Quantum gravity?
...all I can think is, "does this person understand what the word 'analogue' means?" There is no earthly reason to imagine that the force of gravity on the droplet and liquid surface should have anything to do with gravity acting on particles in quantum waveforms. Actually, it's worse than that: we can know that it does not, in the same way that, among simple harmonic oscillators, the gravity force on pendulums has nothing to do with the gravity force on a mass on a spring. They are the same equations, and the equations in the latter case don't have gravity in them ... so whatever work gravity does in the solution of the first equation is work it doesn't do in the solution of the second.
I may be doing the man a gross injustice, but this ain't no way to run a railroad.
Replies from: None↑ comment by [deleted] · 2014-06-10T07:39:20.654Z · LW(p) · GW(p)
Why draw strong conclusions? Let papers be published and conferences held. It's a neat toy to look at, though.
Replies from: RobinZ↑ comment by RobinZ · 2014-06-10T12:35:43.039Z · LW(p) · GW(p)
It is a neat toy, and I'm glad you posted the link to it.
The reason I got so mad is that Warren Huelsnitz's attempt to draw inferences from these - even weak, probabilistic, Bayesian inferences - was appallingly ignorant for someone who claims to be a high-energy physicist. What he was doing would be like my dad, in the story from his blog post, trying to prove that gravity was created by electromagnetic forces because Roger Blandford alluded to an electromagnetic case in a conversation about gravity waves. My dad knew that wasn't a true lesson to learn from the metaphor, and Richard Feynman agrees with him:
However, a question surely suggests itself at the end of such a discussion: Why are the equations from different phenomena so similar? We might say: “It is the underlying unity of nature.” But what does that mean? What could such a statement mean? It could mean simply that the equations are similar for different phenomena; but then, of course, we have given no explanation. The “underlying unity” might mean that everything is made out of the same stuff, and therefore obeys the same equations. That sounds like a good explanation, but let us think. The electrostatic potential, the diffusion of neutrons, heat flow—are we really dealing with the same stuff? Can we really imagine that the electrostatic potential is physically identical to the temperature, or to the density of particles? Certainly ϕ is not exactly the same as the thermal energy of particles. The displacement of a membrane is certainly not like a temperature. Why, then, is there “an underlying unity”?
Feynman goes on to explain that many of the analogues are approximations of some kind, and so the similarity of equations is probably better understood as being a side effect of this. (I would add: much in the same way that everything is linear when plotted log-log with a fat magic marker.) Huelsnitz, on the other hand, seems to behave as if he expects to learn something about the evolutionary history of the Corvidae family by examining crowbars ... which is simply asinine.
↑ comment by EHeller · 2014-06-10T01:15:32.028Z · LW(p) · GW(p)
Because we've known damn well for decades that it's an incomplete theory! If it cannot be unified with the other best-supported theory in the same domain (General Relativity), then it is incomplete. Period.
We don't actually know that. Weinberg has suggested that GR might be asymptotically safe. Most people seem to think this isn't the case, but no one has been able to show that he is wrong. We can rephrase your argument, and instead of putting weight on theories for which we have no evidence, dump the "built up" probability mass on the idea that the two theories don't actually disagree.
Certainly the amount of "contradiction" between GR and quantum field theories is often overblown. You can, for instance, treat GR as an effective field theory and compute quantum corrections to various things. They are just too small to matter/measure.
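As a rough sense of scale, a back-of-the-envelope sketch assuming the usual effective-field-theory suppression by powers of the Planck length (the constants below are standard; the estimate itself is only illustrative):

```python
# Order-of-magnitude size of quantum-gravity corrections at laboratory
# scales, assuming they are suppressed by (l_Planck / r)^2.
G = 6.674e-11       # m^3 kg^-1 s^-2
hbar = 1.055e-34    # J s
c = 3.0e8           # m s^-1

l_planck = (G * hbar / c**3) ** 0.5   # ~1.6e-35 m
r = 1.0                               # one metre

print((l_planck / r) ** 2)            # ~2.6e-70 -- hopelessly unmeasurable
```

Even generous assumptions about the prefactor leave the correction dozens of orders of magnitude below anything measurable.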
↑ comment by RobinZ · 2014-06-09T23:07:20.811Z · LW(p) · GW(p)
...huh.
I have to go, but downvote this comment if I don't reply again in the next five hours. I'll be back.
Edit: Function completed; withdrawing comment.
↑ comment by Lumifer · 2014-06-04T15:35:32.914Z · LW(p) · GW(p)
Consider also that "don't argue with idiots" has much of the same superficial appeal as "allow the government to censor idiots".
The former has a fair amount of appeal for me and the latter I would find appalling and consider to be descent into totalitarianism. I don't think this comparison works.
Replies from: RobinZ, Eugine_Nier↑ comment by RobinZ · 2014-06-10T22:03:36.929Z · LW(p) · GW(p)
Jiro didn't say it had to appeal to you. Besides, substitute "blog host" for "government" and I think it becomes a bit clearer: both are much easier ways to deal with the problem of someone who persistently disagrees with you than talking to them. Obviously that doesn't make "don't argue with idiots" wrong, but given how much power trivial inconveniences have to shape your behavior, I think an admonition to hold the proposed heuristic to a higher standard of evidence is appropriate.
Replies from: Nornagest↑ comment by Nornagest · 2014-06-10T22:06:30.646Z · LW(p) · GW(p)
Besides, substitute "blog host" for "government" and I think it becomes a bit clearer
Speaking for myself, I've got a fair bit of sympathy for the concept with that substitution and a fair bit of antipathy without it. It's a lot easier to find a blog you like and that likes you than to find a government with the same qualities.
Replies from: RobinZ↑ comment by Eugine_Nier · 2014-06-05T01:30:31.174Z · LW(p) · GW(p)
You are atypical in this respect.
Replies from: Vulture, Lumifer↑ comment by Vulture · 2014-06-10T21:49:40.894Z · LW(p) · GW(p)
Really? I feel the same way as Lumifer and assumed that this was the obvious, default reaction. Damned typical-mind fallacy.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-06-11T00:04:17.583Z · LW(p) · GW(p)
I also feel the same way, but in my experience most people don't.
Also as RobinZ pointed out here things get fuzzy in the limit where one has to taboo "government".
Replies from: Lumifer↑ comment by blacktrance · 2014-06-02T17:18:22.215Z · LW(p) · GW(p)
Being an idiot is less about positions and more about how one argues. The easiest way to identify an idiot is when debating gets someone angry to the point of violence. Beyond that, idiots can be identified by the use of fallacies, ad hominems, non-sequiturs, etc.
Replies from: None↑ comment by [deleted] · 2014-06-09T06:27:34.859Z · LW(p) · GW(p)
This rule fails for RationalWiki in particular, so I don't think it's sufficiently expressive. RationalWiki will never get violent, they'll never use basic rhetorical fallacies, but are they not idiots?
I think a better rule for idiocy is the inability to update. An idiot will never change their mind, and will never learn. More intelligent idiots can change their mind about minor things related to things they already deeply believe, but never try to understand anything that's a level or two of inference away from their existing core.
Nonidiocy requires the intelligence to think correctly, the wisdom to know when you're wrong, and the charisma to tolerate the social failing of being wrong. It takes all three to avoid being an idiot.
Replies from: V_V, private_messaging↑ comment by V_V · 2014-06-14T21:08:45.552Z · LW(p) · GW(p)
This rule fails for RationalWiki in particular, so I don't think it's sufficiently expressive. RationalWiki will never get violent, they'll never use basic rhetorical fallacies, but are they not idiots?
They won't threaten physical violence, but when discussing certain political topics (libertarianism, social justice and feminism) they do use basic rhetorical fallacies in addition to generally abusive behaviour even from the admins (trolling, name calling and swinging the banhammer).
Surprisingly, when discussing other topics, such as science, pseudosciences and paranormal beliefs, they look like perfectly sane and rational folks.
(I've never engaged them; my experience comes from browsing the wiki and lurking a little bit on the 4ch-...Facebook group)
I think they aren't idiots but just political fanatics.
↑ comment by private_messaging · 2014-06-14T20:30:57.838Z · LW(p) · GW(p)
Well, if Rossi's free energy generators worked and were replacing power stations or gasoline in cars or the like, we all would change our mind about Rossi. I guess that means we're probably idiots, because that's highly unlikely.
Cranks constantly demand that we change our minds in response to Andrea Rossi plain as day rigging up another experiment, Randel L Mills releasing some incoherent formula salad, Chris Langan taking an IQ test, or the like.
↑ comment by [deleted] · 2014-06-04T12:18:42.376Z · LW(p) · GW(p)
Many people are not, but think they are. So you need to consider that if you make a policy of "don't argue with idiots" widespread
I posted this quote on a site with an average IQ above the 99th percentile for a reason. Also, please read the original comment for context; I think you'll interpret it a bit differently.
Replies from: Jiro↑ comment by Jiro · 2014-06-04T15:16:21.523Z · LW(p) · GW(p)
Having a high IQ does not equate to having a good idiot detector.
Also, policies which treat people differently based on a self-serving distinction need more justification than normal, because of the increased prior that the person making the policy is affected by an ulterior motive.
Replies from: None↑ comment by A1987dM (army1987) · 2014-06-06T17:00:05.806Z · LW(p) · GW(p)
Not all idiots are like that, and otherwise-non-idiotic people can also get caught in dollar-auction-like discussions if they're sufficiently mind-killed (I mean, I've seen it happen on a website where supposedly 75% of people have IQs over 130), but that's a good heuristic.
comment by Tyrrell_McAllister · 2014-06-01T20:33:14.970Z · LW(p) · GW(p)
To know what questions may reasonably be asked is already a great and necessary proof of sagacity and insight. For if a question is absurd in itself and calls for an answer where none is required, it not only brings shame on the propounder of the question, but may betray an incautious listener into absurd answers, thus presenting, as the ancients said, the ludicrous spectacle of one man milking a he-goat and the other holding a sieve underneath.
Immanuel Kant, Critique of Pure Reason (trans. Norman Kemp Smith), p. A58/B82.
Replies from: Stabilizer↑ comment by Stabilizer · 2014-06-03T00:00:31.429Z · LW(p) · GW(p)
Kant seems to have one of the first systematic question dissolvers:
Philosophers have never lacked zest for criticizing their predecessors. Aristotle was not always kind to Plato. Scholastics wrangled with unexcelled vigor. The new philosophy of the 17th century was frankly rude about the selfsame schoolmen. But all that is criticism of someone else. Kant began something new. He turned criticism into self-reflection. He didn’t just create the critical philosophy. He made philosophy critical of philosophy itself.
There are two ways in which to criticize a proposal, doctrine, or dogma. One is to argue that it is false. Another is to argue that it is not even a candidate for truth or falsehood. Call the former denial, the latter undoing. Most older philosophical criticism is in the denial mode. When Leibniz took issue with Locke in the Nouveaux Essais, he was denying some of the things that Locke had said. He took for granted that they were true-or-false. In fact, false. Kant’s transcendental dialectic, in contrast, argues that a whole series of antinomies arise because we think that there are true-or-false answers to a gamut of questions. There are none. The theses, antitheses, and questions are undone.
Kant was not the first philosophical undoer. The gist of Bacon undoes the methodology of scholastic thought. But Kant is assuredly the first celebrated, self-conscious, systematic undoer. Pure reason, the faculty of philosophers, outsteps its bounds and produces doctrines that are neither true nor false.
-Ian Hacking, Historical Ontology
comment by James_Miller · 2014-06-01T21:26:32.290Z · LW(p) · GW(p)
Every time a mosquito dies, the world becomes a better place.
From Wikipedia "Various species of mosquitoes are estimated to transmit various types of disease to more than 700 million people annually in Africa, South America, Central America, Mexico, Russia, and much of Asia, with millions of resultant deaths. At least two million people annually die of these diseases, and the morbidity rates are many times higher still."
Related: let's eliminate species of mosquitoes that bite humans.
comment by Jayson_Virissimo · 2014-06-17T06:37:24.522Z · LW(p) · GW(p)
The irony of commitment is that it’s deeply liberating — in work, in play, in love. The act frees you from the tyranny of your internal critic, from the fear that likes to dress itself up and parade around like rational hesitation. To commit is to remove your head as the barrier to your life.
-- Anne Morris
comment by LizzardWizzard · 2014-06-27T10:33:42.451Z · LW(p) · GW(p)
Three Bayesians walk into a bar: a) what's the probability that this is a joke? b) what's the probability that one of the three is a Rabbi? c) given that one of the three is a Rabbi, what's the probability that this is a joke? (c)
According to the base rate, there is evidence that this is a joke about the Russia national team or the Suarez bite.
Replies from: None, shminux, Gav, devas↑ comment by [deleted] · 2014-06-27T11:04:15.630Z · LW(p) · GW(p)
Three Bayesians walk into a bar: a) what's the probability that this is a joke? b) what's the probability that one of the three is a Rabbi? c) given that one of the three is a Rabbi, what's the probability that this is a joke? (c)
And now this must become a canonical example used in logical probability papers.
↑ comment by Shmi (shminux) · 2014-06-27T17:08:12.340Z · LW(p) · GW(p)
This seems to be the original source, as far as I can tell: https://twitter.com/rickasaurus/status/471930220782448641
↑ comment by devas · 2014-07-04T16:37:33.631Z · LW(p) · GW(p)
Wait, this is actually brilliant in a couple of ways, because to get the right (estimated) answer, the listener has to distinguish between the probability that one of the three is a rabbi and this is a joke, and the probability that this is a joke if we put the probability that one of the three is a rabbi at 100%.
It follows the setup of a rationality calibration question while subverting it and rendering "guessing the teacher's password" useless, since c) is (maybe) higher than a) or b).
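A toy version of the arithmetic, with made-up priors chosen purely for illustration, showing how (c) can indeed exceed both (a) and (b):

```python
# Made-up priors, just to show (c) can exceed both (a) and (b).
p_joke = 0.5               # (a) P(this is a joke)
p_rabbi = 0.1              # (b) P(one of the three is a Rabbi)
p_rabbi_given_joke = 0.18  # rabbis are over-represented in bar jokes

# Bayes: P(joke | rabbi) = P(rabbi | joke) * P(joke) / P(rabbi)
p_joke_given_rabbi = p_rabbi_given_joke * p_joke / p_rabbi
print(p_joke_given_rabbi)  # 0.9 -- higher than either (a) or (b)
```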
comment by johnlawrenceaspden · 2014-06-09T23:14:03.961Z · LW(p) · GW(p)
Prisca iuvent alios: ego me nunc denique natum gratulor
Let others praise ancient times; I am glad I was born in these.
-- Ovid
http://izquotes.com/quote/140267
Replies from: DanielLC
comment by Nornagest · 2014-06-21T23:58:50.845Z · LW(p) · GW(p)
Relevant to bounded cognition and consequentialism:
"It is better to be just than to be kind, but only good judges can be just; let those who cannot be just be kind."
-- Loyal to the Group of Seventeen, The Citadel of the Autarch, Gene Wolfe
Replies from: gwern, anandjeyahar↑ comment by gwern · 2014-07-01T22:26:01.567Z · LW(p) · GW(p)
To give some context here: 'Loyal to the Group of Seventeen' is a captured POW who is telling a story to the main characters in the hospital he's recuperating in, as part of a storytelling competition. He is from the enemy country "Ascia", which is a parody/version of Maoist China (the name comes from a typically Wolfean etymological joke: New Sun is set in South America, the reader eventually realizes, and the Ascians or 'shadowless' live near the equator where the sun casts less of a shadow); in particular, Ascians speak only in quotations from official propaganda (Maoists were notorious for quotation). Sort of Wolfe's reply to Newspeak. So when Loyal tells his story, "Loyal to the Group of Seventeen's Story—The Just Man", he speaks only in quotations and someone interprets for him.
The story simply recounts a commune whose inequitable distribution of work & food prompts the Just Man to travel to the capital and petition the mandarins there for justice, in the time-honored Chinese fashion, but he is rejected and while trying to make his case, survives by begging:
..."Behind our efforts, let there be found our efforts."
"The just man did not give up. He returned to the capital once more."
"The citizen renders to the populace what is due to the populace. What is due to the populace? Everything."
"He was very tired. His clothes were in rags and his shoes worn out. He had no food and nothing to trade."
"It is better to be just than to be kind, but only good judges can be just; let those who cannot be just be kind."
"In the capital he lived by begging."
At this point I could not help but interrupt. I told Foila that I thought it was wonderful that she understood so well what each of the stock phrases the Ascian used meant in the context of his story, but that I could not understand how she did it—how she knew, for example, that the phrase about kindness and justice meant that the hero had become a beggar...
The story itself is simple but it's still one of the most interesting of the stories told within Book of the New Sun and comes up occasionally on urth.net. It's also often compared to a Star Trek episode: "Shaka, When the Walls Fell: In one fascinating episode, Star Trek traced the limits of human communication as we know it - and suggested a new, truer way of talking about the universe".
↑ comment by anandjeyahar · 2014-06-26T09:45:59.611Z · LW(p) · GW(p)
While some parts of me agree with it, other parts set off alarms: judges will try to use this as a rationalization for what looks like kind behaviour (by habit, social proof) instead of trying to evaluate justness, especially when the case looks complex or is likely to threaten one of their biased beliefs.
comment by johnlawrenceaspden · 2014-06-03T01:07:46.350Z · LW(p) · GW(p)
C is quirky, flawed, and an enormous success
-- Dennis Ritchie, The Development of the C language
Replies from: DanArmak↑ comment by DanArmak · 2014-06-05T20:01:42.328Z · LW(p) · GW(p)
This is true, but what makes it a rationality quote?
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-06-05T21:23:45.984Z · LW(p) · GW(p)
He's saying that one does not need to do a perfect job to win. A common failure mode is to spend ages worrying about the details while someone else's good-enough quick hack takes over the world. It's quite a resonant quote for programmers.
C and Unix obliterated their technically superior rivals. There's a whole tradition of worrying about why this happened in the still extant LISP community which was one of the principal losers. Look up 'Worse is Better' if you're interested in the details.
Replies from: DanArmak, Nornagest↑ comment by DanArmak · 2014-06-06T11:14:32.171Z · LW(p) · GW(p)
I'm aware of all that. But the idea that perfection is not needed, and that successful things are almost always flawed in some way, seemed too obvious to merit a quote.
But that is just typical mind fallacy on my part: if others feel this is an insight people should be reminded of, I shouldn't argue.
↑ comment by Nornagest · 2014-06-05T21:55:33.482Z · LW(p) · GW(p)
My understanding -- and I wasn't there for that particular holy war, so I might have some of the details wrong -- is that while LISP is in many ways the better language, it didn't at the time have the practical implementation support that C did. Efficient LISP code at the time required specialized hardware; C was and is basically a set of macros to constructs common in assembly languages for most commodity architectures. It worked, in other words, without having to build an entire infrastructure and set of development practices around it.
Later implementations of LISP pretty much solved that problem, but by that time C and its derivatives had already taken over the world.
Replies from: DanArmak↑ comment by DanArmak · 2014-06-06T11:27:02.400Z · LW(p) · GW(p)
C was a major improvement on the languages of the day: COBOL, Fortran, and plain assembly. Unlike any of those, it was at the same time fully portable, supported structured programming, and allowed freeform text.
But I don't think programmers would have embraced LISP even if its performance was as good as the other languages. For the same reasons programmers don't embrace LISP-derived languages today. It is an empirical fact that the great majority of programmers, particularly the less-than-brilliant ones, dislike pure functional programming.
Replies from: JoachimSchipper, johnlawrenceaspden↑ comment by JoachimSchipper · 2014-06-16T19:04:14.188Z · LW(p) · GW(p)
Note, though, that (a) "Lisp doesn't look like C" isn't as much of a problem in a world where C and C-like languages are not dominant, and (b) something like Common Lisp doesn't have to be particularly functional - that's a favored paradigm of the community, but it's a pretty acceptable imperative/OO language too.
"Doesn't run well on my computer" was probably a bigger problem. (Modern computers are much faster; modern Lisp implementations are much better.)
Edit: still, C is clearly superior to any other language. ;-)
Replies from: lmm↑ comment by lmm · 2014-06-18T23:23:04.578Z · LW(p) · GW(p)
I suspect the main reason lisp failed is the syntax, because the first thing early computer users would try to do is get the computer to do arithmetic. In C/Fortran/etc. you can write arithmetic expressions that look more-or-less like arithmetic expressions, e.g. (a + b/2) ** 2 / c. In Lisp you can't.
↑ comment by johnlawrenceaspden · 2014-06-09T23:22:10.569Z · LW(p) · GW(p)
I dislike pure functional programming. I can't think of a pure functional LISP that isn't a toy. I'm sure there is one. I wouldn't use it.
And before we hijack this thread and turn it into a holy war, C is my other favourite language.
comment by Alejandro1 · 2014-06-02T15:53:53.432Z · LW(p) · GW(p)
On Confidence levels inside and outside an argument:
Thorstein Frode relates of this meeting, that there was an inhabited district in Hising which had sometimes belonged to Norway, and sometimes to Gautland. The kings came to the agreement between themselves that they would cast lots by the dice to determine who should have this property, and that he who threw the highest should have the district. The Swedish king threw two sixes, and said King Olaf need scarcely throw. He replied, while shaking the dice in his hand, "Although there be two sixes on the dice, it would be easy, sire, for God Almighty to let them turn up in my favour." Then he threw, and had sixes also. Now the Swedish king threw again, and had again two sixes. Olaf king of Norway then threw, and had six upon one dice, and the other split in two, so as to make seven eyes in all upon it; and the district was adjudged to the king of Norway.
Heimskringla - The Chronicle of the Kings of Norway
Replies from: Richard_Kennaway, timujin, Eliezer_Yudkowsky, Lumifer, linkhyrule5↑ comment by Richard_Kennaway · 2014-06-02T23:11:30.111Z · LW(p) · GW(p)
If a good game is your goal
Common dice do best for the deed
For great winnings, the wise man wields his own
But may lose to a wiser;
Only a fool throws with another's dice.
-- Hávamál (not really)
Replies from: Nornagest↑ comment by timujin · 2014-06-02T18:09:43.049Z · LW(p) · GW(p)
I don't get how the quote is related to the article.
Replies from: dspeyer, Alejandro1↑ comment by Alejandro1 · 2014-06-02T18:34:12.875Z · LW(p) · GW(p)
If the model that dice are perfectly fair and unbreakable is correct, then the Swedish king is justified in assigning very low probability to losing after rolling two sixes; but this model turns out to be incorrect in this case, and his confidence in winning should have been lower.
Of course it would be silly to apply this reasoning to dice in real life, but there are cases (like those discussed in the linked article) where the lesson applies.
Replies from: DanielLC↑ comment by DanielLC · 2014-06-06T19:23:19.441Z · LW(p) · GW(p)
If they were fair dice, there would still be a one in 72 chance of King Olaf getting the district. That's definitely worth rolling dice for.
Admittedly, the Swedish king knew his own dice were weighted, so if he thought Olaf's weren't he'd definitely win, but since he's not going to admit to cheating he's not going to tell Olaf that.
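A quick check of that 1-in-72 figure, on the assumption (as in the saga) that a tie at twelve is settled by a fresh, symmetric re-roll:

```python
# P(Olaf gets the district | Swedish king has already rolled 12), fair dice.
p_olaf_also_rolls_twelve = 1 / 36   # needs double sixes just to stay alive
p_wins_fresh_contest = 1 / 2        # the re-roll is symmetric between them
print(p_olaf_also_rolls_twelve * p_wins_fresh_contest)  # 0.01388... = 1/72
```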
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-06-02T17:22:05.456Z · LW(p) · GW(p)
Yeah, that never happened.
Replies from: jazmt↑ comment by Yaakov T (jazmt) · 2014-06-03T11:57:30.476Z · LW(p) · GW(p)
Probably not, but why are you certain?
Replies from: linkhyrule5, fezziwig↑ comment by linkhyrule5 · 2014-06-03T22:55:55.874Z · LW(p) · GW(p)
More importantly, whether or not it happened is irrelevant to its use as a rationality quote...
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-06-05T19:50:43.179Z · LW(p) · GW(p)
Update not upon fictional evidence.
Replies from: Punoxysm, ialdabaoth, DanielLC↑ comment by Punoxysm · 2014-06-05T20:31:02.017Z · LW(p) · GW(p)
What about fanfictional evidence?
More seriously, shouldn't it be "don't update on fictional evidence as if it were true"?
Certainly it's reasonable for a story to make us reconsider our beliefs.
Replies from: AndHisHorse↑ comment by AndHisHorse · 2014-06-05T22:58:14.828Z · LW(p) · GW(p)
It's reasonable to update as a result of the analysis of fiction (including fanfiction) for two reasons, neither of which is directly related to the events of the story in the same way that events in real life are related to updating. The first is: does this prompt me to think in a way I did not before? If so, it is not evidence, but it allows you to better weigh the evidence by providing you with more possibilities. The second is: why was this written? Even a truthless piece of propaganda can be interesting evidence in that it is entangled with human actions and motivations.
Replies from: dthunt, Jiro↑ comment by dthunt · 2014-06-06T04:22:22.003Z · LW(p) · GW(p)
If considering a new hypothesis fundamentally changes the way you think about priors, and the arguments you used to justify ratios between hypotheses no longer hold, then, yes, you will have to look at the evidence again.
I feel a little odd about calling that process 'updating', since I think it's a little more involved than taking into account a single new piece of evidence.
↑ comment by Jiro · 2014-06-06T17:49:23.549Z · LW(p) · GW(p)
The first is: does this prompt me to think in a way I did not before? If so, it is not evidence, but it allows you to better weigh the evidence by providing you with more possibilities.
I think that this would only be true if it prompts you to think in a new and random way. Fiction which prompts you to think in a new but non-random way (that is, all fiction) could very well make it worse. It could very well be that the author selectively prompts you to think only in cases where you got it right without doing the thinking. If so, then this will reduce your chance of getting it right.
For a concrete example, consider a piece of homeopathic fiction which "prompts you to think" about how homeopathy could work. It provides a plausible-sounding explanation, which some people haven't heard of before. That plausible-sounding explanation either is rejected, in which case it has no effect on updating, or accepted, making the reader update in the direction of homeopathy. Since the fiction is written by a homeopath, it wouldn't contain an equally plausible sounding (and perhaps closer to reality) explanation of what's wrong with homeopathy, so it only leads people to update in the wrong direction.
Furthermore, homeopathy is probably more important to homeopaths than it is to non-homeopaths. So not only does reading homeopathic fiction lead you to update in the wrong direction, reading a random selection of fiction does too--the homeopath fiction writers put in stuff that selectively makes you think in the wrong direction, and the non-homeopaths, who don't think homeopathy is important, don't write about it at all and don't make you update in the right direction.
Replies from: Neph, AndHisHorse↑ comment by Neph · 2014-06-15T13:10:35.281Z · LW(p) · GW(p)
does anyone else find it ironic that we're using fictional evidence (a story about homeopathic writers that don't exist) to debate fictional evidence?
Replies from: Jiro, arromdee↑ comment by Jiro · 2014-06-16T01:09:09.988Z · LW(p) · GW(p)
The scenario is not evidence at all, fictional or not. The reasoning involved might count as evidence depending on your definition, but giving a concrete example is not additional evidence, it only makes things easier to understand. Calling this fictional evidence is like saying that an example mentioning parties A, B, and C is "fictional evidence" on the grounds that A, B, and C don't really exist.
↑ comment by arromdee · 2014-06-16T01:08:11.386Z · LW(p) · GW(p)
The scenario is not evidence at all, fictional or not. The reasoning involved might count as evidence depending on your definition, but giving a concrete example is not additional evidence, it only makes things easier to understand. Calling this fictional evidence is like saying that an example mentioning parties A, B, and C is "fictional evidence" on the grounds that A, B, and C don't really exist.
↑ comment by AndHisHorse · 2014-06-06T18:16:55.248Z · LW(p) · GW(p)
Interesting point. The sort of new ways of thinking I had imagined were more along the lines of "consider more possible scenarios" - for example, if you had never before considered the idea of a false flag operation (whether in war or in "civil" social interaction), reading a story involving a false flag operation might prompt you to reinterpret certain evidence in light of the fact that it is possible (a fact not derived directly from the story, but from your own thought process inspired by the story). While it is certainly possible to update in the wrong direction, the thought process I had in mind was thus:
I have possible explanations A, B, and C for this observed phenomenon Alpha.
I read a story in which event D* occurs, possibly entangled with Alpha*, a phenomenon similar to Alpha.
I consider the plausibility of an event of the type D* occurring, taking in not only fictional evidence but also real-world experience and knowledge, and come to the conclusion that while D* takes certain liberties with the laws of (psychology/physics/logic), the corresponding event D is entirely plausible, and may be entangled with a phenomenon such as Alpha.
I now have possible explanations A, B, C, and D for the observed phenomenon Alpha.
It is important to note that fiction has no such use for a hypothetical perfect reasoner, who begins with priors assigned to each and every physically possible event. Further, it would be of no use to anyone incapable of making that second-to-last step correctly; if they simply import D* as a possible explanation for Alpha, or arrive at some hypothetical event D which is not, in fact, reasonable to assume possible or plausible, then they have in fact been hindered by fictional "evidence".
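A minimal sketch of the bookkeeping this implies, with arbitrary numbers added purely for illustration:

```python
# Before reading the story: three hypotheses for phenomenon Alpha.
priors = {"A": 0.5, "B": 0.3, "C": 0.2}

# After the story suggests a new possibility D, assign it some plausibility
# based on real-world knowledge (NOT based on the story being true)...
priors["D"] = 0.1

# ...and renormalise so the probabilities still sum to one.
total = sum(priors.values())
priors = {h: p / total for h, p in priors.items()}
print(priors)  # A, B, C all shrink slightly to make room for D
```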
↑ comment by ialdabaoth · 2014-06-17T17:51:32.818Z · LW(p) · GW(p)
Update not upon fictional evidence.
One man's fictional evidence is another man's thought experiment, and another's illustrative story.
To me, the lesson is "square dice are physical objects which imperfectly embody the process 'choose a random whole number between one and six'".
If you make the map-territory error and assume that "whatever the dice roll, is what we accept" while simultaneously assuming that "the dice can only each roll whole numbers between one and six; other outcomes such as 'die breaks in half' or 'die rolls into crack in floor' or 'die bursts into flame' or 'die ends up in Eliezer Yudkowsky's pants and travels unexpectedly to Washington DC' are out-of-scope", you're gonna have a bad time when one of those out-of-scope outcomes occurs and someone else capitalizes on it to turn a pure game-of-chance into a game-of-rhetoric-and-symbolcrafting.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-06-17T20:42:18.883Z · LW(p) · GW(p)
I shall cheerfully bet at very high odds against this happening the next time I roll a standard die.
Replies from: VAuroch, ialdabaoth↑ comment by VAuroch · 2014-07-06T05:11:39.471Z · LW(p) · GW(p)
If you are actually offered this bet, you probably should not take it.
When I was a young man about to go out into the world, my father says to me a very valuable thing. He says to me like this... "Son," the old guy says, "I am sorry that I am not able to bank roll you to a very large start, but not having any potatoes which to give you, I am now going to stake you to some very valuable advice. One of these days in your travels, a guy is going to come to you and show you a nice, brand new deck of cards on which (Sky snaps fingers) the seal has not yet been broken. This man is going to offer to bet you that he can make the jack of spades jump out of that deck and squirt cider in your ear. Now son, you do not take this bet, for as sure as you stand there, you are going to wind up with an earful of cider."
- Sky Masterson (Marlon Brando), Guys and Dolls
↑ comment by ialdabaoth · 2014-06-17T20:54:24.684Z · LW(p) · GW(p)
I shall cheerfully bet at very high odds against this happening the next time I roll a standard die.
I almost said "so shall I, but... " - but then caught myself, because I may very well NOT bet at very high odds against this happening the next time I roll what I perceive to be a standard die.
If I believe my opponent is motivated to cheat, and capable of cheating in a manner that turns "roll a standard die" into "listen to my narrative interpretation of why whatever-just-happened means I won", then I'm apparently willing to take some of the resources I would have otherwise put on that bet, and instead put them on "watch out for signs of cheating and/or malfunctioning dice".
↑ comment by DanielLC · 2014-06-27T19:06:19.761Z · LW(p) · GW(p)
Most of these quotes are just something people said, not something that happened that we could gain a moral from. Even if they were, they're not a random sample. We're cherry picking.
Whatever it is you can learn from quotes and a selection of things someone has picked out you can learn from fiction.
↑ comment by linkhyrule5 · 2014-06-03T22:56:16.489Z · LW(p) · GW(p)
... Huh.
On a miscellaneous note, now I know one of Pratchett's inspirations...
comment by Kaj_Sotala · 2014-06-04T11:39:55.865Z · LW(p) · GW(p)
The eleventh thesis on Feuerbach is engraved on Marx’s tombstone in Highgate Cemetery. It reads: ‘The philosophers have only interpreted the world in various ways; the point is, to change it’ (T 158). This is generally read as a statement to the effect that philosophy is unimportant; revolutionary activity is what matters. It means nothing of the sort. What Marx is saying is that the problems of philosophy cannot be solved by passive interpretation of the world as it is, but only by remoulding the world to resolve the philosophical contradictions inherent in it. It is to solve philosophical problems that we must change the world.
-- Peter Singer, Marx: A Very Short Introduction
Replies from: Lumifer↑ comment by Lumifer · 2014-06-04T14:41:56.287Z · LW(p) · GW(p)
Funny how the same meaning expressed by different people led to so much outrage...
Replies from: Plasmon, satt, gjm, WalterL
The aide said that guys like me were ''in what we call the reality-based community,'' which he defined as people who ''believe that solutions emerge from your judicious study of discernible reality.'' I nodded and murmured something about enlightenment principles and empiricism. He cut me off. ''That's not the way the world really works anymore,'' he continued. ''We're an empire now, and when we act, we create our own reality. And while you're studying that reality -- judiciously, as you will -- we'll act again, creating other new realities, which you can study too, and that's how things will sort out. We're history's actors . . . and you, all of you, will be left to just study what we do.''
↑ comment by Plasmon · 2014-06-04T17:15:28.726Z · LW(p) · GW(p)
These quotes don't seem similar to me at all.
The first quote talks of "changing" reality, the second talks of "creating" it, making the first seem like an encouragement to try and change reality, and the second like solipsism (specifically, "creating our own reality").
The second also seems very dismissive of the need to think before you act, the first much less so (if at all).
Replies from: Salemicus, Lumifer↑ comment by Salemicus · 2014-06-04T19:01:01.065Z · LW(p) · GW(p)
The second quote is clearly not solipsism; note that what is "created" will be solid enough to be judiciously studied by other actors, empiricism, etc. Note also that the second quote does not talk about creating "reality," it talks about creating "our own reality." In other words, remoulding the world to suit your own purposes. Any sensible reading of the quote leads to that interpretation.
Like Lumifer, I immediately thought of the famous "reality-based community" quote when I read the Singer/Feuerbach quote.
Replies from: None, Plasmon↑ comment by [deleted] · 2014-06-04T22:41:02.294Z · LW(p) · GW(p)
No, the second clearly implies that the speaker simply doesn't hold with Enlightenment principles, empiricism, and all that "judicious study of discernible reality" crap. That speaker clearly prefers to just act, not out of rational calculation towards a goal, but because acting is manly and awesome. This is why people have such vicious contempt for that speaker: not only is he not acting rationally on behalf of others, he doesn't even care about acting rationally on his own behalf, and he had the big guns.
Replies from: Lumifer, Richard_Kennaway↑ comment by Lumifer · 2014-06-05T01:00:51.347Z · LW(p) · GW(p)
This is why people have such vicious contempt for that speaker
Actually, no, I think that some people have such vicious contempt for that speaker because he is a prominent member of the enemy political tribe and so needs to have shit thrown at him given the slightest opportunity.
↑ comment by Richard_Kennaway · 2014-06-05T18:14:25.393Z · LW(p) · GW(p)
That speaker clearly prefers to just act, not out of rational calculation towards a goal, but because acting is manly and awesome.
I cannot read this anywhere in the text, not even between the lines.
Replies from: Plasmon↑ comment by Plasmon · 2014-06-05T19:06:54.973Z · LW(p) · GW(p)
What other option is there? Preferring to act out of rational calculation towards a goal would put the speaker among those who "believe that solutions emerge from judicious study of discernible reality", i.e. the very people he's arguing against. We are left to guess what alternative decision procedure the speaker is proposing. eli_sennesh's interpretation is one possibility, do you have another?
Replies from: DanArmak, Richard_Kennaway↑ comment by DanArmak · 2014-06-05T19:55:58.341Z · LW(p) · GW(p)
I read him as saying his empire was so powerful he didn't need to care about existing reality or to plan ahead; he could make it up as he went and still expected to succeed no matter what, so he didn't need to judiciously study the existing reality before overwriting it.
↑ comment by Richard_Kennaway · 2014-06-05T19:26:11.469Z · LW(p) · GW(p)
We are left to guess what alternative decision procedure the speaker is proposing. eli_sennesh's interpretation is one possibility, do you have another?
I read him as saying that the people he is talking to and about are out of the loop. They write about what the politicians are doing, but only after the fact. The politicians have their own sources of information and people to analyse them, and the public-facing writers have no role in that process.
↑ comment by Plasmon · 2014-06-04T20:11:07.722Z · LW(p) · GW(p)
in other words, remoulding the world to suit your own purposes
Denotationally, that seems like a reasonable interpretation. It sets off solipsism warnings in my head, possibly because I know some self-described solipsists who really are fond of using that kind of phrasing.
However, the speaker could have chosen to say this in a more straightforward way, as you do. Something like "We are an empire now, we have the power to remould parts of the world to better suit our purposes". And yet, he did not. Why not? This is not a rhetorical question, I'm open to other possible answers, but here's what I think:
I don't think it is very controversial that this quote is arguing against "the reality-based community". It is trying to give the impression that "acting ... to create our own reality" is somehow contradictory to "solutions emerging from your judicious study of discernible reality". In reality of course, most or all effective attempts at steering reality towards a desired goal are based on "judicious study of discernible reality". He is trying to give the impression that he ("we, an empire") can effectively act without consulting, or at least using the methods of, the "reality-based community". He doesn't say that denotationally, because it's false.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-04T20:14:45.349Z · LW(p) · GW(p)
Seems to me you're overthinking the simple difference between being a passive observer and being an agenty mover and shaker.
Replies from: Richard_Kennaway, Plasmon↑ comment by Richard_Kennaway · 2014-06-05T18:15:03.941Z · LW(p) · GW(p)
Or in an old Arab saying, the dogs bark, but the caravan moves on.
↑ comment by Plasmon · 2014-06-05T18:52:14.129Z · LW(p) · GW(p)
I think you are underestimating the importance of being well-informed for being an "agenty mover and shaker". Look at this guy and these guys for example. Were they "agenty movers and shakers" ? They certainly tried!
Even the famous Sun Tzu, hardly a passive observer himself, devotes an entire chapter to the importance of being well-informed.
↑ comment by Lumifer · 2014-06-04T17:34:20.759Z · LW(p) · GW(p)
The first quote talks of "changing" reality, the second talks of "creating" it, making the first seem like an encouragement to try and change reality, and the second like solipsism.
I think this is exactly the same thing. When you change existing reality you create new reality.
The second also seems very dismissive of the need to think before you act
I read it more as pointing out that what many accept as immutable is actually mutable and changeable. This also plays into the agent vs NPC distinction (see e.g. here).
↑ comment by gjm · 2014-06-05T02:30:26.312Z · LW(p) · GW(p)
Funny how the same meaning expressed by different people led to so much outrage
Because no one ever got outraged at fluffy old Karl Marx, dear me no.
I agree with Plasmon that there are important differences between the two quotations other than what political tribe they come from, and that the words attributed to the Bush aide suggest a contempt for "judicious study" and looking before one leaps, which Marx's aphorism doesn't. But even if we set that aside and stipulate that the two quotations convey the exact same meaning and connotations, the point you seem to be making -- that the Bush guy got pilloried for being from the wrong tribe, whereas everyone loves Karl Marx when he says the same thing because he's from the right tribe -- seems to me badly wrong.
First of all, if you think Marx is of the same political tribe as most people who take exception to the Bush aide's remarks, you might want to think again. That's a mistake of the same magnitude (and perhaps the same type?) as failing to distinguish Chinese people from Japanese because they all look "Oriental".
Secondly, while Marx's aphorism gets quoted a lot, I don't think that's because everyone (or everyone on the "left", or whatever group you might have in mind) agrees with it. It expresses an interesting idea pithily, and that suffices.
Replies from: Jiro↑ comment by Jiro · 2014-06-05T07:15:25.891Z · LW(p) · GW(p)
I wouldn't suggest that Marx is in the same tribe as people who don't like Bush. I would, however, suggest that Marx is within the Overton Window for such people and that Bush is not, and that has similar effects to actually being in the same tribe.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-05T09:29:41.786Z · LW(p) · GW(p)
I don't think going around and making a violent revolution to get rid of capitalism is within the overton window of most people on the left in the US.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-06-05T15:41:42.977Z · LW(p) · GW(p)
One could be sympathetic to many of Marx's ideas while nevertheless holding that the violent revolution idea has been shown not to work.
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2014-06-05T21:24:55.235Z · LW(p) · GW(p)
Most people don't reject violent revolution for the practical reason that it's an unworkable strategy, but because they find the idea of going and lynching the capitalists morally wrong.
Marx's idea of putting philosophy into action brought along the politics of revolution. Bush's relationship with the "reality-based community" leads to misleading voters and ignoring scientific findings. In both cases the ideas get judged by their practical political consequences.
Replies from: Kaj_Sotala, AndHisHorse↑ comment by Kaj_Sotala · 2014-06-06T03:38:24.576Z · LW(p) · GW(p)
No need to lynch anyone: after all, Marx didn't feel that capitalists were evil, he felt that they were just doing what the prevailing economic system forced them to do: to squeeze profit out of workers to avoid being outcompeted and driven to bankruptcy by the other capitalists. But (most of them) are not actively evil and don't need to be punished. So you could just let them live but take their stuff, and there does exist wide support for the notion of forcibly taking at least some of people's stuff (via taxation).
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-06T05:53:10.193Z · LW(p) · GW(p)
Marx didn't think that you can simply get a democratic majority and tax rich people's wealth away. He considered that to be no viable political strategy and advocated revolution instead. You ignore the political actions that Marx advocated. In dialectics, a thesis needs a contrasting antithesis to allow for synthesis.
Stalin also didn't kill people because they were evil. That's beside the point. The actions that resulted in dead people were justified because they moved history along.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-06-06T06:10:56.762Z · LW(p) · GW(p)
The earlier question was about whether Marx would be in people's Overton window. I think that if someone thinks "well, Marx had a pretty good analysis of the problems of capitalism, though he was mistaken about the best solutions", then that counts as Marx being within the window.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-06T08:56:54.392Z · LW(p) · GW(p)
I don't think anyone would say that Bush was wrong about everything to the extent that he's outside of people's Overton window.
↑ comment by AndHisHorse · 2014-06-06T02:54:58.668Z · LW(p) · GW(p)
What evidence moves you to say that the primary reason for rejection of violent revolution is morality rather than practicality? (And why do you/the majority of people think that violent revolution has to end in lynchings? Is there another widely-held opinion that simply stripping the capitalists of their defining trait - wealth - would be insufficient?)
↑ comment by Lumifer · 2014-06-05T15:49:42.385Z · LW(p) · GW(p)
holding that the violent revolution idea has been shown not to work.
That's not true. The violent revolution idea worked very well. It's just that what happened after that revolution didn't quite match Marx's expectations.
Replies from: DanArmak↑ comment by DanArmak · 2014-06-05T19:53:39.716Z · LW(p) · GW(p)
Well if you ignore all the predictions for what should happen afterwards, the mere idea that it's possible to have a violent revolution that would topple an old authoritarian regime wasn't exactly original to Marx.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-05T21:25:19.035Z · LW(p) · GW(p)
The thing that was original to Marx was that a revolution is the only way to create real political change and that it's impossible to create that change inside the system.
Replies from: DanArmak↑ comment by DanArmak · 2014-06-06T11:29:02.565Z · LW(p) · GW(p)
I find it hard to believe this was an original idea. In a classic autocracy with a small rich legally empowered class, how could you possibly expect to radically change things except through violence? What alternatives are there that were ignored by all the previous violent revolutions in history?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-06T12:55:45.575Z · LW(p) · GW(p)
I find it hard to believe this was an original idea. In a classic autocracy with a small rich legally empowered class, how could you possibly expect to radically change things except through violence?
The idea is that even in representative democracies, creating radical change within the system is impossible.
Great Britain is still a Monarchy in 2014, but I would say they changed a great deal without a violent revolution.
↑ comment by WalterL · 2014-07-01T21:06:36.253Z · LW(p) · GW(p)
I always thought that this quote was probably fabricated. When a Tribe B reporter encounters a "man on the street", "black friend", "highly placed source", or in this case "an aide" who is ostensibly a member of Tribe A, yet goes on to issue a quote that is more or less a call to arms for B, I'm immensely suspicious.
I could still buy it though, if the aide talked like the protagonist of his own story. But he's just an orc, snarling his hatred of applause lights to the innocent reporter. I don't buy it.
Replies from: Nornagest
comment by sketerpot · 2014-06-21T22:26:09.142Z · LW(p) · GW(p)
"Focus on the future productivity of the asset you are considering. [...] If you instead focus on the prospective price change of a contemplated purchase, you are speculating. There is nothing improper about that. I know, however, that I am unable to speculate successfully, and I am skeptical of those who claim sustained success at doing so. Half of all coin-flippers will win their first toss; none of those winners has an expectation of profit if he continues to play the game. And the fact that a given asset has appreciated in the recent past is never a reason to buy it."
-- Warren Buffett, in some thoughts on investing.
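The coin-flipper point is easy to check by simulation; here is a minimal sketch with arbitrary parameters (not anything from Buffett's letter):

```python
import random

# 10,000 flippers each call one fair coin toss; keep only the "winners",
# then see how they do on a second, independent toss.
random.seed(0)
flippers = range(10_000)
winners = [f for f in flippers if random.random() < 0.5]
second_round_wins = sum(random.random() < 0.5 for _ in winners)

print(len(winners))                        # about 5,000 first-toss winners
print(second_round_wins / len(winners))    # still about 0.5 -- no edge
```

Selecting on past wins changes nothing about the next toss, which is the sense in which recent appreciation is not by itself a reason to buy.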
comment by spxtr · 2014-06-06T05:35:24.806Z · LW(p) · GW(p)
Replies from: Richard_Kennaway, DanielLC
The Patrician took a sip of his beer. "I have told this to few people, gentlemen, and I suspect never will again, but one day when I was a young boy on holiday in Uberwald I was walking along the bank of a stream when I saw a mother otter with her cubs. A very endearing sight, I'm sure you will agree, and even as I watched, the mother otter dived into the water and came up with a plump salmon, which she subdued and dragged on to a half-submerged log. As she ate it, while of course it was still alive, the body split and I remember to this day the sweet pinkness of the roes as they spilled out, much to the delight of the baby otters who scrambled over themselves to feed on the delicacy. One of nature's wonders, gentlemen: mother and children dining upon mother and children. And that's when I first learned about evil. It is built in to the very nature of the universe. Every world spins in pain. If there is any kind of supreme being, I told myself, it is up to all of us to become his moral superior."
↑ comment by Richard_Kennaway · 2014-06-06T16:22:14.670Z · LW(p) · GW(p)
comment by Larks · 2014-06-08T17:51:18.675Z · LW(p) · GW(p)
Thinking that all individuals pursue "selfish" interest is equivalent to assuming that all random variables have zero covariance.
Taleb, Aphorisms
Replies from: timujin↑ comment by timujin · 2014-06-08T18:39:50.750Z · LW(p) · GW(p)
I don't get it. (I know what random variables and covariance are)
Replies from: The_Duck, Izeinwinter↑ comment by The_Duck · 2014-06-08T19:53:11.722Z · LW(p) · GW(p)
I read it as saying that people have many interests in common, so pursuing "selfish" interests can also be altruistic to some extent.
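One way to cash out the covariance metaphor, with toy numbers chosen purely for illustration: if two people's payoffs covary positively across outcomes, an action chosen "selfishly" by one tends to be good for the other as well.

```python
# Toy payoffs for two people across four possible outcomes of some project.
alice = [3, 5, 8, 10]
bob   = [2, 4, 7, 11]   # shared interests: his payoffs track hers

n = len(alice)
mean_a, mean_b = sum(alice) / n, sum(bob) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(alice, bob)) / n

print(cov)  # positive -- whatever Alice steers toward tends to help Bob too
```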
Replies from: johnlawrenceaspden↑ comment by johnlawrenceaspden · 2014-06-09T23:24:47.330Z · LW(p) · GW(p)
If that is the intended reading, then it's an example of sounding wise while saying nothing.
Replies from: VAuroch↑ comment by Izeinwinter · 2014-06-15T11:26:05.429Z · LW(p) · GW(p)
That some people do in fact work towards the common good, or conversely, are outright malevolent rather than focused on personal gain? It's a standard warning against the typical mind fallacy and the spherical cow.
Replies from: timujin
comment by johnlawrenceaspden · 2014-06-09T23:17:08.087Z · LW(p) · GW(p)
Chance is always powerful. Let your hook always be cast; in the pool where you least expect it, there will be fish
-- Ovid
http://izquotes.com/quote/140231
Don't know the original. Anyone? Quidquid in latine dictum sit, altum videtur, and all that.
Replies from: RobinZ↑ comment by RobinZ · 2014-06-10T03:07:48.781Z · LW(p) · GW(p)
If my research is correct:
"Casus ubique valet; semper tibi pendeat hamus:
Quo minime credas gurgite, piscis erit."
Ovid's Ars Amatoria, Book III, Lines 425-426.
I copied the text from Tufts' "Perseus" archive.
comment by ike · 2014-06-18T11:25:08.666Z · LW(p) · GW(p)
The trick to following someone without getting caught is to follow somebody who doesn’t think they’re being followed. This is how I learned to follow people, and over the course of an entire school year, I learned fascinating secrets about complete strangers I followed for hours on end. It made me wonder who knew my secrets, on the days I thought I was walking with no one behind me.
-- Lemony Snicket, All The Wrong Questions, Book 2, When Did You See Her Last?, Chapter Seven
Replies from: gwern↑ comment by gwern · 2014-07-01T22:10:23.545Z · LW(p) · GW(p)
The trick to following someone without getting caught is to follow somebody who doesn’t think they’re being followed.
Does that actually work?
Replies from: ike, MarkusRamikin↑ comment by ike · 2014-07-04T11:38:30.908Z · LW(p) · GW(p)
Ask yourself whether you'd notice someone following you when you weren't looking out for it.
The part that impressed me and led me to post it was the whole "X applies to everyone else, hmm maybe it applies to me too" idea.
Replies from: gwern↑ comment by MarkusRamikin · 2014-07-02T06:46:08.516Z · LW(p) · GW(p)
No.
comment by Eugine_Nier · 2014-06-03T01:49:29.139Z · LW(p) · GW(p)
Replies from: None, Stabilizer, army1987
Universities have been progressing from providing scholarship for a small fee into selling degrees at a large cost.
This is the natural evolution of every enterprise under the curse of success: from making a good into selling the good, into progressively selling what looks like the good, then going bust after they run out of suckers and the story repeats itself ... (The cheapest to deliver effect: "successful" cheese artisans end up hiring managers and progress into making rubber that looks like cheese, replaced by artisans who in turn become "successful"…).
↑ comment by [deleted] · 2014-06-03T07:14:10.373Z · LW(p) · GW(p)
I'd love to see Taleb actually prove his assertion here, rather than expecting his readers' cynicism and bitterness to do the work of evidence.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-03T08:15:01.812Z · LW(p) · GW(p)
Do you really doubt that universities used to take smaller fees and now sell degrees for a large cost?
Replies from: None, chaosmage↑ comment by [deleted] · 2014-06-03T14:05:55.110Z · LW(p) · GW(p)
I certainly doubt the latter portion. From my observations, whether the professoriat at any given university cares about teaching well or not has little to do with their funding sources.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-03T14:59:17.541Z · LW(p) · GW(p)
whether the professoriat at any given university
The professoriat is not the university. That, actually, is one of the changes in academia that entangles with the quote above: universities are becoming money-making machines and the professoriat becomes the proletariat -- nothing more than salaried employees (notice what's happening to tenure).
Replies from: None↑ comment by [deleted] · 2014-06-03T15:23:28.453Z · LW(p) · GW(p)
That, actually, is one of the changes in academia that entangles with the quote above: universities are becoming money-making machines and the professoriat becomes the proletariat -- nothing more than salaried employees (notice what's happening to tenure).
Though it would be weird if that were what Taleb was talking about: he has nothing but contempt for the institution of tenure (I think another of Eugine's quotes makes that clear). For Taleb, the proletarianization of professors is a good thing, and presumably he doesn't think that this is the cause of the degeneration (if there is in fact any) of higher education.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-03T15:30:24.054Z · LW(p) · GW(p)
presumably he doesn't think that this is the cause of the degeneration (if there is in fact any) of higher education.
True, it's more likely to be a consequence.
If you see yourself primarily as a business with the task of exchanging cheapest-to-deliver services for money, an ossified and unyielding labor force is something you very much do not want.
I suspect that the root of the problem goes to the fact that the universities are supposed to be both centers of research and teaching institutions. It worked well on small scale when the few students were, basically, professors' apprentices. But it doesn't work well for the delivery of education to the masses.
Replies from: EHeller, None↑ comment by EHeller · 2014-06-03T23:39:04.132Z · LW(p) · GW(p)
I suspect that the root of the problem goes to the fact that the universities are supposed to be both centers of research and teaching institutions.
In my estimation (having worked at several universities of various size and prestige, and more recently having consulted at all sorts of businesses), the problem is a common one in a lot of American business and government since the 1970s/80s -- the rise of professional management.
At the large flagship U down the street from my house, professor labor costs have dropped markedly (the trend has been to replace tenure-track lines with adjuncts and grad students, as well as to increase grant overhead; in the science departments, many professors turn a net profit because grant overhead is larger than their salary costs). Enrollment is way up; tuition is way, way up. A drive to leverage university-held patents has created massive profits for the university (with some absurdity along the way -- a professor tried to start a company, only to get a cease-and-desist order from a semiconductor company; the university had sold the rights to his research to the semiconductor company).
And yet the university finds itself on the verge of bankruptcy -- why? Because management has exploded. The university now has a fellowship office (staffed entirely by managers who add no direct value), and not one but two bureaucratic offices devoted to education quality (how many people does it take to administer teacher feedback forms? Apparently about 20, of which several make more than 100k a year -- roughly 5x an adjunct teaching a full load of 10 courses). Twenty years ago, all of the deans were tenured professors who rotated into the job for a few years; now all but one are outside hires who are deans full time. The last president they hired made an absurd amount of money, and brought with him several subordinates all making 150k+ a year. I often wonder how that negotiation went -- "I need not only my salary, but I need these extra people to do the parts of the job I don't like."
The problem is insidious -- you hire some managers to deal with work no one wants to do. But then they start hiring people to deal with work THEY don't want to do, and so on and so on. Pretty soon all your recent hires have nothing to do with the core competency of your business and they are eating all your profit from within. It's also damn near impossible to get rid of them, because by this point all the hiring and firing that no one wanted to deal with has become their domain.
It's not just education. I've consulted with companies that have more IT project managers than developers, companies that spend more money on medical benefits management than they would have spent if they simply paid every claim that walked through the door, etc.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-04T01:02:21.029Z · LW(p) · GW(p)
the rise of professional management
I would call it "being taken over by bureaucracy", but I basically agree.
At private companies the bureaucracy is constrained by market pressures (unless the company finds a particularly juicy spot at the government trough), but for colleges and universities these pressures have been largely absent. Until now.
I expect the next decade to be pretty painful for, um, institutions of higher learning.
Replies from: EHeller↑ comment by EHeller · 2014-06-04T01:48:46.199Z · LW(p) · GW(p)
At private companies the bureaucracy is constrained by market pressures
I disagree: you'd be amazed how inefficient you can be and still be profitable. Lots of very large companies are being strangled by their bureaucracy even while remaining at least somewhat profitable (generally, the existence of a huge company is in and of itself a barrier to entry for competitors). I've worked for a surprising number of companies that have the basic problem of "I used to be very profitable, but now I find I'm slightly less profitable despite selling more products at higher margins." Even worse, I've seen attempts to solve the problem derailed by the same management apparatus.
A former boss was fond of blaming MBAs. He had a saying along the lines of: the core problem with MBAs is the idea that you can be good at "business" without being good at any particular business. MBAs march in, say "we need to quantify these decisions," and add a ton of process (which invites the managers in). A decade later, they notice that despite generally better conditions they aren't as profitable, they hire some big-data consultants to come in, and we say things like "you are spending $x+100 dollars to better quantify decisions that are only worth $x, and that's not even counting all the time you waste on all the paperwork that the process requires."
Replies from: Lumifer↑ comment by Lumifer · 2014-06-04T04:43:34.530Z · LW(p) · GW(p)
I disagree: you'd be amazed how inefficient you can be and still be profitable.
Constrained, not eliminated :-) If you have certain advantages -- e.g. you are a too-big-to-fail bank -- you can be horribly bureaucratic and nothing bad will happen to you for a long time.
Generally speaking, I think of the standard trajectory of successful companies as looking something like this:
- Start as a lean mean hungry machine. Expand, grow fast.
- Become successful, lazy and complacent. Life is easy.
- Become fat, ossified, and arthritic. Sudden movements are not possible any more.
- Become a dinosaur and either crater from being unable to adapt or be torn to pieces by new lean mean hungry machines.
↑ comment by EHeller · 2014-06-06T05:27:43.328Z · LW(p) · GW(p)
I guess I have less faith in the constraint. Maybe it's because I constantly work with companies that have been between stages 3 and 4 for a very long time.
As an anecdote, many years ago I worked with a Fortune 1000 retail company whose inventory system was so bad that I was legitimately surprised they were able to operate and make money. ("According to this you have more shirts in inventory in one store in San Francisco than the entire population of California..."). Much of their IT resources were being eaten by building weird one-off workarounds to the problems (i.e. making sure the shipping system didn't stop sending items to the store in California just because the inventory system claimed it had infinity shirts). The business side of management was clueless enough about all this that they dropped a lot of money hiring people to try to model with the inventory data. The IT side of management told us, point blank, that hiring us wasn't their idea and they wouldn't be offering any support. Near as I can tell, the company operated entirely because a bunch of mid-level employees were fighting to get their work done in spite of the ridiculous systems in place.
Maybe they eventually got their house in order, but I doubt it. Still a highly profitable business that has actually grown since I worked with them.
I don't think I've ever worked with a company that didn't have at least some production code that had been developed, and was currently running, in Excel spreadsheets, entirely because nearly every time they tried to do a project through IT it crashed and burned. So business users develop and then run important business processes in Excel. A lot of consulting projects start with "all right, let me collect the five dozen Excel spreadsheets you use for your crucial business processes. Oh, you each have your own version that does things slightly differently? Wonderful..."
Replies from: Lumifer↑ comment by Lumifer · 2014-06-06T15:14:16.380Z · LW(p) · GW(p)
Maybe it's because I constantly work with companies that have been between stages 3 and 4 for a very long time.
That's because companies between these stages lose the capability to do anything effective themselves, but still have the money to hire lots of consultants :-)
And yes, I am quite familiar with the spectacle of a dozen and a half mysterious linked Excel spreadsheets which work ("work" is defined as not crashing; it's not like anyone can check what they output) only if the stars are aligned correctly and no one touches them, ever X-D But that's usually less a consequence of the metastasizing bureaucracy and more a case of not-very-competent people in over their heads.
↑ comment by [deleted] · 2014-06-03T16:13:14.408Z · LW(p) · GW(p)
But it doesn't work well for the delivery of education to the masses.
Maybe not, though I would like to see some statistics on that. My prior on this is that education has probably followed the pattern of pretty much every other good thing in 1st world society: it is decade by decade both better and more widely available than it ever has been before.
Replies from: Lumifer↑ comment by Lumifer · 2014-06-03T17:20:29.491Z · LW(p) · GW(p)
To clarify, I am not making claims here about how well higher education works. I am saying that the structure of US universities, where faculty are hired on the basis of their ability to do original research (well, kinda sorta, it's really the ability to publish) but are expected to teach, often pretty basic stuff to pretty stupid undergrads, is suboptimal.
And the changes are easy to see: tenure is becoming harder and harder to get, while adjuncts (who are generally expected to have a Ph.D. but are not expected to do research) are multiplying on all campuses.
Replies from: None↑ comment by [deleted] · 2014-06-03T18:41:41.911Z · LW(p) · GW(p)
Some problems with your perception of American academia:
Ability to publish gets you to the interview stage, the rest is good old-fashioned politics.
Adjuncts are still expected to publish, unless they have no interest at all in upward mobility.
Of course the structure is suboptimal, but no one's really come up with a better alternative.
Replies from: EHeller, Lumifer↑ comment by EHeller · 2014-06-03T23:16:28.566Z · LW(p) · GW(p)
I think you have confused adjuncts with "lecturer" positions or other "visiting" faculty.
Generally, adjuncts are (very) low-paid contract workers, maybe $2k-3k for a 4-credit course, who are not expected to publish (they generally have little to no access to university research resources -- not even an office on campus! -- so publishing is largely impossible) and have no real upward mobility. Most adjuncts work some other full-time job (they have to; a full adjunct load generally pays less than $20k a year). These positions aren't supposed to lead to upward mobility within academia.
In some disciplines, there are other non-tenure-track positions (lecturers, research associates, etc.) which are early-career positions (in particular, they tend to come with some access to university resources, so that publishing is at least somewhat possible). These come on top of postdocs, which are also early-career positions.
Replies from: None↑ comment by [deleted] · 2014-06-03T23:55:04.599Z · LW(p) · GW(p)
I think you have confused adjuncts with "lecturer" positions or other "visiting" faculty.
I do actually know what an adjunct is. Assumption wrong.
These positions aren't supposed to lead to upward mobility within academia.
This doesn't imply that people hired as adjuncts have no desire for upward mobility.
Replies from: EHeller↑ comment by EHeller · 2014-06-04T00:37:21.546Z · LW(p) · GW(p)
I apologize then, but the post I was replying to seemed to imply (to me at least) that most adjuncts can continue a research effort and move into higher positions.
I wanted to be clear that:
- adjuncts generally have no more access to research resources than janitors do. This makes maintaining an active research effort impossible in most disciplines.
- Unlike postdocs, lecturer positions, etc., adjunct positions are explicitly not career positions; on top of there being no opportunity to do research, the university expects that the position is not your primary job.
This doesn't imply that people hired as adjuncts have no desire for upward mobility.
For most adjuncts (myself sometimes included), adjuncting is not their primary job; this does imply they don't have much desire for upward mobility within academia.
↑ comment by Lumifer · 2014-06-03T21:18:11.970Z · LW(p) · GW(p)
Ability to publish gets you to the interview stage, the rest is good old-fashioned politics.
True, but the point is, at the faculty hiring stage no one at all cares about teaching ability.
Adjuncts are still expected to publish, unless they have no interest at all in upward mobility.
Upward mobility to where? If you want a better position -- a tenure track, a job at a lab or a think tank -- sure, publishing will increase your chances. But the university is interested in adjuncts as warm bodies to teach students without all that tenure commitment. Publishing may be a prerequisite for advancement, but it is not a prerequisite for the job they are holding.
Replies from: None, None↑ comment by [deleted] · 2014-06-03T22:42:17.295Z · LW(p) · GW(p)
at the faculty hiring stage no one at all cares about teaching ability.
Have you been on hiring committees? I've been involved with five at three universities. All of them discussed the teaching statements of the primary candidates.
I don't think you have a good grasp on the adjunct situation, either, but rereading the thread it doesn't look like it matters much.
↑ comment by [deleted] · 2014-06-03T22:29:01.190Z · LW(p) · GW(p)
at the faculty hiring stage no one at all cares about teaching ability.
To date I've been involved with five hiring committees at three institutions -- one Big Ten, one private, and one state school. All five discussed teaching ability; it's standard practice for candidates to write teaching statements. A search of mathjobs.org for "teachin
↑ comment by Stabilizer · 2014-06-04T18:39:36.838Z · LW(p) · GW(p)
I would be interested to know how well documented this "curse of success" is. Is it studied in the economics literature? When do corporations, nations, firms, or individuals suffer from this curse, and when do they not? When do entire industries -- like universities -- suffer from the curse? When do they survive and recover? When do they go completely bust? It seems possible to find examples going both ways, so I'm guessing there's something more subtle going on.
↑ comment by A1987dM (army1987) · 2014-06-03T09:21:28.592Z · LW(p) · GW(p)
Universities have been progressing from providing scholarship for a small fee into selling degrees at a large cost.
That's less true in certain countries than in others.
comment by Eugine_Nier · 2014-06-05T06:41:46.578Z · LW(p) · GW(p)
Darwin apparently originated the concept of lumpers and splitters. Both lumping and splitting are enormously useful for thinking about reality — and it’s even more useful to understand that you can do either depending upon the circumstances and your needs — but as the Troublesome Inheritance brouhaha shows a lot of people who think they are really smart can’t handle F. Scott Fitzgerald’s challenge of holding two ideas at once (e.g., lumping and splitting) and still function.
In general, Wade’s critics have a hard time dealing with complexity (in other words, they aren’t as smart as they think they are). Chuck deals extremely well with complexity (i.e., he’s really smart).
comment by johnlawrenceaspden · 2014-06-04T22:52:21.319Z · LW(p) · GW(p)
It ain't what we don't know that causes trouble, it's what we know that just ain't so.
David Deutsch, claiming the authority of an "unknown sage" http://www.theguardian.com/science/2012/oct/03/philosophy-artificial-intelligence
Replies from: shminux↑ comment by Shmi (shminux) · 2014-06-05T07:02:17.616Z · LW(p) · GW(p)
Usually attributed to Mark Twain.
Replies from: RobinZ
comment by [deleted] · 2014-06-19T05:15:39.655Z · LW(p) · GW(p)
So, what does intuitionism suggest instead of the definition of a proposition as a truth value? Put differently, what does the form of assertion A : prop mean?
Definition 1. A proposition is defined by laying down what counts as a cause of the proposition.
With this definition in place, it is natural to define truth of a proposition in the following way.
Definition 2. A proposition is true if it has a cause.
-- Johan Georg Granström, Treatise on Intuitionistic Type Theory
Replies from: pragmatist↑ comment by pragmatist · 2014-06-19T17:22:55.610Z · LW(p) · GW(p)
The value of these definitions is completely opaque to me. Could you elaborate on why you believe this is a good rationality quote?
Replies from: None, TheAncientGeek↑ comment by [deleted] · 2014-06-19T19:20:13.542Z · LW(p) · GW(p)
Because it emphasizes that logic is a machine with strict laws and moving parts, not a pool of water to be sloshed in any direction. When you lay down what counts as a (hypothetical) cause of a proposition, you define it clearly and subject it to proof or disproof. When you demonstrate that one proposition causes another, you send truth from effects into causes according to the laws of proof.
Implication, deducibility, and computation are thus the exact same thing.
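A minimal sketch of that identification in code, under the standard propositions-as-types (Curry-Howard) reading rather than anything specific to the quoted book: a proposition is a type, a "cause" (proof) is a value of that type, and an implication is a computation that turns a proof of the premise into a proof of the conclusion. The names below are illustrative.

```haskell
-- A proposition is a type; a proof ("cause") is a value of that type.
-- Conjunction is a pair, disjunction is Either, implication is a function.
type And a b = (a, b)
type Or  a b = Either a b

-- "A and B implies B and A": the proof is literally a computation that
-- turns any proof of (A and B) into a proof of (B and A).
andComm :: And a b -> And b a
andComm (a, b) = (b, a)

-- "A implies (A or B)": inject the given proof into the left alternative.
orIntroLeft :: a -> Or a b
orIntroLeft = Left
```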
Replies from: pragmatist↑ comment by pragmatist · 2014-06-22T04:38:47.809Z · LW(p) · GW(p)
But what does it mean for one proposition to cause another? For instance, here's a true proposition: "Either Hillary Clinton is the President of the United States or there exists a planet in the solar system that is smaller than Earth." What is the cause of this proposition?
Also, when Granstrom says a proposition is true if it has a cause, what does that mean? What is "having" a cause? Does it mean that in order for a proposition to be true, its hypothetical cause must also be true? That would be a circular definition, so I'm presuming that's not it. But what then?
Replies from: None↑ comment by [deleted] · 2014-06-22T07:39:38.693Z · LW(p) · GW(p)
But what does it mean for one proposition to cause another?
In the sense of implication?
For instance, here's a true proposition: "Either Hillary Clinton is the President of the United States or there exists a planet in the solar system that is smaller than Earth." What is the cause of this proposition?
A well-formed OR proposition comes with the two alternatives and a cause for one of the alternatives. So in this case, a cause (or evidence, we could say) for "there exists a planet in the solar system smaller than Earth" is the cause for the larger OR proposition.
Also, when Granstrom says a proposition is true if it has a cause, what does that mean? What is "having" a cause?
In this case, cause is identified with computation. When we have an effective procedure (i.e., a computation) taking any cause of A into a cause of B, we say that A implies B.
Does it mean that in order for a proposition to be true, its hypothetical cause must also be true?
This is true, but the recursion hits bottom when you start talking about propositions about mere data. Constructive type theory doesn't get you out of needing your recursive justifications to bottom out in a base case somewhere.
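Continuing the same (assumed) propositions-as-types sketch, the disjunction example above and the point about bottoming out in data might look like this; the type and value names are made up for illustration and are not taken from the quoted book.

```haskell
-- A proof of an OR proposition is a tag saying which alternative holds,
-- together with a proof of that alternative (Either from the Prelude).

-- The recursion bottoms out in plain data: the "cause" of the true disjunct
-- is just a concrete witness, e.g. the name of a planet smaller than Earth.
newtype SmallerPlanetExists = SmallerPlanetExists String

-- The false disjunct gets an empty type: no values, hence no proof.
data ClintonIsPresident

-- A proof of the whole disjunction picks the provable alternative.
orProof :: Either ClintonIsPresident SmallerPlanetExists
orProof = Right (SmallerPlanetExists "Mars")
```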
↑ comment by TheAncientGeek · 2014-06-19T17:46:38.709Z · LW(p) · GW(p)
Definition 2 is somewhere between wrong and not even wrong. Propositions are regarded, by those who believe in them, as abstracta, and as such, non-causal. Setting that aside, it's obvious that, say, a belief can have a cause but be wrong. For instance, someone can acquire a false belief as the causal consequence of being lied to.
Replies from: pragmatist↑ comment by pragmatist · 2014-06-19T19:17:43.861Z · LW(p) · GW(p)
I agree that this is how propositions are usually regarded. The impression I got from the quote, though, is that Granstrom is proposing a re-definition of "proposition", so saying it's wrong seems like a category error. It does seem like a fairly pointless re-definition, though, which is why I asked the question.
comment by Eugine_Nier · 2014-06-03T00:57:41.369Z · LW(p) · GW(p)
Every person who has skin in the game knows sort of what is bullshit and what is not, since our capacities to rationalize — and those of bureaucrats and economists — are way too narrow for the complexity of the world we face, with its complex interactions. And survival is a stamp of statistical validity, while rationalization and narratives are the road to the cemetery.
comment by Eugine_Nier · 2014-06-03T00:55:59.095Z · LW(p) · GW(p)
Financial inequalities are ephemeral, one crash away from reallocation; inequalities of status & academobureaucrat "elite" are there to stay
comment by Robin · 2014-06-21T23:17:38.512Z · LW(p) · GW(p)
"Emotions are not tools of cognition"
Ayn Rand
Replies from: DanielLC, Jayson_Virissimo↑ comment by DanielLC · 2014-06-27T19:41:08.664Z · LW(p) · GW(p)
I beg to differ. Or are you saying that, if Ayn Rand says it, it must be wrong? In which case, I still disagree.
Replies from: Robin↑ comment by Robin · 2014-07-06T00:29:37.329Z · LW(p) · GW(p)
How does the definition you link to contradict Rand's statement? You can acknowledge emotions as real while denying their usefulness in your cognitive process.
Replies from: DanielLC↑ comment by DanielLC · 2014-07-06T07:19:33.632Z · LW(p) · GW(p)
The article I linked to wasn't just saying that emotions exist. It was saying that they're part of rationality.
If emotions didn't make people behave rationally, then people wouldn't have evolved to have emotions.
Replies from: Robin↑ comment by Robin · 2014-07-08T04:13:29.717Z · LW(p) · GW(p)
Rand doesn't deny that emotions are part of rationality; she denies that they are tools of rationality. It is rational to try to make yourself experience positive emotions, but to say "I have a good feeling about this" is not a rational statement, it's an emotional statement. It isn't something that should interfere with cognition.
As for emotions affecting human behavior, I think all mammals have emotions, so it's not easy for humans to discard them over a few generations of technological evolution. Emotions were useful in the ancestral environment; they are no longer as useful as they once were.
Replies from: DanielLC, anandjeyahar↑ comment by DanielLC · 2014-07-08T07:58:16.134Z · LW(p) · GW(p)
but to say "I have a good feeling about this" is not a rational statement, it's an emotional statement.
If your hunches have a bad track record, then you should learn to ignore them, but if they do work, then ignoring them is irrational.
Even if emotions are suboptimal tools in virtually all cases (which I find unlikely), that doesn't mean that ignoring them is a good idea. It's like how getting rid of overconfidence bias and risk aversion is good, but getting rid of overconfidence bias OR risk aversion is a terrible idea. Everything we've added since emotion was built around emotion. If emotion will give you an irrational bias, then you'll evolve a counter bias elsewhere.
Replies from: Robin↑ comment by Robin · 2014-07-08T18:03:21.532Z · LW(p) · GW(p)
If your hunches have a bad track record, then you should learn to ignore them, but if they do work, then ignoring them is irrational.
If your hunches have a good track record, I think you should explore that and come up with a rational explanation, and make sure it's not just a coincidence. Additionally, while following your hunches isn't inherently bad, rational people shouldn't be convinced of an argument merely based on somebody else's hunch.
Even if emotions are suboptimal tools in virtually all cases (which I find unlikely), that doesn't mean that ignoring them is a good idea.
Nobody is suggesting we ignore emotions, merely that we don't let them interfere with rational thought (in practice this is very difficult).
It's like how getting rid of overconfidence bias and risk aversion is good, but getting rid of overconfidence bias OR risk aversion is a terrible idea.
I don't follow this argument. Your biases can be evaluated absolutely, or relative to the general population. If everybody is biased towards underconfidence, then being biased towards overconfidence can be an advantage. There's a similar argument for risk aversion.
Everything we've added since emotion was built around emotion. If emotion will give you an irrational bias, then you'll evolve a counter bias elsewhere.
I'm not sure I agree with this; do you think that The Big Bang Theory is based on emotion? You can draw a path from emotion to the people who came up with the Big Bang Theory, but you can do that with things other than emotion as well.
My issue with emotions is only partly that they cause biases; it's also that you can't rely on other people having the same emotions as you. So you can use emotions to better understand your own goals. But you won't be able to convince people who don't know your emotions that your goals are worth achieving.
Replies from: DanielLC↑ comment by DanielLC · 2014-07-08T18:56:17.725Z · LW(p) · GW(p)
If your hunches have a good track record, I think you should explore that and come up with a rational explanation, and make sure it's not just a coincidence.
My explanation is that hunches are based on aggregate data that you are not capable of tracking explicitly.
Additionally, while following your hunches isn't inherently bad, rational people shouldn't be convinced of an argument merely based on somebody else's hunch.
Hunches aren't scientific. They're not good for social things. Anyone can claim to have a hunch. That being said, if you trust someone to be honest, and you know the track record of their hunches, there's no less reason to trust their hunches than your own.
Nobody is suggesting we ignore emotions, merely that we don't let them interfere with rational thought (in practice this is very difficult).
I mean ignore the emotion for the purposes of coming up with a solution.
I don't follow this argument.
Overconfidence bias causes you to take too many risks. Risk aversion causes you to take too few risks. I doubt they cancel each other out that well. It's probably for the best to get rid of both. But I'd bet that getting rid of just one of them, causing you to either consistently take too many risks or consistently take too few, would be worse than keeping both of them.
I'm not sure I agree with this; do you think that The Big Bang Theory is based on emotion?
Emotions are more about considering theories than finding them. That being said, you don't come up with theories all at once. Your emotions will be part of how you refine the theories, and they will be involved in training whatever heuristics you use.
You can draw a path from emotion to the people who came up with the Big Bang Theory, but you can do that with things other than emotion as well.
I'm certainly not arguing that rationality is entirely about emotion. Anything with a significant effect on your cognition should be strongly considered for rationality before you reject it.
So you can use emotions to better understand your own goals. But you won't be able to convince people who don't know your emotions that your goals are worth achieving.
This looks like you're talking about terminal values. The utility function is not up for grabs. You can't convince a rational agent that your goals are worth achieving regardless of the method you use. Am I misunderstanding this comment?
↑ comment by anandjeyahar · 2014-07-08T06:21:29.694Z · LW(p) · GW(p)
The only part of what you wrote that I object to is that emotions shouldn't interfere with cognition. I think they already are a part of cognition, and it's a bit like saying "quantum physics is weird". Perhaps you meant "emotions shouldn't interfere with rationality", in which case I'll observe that this doesn't seem to be a popular view around LessWrong. Also, I used to believe that emotions should be ignored, but later came to the conclusion that that's a way too heavy-handed strategy for the modern world of complex systems. To conjecture further: cognitive psychologists tend to classify emotion, mood, and affect differently; AFAIK the classification is based on temporal duration, running from short to long in the order emotion, mood, affect. My conjecture is that, in rational decision-making, emotions can and should be ignored, moods can be ignored (but not necessarily should be), and affect should not be ignored.
Replies from: Robin↑ comment by Robin · 2014-07-08T18:12:46.321Z · LW(p) · GW(p)
The only part of what you wrote that I object to is that emotions shouldn't interfere with cognition.
This is an ideal which Objectivists believe in, but it is difficult/impossible to actually achieve. I've noticed that as I've gotten older, emotions interfere with my cognition less and less and I am happy about that. You can define cognition how you wish, but given the number of people who see it as separate from emotion it's probably worth having a backup definition in case you want to talk to those people.
RE: emotions, affect, and moods. I do think that emotions should be considered when making rational decisions, but they are not the tools by which we come to decisions; here's an example.
If you want to build a house to shelter your family, your emotional connection to your family is not a tool you will use to build the house. It's important to have a strong motivation to do something, but that motivation is not a tool. You'll still need hammers, drills, etc to build the house.
I believe we can and should use drugs (I include naturally occurring hormones) to modify our emotions in order to better achieve our goals.
↑ comment by Jayson_Virissimo · 2014-07-06T01:07:39.295Z · LW(p) · GW(p)
This seems to be in tension with what she has stated elsewhere. For instance:
emotions...are lightning-like estimates of the things around you, calculated according to your values.
-- Ayn Rand, Philosophy: Who Needs It?
Wouldn't immediately available estimates be a good tool of cognition?
Replies from: Robin↑ comment by Robin · 2014-07-08T04:44:16.403Z · LW(p) · GW(p)
Very interesting... it would seem that Rand doesn't actually define emotion consistently; that was not the definition I was using. But the Ayn Rand Lexicon has 11 different passages related to emotions.
http://aynrandlexicon.com/lexicon/emotions.html
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2014-07-08T05:11:39.324Z · LW(p) · GW(p)
More charitably, we could say her conception of emotions evolved over time. Thanks for the link; I actually found some of that insightful. Also, I had forgotten how blank-slatey her theory of mind was.
comment by Eugine_Nier · 2014-06-15T00:17:33.405Z · LW(p) · GW(p)
always check their numbers. Con artists always get the math wrong.
Replies from: Cyan
↑ comment by Cyan · 2014-06-15T03:16:09.581Z · LW(p) · GW(p)
Following the link, I found that the issue in question is a recent column by George Will regarding sexual assault statistics. If only Vox Day would follow his own good advice.
Here's a rationality quote from the above link that expresses the same notion as Vox Day's, but has the virtue of concluding a post that actually exercises the kind of critical thinking in question:
when [George Will’s] numbers didn’t add up, he didn’t think critically about what the numbers mean and where they came from. He didn’t research the source of the data or determine if they were compatible, and instead, he willfully tried to minimize the assaults that 1 in 5 college women say they have personally experienced (which, for the record, comes from several different studies). He looked for a loophole, rather than applying critical thinking.
Replies from: V_V
↑ comment by V_V · 2014-06-15T11:14:02.428Z · LW(p) · GW(p)
According to Wikipedia:
Estimates vary greatly as to the number of women who experience a sexual assault during college, with surveys focused on the United States placing it as low as 1 in 50 (2%)[1] to as high as 1 in 4 (25%).
It seems to me that if studies show so much variance, they are likely to be methodologically flawed, if not outright fraudulent.
Rape prevalence among women in the U.S. (the percentage of women who experienced rape at least once in their lifetime so far) is in the range of 15–20%, with different studies disagreeing with each other. (National Violence against Women survey, 1995, found 17.6% prevalence rate;[7] a 2007 national study for the Department of Justice on rape found 18% prevalence rate.[8])
A 15–20% overall rape prevalence in the general female population seems inconsistent with a ~20% prevalence in the female college population, unless you assume that college women have an exceptionally high risk of being raped, which I would find surprising (I expect that the majority of rapes occur to victims from socially degraded and impoverished backgrounds).
Replies from: Cyan, army1987↑ comment by Cyan · 2014-06-15T13:38:57.693Z · LW(p) · GW(p)
Not all studies use the same definition of sexual assault. Surveys in particular are subject to question wording and question order effects. As army1987 notes, the ~20% proportion is for sexual assault, not just rape.
Keep in mind that the object-level question here is whether a rape-reporting rate of 12% can possibly be consistent with a ~20% sexual assault rate. Will (and the media in general) misstated the class of events to which the "12%" referred; Will then stated that it could not possibly be the case that the 12% and the 20% were consistent. This is a very strong claim, which means that checking/refuting it is easy in absolute terms. To refute the argument, it is not necessary to have precise estimates -- it is only necessary to show that the statistics being reported are broadly consistent.
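A back-of-the-envelope sketch of that consistency check; every number in it is a hypothetical assumption chosen only to illustrate the arithmetic, not a figure from Will's column or from the studies discussed above.

```haskell
-- Purely illustrative consistency check; every number below is a
-- hypothetical assumption, not a figure from the column or studies above.
main :: IO ()
main = do
  let womenEnrolled   = 10000 :: Double  -- hypothetical campus size
      prevalence      = 0.20             -- "1 in 5" over a full enrollment
      yearsEnrolled   = 4                -- assumed years spent on campus
      reportingRate   = 0.12             -- assumed share of assaults reported
      assaultsPerYear = womenEnrolled * prevalence / yearsEnrolled
      reportsPerYear  = assaultsPerYear * reportingRate
  -- With these made-up inputs, roughly 60 expected official reports per year
  -- coexist with a 20% prevalence figure.
  putStrLn ("Expected official reports per year: " ++ show reportsPerYear)
```

The only point of the sketch is that a modest count of official reports is arithmetically compatible with a high prevalence estimate once a low reporting rate is taken into account, which is the "broad consistency" being claimed above.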
↑ comment by A1987dM (army1987) · 2014-06-15T12:48:55.118Z · LW(p) · GW(p)
IIRC sexual assault is a broader category than rape.
Replies from: V_V
comment by Eugine_Nier · 2014-06-03T01:01:21.433Z
Reality exists. It includes certain facts, such as the way men and women are attracted to each other, which some people find hard to accept. These facts are part of human nature. If these facts cannot be accepted, and are opposed, the people who oppose them become enemies of humanity. They cannot accept humanity for what it is, so they hate it.
Replies from: Richard_Kennaway, None
↑ comment by Richard_Kennaway · 2014-06-03T10:06:32.331Z · LW(p) · GW(p)
Reality exists. It includes certain facts, such as that people die, which some people find hard to accept. These facts are part of human nature. If these facts cannot be accepted, and are opposed, the people who oppose them become enemies of humanity. They cannot accept humanity for what it is, so they hate it.
Not Michael Anissimov.
Reality exists. It includes certain facts, such as [ANY ASSERTION YOU LIKE], which some people find hard to accept. These facts are part of human nature. If these facts cannot be accepted, and are opposed, the people who oppose them become enemies of humanity. They cannot accept humanity for what it is, so they hate it.
Various people.
Anissimov may be correct in his description of Naomi Wolf and Elliot Rodger (although it seems to me that the room he admits for "cultural reasons" is large enough to contain the entire discourse of both). But the quoted soundbite is an anti-rationality template.
Replies from: Jiro↑ comment by Jiro · 2014-06-03T21:03:18.961Z · LW(p) · GW(p)
"That people die" is not "part of human nature" in the sense intended by that quote, which means something like "how people think and react".
Furthermore, you can't actually put any assertion you like in that template because the template only works with true assertions.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-06-04T08:12:54.861Z · LW(p) · GW(p)
"That people die" is not "part of human nature" in the sense intended by that quote, which means something like "how people think and react".
"How people think and react" was Anissimov's subject of the moment. Many things, including both that one and human mortality, have been asserted to be "part of human nature". Look up anyone arguing against life extension. It won't take long to find the argument that "mortality is part of human nature". Literally. It took me less than one minute to find this:
The US President's Council on Bioethics claims that the human life cycle has an inherent worth and that, consequently, age-extension technologies distort or pervert the ‘natural' or ‘proper' human lifespan (President's Council on Bioethics, 2003).
The original source of what is there paraphrased is here (PDF, see pp.189-190).
Furthermore, you can't actually put any assertion you like in that template because the template only works with true assertions.
It works -- that is, can be sincerely said -- for anything the writer believes. It shares this attribute with bald assertion, but surrounds the assertion with an applause light frame.
Replies from: Jiro↑ comment by Jiro · 2014-06-04T15:29:09.871Z · LW(p) · GW(p)
"X is part of human nature" can mean
-- X cannot be changed
-- X should not be changed
-- X has particularly deep connections to human psychology
"How men and women are attracted is part of human nature" normally has the third meaning. "Death is part of human nature" normally has the first meaning, and so isn't comparable. In your quote, "death is part of human nature" has the second meaning; that is indeed a fallacy, but has no bearing on the original statement since that doesn't use the same meaning.
It works -- that is, can be sincerely said -- for anything the writer believes.
By your reasoning nobody should ever say anything about a true statement that is not a proof of it, since whatever they say could have a false statement substituted and would be a fallacy.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-06-05T18:06:00.344Z · LW(p) · GW(p)
"X is part of human nature" can mean
-- X cannot be changed
-- X should not be changed
-- X has particularly deep connections to human psychology
It always and only means the third of these (with minor variations, e.g. theists will talk about souls created by God). The first and second are then drawn as implications of the third.
In your quote, "death is part of human nature" has the second meaning
It has the third meaning, as you could have discovered by consulting the sources I gave. The whole purpose of the authors of that report was to address the question: if various enhancements to human bodies, of which life extension is one, can be made, should they be made? The "human nature" argument presented there was based on our mortality having "particularly deep connections to human psychology".