Hi, I'm in Brisbane and potentially interested. Not a lot of free time at the moment though (finishing off PhD). I've been to Skeptics in the Pub, but haven't had time to go recently. I think I'm a member of UQ Skeptics on Facebook.
I think there is a problem in the culture of philosophy.
It's seen as generally better to define things up front, as this is seen as being more precise.
That sounds reasonable. Who doesn't want greater precision?
Precision is good when it is possible. But often we don't have a good enough understanding of the phenomena to be precise, and the "precision" that is given is a faux-precision.
Often philosophers use logic to define precise categories, based on an examination of their concept of X. Then they discuss and argue over the consequences of these definitions.
I think it'd be more appropriate for them to spend more time trying to examine the nature of the instances of X out there (as opposed to the properties of their concept of X), based on a loose notion of 'X' (because at this point they don't really know what X is).
(caveat: I didn't read the pages linked to in this post's description)
they might, though you have to be very careful in treating partial data as representative of the whole picture.
but the data for the kind of factors she's talking about (i've read the book, though it was a while ago) goes beyond what property records could provide.
The data necessary for such systematic examination is not available in some fields. I'm not sure about this field, but maybe it was one of them (back then at least)?
Nice post. You could write a similar one on helping the environment. How often do you hear people say, about helping the environment, that "every little bit helps"?
While we're on the topic of an Australian meetup, are there any other LW ppl in Brisbane? If there are, we could organise a meetup.
So, my suggestion is to use "rationality" consistently and to avoid using "rationalism". Via similarity to "scientist" and "physicist", "rationalist" doesn't seem to have the same problem. Discuss.
A while back I argued against using the term "rationalist".
Some further thoughts:
Noticing that something isn't right is very different from developing a solution.
The former may draw on experience and intuition - like having developed a finely honed bullshit detector. You can often just immediately see that there's something wrong.
I've noticed that when people complain that someone has given a criticism but hasn't or can't suggest something better, they seem to expect that person to be able to do so on the spot, off the top of their head.
But the task of developing a solution is not usually something you can do off the top of your head. It's a creative act, and that usually means you have to sketch out bits and pieces, critically evaluate them, modify them, and repeat until you have developed something satisfactory.
Yes. Too often people treat it as a sin to criticize without suggesting an alternative (as if a movie critic could only criticize an element of a film if they could write a better film).
But coming up with alternatives can be hard, and having clear criticisms of current approaches can be an important step towards a better solution. It might take years of building up various criticisms -- and really coming to understand the problem -- before you are ready to build an alternative.
Yet our brains assume that we hear about all those disasters [we read about in the newspaper] because we've personally witnessed them, and that the distribution of disasters in the newspapers therefore reflects the distribution of disasters in the real world.
Even if we had personally witnessed them, that wouldn't, in itself, be any reason to assume that they are representative of things in general. The representativeness of any data is always something that can be critically assessed.
This seems to be a common response - Tyrrell_McAllister said something similar:
I think that your distinction is really just the distinction between physics and mathematics.
I take that distinction as meaning that a precise maths statement doesn't necessarily reflect reality the way physics does. That is not really my point.
For one thing, my point is about any applied maths, regardless of domain. That maths could be used in physics, biology, economics, engineering, computer science, or even the humanities.
But more importantly, my point concerns what you think the equations are about, and how you can be mistaken about that, even in physics.
The following might help clarify.
A successful test of a mathematical theory against reality means that it accurately describes some aspect of reality. But a successful test doesn't necessarily mean it accurately describes what you think it does.
People successfully tested the epicycles theory's predictions about the movement of the planets and the stars. They tended to think that this showed that the planets and stars were carried around on the specified configuration of rotating circles, but all it actually showed was that the points of light in the sky followed the paths the theory predicted.
They were committing a mind projection 'fallacy' - their eyes were looking at points of light but they were 'seeing' planets and stars embedded in spheres.
The way people interpreted those successful predictions made it very hard to criticise the epicycles theory.
I fully agree, and this is completely in line with the points I was trying to make.
It is theoretically possible to accurately describe the motions of celestial bodies using epicycles, though one might need infinitely many epicycles, and the epicycles would themselves need to be on epicycles. If you think there's something wrong with the math, it won't be in its inability to describe the motion of celestial bodies.
But I don't think there's anything "wrong with the math" - I even said precisely that:
A believer in epicycles would likely have thought that it must have been correct because it gave mathematically correct answers. And it actually did. Epicycles really did precisely calculate the positions of the stars and planets (not absolutely perfectly, but in principle the theory could have been adjusted to give perfectly precise results). If the mathematics was right, how could it be wrong?
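A quick way to see why that adjustment is always possible (my own sketch, not something from the original exchange): treat the planet's apparent position as a point z(t) in the complex plane. A system of epicycles - circles riding on circles - is then just a truncated Fourier series,

$$ z(t) \approx \sum_{k=-N}^{N} c_k \, e^{i k \omega t}, $$

where each term is a circle of radius $|c_k|$ rotating at rate $k\omega$. Adding terms (larger $N$) fits any reasonably well-behaved periodic path as closely as you like - which is why the theory could always be made to match the observations while saying nothing about what those points of light actually are.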
While 'accurate' and 'precise' are used as synonyms in ordinary language, please never use them that way when talking technically about the meanings of words.
I was trying to talk about how people actually use them, and one of the things I was suggesting is that people do actually tend to treat them as synonymous.
Similarly, please never use 'begs the question' or any form of it when not referring to the logical fallacy.
Isn't this a little picky? The way I used 'begs the question', in the sense of 'raises the question', is fairly common usage. Language is constantly evolving, and if you wanted to claim that people should only use terms and phrases in line with their original meanings, you'd have to throw away most language.
As far as I can see, that's just an acknowledgement that we can't know anything for certain -- so we can't be certain of any 'laws', and any claim of certainty is invalid.
I was arguing that any applied maths term has two types of meanings -- one 'internal to' the equations and an 'external' ontological one, concerning what it represents -- and that a precise 'internal' meaning does not imply a precise 'external' meaning, even though 'precision' is often only thought of in terms of the first type of meaning.
I don't see how that relates in any way to the question of absolute certainty. Is there some relationship I'm missing here?
I'm not trying to be a jerk. Let me try to explain things, as I don't think I communicated my point very clearly.
Just to start off, the quoted text is something you said.
But perhaps you are saying that the sentence I've embedded it in does not reflect anything you said? If so, it's not meant to - it's describing the point I was making, the point to which your response included that quoted text.
Essentially, my last comment was trying to point out that what I'd originally said had been misinterpreted in the Just-So Story bit, even though I didn't do a great job of making this clear. Of course you may argue that you didn't misinterpret me, but I certainly wasn't trying to put words into anyone's mouth.
An intuition is correct if it matches reality.
Indeed, and that is why it's wrong to say that attempts to rationally justify statements about reality are "almost certainly going to produce an ad hoc Just-So Story".
science is basically a means to determine whether initial intuitions are true.
No, science is a methodology to determine whether an assertion about reality should be discarded. If it merely dealt with initial intuitions, its usefulness would be exhausted once the supply of initial intuitions had been run through.
I'm not sure what the second sentence there is taking "initial intuitions" to mean, but I don't think there's any substantial disagreement between our statements.
I doubt those kings can be killed. I think victory against them comes more from inserting layers of suppression between them and action, to modulate and reduce their power. You might be able to think of those layers as governmental machinery.
“If a nation expects to be both ignorant and free in a state of civilization, it expects what never was and never will be” -- Thomas Jefferson
its score would be the number of karma points to be awarded for implementing it.
upon reflection, a poll might be better. along the lines of:
how many points is the implementation of this feature worth?
- 10
- 20
- 50
- 100
- 150
I wonder - would it be useful for people to receive karma points for programming contributions to the LW community? It sounds reasonable to me.
An interesting question is, how do you determine the number of karma points the work deserves? One approach would be that one of the site admins could assign it a value. Another would be that it could be voted upon.
Essentially the description of the 'feature' to be added would be a post, and its score would be the number of karma points to be awarded for implementing it. Vote up if you think that score is too little, vote down if you think it is too much. This would also give you a way to rank the 'feature requests' - those with the highest scores are the ones the community cares about most (of course that may not matter much if there's only the occasional bit of programming work to be done).
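A minimal sketch of what I mean (all names and numbers here are made up for illustration):

```python
# Hypothetical sketch of the scheme above. An upvote means "this bounty
# is too small", a downvote means "it's too large"; the resulting score
# is the karma awarded for implementing the feature.

from dataclasses import dataclass, field

@dataclass
class FeatureRequest:
    title: str
    base_score: int = 50                       # starting karma bounty
    votes: list = field(default_factory=list)  # +1 = too little, -1 = too much

    def bounty(self, step: int = 10) -> int:
        return self.base_score + step * sum(self.votes)

requests = [
    FeatureRequest("Tag search", votes=[1, 1, -1]),
    FeatureRequest("Comment drafts", votes=[1, 1, 1, 1]),
]

# Rank the requests: the highest bounties are the ones the community
# cares about most.
for r in sorted(requests, key=lambda r: r.bounty(), reverse=True):
    print(r.title, r.bounty())
```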
I realise that there'd be costs and effort required to get any system like this going. E.g. you probably want such feature request 'posts' on a different part of the site, and you'd have to explain the scheme to people, etc.
This idea of providing karma points like this wouldn't have to apply to just programming tasks - it could be anything else that isn't a post or a comment but which is nonetheless a contribution to the community.
Here's an example of such external referencing of Less Wrong posts:
http://www.37signals.com/svn/posts/1750-the-planning-fallacy
[edit: included quote]
Any 'figuring out' is almost certainly going to produce an ad hoc Just-So Story.
that implies that the only correct intuition is one you can immediately rationally justify. how could progress in science happen if this was true?
science is basically a means to determine whether initial intuitions are true.
So it seems possible to me that I have an oversensitivity to noise and Bill has an undersensitivity to it.
That seems to imply that the typical case is the "correct" one, and that somehow your (or Bill's) case is invalid because it's non-typical.
If noise means that you can't sleep, study or concentrate, and you can't really help this, then this is a valid factor that should be taken into account.
[edit] though after reading further down i can see that you appreciate that.
that is exactly what you can't assume if you want to explain the basis of representation.
...because they ask for a moral intuition about a case where the outcome is predefined.
One thing I found a bit dodgy about that example is that it just asserts that the outcomes were positive.
I would bet that, for the respondents, simply being told that the outcomes were positive would still have left them feeling that in a real brother-sister situation like that there would likely have been some negative consequences.
Greene does not seem to take this into account when he interprets their responses.
I don't think there's anything that comes close to giving a theoretical account of how mathematical statements are able to, in some sense, represent things in reality.
perhaps i should have phrased it as '...stand by your intuition for a while -- even if you can't reason it out initially -- to give yourself an adequate chance to figure it out'
forgot - there was another observation i had.... this one is just quick sketching:
regarding the idea that 'moral properties' are projected onto reality.
As our moral views are about things in reality, they are -- amongst other things -- forms of representation.
I think we need a solid understanding of what representations are, how they work, and thus exactly what it is they "refer" to in the world (and in what sense they do so), before we'll really even have adequate language for talking about such issues in a precise, non-ambiguous fashion.
We don't have such an understanding of representations at the moment.
I made a similar point in another comment on a post here dealing with the foundations of mathematics - that we'll never properly understand what mathematical statements are, and what in the world they are 'about', until we have a proper theory of representation.
I.e. I think that in both cases it is essentially the same thing holding us back.
No one said anything in response to that other comment, so I'm not sure what people think of such a position - I'd be quite curious to hear your opinion...
Ok, I skimmed that a bit because it was fairly long, but here are a few observations...
I think the default human behavior is to treat what we perceive as simply being what is out there (some people end up learning better, but most seem not to). This is true for everything we perceive, regardless of the subject matter - i.e. it is nothing specific to morality.
I think it can -- sometimes -- be reasonable to stand by your intuition even if you can't reason it out. Sometimes it takes time to figure out and articulate the reasoning. I am not trying to justify obstinacy and "blind faith" here! Just saying that sometimes you can't be expected to understand it straight away.
I don't see any justification given, in what you quote from Greene, for the claim that there's essentially no justification for morality.
see also
"How Obama Is Using the Science of Change" http://www.time.com/time/printout/0,8816,1889153,00.html
I think there's some misunderstanding here. I said don't assume. If you have some reason to think what you're doing is reasonable or ok, then you're not assuming.
Rich enough that, if you're going to make these sorts of calculations, you'll get reasonable results (rather than misleading or wildly misleading ones).
A lot of this probably comes down to:
Don’t assume – that you have a rich enough picture of yourself, a rich enough picture of the rest of reality, or that your ability to mentally trace through the consequences of actions comes anywhere near the richness of reality’s ability to do so.
The problem is language. If you use a concept frequently, you pretty much need a shorthand way of referring to it.
But I would ask, do you need that concept – a concept for labeling this type of person – in the first place?
"Mate selection for the male who values the use of a properly weighted Bayesian model in the evaluation of the probability of phenomena" would not make a very effective post title. [as] "Mate selection for the male rationalist".
I don’t think that’s the only other option. Maybe it could’ve been called “Mate selection for rational male” or “Mate selection for males interested in rationality”.
I don’t see why it has to even make any mention of rationality. Presumably anything posted on Less Wrong is going to be targeted at those with an interest in rationality. Perhaps it could have been “Finding a mate with a similar outlook” or “Looking for a relationship?”
I’m not suggesting that any of these alternatives are great titles, I'm just using them to suggest that there are alternatives.
I agree that identifying yourself with the label rationality … But it still seems useful to have some sort of terminology to talk about clear thinking, and I can't think of a better candidate term than rationality.
‘Rationality’ is a perfectly fine term to talk about clear thinking, but that is quite a different matter to using 'rationalist' or any other term as a label to identify with.
I must say that I can't help but find it odd that you link to "Keep Your Identity Small" in discussing this problem. Did you read the footnotes? Graham lists that which we would call rationality as one of the few things you should keep in your identity:
He doesn’t quite say it’s a label you should keep in your identity, he lists it as an example of something that might be good to keep in your personal identity. I think that the argument he outlines in the essay applies to what’s in that footnote: that it’d be better to just want to “[follow] evidence wherever it leads”, than to identify too strongly as a scientist.
Heavily paraphrasing:
For local purposes [“rationalists” seems suitable]. For outside purposes [I use a description not a label]
I think it’s pretty much impossible for us to have any sort of private label for ourselves. Even if we were to use a label for ourselves within this site and never use that outside of the site, that use of it within the site is still going to be projecting that label to the wider world.
Anyone from outside the community who looks at the site is going to see whatever label(s) we employ. And even if we employ a label just on this site, it’s still likely to be part of the site’s “reputation” in outside circles -- i.e. the label is still likely to reach people who've never seen the site.
A lot of the content on Less Wrong is describing various types of mental mistakes (biases and whatnot). In terms of this aspect of the site, Less Wrong is like a kind of Wikipedia for mental mistakes.
As with Wikipedia, it’s something that could be linked to from elsewhere – like if you wanted to use it to help explain a type of mistake to someone. There’s a lot of potential for using the site in this way, considering that the internet consists in large part of discussions, and discussions always involve some component of reasoning.
Seen in this way, the site is not just a community (who could have their own private terminology) but also an internet-wide resource. So we should think of any label as global, and I think that's more of a reason to consider having no label at all.
I don't think Zelazny's statement makes out that "detecting falsehood and discovering truth are not the same skill in practice". He just seems to be saying that you can have good 'detecting falsehood' skills without caring much about the truth ("I’m not at all sure, though, that they care much about truth").
If I thought I was going to have to detect falsehoods - if that, not discovering a certain truth, were my one purpose in life - then I'd probably apprentice myself out to a con man.
I think that's equating 'detecting falsehood' too much with 'detecting tricks of deception'.
If detecting falsehood and discovering truth are not the same skill in practice, then practicing honesty probably makes you better at discovering truth and worse at detecting falsehood.
I'm very doubtful that practising honesty, itself, could make you worse at detecting falsehoods.
Being naive -- for example, by assuming anything that superficially seems to make sense must be true -- can make you worse at detecting falsehoods. We often associate honesty with a kind of naivety, but the problem with being poor at detecting falsehoods is a problem with naivety not with honesty.
A certain kind of naivety is thinking that since you have good intentions about being honest, you therefore are honest. Saying or thinking or feeling that you are honest does not necessarily mean you are actually honest. Yes, having a genuine desire to be honest is going to make you more likely to be honest, and put you on the right track to being honest, but claims of honesty don't necessarily equate with a genuine desire.
To actually make sure you're more honest takes work. It requires you to monitor and reflect on what you say and do. It requires you to monitor and reflect upon the ways that you or others can be dishonest. And I reckon that means that having the ability to be genuinely honest also means you'll have pretty good skills for detecting falsehood.
The following is just sketchy thoughts:
In relation to rationality, I'd say that rationality requires certain types of honesty to yourself, and being rational is likely to make you more honest to yourself as well.
If you can be successfully rational (without any major pockets of irrationality), then you're probably more likely to be considerate of others (because you're better able to appreciate the negative consequences of your actions), and are thus more likely to be honest to people in matters where there could be non-trivial negative consequences.
But I still suspect you can be quite rational without it necessitating that you're particularly honest to others.
I'd estimate that writing a good post takes me about 20 times as much time and effort as writing a long comment. Many people simply can't commit that much time [...] I don't think fear of rejection is the problem. (my emphasis)
I take nazgulnarsil's comment as suggesting that there may be value to more people writing posts that aren't necessarily "good"... in which case that sort of rejection may not be optimal.
I'd argue that intelligence has a contextual nature as well. A simple example would be a computer chess tournament with a fixed algorithm that used as many resources as you threw at it. Say you manage to increase the resources for your team steadily by 10 MIPS per year; you will not win more chess games if another team is expanding their capabilities by 20 MIPS per year.
If you're comparing a randomly selected intelligent system against another randomly selected intelligent system drawn from the same pool, then of course the relative difference isn't going to change as you crank up the general level of intelligence.
But if you compare one of these against anything else as you crank up the general level of intelligence, then it's a whole other story. And these other comparisons are pretty much what's at stake here.
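A toy model of that distinction (my own illustration - the Elo-style win probabilities are an assumption, not something from the thread):

```python
# Model chess strength with Elo-style ratings, and let extra computing
# power translate into rating points. Two systems that improve at the
# same rate keep the same head-to-head odds, but both pull away from
# anything outside the pool that isn't improving.

def win_prob(rating_a: float, rating_b: float) -> float:
    """Elo win probability of A over B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

for year in (0, 10, 20):
    team_a = 1500 + 50 * year   # both teams in the pool improve equally
    team_b = 1600 + 50 * year
    outsider = 1500             # a fixed opponent that isn't improving
    print(year,
          round(win_prob(team_a, team_b), 2),     # constant (~0.36) every year
          round(win_prob(team_a, outsider), 2))   # climbs toward 1.0
```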
(Most people take a concept to mean whatever most uniquely distinguishes it from other concepts - so 'rational' means whatever, among the characteristics they associate with rationality, is most distinct from the other concepts they have - i.e. Spock-like.)
it would appear to the average person that most rational types are only moderately successful while all the extremely wealthy people are irrational.
This only makes sense if you consider "rational" to equal "geeky Spock-wannabe",
We're talking here about perception, not reality, and I'm sorry to say that "geeky Spock-wannabe" probably does equate to the average person's perception of "most rational types".
So perhaps we need a norm that criticizes use of authority in one area to make claims in an unrelated area. A preacher's opinion carries little weight in biology, just as biologists do not typically do much to define religious rhetoric.
But that would also mean that nobody but an authority in the religion could criticize the religion.
These rules always have to be symmetrical.
That said, I do think it is valid to say "I am entitled to an opinion" in situations where your right to expression is being attacked.
I'm not saying you always do have a right to freely and fully express yourself. But in situations when you do have some measure of this, it can be unfairly stomped on.
For example, you might be in a business meeting where you should be able to have input on a matter but one person keeps cutting you off.
Or say you're with friends and you're outlining your view on some topic and, though you're able to get your view out there, someone else always responds with personal attacks.
Sometimes people are just trying to shut you down.
That article is entitled "You Are Never Entitled to Your Opinion" and says:
If you ever feel tempted to resist an argument or conclusion by saying "everyone is entitled to their opinion," stop! This is as clear a bias indicator as they come.
I don't think Robin really means that people aren't entitled to their opinions. I think what he really means is people aren't allowed to say "I'm entitled to my opinion" - that is, to use that phrase as a defense.
There's a big difference. When people use that defense they don't really mean "I'm entitled to have an opinion", but instead "I'm entitled to express my opinion without having it criticised".
In other words "I'm entitled to my opinion" is really a code for "all opinions are equally valid and thus can't be criticised".
I agree, I probably just didn't explain myself very well. I was just trying to talk about the situations when people express an opinion without really giving any consideration to why they think it is true.
I think a norm is likely to be a product of the solution, not the solution itself.
So the problem is we have a lot of people who don't appreciate what constitutes a reasonable foundation for an opinion. They think they can just say what they feel. To put it one way, they have a poor understanding of the nature of evidence.
I don't think a norm like you describe could have any effect on anyone like that who had a poor understanding of evidence. Those people would just think the norm was wrong or ridiculous.
If they were to come to better understand the nature of evidence, they would be more receptive to the norm. But if they were to understand evidence better, then simply from this fact you'd get the desired result of people not mouthing off as much with "ignorant opinions".
So the solution has to involve getting people to better understand the nature of evidence (or however you want to describe what is missing from their mental toolkit).
If you were to get enough people to understand the nature of evidence, that could lead to the creation of such a norm. I doubt it could happen the other way around.
Caveat: I'm not 100% confident the above story is true, but I think there's at least an element of truth in it.