This is a fairly common and important criticism of aid; it is central to Dambisa Moyo's thesis. Whilst a valid criticism of certain types of aid, it certainly does not apply to all of them. For example, public health interventions, such as increased vaccination or combating infectious diseases, require little in the way of a functioning local economy. Furthermore, such interventions are generally considered to be both quasi-public goods (due to herd immunity) and merit goods -- so there is a strong economic argument that they will be under-provisioned, especially in a country lacking an effective government.
But, yes, development aid has a very chequered record.
There have been a fair few articles on LessWrong about utilitarian giving, with existential risk reduction and foreign aid being the most commonly recommended causes; for many LessWrongers, maximizing utilons is a major terminal value. In my own investigations, I've found terribly few well-researched critiques of aid. Most of the criticism that does exist focuses on aid given directly to governments. Whilst this does make up a large portion of overall aid, it is mostly irrelevant for, e.g., deciding whether to give to any GiveWell-recommended charities.
The major problem with writing such an article would be the lack of high-quality, academic research in this area. Having said that, there has been *some* good research, and doing some original research is not out of the question.
Full disclosure: If you did choose this as a topic for a future prize, it is very likely that I would submit an entry for it.
I can only assume that this is a troll. You seem to have put a lot of work into it, so it's somewhat disappointing that it's so bland. Downvoted, unless you explain your idea more clearly.
I was delivered from fear, fear of man, of heart, from rejection from a woman when I was 29 years old. The ministry team gave a word of knowledge regarding my birthday, May 26, confirmed the calling on my life and what was holding me back. No more timidity. I was delivered from self and was told I would be buried in Him and wake up in Christ. I was just reading Romans 6. Blessings and thank you for your obedience to God!
From Spiritual Healing Testimonies
Testimonials are not strong evidence. I don't know Luke, so I don't know in any detail what he did, but based on his post it seems he did a lot of things over a period of several years. People's personalities can change significantly, especially at a young age, over a period of a few years, even in the absence of any external factors. If Luke was also, as he indicates in his post, trying to spend substantially more time around women, then I don't see how we can conclude that it was his scholarship that helped him. And if it was, it could easily have been that he gained confidence simply by believing that he understood people.
This isn't evidence against scholarship helping, of course. It may well have done. But I don't think we can take Luke claiming that it helped him as particularly strong evidence that it actually did.
When Luke said that, his "aha" moment wasn't that these things existed; it was why they exist. And, more importantly, why it's a good idea to focus on that instead of saying "concentrating on looks is vain; a woman should like me for who I am."
I'm curious about this. What reason did Luke find for paying attention to fashion that required an insight into why people care about it? It seems to me that fashion's importance depends primarily on how much other people care about it, irrespective of why they care, but I don't understand fashion, so I could easily be missing something.
This assumption is not there. The assumption is that there is a reason people behave how they do, and this behavior is a logical conclusion from evo psych.
I think that part of my post was rather unclear. Of course, people on this site don't consciously hold the view that people must make sense, but that nevertheless seems to be the direction of a lot of people's thoughts. To clarify, what I was trying to say is that many rationalists have a tendency to look for a reason for people's behaviour and, failing to find one, to look for an evo psych explanation of it. I may be wrong about this, but it's something I've noticed in my own thoughts, and it seems to be echoed in what many other people here have written. Crucially, if my behaviour is similar to other people's, a lot of the apparent benefit of learning the explanations for others' behaviour lies simply in coming to accept it. It's a lot easier to tolerate things about people you disagree with when you know why they're doing them -- even if you're wrong about the reason. But you can also just learn to accept people.
Of course, the assumption that people's behaviour is a logical conclusion from evo psych is a much more justifiable one. But even so, I'd challenge it. Evolutionary psychology is not a scientific model in the same way that something like the standard model in physics is; from what I know of it, it seems wholly unable to make firm predictions about any aspect of human behaviour -- it can only try to explain behaviour that has already been observed. So I suppose I'm still failing to see how knowing the evo psych explanation of something will really help you in interacting with others.
Having said this, I feel more optimistic about some applications of evolutionary psychology to one's own thinking. Thinking in evolutionary psychology terms has helped me work out when I should pay more or less attention to my own feelings. If one is nervous about taking some minor social risk amongst strangers, it helps to know that this reaction made perfect sense in the EEA but doesn't now; you can then be confident that it's safe to override your emotions.
This kind of post symbolizes a lot of what seems wrong to me about LessWrong. Women are attracted to men they enjoy spending time with? Fashion matters to a lot of women? Women prefer confident men? It amazes me that many extremely intelligent people are unable to make predictions that could be made by the average truck driver. It indicates, I think, that what is lacking in those people is not analytical intelligence. Because of this, I'm deeply sceptical about the extent to which applying rationality techniques, such as those taught on LessWrong, to social interactions will really improve people's results.
I'm a very analytical thinker; I excel at maths, physics and related subjects. At the same time, I have quite poor social skills. I'd love it if I could read some social psychology books and improve my social and romantic outcomes, but I'm unconvinced, both by this post and by the community as a whole. In particular, I think there is what I'll term the 'rationalist's fallacy' -- the hidden assumption, in much rationalist thought, that other people and the world in general are supposed to behave rationally. They're not, and by and large they don't. So, to make sense of this, rationalists read evolutionary and social psychology books. Now, armed with these explanations for the seemingly irrational behaviour of the humans around them, they're finally able to understand the cognitive biases and preferences of their fellow human beings, and are then able to successfully interact with them.
This behaviour reminds me of my behaviour in mathematics. When I have not seen a particular result proved, and do not intuitively see why it must be true, I'm uncomfortable using the result. Later, after I've seen it proved, I become comfortable using it, even if I subsequently forget the proof and gained no insight from reading it.
I think this behaviour may be adaptive in mathematics -- I've read more proofs than I otherwise would have -- but it is almost certainly maladaptive in social interaction. I don't need to understand why doing weird things will make me unpopular (it signals that I do not subscribe to the group's norms); I just need to know that it will, and so only do them in private. I don't need to understand why women tend to be more attracted to confident men (confidence was a strong signal of fitness in the EEA), or men to pretty women (body symmetry indicates that a person is healthy and free of significant parasitic infection); I just need to know that this is the case, and try to be more confident or more pretty according to my gender. Whilst it is useful to understand that these preferences are primarily non-conscious, and so do not necessarily reflect people's conscious preferences, even that doesn't require evolutionary psychology. One can use the result without the proof.
I don't think I'd object to this behaviour so much if I could see real insights that people are gaining from learning more about evolutionary and social psychology -- but I don't. A lot of evolutionary psychology seems dangerously close to just-so stories, and there are a large number of conflicting evolutionary psychological explanations for many common human behaviours. If learning something doesn't help me predict anything I don't already know, but just provides a (possibly false) explanation for a behaviour I'm already familiar with, why should I learn it?
Having said all this, I'm optimistic about what rationalists can do socially. There are few areas where systematic study of what works and what doesn't fails to help, and this is not one of them. But I think reading social psychology books should be a very small part of what rationalists do. A very large amount of the social behaviour we engage in is non-conscious; being around people enough that you become comfortable with them may be one of the best things one can do to improve one's social skills. Furthermore, do not overestimate what one can achieve using deductive logic. Even as a mathematician and a programmer, most of what I do is intuitive and involves pattern matching. One cannot become a good programmer or mathematician without writing lots of code or working on lots of problems; likewise, one cannot become effective socially without meeting people.
tl;dr: I see rationalists often engaging in behaviour that is adaptive in many fields (e.g. maths, science) but maladaptive in others (e.g. socializing, romance). I think less emphasis should be placed on attempting to find a deeper meaning behind people's behaviour, and more on trying to find ways to benefit from people's existing behaviour. Practice makes perfect in all fields, but especially social ones.
It seems to be almost universally held that empathy is a desirable personality trait. I can certainly see that having a better theory of mind -- being better able to predict other people's actions -- is useful in any situation. But empathy, to me at least, also has connotations of sympathizing with the other person. Whilst I can see that this would be very useful in certain situations (e.g. sexual relationships), it seems potentially harmful in others (e.g. management). For example, firing someone who has been a reliable worker for many years, or confronting a person whose work is sub-par, are things that could be made more difficult by being sympathetic. Whilst I'm sure it's possible to overcome such a reluctance consciously, these feelings might cause you to shy away from even thinking about such things, which cannot so easily be consciously corrected.
I'm also curious about how empathy brings benefits in social interactions apart from its theory of mind aspect. Obviously, people like people who seem to agree with them, like them and understand them; but it seems that all these signals could, with moderate effort, be faked effectively. I wouldn't necessarily consider this deceptive, either -- the signal one is sending communicates how much one likes the other person and wishes to be allied with them; provided you genuinely do like the person and wish to be friends with them, the signal seems to me to be honest. But it's possible that I'm missing some subtle aspects of empathy that cannot be so easily faked.
It's acceptable provided that I can accurately estimate the outcome. So, I would be willing to, e.g., participate in a business venture with payoff (if successful) of x and known probability p of succeeding, provided p*x is better than any other option I have; however, if p is highly uncertain, then I'm wary of it -- I don't trust my judgement, and I think it's too easy to be overconfident. I think I could develop a better attitude towards risk, however.
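The comparison described above can be sketched numerically. This is a minimal illustration only: the probabilities, payoffs, and the crude "pessimistic p" guard against overconfidence are all assumptions for the sake of example, not anything prescribed in the comment.

```python
def expected_value(p, x):
    """Expected payoff of an option with success probability p and payoff x."""
    return p * x

def wary_value(p_low, x):
    # One crude way to act on distrust of one's own estimate when p is
    # highly uncertain: evaluate the option at the pessimistic end of the
    # plausible range for p (an illustrative heuristic, not a rule).
    return expected_value(p_low, x)

# A venture with well-known p = 0.3 and payoff 100, vs a safe option worth 25:
venture = expected_value(0.3, 100)   # 30 -> preferred over the safe 25
# The same venture if p is only known to lie somewhere in [0.05, 0.6]:
uncertain = wary_value(0.05, 100)    # 5 -> now the safe option wins
```

A fuller treatment would model the distribution over p rather than just its lower bound, but even this sketch captures why a highly uncertain p can flip the decision.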
Thanks, this sounds like good advice -- I've been concentrating a lot on what external actions I should take, and not on what actions I can take to change myself, but those are at least as important.
I've shunned clichéd things like volunteering at a soup kitchen, since they seem to me to be quite low-impact activities compared to things such as the SIAI, but they might have a larger impact on my self-identity than donations to charities, which I'd neglected to consider.
I'm willing to work hard, but I'd prefer to demonstrate that by completing a task that would have a benefit besides a gain in social status amongst people on the Internet, such as by completing relevant academic work and gaining internships in fields I'm interested in.
"I believe that, given my aptitudes, I am best able to make a positive impact on the world by attempting to maximize the money I earn, and donating that." I'm curious how you came to that belief.
I arrived at the belief primarily instinctively, and am not particularly confident in it; I'd be happy to revise it on the basis of any more data I receive.
My rationale is, roughly, that most adequately funded philanthropic organisations have no difficulty attracting talent, and so the number of "doers" is determined primarily by demand-side factors. Therefore, by becoming a doer, I would be preventing another would-be doer from obtaining a job. Whilst it's possible that this person would go on to contribute in other ways, such as by being a donor, I think it very likely that they would not (most people seem to be strongly attracted to personally making a difference).
Therefore, for my becoming a "doer" to have a positive impact, I would have to do so much better a job than the person I displaced that it would outweigh the loss of the donations I would otherwise have made. Whilst it would probably be true that I would be more capable than the person who lost out on a job because of me (growing supply should mean the least capable lose out first), I don't believe I have any major advantage over other people.
Whilst there seems to be no shortage of talented people looking for non-profit jobs, there is always a shortage of money, and my donating would be unlikely to decrease anyone else's donations. So I feel this is the more effective option.
I think I'm partly also influenced by a heuristic I sometimes use: avoiding what seems to be the easy way out. Most people seem to want to be intimately involved with the causes they help, yet there seems to be little justification for this, so I feel compelled to do the opposite. However, it occurs to me that I might in fact be attracted to the status and other benefits of high-paying jobs, so this may just be a rationalization.
I'm much less confident of this conclusion now than when I began writing this comment, which I think is probably a good thing. I'd be interested to hear arguments from people who've come to opposing conclusions to me.
Well, I'm new here, but I thought I might as well just try it. As far as I can tell, a large segment of LessWrong readers are highly interested in philanthropy, especially existential risk reduction. Given this, there seems to have been surprisingly little discussion of how best to lead one's life to maximize its positive impact.
Whilst there has been some discussion with regard to selecting between charities, I have seen almost no discussion of choosing between careers, or of how to structure one's life more generally. If the type of rationality taught on this site is to be widely applicable, it should be able to be applied successfully to such situations.
Whilst obviously these choices are highly individual, I nevertheless think that a group effort should be able to shed some light on the problem. In particular, the standard to beat is quite low -- most people have only very limited knowledge of the careers they go into, and make their decisions with only limited analysis. It is even rarer for people to seriously consider what actions they can take to maximise their impact on the world, although many people nominally choose careers in order to help people.
Whilst I don't want this post to be about me, here are a few details: I'm just about to enter university (one generally considered to be somewhere amongst the top 10 in the world, and certainly in the top 4 in my country, the UK) to read Mathematics. I believe that, given my aptitudes, I am best able to make a positive impact on the world by attempting to maximize the money I earn, and donating that. I am undecided between existential risk reduction and more ordinary causes. I don't subscribe to any formal moral system, but my feelings are quite closely aligned with preference utilitarianism. I'm unsure how much money I should donate, but feel that in the long term I should certainly aim to donate any money whose consumption would not serve to further increase my happiness. If anyone wants more details about my personal situation, feel free to PM me.
Do you have any recommendations on how to combat this? Obviously, mixing with groups that reward behaviour you wish to cultivate would be a good first step, but what other steps can one take? Do you think making a conscious effort to identify more with, and feel friendlier towards, people whose behaviour you consider laudable would help? This would be a much easier step for most people to take than changing their actual social group.
The beliefs of other people are evidence of a sort. In some cases (e.g. scientific consensus), a belief being widely held is a very strong signal of correctness. In other cases (e.g. religion), less so.
Of course, our social instinct to conform does not take into account the reliability of the beliefs of the group one is part of -- although it does take into account whether you identify yourself as part of that group, which gives one some control (only identify yourself with groups that have a good track record of correctness).
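The underlying point -- that a group's track record determines how much its consensus should shift your beliefs -- can be put in Bayesian terms. Here is a minimal sketch; the likelihood ratios below are invented purely for illustration.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.

    likelihood_ratio stands for
    P(group believes it | it is true) / P(group believes it | it is false),
    i.e. a rough measure of the group's track record of correctness.
    """
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# From a 50/50 prior, consensus in a group with a strong track record
# (high likelihood ratio) moves you a long way; consensus in a group with
# a weak record barely moves you. Both ratios are illustrative assumptions.
strong_group = posterior(0.5, 9.0)   # 0.9
weak_group = posterior(0.5, 1.2)     # ~0.55
```

The asymmetry between the two outputs is the whole argument for weighting scientific consensus more heavily than, say, religious consensus when deciding whether to conform.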
I'd be hesitant to classify either contrarianism or conformism as an example of bias per se. For something to be a bias, it must influence one's beliefs in a way that is not rationally justified. Being contrarian regarding, e.g., the religious beliefs of your parents, and the beliefs stemming from them, is probably rational; conforming to the beliefs of people with more experience than you in a field that strongly rewards success and punishes failure (e.g. stock trading) is, again, probably rational.
Of course, being conformist can bring great gains in instrumental rationality. A large proportion of the beliefs people hold do not change in any significant way how they lead their lives, but they do hold a large signalling value -- that one is part of a group, and not some insane, socially inept geek who believes in crazy things such as the singularity. Fortunately, it is possible to get almost all of the benefits of actual conformity by simply pretending to conform; normally one does not even need to lie -- just holding one's tongue is often enough. The only advantage I can see to actually conforming is that it may make it easier to empathize with and predict the behaviour of others in that group, but I don't think this is normally much of an advantage.