Many people within the Boston EA community seem to have come to it post-college and through in-person discussions.
Hmm. I haven't spent much time in the area, but I went to the Cambridge, MA LessWrong/Rationality "MegaMeetup" and it was almost exclusively students. Is there a Boston EA community substantially disjoint from this LW/Rationality group that you're talking about?
More generally, are there many historical examples of movements that experienced rapid growth on college campuses but were then able to grow strongly elsewhere? Civil rights and animal welfare are candidates, but I think they mostly fail this test, for different reasons.
If you can convince one new person to be an EA for $100k you're more efficient than successfully raising your kid to be one, and that's ignoring time-discounting.
I honestly do not think this is possible, and again I look to religious organizations as examples where (my impression is that) finding effective missionaries is much harder than getting the minimal funding they need to operate at near-maximum efficiency. This is something we need more data on, but I expect a lot of the rosy pictures people have of translating money or other fungibles into EA converts will not stand up to scrutiny, in much the same way that GiveWell has raised its estimates of the cost of saving a life in the developing world by an order of magnitude. I especially think that the initial enthusiasm of new EAers converted through repeatable methods (like 80,000 Hours) will fade more quickly than that of "organic" converts and children raised in EA households (to an even greater extent than for religions).
I think religions mostly expand at first through conversion and then once they start getting diminishing returns switch to expanding through reproduction. EA isn't to this changeover point yet, and isn't likely to be for a while.
Maybe. I have the impression that religions mostly used missionaries to expand geographically, and hit diminishing returns very quickly once they had a foothold. Basically, I guess that as soon as a potential convert knows the organization exists, you've essentially already hit the wall of diminishing returns. I agree that as long as EA stuff has non-structural geographic lumpiness (i.e. geographic concentrations that are a result of accidents of history rather than of intrinsic reasons related to where EA memes are most effective), EA missionary work may be the major driver of growth. But I think the EA memes are most effective on a wealthy, technologically connected sub-population which we may saturate in just a few years.
I hear many more people describe their own conversion experience as something akin to "I heard the argument, and it just immediately clicked" (even if personal inertia prevented them from making immediate drastic changes). I do not hear many people describe it as "I had heard about these ideas a few times, but it was only when Bob [who was supported by EA funding] took the time to sit and talk with me for a few hours that I was convinced." (Again, that's just anecdotal.)
Can we look at the history of the Catholic church during times when new populations of potential converts became accessible through exploration/colonization? What fraction of the church's resources went to missionary work, and did the church reduce its emphasis on having children so that parents would have more free money to give to the church?
Incidentally, these kinds of questions are what make me wish we had more EA historians. We could use a lot more data and systematic analysis.
I mostly disagree with both parts of the sentence "Except that it's much cheaper to convince other people's kids to be generous, and our influence on the adult behavior of our children is not that big." I would argue that
(1) Almost all new EA recruits are converted in college by friends and/or by reading a very small number of writers (e.g. Singer). This is something that cannot be replicated by most adults, who are bad writers and who are not friends with college students. We still need good data on the ability of typical humans to convert others to EA ideas, but my anecdotal observations (e.g. Matt Wage) suggest that this is MUCH harder than you might think.
(2) Whatever one's degree of genetic fatalism, it's known that the biggest influences a parent can have are on the religious and political affiliations of their children. Insofar as donating is determined more by affiliation with the EA movement than by bio-determined factors like IQ, we can expect parents to strongly induce giving by their children.
We can look to evangelical religions to get an idea of what movement building techniques are most effective for the bulk of the population. Yes, many religions have missionaries, but this is usually a small group of unusually motivated and charismatic people. But having lots of children is a strategy that many religions have effectively employed for the bulk of their members.
(One potential counterexample I'd be interested to hear about is the effectiveness of the essentially compulsory missionary work for Mormon men.)
The invention of nuclear weapons seems like the overwhelmingly best case study.
- New threat/power comes from fundamental new scientific insight.
- Existential risks (nuclear winter, runaway nitrogen fusion in the atmosphere).
- Massive potential effects, both positive and negative (nuclear power for everything, medical treatments, dam building and other manipulation of Earth's crust, space exploration, elimination of war, nuclear war, increased asymmetric warfare, reactor meltdowns, increased stability of dictatorships). Some were realized.
- Very large first-mover advantage with time scales of less than a year.
- Feasible development in secret.
Nuclear weapons differed in that the world was already at war when they were developed, so policy makers would be in a different mindset and have different incentives. But otherwise, I think the parallels are as good as you could possibly hope for. The only other competitor is the (overly broad) case of molecular nano-tech, but this hasn't actually happened yet so you don't have much to go on. In contrast, the Manhattan Project is extensively documented.
With respect, I've always found the dynamic inconsistency explanation silly. Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents. In other words, you look at a person's behavior, realize that it doesn't follow a time-invariant utility function, and say "Aha! Their utility function just varies with time, in a manner leading to a temporal conflict of interests!" But given sufficient flexibility in the utility function, you can model any behavior as that of a utility-maximizing agent. ("Under environmental condition #1, he assigns 1 million utility to taking action A1 at time T_A1, action B1 at time T_B1, etc. and zero utility to other strategies. Under environmental condition #2...")
On the other hand, my personal experience is that my decision of whether to complete some beneficial goal is largely determined by the mental pain associated with it. This mental pain, which is not directly measurable, is strongly dependent on the time of day, my caffeine intake, my level of fear, etc. If you can't measure it, and you were to just look at my actions, this is what you'd say: "Look, some days he cleans his room and some days he doesn't, even though the benefit--a room clean for about 1 day--is the same. When he doesn't clean his room, and you ask him why, he says he just really didn't feel like it even though he now wishes he had. Therefore, the utility he assigns to a clean room is varying with time. Dynamic inconsistency, QED!" But the real reason is not that my utility function is varying. It's that I find cleaning my room soothing on some days, whereas other days it's torture.
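To spell out the "sufficiently flexible utility function" point with a toy example (my own sketch; the days and actions are made up, not from the original exchange): any observed sequence of actions can be "explained" post hoc by an agent maximizing a utility function built to fit it, which is exactly why such a fit tells us nothing.

```python
# Toy illustration: any observed behavior can be "explained" post hoc
# by a contrived utility function that assigns utility 1 to whatever
# was actually done and 0 to everything else.

observed = [("Monday", "clean room"), ("Tuesday", "skip cleaning"), ("Wednesday", "clean room")]

def contrived_utility(day, action, history=dict(observed)):
    # Assign utility 1 to exactly the action that was actually taken that day.
    return 1.0 if history.get(day) == action else 0.0

# The "utility-maximizing agent" reproduces the data perfectly, which is
# why the fit carries no evidence for dynamic inconsistency specifically.
for day, _ in observed:
    best = max(["clean room", "skip cleaning"], key=lambda a: contrived_utility(day, a))
    print(day, "->", best)
```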
I agree with you in general, and would especially like to hear from some LW psychologists. I think this field is pretty new, though, and not heavily dependent on any canon.
I've never heard of willpower depletion... Surely willpower is a long-term stat like CON, not a diminishable resource like HP.
In fact, previous research has shown that it is a lot like HP in many situations. See the citations near the beginning of the article.
Sure, on average it's negative sum. But I have to guess that society as a whole suffers greatly from having many (most?) of its technically skilled citizens at the low end of the social-ability spectrum. The question would be whether you could design a set of institutions in this area which could have a net positive benefit on society. (Probably not something I'll solve on a Saturday afternoon...)
I'm pretty sure this varies state-to-state.
Well, there are three kinds of meetups I can imagine.
(1) You go for the intellectual content of the meeting. This is what I was hoping for in Santa Barbara. For the reasons I mentioned above, I now think it's unlikely that the intellectual content will ever be worthwhile unless somebody does some serious planning/preparation.
(2) You go for the social enjoyment of the meeting. I confirmed my suspicion in SB that I personally wouldn't socially mesh with the LW crowd, although maybe this was a small sample size thing.
(3) You go to meet interesting people. In my life I've had a lot of short-term and a few long-term friends with whom I've had fun. But I've probably only known 3-4 truly interesting people, in the sense that they challenged my thinking and were pleasing enough to spend a lot of time getting to know well.
Any of the above would get me to go to a meetup, although I'd be most excited about (3).
I suffer from exactly the same thing, but I don't think this is what Roko is worrying about, is it? He seems to worry about "ugh fields" around important life decisions (or "serious personal problems"), whereas you and I experience them around normal tasks (e.g. responding to emails, tackling stuck work, etc.). The latter may be important tasks -- making this an important motivation/akrasia/efficiency issue -- but it's not a catastrophic/black-swan type risk.
For example, if one had an ugh field around their own death and this prevented them from considering cryonics, this would be a catastrophic/black-swan type risk. Personally, I rather enjoy thinking about these types of major life decisions, but I could see how others might not.
Could you suggest a source for further reading on this?
I attended a meetup in Santa Barbara which I found largely to be a waste of time. The problem there--and I think, frankly, with LW in general--is that there just aren't that many of us with something insightful to say. (I certainly don't have much.) While it's great, I guess, that the participants acknowledge the importance of some of the ideas championed by Yudkowsky and Hanson, most of us don't have anything to add. Some of us may be experts in other fields, but not in rationality.
Here's the perfect analogy: it's like listening to a bunch of college guys who've never played sports at a high level discuss a professional game. They repeat the stuff they hear on ESPN, and while the discussion isn't wildly wrong, it's just regurgitation.
Do you feel like this described the NYC meetup at all? Do you think the meetup was worthwhile?
What happens at the meetups?
In most books, insurance fraud is morally equivalent to stealing. A deontological moral philosophy might commit you to donating all your disposable income to GiveWell-certified charities while not permitting you to kill yourself for the insurance money. But, yeah, utilitarians will have a hard time explaining why they don't do this.
Exactly. If a parent doesn't think cryonics makes sense, then they wouldn't get it for their kids anyway. Eliezer's statement can only criticize parents who get cryonics for themselves but not their children. This is a small group, and I assume it is not the one he was targeting.
Yes, of course it is weak evidence. But I can come up with a dozen examples off the top of my head where powerful organizations did realize important things, so your examples are very weak evidence that this behavior is the norm. So weak that it can be regarded as negligible.
The existence of historical examples where people in powerful organizations failed to realize important things is not evidence that it is the norm or that it can be counted on with strong confidence.
It's hard to think of a policy which would have a smaller impact on a smaller fraction of the wealthiest population on earth. And it faces extremely dedicated opposition.
I still think that Caplan's position is dumb. It's not so much a question of whether his explanation fits the data (although I think Psychohistorian has shown that in this case it does not), it's that it's just plain weird to characterize the obsessive behavior done by people with OCD as a "preference". I mean, suppose that you were able to modify the explanation you've offered (that OCD people just have high preferences for certainty) in a way that escapes Psychohistorian's criticism. Suppose, for instance, you simply say "OCD people just have a strong desire for things happening a prime number of times". This would still be silly! OCD people clearly have a minor defect in their brains, and redefining "preference" won't change this.
Ultimately, this might just be a matter of semantics. Caplan may be using "preference" to mean "a contrived utility function which happens to fit the data", which can always be done so long as the behavior isn't contradictory. But this really isn't helpful. After all, I can say that the willow's "preference" is to lean in the direction of the wind, and this will correctly describe the willow's behavior. But calling it a preference is silly.
Thanks for the comment. This discussion has helped to clarify my thinking.
I agree that everything you do, you genuinely want to do, in the sense that you're not doing it under duress.
I really think this is a bad way to think about it. Please see my comment elsewhere on this page.
EDIT: Unless of course you just define "genuinely wanting to do something" as anything one does while not under duress. But in that case, what counts as duress?
This is one place where Caplan seems to go off the deep end. I think it illustrates what happens if you take the Cynic's view to its logical conclusion. In his "gun to the head" analogy, Caplan suggests that OCD isn't really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it's better to say that the person just really likes doing those repetitive behaviors.
As one commenter points out, this is equivalent to saying a person with a broken foot isn't really injured because they could walk up a flight of stairs if we put a gun to their head. They just prefer to not walk up the stairs.
It is an incredibly simplistic technique to reduce the brain to a single, unified organ and to determine its "true" desires by revealed preferences. Minds are much more complex and conflicted than that. Whatever people mean by "myself", it is surely not just the combined output of their brain.
Careful. The term "graph theory" is usually used to refer to a specific branch of mathematics which I don't think you're referring to.
I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with a non-infinitesimal confidence that progress will be made. Certainly not amateur rationalists doing philosophy in their spare time (not that this isn't healthy). I don't mean to say that rationalists should give up, but we have to choose how to act in the meantime.
Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is very implausible. I don't believe this makes me irrational. In fact, given our current understanding of the problem, I don't know of any other reasonable approaches.
Incidentally, this position is reminiscent of both Pascal's wager and of an attitude towards morality and AI which Eliezer claimed to previously hold but now rejects as flawed.
I've read it before. Though I have much respect for Eliezer, I think his excursions into moral philosophy are very poor. They show a lack of awareness that all the issues he raises have been hashed out decades or centuries ago at a much higher level by philosophers, both moral realists and otherwise. I'm sure he believes that he brings some new insights, but I would disagree.
Moral skepticism is not particularly impressive as it's the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe.
The problem is that we still must choose how to act. Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And... that's it. We can make no criticism whatsoever about the actions of others, not even that they should act rationally. We cannot say that striving for truth is any better than killing babies (or following a religion?) any more than we can say green is a better color than red.
At best we can make empirical statements of the form "A person should act in such-and-such manner in order to achieve some outcome".
Some people are prepared to bite this bullet. Yet most who say they do continue to behave as if they believed their actions were more than arbitrary preferences.
The quotation refers to punitive damages in civil cases. What evidence is there that this phenomenon exists with criminal penalties? (I don't deny that it exists, but it is probably suppressed. That is, criminal penalties are more likely than punitive damages to reflect the probability of detection.)
For instance, there are road signs in northern Virginia warning of a $10,000 fine for littering. The severity of the fine is surely due to the difficulty in catching someone in the act.
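To make the deterrence logic behind that guess explicit (my own back-of-the-envelope numbers, not from the sign or from the quotation): if enforcement wants the expected penalty to stay near some target, the posted fine has to scale inversely with the probability of getting caught,

$$ \text{expected penalty} = P(\text{caught}) \times \text{fine} \quad\Rightarrow\quad \text{fine} \approx \frac{\text{target expected penalty}}{P(\text{caught})}. $$

So if only about 1 in 100 litterers is ever caught and the intended expected penalty is on the order of $100, the posted fine needs to be around $100 / 0.01 = $10,000.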
Should we be worried that people will vote stuff up just because it is already popular? There is currently no penalty for voting against the crowd, so wouldn't people (rightly) want to do this?
(Of course, we assume people are voting based on their personal impressions. It's clear that votes based on Bayesian beliefs are not as useful here.)
Exactly. It seems unlikely that prestigious researchers will be unable to publish their brilliant but unconventional idea because they can't fully utilize their fame to sway editors. In fact, prestigious researchers have exactly what is needed to ensure their idea will take hold if it has merit: job security. They have plenty of time to nurture and develop their idea until it is accepted.
That's exactly the point: voting is supposed to put comments in order according to quality, so that you can read the worthwhile comments in a reasonable time. My claim is that the current voting system will not do this well at all and that a dual voting system would do better. (That second bit is just a guess.) The opinion poll information is just a nice side effect.
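To make concrete what I have in mind by a dual voting system (just a toy sketch of my own; the field names and scoring are made up, not an existing LessWrong feature), here is the idea in miniature: each comment collects two independent tallies, and only the "quality" tally affects ordering.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    quality_votes: list = field(default_factory=list)    # +1/-1: "worth reading?"
    agreement_votes: list = field(default_factory=list)  # +1/-1: "do I agree?"

    @property
    def quality(self):
        return sum(self.quality_votes)

    @property
    def agreement(self):
        return sum(self.agreement_votes)

def ordered_for_display(comments):
    # Only the quality tally affects ordering; agreement is shown as a side poll,
    # so a contrarian but well-argued comment is not buried.
    return sorted(comments, key=lambda c: c.quality, reverse=True)

comments = [
    Comment("Popular but shallow", quality_votes=[1, -1], agreement_votes=[1, 1, 1]),
    Comment("Unpopular but careful", quality_votes=[1, 1, 1], agreement_votes=[-1, -1]),
]
for c in ordered_for_display(comments):
    print(f"{c.text}: quality={c.quality}, agreement={c.agreement}")
```

Ordering by quality alone is the part I expect to matter; the agreement counts are just the opinion-poll side effect mentioned above.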
Yep, what I wrote is just based on my best guess. A usability study would be great.
Also, I am going with the crowd and changing to a user name with an underscore.