A personal history of involvement with effective altruism

post by JonahS (JonahSinick) · 2013-06-11T04:49:45.858Z · LW · GW · Legacy · 55 comments

Contents

  Interest in altruism rooted in literature
  An analytical bent, and utilitarianism
  Epistemic paralysis
  Enter GiveWell
  The significance of Less Wrong
  Closing summary

Over the coming weeks, I intend to write up a history of the different parts of the effective altruist movement and their interrelations. It’s natural to start with the part that I know best: the history of my own involvement with effective altruism.

Interest in altruism rooted in literature

My interest in altruism traces to early childhood. Unbeknownst to me, my verbal comprehension ability was unusually high relative to my other cognitive abilities, and for this reason, I gravitated strongly toward reading. Starting from the age of six, I spent hours a day reading fiction. I found many of the stories that I read to be emotionally compelling, and identified with the characters.

My interest in altruism is largely literary in origin — I perceive the sweep of history to be a story, and I want things to go well for the characters and want the story to have a happy ending. I was influenced both by portrayals of sympathetic, poor characters in need and by stories of the triumph of the human spirit, and I wanted to help the downtrodden and contribute to the formation of peak positive human experiences.

I sometimes wonder whether there are other people with altruistic tendencies that are literary in origin, and whether they would be good candidates for the effective altruist movement. There is some history of artists having altruistic goals. The great painter Vincent van Gogh moved to an impoverished coal-mining region to preach and minister to the sick. The great mathematician Alexander Grothendieck gave shelter to the homeless.

An analytical bent, and utilitarianism

When I was young, I had vague and dreamy hopes about how I might make the world a better place. As I grew older, I found myself more focused on careful reasoning and rationality.

In high school, I met Dario Amodei, who introduced me to utilitarianism. The ethical framework immediately resonated with me. For me, it corresponded to valuing the well-being of all characters in the story — a manifestation of universal love.

This was the birth of my interest in maximizing aggregated global welfare. Maximizing aggregated global welfare corresponds to maximizing cost-effectiveness, and so this can be thought of as the origin of my interest in effective altruism. 

I believe that I would have developed interest in global welfare, and in effective altruism, on my own accord, without encountering any members of the effective altruist movement. But for reasons that I describe below, if not for meeting these people, I don’t think that my interests would have been actionable.

Epistemic paralysis

My analytical bent had a downside.

Issues pertaining to the human world are very complex, and there aren’t clear-cut objective answers to the question of how best to make the world a better place. On a given issue, there are many arguments for a given position, and many counterarguments to the arguments, and many counterarguments to the counterarguments, and so on. 

Contemplating these arguments resulted in my falling into a state of epistemic learned helplessness. I became convinced that it wasn't possible to rationally develop confidence in views concerning how to make the world a better place.

Enter GiveWell

In 2007, my college friend Brian Tomasik pointed me to GiveWell. At the time, GiveWell had just launched, and there wasn’t very much on the website, so I soon forgot about it.

In 2009, my high school friend Dario, who had introduced me to utilitarianism, pointed me to GiveWell again. By this point, there was much more information available on the GiveWell website. 

I began following GiveWell closely. I was very impressed by the fact that co-founders Holden Karnofsky and Elie Hassenfeld seemed to be making sense of the ambiguous world of effective philanthropy. I hadn't thought that it was possible to reason so well about the human world. This made effective altruism more credible in my eyes, and inspired me. If I hadn't encountered GiveWell, I might not have gotten involved with the effective philanthropy movement at all, although I might have become involved through interactions on Less Wrong, and I might instead have gone on to do socially valuable work in math education.

I became progressively more impressed by GiveWell over time, and wanted to become involved. In 2011, I did volunteer work for GiveWell, and in 2012, I began working at GiveWell as a research analyst.

While working at GiveWell, I learned a great deal about how to think about philanthropy, and about epistemology more generally. A crucial development in my thinking was a gradual realization, which I wrote about in my post Robustness of Cost-Effectiveness Estimates and Philanthropy.

This shift in my thinking gradually percolated, and I realized that my entire epistemological framework had been seriously flawed, because I was relying too much on a small number of relatively strong arguments rather than on a large number of independent weak arguments.

Many people had tried to explain this to me in the past, but I was unable to understand what they were driving at, and it was only through my work at GiveWell and my interactions with my coworkers that I was finally able to understand. The benefits of this realization have spanned many aspects of my life, and have substantially increased my altruistic human capital. 

If GiveWell hadn’t existed, it’s very possible that I wouldn’t have learned these things. If Dario hadn’t pointed me to GiveWell, I’m sure that I would have encountered it eventually, but it might have been too late for me to work there, and so I might not have had the associated learning opportunities.

My involvement with GiveWell also facilitated my meeting Vipul Naik, the founder of Open Borders. We’ve had many fruitful interactions related to maximizing global welfare, and if I hadn’t met him through GiveWell, it may have been years before we met. 

The significance of Less Wrong

Several people pointed me to Overcoming Bias and Less Wrong starting in 2008, but at the time the posts didn’t draw me in; they couldn’t compete with the fascination of reciprocity laws in algebraic number theory. In early 2010, Brian Tomasik pointed me to some of Yvain’s articles on Less Wrong. Against the background of my following GiveWell, Yvain’s posts on utilitarianism really resonated with me, so I started reading Less Wrong.

Through Less Wrong, I met many impressive people who are seriously interested in effective altruism.

They’ve helped me retain my motivation to do the most good, and have aided me in thinking about effective altruism. They constitute a substantial chunk of the most impressive people who I know in my age group.

It’s genuinely unclear whether I would have gotten to know these people if Eliezer hadn’t started Less Wrong. 

Closing summary

My innate inclinations got me interested in effective altruism, but they probably wouldn’t have sufficed to make my interest actionable. Beyond those inclinations, a few things stand out in my mind as having been crucial.

Working at GiveWell substantially increased my altruistic human capital. I’ve learned a great deal from the GiveWell staff, from Vipul, and from the members of the Less Wrong community mentioned above. We’ve had fruitful collaborations, and they’ve helped me retain my motivation to do the most good.

The personal growth benefits that I derived from working at GiveWell are unusual, if only because GiveWell’s staff is small. The networking benefits from Less Wrong are shared by many others.

Note: I formerly worked as a research analyst at GiveWell. All views here are my own. 

This post is cross-posted at www.effective-altruism.com. 

55 comments


comment by Wei Dai (Wei_Dai) · 2013-06-16T07:12:35.385Z · LW(p) · GW(p)

Over the coming weeks, I intend to write up a history of the different parts of the effective altruist movement and their interrelations.

I'd be interested to read that. I'm also curious about the relationship between EA and "traditional" charities and philanthropists. Given that for example the Gates Foundation gives out billions of dollars in grants per year while the top charities recommended by GiveWell only have room for funding in the low millions per year, what is stopping the Gates Foundation from maxing out their room for funding? Are they not aware of GiveWell? Do they disagree with its analyses, or its philosophy, if so how or why?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-16T13:34:10.074Z · LW(p) · GW(p)

(General reminder: While it is possible that some hidden justification exists here, the default assumption is that "People are crazy, the world is mad" and the Gates Foundation has not happened to be an unusual exception. Failure to optimize is a human default and it is not particularly probable that GiveWell is doing anything wrong.)

Replies from: Wei_Dai, Kawoomba
comment by Wei Dai (Wei_Dai) · 2013-06-17T10:22:44.135Z · LW(p) · GW(p)

There seem to be a number of people around here who don't subscribe to "People are crazy, the world is mad" to as great an extent as you do, and I'm one of them. (Weren't we having a debate related to this just recently, about how early we can expect mainstream elites to start taking AI seriously, where even other people at MIRI aren't as pessimistic as you?)

Besides that, as an outsider to both GiveWell and organizations like the Gates Foundation, I don't know why I should think that the former is likely less "crazy" than the latter. Both have impressive people associated with them. GiveWell is part of the EA movement, but judging from the Gates Foundation's website I'm pretty sure they would also say that cost-effectiveness, transparency, being analytical and strategic are all part of their core values. It seems like you'd need some inside information to break the symmetry between them, and not just apply "Failure to optimize is a human default".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-17T14:07:42.372Z · LW(p) · GW(p)

I'm not truly impressed with GiveWell's general optimization, since they never made a good case that malaria was connected to astronomical benefits, nor, indeed, do they seem to have realized that such a case is necessary for effective altruism. But for near-term certain benefits they still think in utilons and do cross-cause comparison of things that will produce those utilons, which puts them far, far ahead of a Gates Foundation which AFAICT picks a measurable cause at emotional whim and then optimizes within that cause. Especially since most of the variance in returns is between causes. GF's only possible claim to superior impact per dollar for some dollars would have to come from longer-term funding of science and technology, taking on risk and time horizons that GiveWell refuses. But a great deal of GF's funding also goes to near-term utilons obtained by known mechanisms, so they are clearly competing on ground to which GiveWell has staked a plausible claim of optimization, and apparently doing so at whim and not by optimization.

I expect of course that no good justification shall be forthcoming for why GF didn't fund the Against Malaria Foundation with a casual wave of their hands, but perhaps you shall call this small Bayesian evidence because of a worry that if this justification existed, GF would be unlikely to publish it. GiveWell is usually pretty open about that sort of thing. But perhaps GF is more constrained, and does not for PR reasons publish the negative judgments of their secret council of epistemic rationalists. But then why haven't we seen the positive judgments? Who would put in Holden's level of cognitive work and then say nothing of that work, and why? Why keep the reasoning of your effective altruism a secret?

More generally, what would make you update towards "People are crazy, the world is mad"? When in many cases such as this I see no evidence that the world is sane, I update towards madness.

Replies from: CarlShulman, Wei_Dai, lukeprog
comment by CarlShulman · 2013-06-25T08:47:59.187Z · LW(p) · GW(p)

I expect of course that no good justification shall be forthcoming for why GF didn't fund the Against Malaria Foundation with a casual wave of their hands,

There is a good justification: they have good empirical evidence that the Gates Foundation stamp of approval attracts third-party funding (governments, other foundations, and sometimes small donors), which causes diminishing returns (in addition to Gates funding being large enough to produce diminishing returns in general, and being most useful for thick concentrated start-up funding, which AMF does not need as a provider of funding to other organizations doing bednet distribution). GiveWell also seems to agree that the GF gives to many opportunities better than AMF that are not available or easily parsed by small donors, and that other GF public health projects are not radically worse on average in expectation than AMF.

which puts them far, far ahead of a Gates Foundation which AFAICT picks a measurable cause at emotional whim and then optimizes within that cause.

Internally, they do use DALY-like measures at the high levels, and use them in thinking about different kinds of projects for ballpark estimates and for setting thresholds for action (which junior-level employees then work to implement). Also like GiveWell they take uncertainty about cause sign and magnitude into account, which boosts the relative virtue of picking out projects that are clearly promising within their field.

Why keep the reasoning of your effective altruism a secret?

Some possibilities:

  • The 'seal of approval' does almost as much to attract funding to target charities as more detailed explanations;
  • The press releases explain the core case
  • The Gates Foundation funds and participates in roundtables, conferences, and other academic and nonprofit venues to convey its thinking, and commissions a lot of external work making information available (e.g. they fund the DCPP cost-effectiveness estimates)
  • The Gates Foundation has many employees working on small projects, and doesn't want to constantly produce public-facing substantive claims piecemeal which might draw it into controversy
  • The full case for even good interventions may depend in part on sensitive issues like judgments about particular people
  • You haven't been reading what the GF does put out itself (or the public info of the initiatives it supports, or the research they sponsor for public consumption); have you been reading the detailed GiveWell reports, so as to be able to make a comparison?

More generally, what would make you update towards "People are crazy, the world is mad"? When in many cases such as this I see no evidence that the world is sane, I update towards madness.

Yet I see you surprised by the higher-than-expected competence of elites more often and severely than I see the reverse, e.g.:

  • That there are good reasons of legal predictability for the role of precedent in common law
  • The ability of good math and computer science students to grasp something you identified as an FAI issue at first glance
  • The judgments of physicists about nuclear chain reactions
  • The intelligence of business elites
  • The capabilities of math, AI, and hard science elites
  • The ability of venture capitalists and scientists to tell in 1999 that Drexlerian nanotechnology was not likely to be developed and lead to a world-wrecking war by 2015, and extremely unlikely to do so by 2003; and this based on a good empirical record of past technologies and the lead time in expert opinion and prototypes/precursor technologies

It looks to me like "people are crazy, the world is mad" has led you astray repeatedly, but I haven't seen as many successes. What are some of the major predictive successes of "the world is mad" that held up under careful investigation of dispositive facts?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-25T17:52:26.469Z · LW(p) · GW(p)

...none of that sounds like an explanation for why Gates hasn't funded AMF. Shouldn't that make it even easier for them to wave their hands? How does GF's inaction produce good consequences here?

Replies from: CarlShulman, lukeprog
comment by CarlShulman · 2013-06-25T19:11:37.570Z · LW(p) · GW(p)

Gates kickstarted GiveWell's previous top charity, VillageReach, with several million dollars before GiveWell got to it. VillageReach was relatively innovative and could use startup funding. AMF is not innovative in that way, but one out of a number of organizations that pay for bednets to be distributed by other organizations in developing countries.

GiveWell says that they wound up with AMF because other seemingly more promising interventions were low-hanging fruit that had been plucked (frequently by Gates) and were low on RFMF (room for more funding).

The Gates Foundation has funded much larger malaria net distribution programs, and encouraged donors to give through the consolidated Global Fund and Nothing But Nets. Like AMF, these provide funding to local net distribution partners. It has also engaged in plausibly more highly leveraged enabling projects, like research into effective distribution of nets and better treatments.

Now that bednet distribution is widespread and rapidly scaling up independently of Gates efforts (although in meaningful part thanks to its earlier interventions) they are working on vaccines, diagnostics, drugs, evaluations, advocacy, and matching schemes with other large funders.

The Gates Foundation:

  • Made clear its support for bed nets
  • Encouraged individual donations to bednets through larger vehicles than AMF
  • Currently focuses on better/more leveraged malaria-control expenditures than bednets (and GiveWell seems to agree; see their blog posts on GiveWell Labs and large donors), given strong growth in non-Gates bednet spending
  • May have noticed that AMF is small, and that its advantages over other net funders are soft, and perhaps nonexistent (GiveWell has repeatedly downgraded its effectiveness estimates for AMF, and has previously changed its recommended charities based on past mistakes)

Zooming out for the bigger picture: the Gates Foundation seems to be plucking the large low-hanging fruit. GiveWell has been searching through the cracks to find small missed opportunities, and finding it quite challenging.

Compare with asteroid risk: one can make a lot of complaints about insufficient attention to the x-risk impact of asteroids relative to mundane harm, etc, but governments have still solved 90%+ of the problem. It's good to look for the further opportunities for improvement, but one shouldn't lose the forest for the trees: the problem was largely solved.

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2013-06-26T04:43:15.363Z · LW(p) · GW(p)

I agree with the content and spirit of this comment — thanks for writing it.

There remains the puzzle of why the Gates Foundation has devoted so many resources toward education efforts, which look to be ineffective from the outside. I have high confidence that they could have found a more effective use of the money.

Replies from: CarlShulman
comment by CarlShulman · 2013-06-26T05:34:05.197Z · LW(p) · GW(p)

I have wondered too, and I am much less impressed with the education work than the health work. It's possible they are just buying their "help my society" and "help people anywhere" goals separately. They also have programs to help the Washington State area around Microsoft, so it's pretty plausible that they feel they want to discharge several kinds of moral obligations to concentric circles of connectedness, each of which gets some weight.

My steelman (which I don't necessarily buy):

Education is a $1.1 trillion sector in the United States alone. Improvements in its productivity will therefore be a big input into economic growth, which affects our ability to do everything else. Moreover, education has significant lifetime impacts on worker productivity and on developing and harnessing human capital, which affects not only the economy but also public policy and science.

There is very little competition in the sector. Effects of schooling on learning and productivity pay off decades later, and the science of efficacious teaching is ill-developed, so parents are only imperfectly able, and imperfectly motivated, to improve student outcomes. Moreover, a substantial portion of the establishment in education research has been resistant to the use of randomized trials and the scientific method in education, for a variety of reasons. Powerful interest groups resist experimentation and the adjustment of policies to the currently available evidence.

However, the history of philanthropy suggests that wealthy philanthropists can have, and have had, large effects on educational policy and practices. So the opportunity to institute more systematic data collection, conduct a number of major experiments on educational outcomes, and shape policy around the results is one of the more promising ways to increase rich-country GDP and virtues, with all the relevant flow-through effects.

The small-schools fiasco involved putting too much money into that experiment prematurely, and a real (perhaps statistical) blunder, but this was acknowledged and the program was dropped in response to poor results.

This is offset by projects like videotaping vast numbers of hours of teachers teaching (correlated with outcomes), the creation of large national databases, causing several jurisdictions to experiment with pay-for-performance (by bankrolling the difference), etc. The chance of a huge win from this extension of science and experimentation is enough to justify things.

The GF also gives a bunch of scholarships to high-ability students (the Gates Cambridge Scholarships, modeled after the Rhodes Scholarships, and the Gates Millennium Scholars Program for top under-represented minority students in the United States). Again, this might not be a utilitarian thing, but such programs provide a way to target talent that might otherwise be lost from key fields, and a huge opportunity for influence by handing out the money to people doing research and work that the GF wants to encourage.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-26T08:16:21.434Z · LW(p) · GW(p)

It could be largely symbolic, e.g. the Gates Foundation paying attention to education reaffirms the status of education as an important thing to pay attention to.

comment by lukeprog · 2013-06-25T21:54:23.589Z · LW(p) · GW(p)

I, for one, would love to read your response to Carl's question about your "world is mad" thesis:

It looks to me like "people are crazy, the world is mad" has led you astray repeatedly, but I haven't seen as many successes. What are some of the major predictive successes of "the world is mad" that held up under careful investigation of dispositive facts?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-26T04:43:42.718Z · LW(p) · GW(p)

Off the instant top of my head, central line infections and the European Central Bank. I'm busy working on HPMOR and can't really take the time to consult my Freemind map for the top dozen items. Carl's list does seem kinda lopsided to me (i.e. not representative), but again, got to make the update deadline on the 29th and all my energy's going there.

Replies from: JonahSinick, None
comment by JonahS (JonahSinick) · 2013-06-26T04:45:33.960Z · LW(p) · GW(p)

Will you respond when you have more time? :-)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-26T05:03:03.505Z · LW(p) · GW(p)

Yes, but I might need to be re-prodded come July 16th, which is when my schedule quiets down again. Having a general debate on whether the world is mad doesn't seem like a particularly good thing to recurse on deep in a comment thread.

Replies from: lukeprog
comment by lukeprog · 2013-07-16T23:21:21.028Z · LW(p) · GW(p)

Re-prodding, as suggested.

Replies from: Benja, None
comment by Benya (Benja) · 2013-09-15T17:35:21.449Z · LW(p) · GW(p)

For future readers: The discussion has continued here.

comment by [deleted] · 2013-07-17T05:18:55.318Z · LW(p) · GW(p)

You seem to have a reminder system that you enter even trivial reminders into, and then get reminded later and act on them. What do you use?

Replies from: Qiaochu_Yuan, lukeprog, army1987
comment by Qiaochu_Yuan · 2013-07-17T06:34:04.209Z · LW(p) · GW(p)

I currently have two ways to do this. One is RTM (Remember The Milk) and the other is Boomerang. I use RTM for reminders to do something in the future where it's not particularly important exactly when I do them, and Boomerang for reminders to do something in the future at a particular time, e.g. if I know I'm going to be in a particular location around 3:00pm next Tuesday and I'd like to be reminded to do something at that location, I'll Boomerang an email to myself for that time. (Or, if someone emailed me about this thing, I can Boomerang that email instead.)

comment by lukeprog · 2013-07-17T05:39:05.495Z · LW(p) · GW(p)

Either Gmail+ActiveInbox or Things. These days, more so the former.

comment by A1987dM (army1987) · 2013-07-22T20:18:49.492Z · LW(p) · GW(p)

I think most cellphones sold over the past decade or so (incl. the Nokia 3310, IIRC) have such functionality.

comment by [deleted] · 2013-07-17T05:17:08.023Z · LW(p) · GW(p)

consult my Freemind map

Can you expand on this? I'm interested in what you use to organize your time and thoughts.

comment by Wei Dai (Wei_Dai) · 2013-06-17T19:59:50.252Z · LW(p) · GW(p)

Gates Foundation which AFAICT picks a measurable cause at emotional whim and then optimizes within that cause

This (if true, and of course GF claims to pick causes strategically) still doesn't answer my original question, given that malaria is in fact one of the causes that the Gates Foundation has picked.

But then why haven't we seen the positive judgments? Who would put in Holden's level of cognitive work and then say nothing of that work, and why? Why keep the reasoning of your effective altruism a secret?

If the GF is keeping the details of their analyses secret, then I would count that against them, but not heavily. I can think of a bunch of reasons for it that wouldn't reflect too badly on their ability to optimize within a cause. For example maybe Gates developed a habit of keeping things secret from his career in the for-profit IP field. I note that sometimes even MIRI seems to need a push to publish its internal reasonings.

More generally, what would make you update towards "People are crazy, the world is mad"?

I'm not sure what kind of answer you're looking for, but in general I update towards it when the world seems madder than I expect, and away from it when it seems saner than I expect. For example I updated towards it a bit when no mainstream expert predicted ahead of time that something like Bitcoin might be possible (i.e., they ignored decentralized digital currency as a field until Bitcoin showed up), and away from it a bit when the US government seemed to start taking Bitcoin seriously as soon as it showed up on their radar.

comment by lukeprog · 2013-06-25T23:02:18.148Z · LW(p) · GW(p)

I'm not truly impressed with GiveWell's general optimization since they never made a good case that malaria was connected to astronomical benefits or, indeed, seem to have realized that such a case is necessary for effective altruism.

Well, but I'm not sure MIRI can be said to have "made a good case" that its own work is well-connected to astronomical benefits, either. Presumably the argument for that looks something like the FAI Research as Effective Altruism argument, but that argument hasn't been made in much detail, with the key assumptions clearly identified and argued for with clarity and solid evidential backing. E.g.:

  • I'm not aware of a thorough, empirical (written) investigation of whether elites will handle AI just fine.
  • Beckstead's 2013 thesis is the first document I'm aware of that clearly lays out all the assumptions baked into the argument for the overwhelming importance of the far future.
  • My 2013 post When Will AI Be Created? is (I think) the best available piece for capturing the enormous difficulties of predicting AI — with reference to lots of relevant empirical data — while also (barely) making the case for assigning a good chunk of one's probability mass to getting AI this century. But it's still pretty inadequate, and the part making the case for the plausibility of AI this century could be substantially improved if more time was invested. (Compare to Bostrom 1998, which I find inadequate. I also think it will now look naively timelines-optimistic to most observers.)

Moreover, it's not that GiveWell (well, Holden) hasn't "realized" that recommended altruistic interventions (e.g. bednets) need to be connected via argument to astronomical benefits. Rather, Holden has been aware of astronomical waste arguments for a long time, and has reasons for rejecting them. He also discussed astronomical waste arguments many times with Beckstead while Beckstead was writing his dissertation. Unfortunately, Holden has struggled to clearly express his reasons for rejecting astronomical waste arguments. He tried to explain his reasons to me in person once but I couldn't make sense of what he was saying. He also tried to explain his point in the last three paragraphs of this comment, but I, at least, still don't understand quite what he's saying. Explaining is hard.

Also, Holden has spent a lot of time working up to an explanation of why he (currently) thinks that (1) "generic good work" (which may indirectly produce astronomical benefits via ripple effects) has higher expected value than (2) narrow interventions aimed more directly at astronomical benefit. His two latest posts in this thread are Flow-through effects and Possible global catastrophic risks, and he has promised that "a future post will discuss how I think about the overall contribution of economic/technological development to our odds of having a very bright, as opposed to very problematic, future."

And all this during the early years in which GiveWell mostly hasn't been investigating trickier issues like how different interventions connect to potential astronomical benefits, because GiveWell (wisely, I think) decided to start under the streetlight.

Replies from: Eliezer_Yudkowsky, Wei_Dai, JonahSinick
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-26T19:29:42.801Z · LW(p) · GW(p)

Well, but I'm not sure MIRI can be said to have "made a good case" that its own work is well-connected to astronomical benefits, either.

False modesty. The 'good case' already made for FAI being (optimally) related to astronomical benefits and the 'good case' already made for malaria reduction being (optimally) related to astronomical benefits are not of the same order of magnitude of already madeness.

Replies from: lukeprog, JonahSinick
comment by lukeprog · 2013-06-26T19:45:35.843Z · LW(p) · GW(p)

I'm not sure "false modesty" applies, at least given my views about the degree to which the FAI case has been made.

For my own idea of "good case made," anyway, I'd say the "malaria nets near-optimally connected to astronomical benefits" case is close to 0% of the way to "good case made," and the "FAI research near-optimally connected to astronomical benefits" case is more like 10% of the way to "good case made."

comment by JonahS (JonahSinick) · 2013-06-26T20:29:16.789Z · LW(p) · GW(p)

I don't think that MIRI has made a case for the particular FAI research that it's doing having non-negligible relevance to AI safety. See my "Chinese Economy" comments here.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-27T05:11:07.637Z · LW(p) · GW(p)

Ah, I'd heard a rumor you'd updated away from that, guess that was mistaken. I've replied to that comment.

Replies from: JonahSinick
comment by Wei Dai (Wei_Dai) · 2013-06-26T10:35:01.700Z · LW(p) · GW(p)

Unfortunately, Holden has struggled to clearly express his reasons for rejecting astronomical waste arguments.

It looks to me like he is using a bounded utility function with a really low bound. See this passage:

I feel that humanity’s future may end up being massively better than its past, and unexpected new developments (particularly technological innovation) may move us toward such a future with surprising speed. Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable. In other words, if I somehow knew that economic and technological development were equally likely to lead to human extinction or to a brighter long-term future, it’s easy for me to imagine that I could still prefer such development to stagnation.

If the best possible future that Holden can imagine (which the rest of the post makes clear does include space colonization) doesn't have much more than twice the utility of stagnation (setting extinction to be the zero point), then "astronomical waste" obviously isn't very astronomical in terms of Holden's utility function.

Replies from: CarlShulman
comment by CarlShulman · 2013-06-26T18:50:11.014Z · LW(p) · GW(p)

He gave a lower bound, sufficient to motivate the view that we should not seek stagnation, which is what he seems to be talking about there. Why interpret a lower bound that is "easy" (when a lower bound is all that is needed to establish the point, and is less controversial) as a near-upper-bound?

Stagnation on Earth means astronomical waste almost exactly as much as near-term extinction (and also cuts us off from very high standards of living that might be achieved). Holden is saying that the conclusion that growth with plausible risk levels beats permanent stagnation is robust. Talking about 100:1 tradeoffs would be less robust.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-06-26T20:27:37.141Z · LW(p) · GW(p)

I guess I was doing a Bayesian update based on what he wrote. Yes, technically he gave a lower bound, but while someone who thinks that the best possible future is 10 times better than stagnation (relative to extinction) might still write "Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable", someone who thinks it's at least a thousand or a billion times better probably wouldn't.

comment by JonahS (JonahSinick) · 2013-06-26T01:07:44.705Z · LW(p) · GW(p)

Moreover, it's not that GiveWell (well, Holden) hasn't "realized" that recommended altruistic interventions (e.g. bednets) need to be connected via argument to astronomical benefits. Rather, Holden has been aware of astronomical waste arguments for a long time, and has reasons for rejecting them. He also discussed astronomical waste arguments many times with Beckstead while Beckstead was writing his dissertation. Unfortunately, Holden has struggled to clearly express his reasons for rejecting astronomical waste arguments. He tried to explain his reasons to me in person once but I couldn't make sense of what he was saying. He also tried to explain his point in the last three paragraphs of this comment, but I, at least, still don't understand quite what he's saying. Explaining is hard.

Speaking for myself:

One response is that the argument for acting on the astronomical waste argument is only one relatively strong argument that should be weighed against more prosaic ethical considerations in order to account for model uncertainty.

Here is a concrete argument against giving shaping the far future dominant consideration in one's philanthropic decision making, within the astronomical waste framework.

The doomsday argument suggests that the human race is going to go extinct in the relatively near term with very high probability. This is strange, because there doesn't seem to be any other reason for thinking this.

A reconciliation of the doomsday argument with the absence of other evidence for extinction that is sometimes offered is the theory that we're living in one of many simulations that were created by past humans who underwent a singularity scenario, and that our simulation is going to be turned off soon.

If we're in one of many simulations with other humans, and the humans in these simulations are sufficiently correlated, timeless decision theory suggests that ordinary helping has astronomical benefits.

Those who subscribe to this view often believe that despite this consideration, shaping the far future nevertheless dominates ordinary helping in expected value. But they might be wrong about this. It appears that they would have to be wrong with awfully high probability in order to overturn the expected value of focusing on shaping the far future. But maybe this appearance is illusory, and for some reason that people haven't recognized yet, the benefits of ordinary helping mediated through timeless decision theory swamp the expected value of focusing on shaping the far future.

A nontrivial chance of this being true would establish a lower bound on how good a potential opportunity to shape the far future has to be in order to overcome opportunities for ordinary helping.

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-06-26T03:03:55.461Z · LW(p) · GW(p)

Thanks, Jonah. I think skepticism about the dominance of the far future is actually quite compelling, such that I'm not certain that focusing on the far future dominates (though I think it's likely that it does on balance, but much less than I naively thought).

The strongest argument is just that believing we are in a position to influence astronomical numbers of minds runs contrary to Copernican intuitions that we should be typical observers. Isn't it a massive coincidence that we happen to be among a small group of creatures that can most powerfully affect our future light cone? Robin Hanson's resolution of Pascal's mugging relied on this idea.

The simulation-argument proposal is one specific way to hash out this Copernican intuition. The sim arg is quite robust and doesn't depend on the self-sampling assumption the way the doomsday argument does. We have reasonable a priori reasons for thinking there should be lots of sims -- not quite as strong as the arguments for thinking we should be able to influence the far future, but not vastly weaker.

Let's look at some sample numbers. We'll work in units of "number of humans alive in 2014," so that the current population of Earth is 1. Let's say the far future contains N humans (or human-ish sentient creatures), and a fraction f of those are sims that think they're on Earth around 2014. The sim arg suggests that Nf >> 1, i.e., we're probably in one of those sims. The probability we're not in such a sim is 1/(Nf+1), which we can approximate as 1/(Nf). Now, maybe future people have a higher intensity of experience i relative to that of present-day people. Also, it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near term. This entropy can come from uncertainty about what the far future will look like, failures of goal preservation, or intrusion of black swans.

Now let's consider two cases -- one assuming no correlations among actors (CDT) and one assuming full correlations (TDT-ish).

CDT case:

  • If we help in the short run, we can affect something like 1 people (where "1" means "7 billion").
  • If we help in the long run, if we're not in a sim, we can affect N people, with an i experience-intensity multiple, with a factor of e for uncertainty/entropy in our efforts. But the probability we're not in a sim is 1/(Nf), so the overall expected value is (1/(Nf)) × N × i × e = ie/f.

It's not obvious that ie/f > 1. For instance, if f = 10^-4, i = 10^2, and e = 10^-6, this would equal 1. Hence it wouldn't be clear that targeting the far future is better than targeting the near term.

TDT-ish case:

  • There are Nf+1 copies of people (who think they're) on Earth in 2014, so if we help in the short run, we help all of those Nf+1 people because our actions are mirrored across our copies. Since Nf >> 1, we can approximate this as Nf.
  • If we help by taking far-future-targeting actions, even if we're in a sim, our actions can timelessly affect what happens in the basement, so we can have an impact regardless of whether we're in a sim or not. The future contains N people with i intensity factor, and there's e entropy on actions that try to do far-future stuff relative to short-term stuff. The expected value is Nie.

The ratio of long-term helping to short-term helping is Nie/(Nf) = ie/f, exactly the same as before. Hence, the uncertainty about whether the near- or far-future dominates persists.

I've tried these calculations with a few other tweaks, and something close to ie/f continues to pop out.
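For concreteness, here's a minimal sketch of this arithmetic in code (Python), plugging in the illustrative numbers from the CDT case above; the value of N is an arbitrary placeholder, since it cancels out of both ratios:

```python
# Sketch of the expected-value arithmetic above, in units where "1" = everyone
# alive in 2014. Parameter values are the illustrative ones from the CDT
# example (f = 10^-4, i = 10^2, e = 10^-6), not estimates.

N = 1e20   # far-future population; arbitrary, cancels out of both ratios
f = 1e-4   # fraction of future people who are sims that think they're on Earth ~2014
i = 1e2    # experience-intensity multiple for future people
e = 1e-6   # "entropy" discount on far-future-targeting actions

# CDT: no correlation between copies.
cdt_short = 1.0                          # short-run helping affects ~1 unit of people
cdt_long = (1.0 / (N * f)) * N * i * e   # P(not in a sim) * far-future impact

# TDT-ish: our choices are mirrored across the ~N*f copies of 2014 Earth.
tdt_short = N * f                        # short-run helping is mirrored across copies
tdt_long = N * i * e                     # far-future impact whether or not we're in a sim

print(cdt_long / cdt_short)   # i*e/f = 1.0 with these numbers
print(tdt_long / tdt_short)   # i*e/f again -- N drops out in both cases
```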

Now, this point is again of the "one relatively strong argument" variety, so I'm not claiming this particular elaboration is definitive. But it illustrates the types of ways that far-future-dominance arguments could be neglecting certain factors.

Note also that even if you think ie/f >> 1, it's still less than the 10^30 or whatever factor a naive far-future-dominance perspective might assume. Also, to be clear, I'm ignoring flow-through effects of short-term helping on the far future and just talking about the intrinsic value of the direct targets of our actions.

Replies from: CCC, Pablo_Stafforini
comment by CCC · 2014-06-26T14:21:40.738Z · LW(p) · GW(p)

In the long-run CDT case, why the assumption that people in a sim can't affect people in the far future? At the very least, if we're in a sim, we can affect people in the far future of our sim; and probably indirectly in baseline too, insofar as if we come up with a really good idea in the sim, then those who are running the sim may take notice of the idea and implement it outside said sim.


As for the figures; I have a few thoughts about f. Let us assume that the far future consists of one base world, which runs a number of simulations, which in turn run sub-simulations (and those run sub-sub-simulations, and so on). Let us assume that, at any given moment, each simulation's internal clock is set to a randomly determined year. Let us further assume that our universe is fairly typical in terms of population.

The number of humans who have ever lived, up until 2011, has been estimated at 107 billion. This means that, if all simulations are constrained to run up until 2014 only, the fraction of people in simulations (at any given moment) who believe that they are alive in 2014 will be approximately 7/107 (the baseline will not significantly affect this figure if the number of simulations is large). If the simulations are permitted to run longer (and I see no reason why they wouldn't be), then that figure will of course be lower, and possibly significantly lower.

I can therefore conclude that, in all probability, f < 7/107.

At the same time, Nf >> 1 means that f > 1/N. Of course, since N can be arbitrarily large, this tells us little; but it does imply, at least, that f>0.
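A quick sketch of that bound in code (Python), under the same assumptions (sims constrained to run only up to 2014, each sim's clock at a random point in its history, and our universe's population history being typical):

```python
# Rough upper bound on f: if sims only run up to 2014 and each sim's clock sits
# at a random point in its history, the fraction of simulated people who think
# it's 2014 is at most (people alive now) / (people who have ever lived).

people_ever = 107e9   # estimated humans born up to ~2011
people_now = 7e9      # roughly everyone alive around 2014

f_upper = people_now / people_ever
print(f_upper)        # ~0.065, i.e. f < 7/107; sims running past 2014 would lower this
```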

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-06-28T04:37:36.163Z · LW(p) · GW(p)

Thanks, CCC. :)

Simulating humans near the singularity may be more interesting than simulating hunter-gatherers, so it may be that the fraction of sims around now is more than 7/107.

One reason not to expect the sims to go into the far future is that any far future with high altruistic import will have high numbers of computations, which would be expensive to simulate. It's cheaper to simulate a few billion humans who have only modest computing power. For the same reason, it's not clear that we'd have lots of sims within sims within sims, because those would get really expensive -- unless computing power is so trivially cheap in the basement that it doesn't matter.

That said, you're right there could be at least a reasonable future ahead of us in a sim, but I'm doubtful many sims run the whole length of galactic history -- again, unless the basement is drowning in computing power that it doesn't know what to do with.

Interesting point about coming up with a really good idea. But one would tend to think that the superintelligent AIs in the basement would be much better at that. Why would they bother creating dumb little humans who go on to create their own superintelligences in the sim when they could just use superintelligences in the basement? If the simulators are interested in cognitive/evolutionary diversity, maybe that could be a reason.

Replies from: CCC
comment by CCC · 2014-06-30T09:38:15.163Z · LW(p) · GW(p)

Simulating humans near the singularity may be more interesting than simulating hunter-gatherers, so it may be that the fraction of sims around now is more than 7/107.

Possibly, but every 2014 needs to have a history; we can find evidence in our universe that around 107 billion people have existed, and I'm assuming that we're fairly typical so far as universes go.

...annnnnd I've just realised that there's no reason why someone in the future couldn't run a simulation up to (say) 1800, save that, and then run several simulations from that date forwards, each with little tweaks (a sort of a Monte Carlo approach to history).

One reason not to expect the sims to go into the far future is that any far future with high altruistic import will have high numbers of computations, which would be expensive to simulate. It's cheaper to simulate a few billion humans who have only modest computing power.

I question the applicability of this assertion to our universe. Yes, a game like Sid Meier's Civilisation is a whole lot easier to simulate than (say) a crate of soil at the level of individual grains - because there's a lot of detail being glossed over in Civilisation. The game does not simulate every grain of soil, every drop of water.

Our universe - whether it's baseline or a simulation - seems to be running right down to the atomic level. That is, if we're being simulated, then every individual atom, every electron and proton, is being simulated. Simulating a grain of sand at that level of detail is quite a feat of computing - but simulating a grain-of-sand-sized computer would be no harder. In each case, it's the individual atoms that are being simulated, and atoms follow the same laws whether in a grain of sand or in a CPU. (They have to, or we'd never have figured out how to build the CPU).

So I don't think there's been any change in the computing power required to simulate our universe with the increase in human population and computing power.

For the same reason, it's not clear that we'd have lots of sims within sims within sims, because those would get really expensive -- unless computing power is so trivially cheap in the basement that it doesn't matter.

Sub-sims just need to be computationally simpler by a few orders of magnitude than their parent sims. If we create a sim, then computing power in that universe will be fantastically expensive as compared to ours; if we are a sim, then computing power in our parent universe must be sufficient to run our universe (and it is therefore fantastically cheap as compared to our universe). I have no idea how to tell whether we're in a top-end one-of-a-kind research lab computer, or the one-universe-up equivalent of a smartphone.

That said, you're right there could be at least a reasonable future ahead of us in a sim, but I'm doubtful many sims run the whole length of galactic history -- again, unless the basement is drowning in computing power that it doesn't know what to do with.

You have a good point. If we're a sim, we could be terminated unexpectedly at any time. Presumably as soon as the conditions of the sim are fulfilled.

Of course, the fact that our sim (if we are a sim) is running at all implies that the baseline must have the computing power to run us; in comparison with which, everything that we could possibly do with computing power is so trivial that it hardly even counts as a drain on resources. Of course, that doesn't mean that there aren't equivalently computationally expensive things that they might want to do with our computing resources (like running a slightly different sim, perhaps)...

Interesting point about coming up with a really good idea. But one would tend to think that the superintelligent AIs in the basement would be much better at that. Why would they bother creating dumb little humans who go on to create their own superintelligences in the sim when they could just use superintelligences in the basement?

Maybe we're the sim that the superintelligence is using to test its ideas before introducing them to the baseline? If our universe fulfills its criteria better than any other, then it acts in such a way as to make baseline more like our universe. (Whatever those criteria are...)

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-06-30T11:09:50.309Z · LW(p) · GW(p)

Hi CCC :)

there's no reason why someone in the future couldn't run a simulation up to (say) 1800, save that, and then run several simulations from that date forwards, each with little tweaks

Yep, exactly. That's how you can get more than 7/107 of the people in 2014.

That is, if we're being simulated, then every individual atom, every electron and proton, is being simulated.

Probably not, though. In Bostrom's simulation-argument paper, he notes that you only need the environment to be accurate enough that observers think the sim is atomically precise. For instance, when they perform quantum experiments, you make those experiments come out right, but that doesn't mean you actually have to simulate quantum mechanics everywhere. Because superficial sims would be vastly cheaper, we should expect vastly more of them, so we'd probably be in one of them.

Many present-day computer simulations capture high-level features of a system without delving into all the gory details. Probably most sims could suffice to have intermediate levels of detail for physics and even minds. (E.g., maybe you don't need to simulate every neuron, just their higher-level aggregate behaviors, except when neuroscientists look at individual neurons.)

Of course, the fact that our sim (if we are a sim) is running at all implies that the baseline must have the computing power to run us; in comparison with which, everything that we could possibly do with computing power is so trivial

This is captured by the N term in my rough calculations above. If the basement has gobs of computing power, that means N is really big. But N cancels out from the final action-relevant ie/f expression.

Replies from: CCC
comment by CCC · 2014-07-01T08:27:37.319Z · LW(p) · GW(p)

Probably not, though. In Bostrom's simulation-argument paper, he notes that you only need the environment to be accurate enough that observers think the sim is atomically precise.

Hmmm. It's a fair argument, but I'm not sure how well it would work out in practice.

To clarify, I'm not saying that the sim couldn't be run like that. My claim is, rather, that if we are in a sim being run with varying levels of accuracy as suggested, then we should be able to detect it.

Consider, for the moment, a hill. That hill consists of a very large number of electrons, protons and neutrons. Assume for the moment that the hill is not the focus of a scientific experiment. Then, it may be that the hill is being simulated in some computationally cheaper manner than simulating every individual particle.

There are two options. Either the computationally cheaper manner is, in every single possible way, indistinguishable from simulating every individual particle. In this case, there is no reason to use the more computationally expensive method when a scientist tries to run an experiment which includes the hill; all hills can use the computationally cheaper method.

The alternative is that there is some way, however slight or subtle, in which the behaviour of the atoms in the hill differs from the behaviour of those same atoms when under scientific investigation. If this is the case, then it means that the scientific laws deduced from experiments on the hill will, in some subtle way, not match the behaviour of hills in general. In this case, there must be a detectable difference; in effect, under certain circumstances hills are following a different set of physical laws and sooner or later someone is going to notice that. (Note that this can be avoided, to some degree, by saving the sim at regular intervals; if someone notices the difference between the approximation and a hill made out of properly simulated atoms, then the simulation is reloaded from a save just before that difference happened and the approximation is updated to hide that detail. This can't be done forever - after a few iterations, the approximation's computational complexity will begin to approach the computational complexity of the atomic hill in any case, plus you've now wasted a lot of cycles running sims that had no purpose other than refining the approximation - but it could stave off discovery for a period, at least).


Having said that, though, another thought has occurred to me. There's no guarantee (if we are in a sim) that the laws of physics are the same in our universe as they are in baseline; we may, in fact, have laws of physics specifically designed to be easier to compute. Consider, for example, the uncertainty principle. Now, I'm no quantum physicist, but as I understand it, the more precisely a particle's position can be determined, the less precisely its momentum can be known - and, at the same time, the more precisely its momentum is known, the less precisely its position can be found. Now, in terms of a simulation, the uncertainty principle means that the computer running the simulation need not keep track of the position and momentum of every particle at full precision. It may, instead, keep track of some single combined value (a real quantum physicist might be able to guess at what that value is, and how position and/or momentum can be derived from it). And given the number of atoms in the observable universe, the data storage saved by this is massive (and suggests that Baseline's storage space, while immense, is not infinite).

Of course, like any good simplification, the Uncertainty Principle is applied everywhere, whether a scientist is looking at the data or not.

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-07-02T11:07:05.712Z · LW(p) · GW(p)

What is and isn't simulated to a high degree of detail can be determined dynamically. If people decide they want to investigate a hill, some system watching the sim can notice that and send a signal that the sim needs to make the hill observations correspond with quantum/etc. physics. This shouldn't be hard to do. For instance, if the theory predicts observation X +/- Y, you can generate some random numbers centered around X with std. dev. Y. Or you can make them somewhat different if the theory is wrong and to account for model uncertainty.

If the scientists would do lots of experiments that are connected in complex ways such that consistency requires them to come out with certain complex relationships, you'd need to get somewhat more fancy with faking the measurements. Worst case, you can actually do a brute-force sim of that part of physics for the brief period required. And yeah, as you say, you can always revert to a previous state if you screw up and the scientists find something amiss, though you probably wouldn't want to do that too often.

There's no guarantee (if we are in a sim) that the laws of physics are the same in our universe as they are in baseline; we may, in fact, have laws of physics specifically designed to be easier to compute.

SMBC

Replies from: CCC
comment by CCC · 2014-07-02T13:26:13.680Z · LW(p) · GW(p)

Worst case, you can actually do a brute-force sim of that part of physics for the brief period required.

This is kind of where the trouble starts to come in. What happens when the scientist, instead of looking at hills in the present, turns to look at historical records of hills a hundred years in the past?

If he has actually found some complex interaction that the simplified model fails to cover, then he has a chance of finding evidence of living in a simulation; yes, the simulation can be rolled back a hundred years and re-run from that point onwards, but is that really more computationally efficient than just running the full physics all the time? (Especially if you have to keep going back to update the model.)

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-07-03T04:53:15.742Z · LW(p) · GW(p)

This is where his fellow scientists call him a "crackpot" because he can't replicate any of his experimental findings. ;)

More seriously, the sim could modify his observations to make him observe the right things. For instance, change the photons entering his eyes to be in line with what they should be, change the historical records a la 1984, etc. Or let him add an epicycle to his theory to account for the otherwise unexplainable results.

In practice, I doubt atomic-level effects are ever going to produce clearly observable changes outside of physics labs, so 99.99999% of the time the simulators wouldn't have to worry about this as long as they simulated macroscopic objects to enough detail.

Replies from: CCC
comment by CCC · 2014-07-04T08:29:49.686Z · LW(p) · GW(p)

In practice, I doubt atomic-level effects are ever going to produce clearly observable changes outside of physics labs, so 99.99999% of the time the simulators wouldn't have to worry about this as long as they simulated macroscopic objects to enough detail.

Well, yes, I'm not saying that this would make it easy to discover evidence that we are living in a simulation. It would simply make it possible to do so.

comment by Pablo (Pablo_Stafforini) · 2016-10-04T11:26:12.933Z · LW(p) · GW(p)

it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near-term.

In the past, when I expressed worries about the difficulties associated with far-future meme-spreading, which you favor as an alternative to extinction-risk reduction, you said you thought there was a significant chance of a singleton-dominated future. Such a singleton, you argued, would provide the causal stability necessary for targeted meme-spreading to successfully influence our distant descendants. But now you seem to be implying that, other things being equal, far-future meme-spreading is several orders of magnitude less likely to succeed than short-term interventions (including interventions aimed at reducing the near-term risk of extinction, which plausibly represents a significant fraction of total extinction risk). I find these two views hard to reconcile.
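
For concreteness, here is the kind of comparison the quoted discount factor implies, with invented numbers: at e = 10^-6, a far-future-targeted action only wins in expectation if its payoff, conditional on actually sticking, is more than a million times that of the reliable near-term alternative.

```python
# Illustrative numbers only; "value" is in arbitrary units.
e = 1e-6        # chance that a far-future-targeted action has its intended effect
v_near = 1.0    # value of a near-term intervention that reliably works
v_far = 2e6     # value of the far-future outcome, conditional on success

print(e * v_far > v_near)   # True only because v_far exceeds v_near / e = 1e6
```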

comment by Kawoomba · 2013-06-17T13:32:32.103Z · LW(p) · GW(p)

In a world without PR repercussions and political backlash, wouldn't that spell "People are stupid, the world is a toy of great apes with guns in one hand and shit in the other"?

Not that I expect an answer, and it may be that you do in fact unpack "crazy" differently, and in any case "crazy" carries much more palatable connotations, but I wonder, I wonder ...

(In your shoes I'd at least have some (non-violent) Kill-Bill-esque revenge fantasies in which I tell gung-ho AGI researchers about the effect of their work.)

comment by katydee · 2013-06-13T05:09:42.044Z · LW(p) · GW(p)

I enjoy a lot of your top-level posts, but this one seems less suitable for LessWrong. While I do appreciate what you're trying to do in terms of establishing context for future posts, this post seems better suited as a personal blog entry.

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2013-06-13T05:12:56.509Z · LW(p) · GW(p)

I understand where you're coming from, but I was following a precedent on Less Wrong of people making personal posts, many of which have been heavily upvoted. In any case, my subsequent posts will be much less focused on myself :-)

Replies from: Raemon
comment by Raemon · 2013-06-13T18:20:34.841Z · LW(p) · GW(p)

I appreciated this post a lot - I think understanding what causes people to become interested in (and actively working on) Effective Altruism is important.

We don't want to be flooded with personal anecdote posts, but I think it's reasonable to have one if it's in a broader context. I expect that when you're done with your current set of posts they'll be packaged into a sequence, and having some personal background will add some useful context.

comment by benkuhn · 2013-06-11T17:03:17.505Z · LW(p) · GW(p)

The benefits of this realization have spanned many aspects of my life, and have substantially increased my altruistic human capital.

This is really interesting. Can you give some concrete examples from different aspects and explain a bit how your capital has increased?

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2013-06-11T23:53:49.035Z · LW(p) · GW(p)

In a sense, it's very straightforward. Better predictive models of the world improve instrumental rationality in all domains. The effect also compounded, because the realization helped me understand how most people think, which further improved my predictive models of the world.

Replies from: benkuhn
comment by benkuhn · 2013-06-13T05:48:23.244Z · LW(p) · GW(p)

Yes, I agree that if it's a substantially better predictive model it would be very useful. Do you have any concrete examples of, say, beliefs you came to using many many weak arguments that you would not have come to using a single strong argument, which later turned out to be true? Or ways in which you started modeling people differently and how specifically those improved models were useful? (My prior for "one thinks one is improving instrumental rationality => one is actually improving instrumental rationality" is rather low.)

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2013-06-13T05:59:04.364Z · LW(p) · GW(p)

The examples are quite personal in nature.

Retrospectively, I see that many of my past predictive errors came from repeatedly dismissing weak arguments against a claim by comparing each of them with a relatively strong argument for it, without realizing that I was doing this over and over. In effect, I ignored an accumulation of evidence against the claim just because no single piece of evidence against it appeared to be stronger than the strongest piece of evidence for it.
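
A small numerical illustration of that failure mode, with invented likelihood ratios: each weak piece of contrary evidence is individually much weaker than the one strong supporting argument, yet together they flip the conclusion.

```python
# Invented likelihood ratios, just to show the failure mode described above.
prior_odds = 1.0                 # start indifferent about the claim
strong_for = 20.0                # one strong argument for it: 20:1
weak_against = [1 / 2] * 10      # ten weak arguments against it: 2:1 each

posterior_odds = prior_odds * strong_for
for lr in weak_against:
    posterior_odds *= lr         # each weak piece shifts the odds a little

print(posterior_odds)                             # 20 / 1024, roughly 0.02
print("claim now looks false:", posterior_odds < 1)
```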

If you'd like to correspond by email, I'll say a little more, though not so much as to compromise the identities of the people involved. You can reach me at jsinick@gmail.com.

comment by oooo · 2013-06-11T06:22:46.258Z · LW(p) · GW(p)

Your math education link is incorrectly specified and leads to a 404 on the LW website instead of directing to your http://mathisbeauty.org site.

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2013-06-11T06:38:26.466Z · LW(p) · GW(p)

Thanks. I fixed this.

Replies from: 9eB1
comment by 9eB1 · 2013-06-11T17:46:51.120Z · LW(p) · GW(p)

The link to Paul's undergraduate paper Quantum Money from Hidden Subspaces is similarly incorrect.

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2013-06-11T21:00:03.172Z · LW(p) · GW(p)

Thanks, I fixed this as well.