2012 Winter Fundraiser for the Singularity Institute
post by lukeprog · 2012-12-06T22:41:32.007Z · LW · GW · Legacy · 113 comments
Cross-posted here.
(The Singularity Institute maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried SI staff members.)
Thanks to the generosity of several major donors,† every donation to the Singularity Institute made from now until January 20th (deadline extended from the 5th) will be matched dollar-for-dollar, up to a total of $115,000! So please, donate now!
Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program.
(If you're unfamiliar with our mission, please see our press kit and read our short research summary: Reducing Long-Term Catastrophic Risks from Artificial Intelligence.)
Now that Singularity University has acquired the Singularity Summit, and SI's interests in rationality training are being developed by the now-separate CFAR, the Singularity Institute is making a major transition. Most of the money from the Summit acquisition is being placed in a separate fund for a Friendly AI team, and therefore does not support our daily operations or other programs.
For 12 years we've largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.
Now, the time has come to say "Mission Accomplished Well Enough to Pivot to Research." Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven't neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research.
Accomplishments in 2012
- Held a one-week research workshop on one of the open problems in Friendly AI research, and got progress that participants estimate would be the equivalent of 1-3 papers if published. (Details forthcoming. The workshop participants were Eliezer Yudkowsky, Paul Christiano, Marcello Herreshoff, and Mihaly Barasz.)
- Produced our annual Singularity Summit in San Francisco. Speakers included Ray Kurzweil, Steven Pinker, Daniel Kahneman, Temple Grandin, Peter Norvig, and many others.
- Launched the new Center for Applied Rationality, which ran 5 workshops in 2012, including Rationality for Entrepreneurs and SPARC (for young math geniuses), and also published one (early-version) smartphone app, The Credence Game.
- Launched the redesigned, updated, and reorganized Singularity.org website.
- Achieved most of the goals from our August 2011 strategic plan.
- 11 new research publications.
- Eliezer published the first 12 posts in his sequence Highly Advanced Epistemology 101 for Beginners, the precursor to his forthcoming sequence, Open Problems in Friendly AI.
- SI staff members published many other substantive articles on Less Wrong, including How to Purchase AI Risk Reduction, How to Run a Successful Less Wrong Meetup, a Solomonoff Induction tutorial, The Human's Hidden Utility Function (Maybe), How can I reduce existential risk from AI?, AI Risk and Opportunity: A Strategic Analysis, and Checklist of Rationality Habits.
- Launched our new volunteers platform, SingularityVolunteers.org.
- Hired two new researchers, Kaj Sotala and Alex Altair.
- Published our press kit to make journalists' lives easier.
- And of course much more.
Future Plans You Can Help Support
In the coming months, we plan to do the following:
- As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and launching a new website.
- Eliezer will publish his sequence Open Problems in Friendly AI.
- We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for many of our core materials, to make them more accessible: The Sequences, 2006-2009, Facing the Singularity, and The Hanson-Yudkowsky AI Foom Debate.
- We will publish several more research papers, including "Responses to Catastrophic AGI Risk: A Survey" and a short, technical introduction to timeless decision theory.
- We will set up the infrastructure required to host a productive Friendly AI team and try hard to recruit enough top-level math talent to launch it.
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $115,000 of total matching funds has been provided by Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer.
I will mostly be traveling (for AGI-12) for the next 25 hours, but I will try to answer questions after that.
Comments sorted by top scores.
comment by pengvado · 2012-12-07T02:46:21.226Z · LW(p) · GW(p)
I donated 20,000$ now, in addition to 110,000$ earlier this year.
↑ comment by MixedNuts · 2012-12-08T17:59:37.801Z · LW(p) · GW(p)
Holy pickled waffles on a pogo stick. Thanks, dude.
Is there anything you're willing to say about how you acquired that dough? My model of you has earned less in a lifetime.
↑ comment by pengvado · 2012-12-08T21:51:53.982Z · LW(p) · GW(p)
I value my free time far too much to work for a living. So your model is correct on that count. I had planned to be mostly unemployed with occasional freelance programming jobs, and generally keep costs down.
But then a couple years ago my hobby accidentally turned into a business, and it's doing well. "Accidentally" because it started with companies contacting me and saying "We know you're giving it away for free, but free isn't good enough for us. We want to buy a bunch of copies." And because my co-founder took charge of the negotiations and other non-programming bits, so it still feels like a hobby to me.
Both my non-motivation to work and my willingness to donate a large fraction of my income have a common cause, namely thinking of money in far-mode, i.e. not alieving The Unit of Caring on either side of the scale.
↑ comment by MixedNuts · 2012-12-08T23:14:51.392Z · LW(p) · GW(p)
Yeah, I know exactly who you are, I just didn't want to bust privacy or drop creepy hints. I didn't know that VideoLAN projects were financially independent of each other, so that explains where the profit comes from. It's just that I didn't expect two guys in a basement to make that much, and you're too young (and didn't have much income before anyway) to have significant savings. So there's more money in successful codecs than I guessed.
↑ comment by pengvado · 2012-12-09T03:32:38.270Z · LW(p) · GW(p)
you're too young (and didn't have much income before anyway) to have significant savings.
Err, I haven't yet earned as much from the lazy entrepreneur route as I would have if I had taken a standard programming job for the past 7 years (though I'll pass that point within a few months at the current rate). So don't go blaming my cohort's age if they haven't saved and/or donated as much as me. I'm with Rain in spluttering at how people can have an income and not have money.
↑ comment by A1987dM (army1987) · 2012-12-09T11:54:26.276Z · LW(p) · GW(p)
i.e. not alieving The Unit of Caring on either side of the scale
I don't, either -- possibly because I've never been in real economic hardships; I think if I had grown up in a poorer family I probably would. (I do try to be frugal because so far I've lived almost exclusively on my parents' income and it seems unfair towards them to waste their money, though.)
↑ comment by Kawoomba · 2012-12-07T13:05:36.070Z · LW(p) · GW(p)
(At the time of this comment) 27 karma for a $20k donation, 13 karma for $250, 9 karma for $20 (and a joke) ... something's amiss with the karma-$ currency exchange rate!
↑ comment by AlexMennen · 2012-12-07T17:26:37.279Z · LW(p) · GW(p)
Under the assumption that karma can motivate someone to make a donation, but that once they decide to donate, karma does not influence how much they give, upvoting every donation is the best policy for maximizing money to SI. I'm not sure how realistic that model is, but it seems intuitive to me.
↑ comment by Kindly · 2012-12-07T16:21:27.090Z · LW(p) · GW(p)
What do you expect to happen? We don't have enough users giving karma for donation to sustain a linear exchange rate in the [$20, $20000] range. Unless, I suppose, we give up any attempt at fine resolution over the [$1, $500] range.
In practice, what most people are probably doing is picking a threshold (possibly $0) beyond which they give karma for a donation. This could be improved: you could pick a large threshold beyond which you give 1 karma, and give fractional karma (by flipping a biased coin) below that threshold. However, if the large threshold were anywhere close to $20000, and your fractional karma scales linearly, then you would pretty much never give karma to the other donations.
Edit: after doing some simulations, I'm no longer sure the fractional approach is an improvement. It gives interesting graphs, though!
If we knew the Singularity Institute's approximate budget, we could fix this by assuming log-utility in money, but this is complicated.
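Kindly's fractional-karma idea — flip a biased coin so that expected karma scales linearly with donation size below some threshold — could be sketched as follows. This is a hypothetical scheme, not anything LW implemented; the threshold value is illustrative:

```python
import random

def karma_for_donation(amount, threshold=20000):
    """Upvote with probability proportional to donation size, capped at 1.

    Below the threshold, expected karma is amount/threshold, so the
    karma-to-dollars "exchange rate" stays linear in expectation even
    though each individual vote is still a single whole upvote.
    """
    p = min(amount / threshold, 1.0)
    return 1 if random.random() < p else 0
```

Under this scheme a $20,000 donation is always upvoted, while a $20 donation is upvoted roughly once per thousand such donations — which is exactly the problem Kindly notes: small donations essentially never get karma if the threshold is anywhere near $20,000.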
↑ comment by CronoDAS · 2012-12-07T02:53:32.076Z · LW(p) · GW(p)
Really?
↑ comment by Qiaochu_Yuan · 2012-12-07T03:05:49.179Z · LW(p) · GW(p)
Replies from: CronoDAS"No, she wouldn't say anything to me about Lucius afterwards, except to stay away from him. So during the Incident at the Potions Shop, while Professor McGonagall was busy yelling at the shopkeeper and trying to get everything under control, I grabbed one of the customers and asked them about Lucius."
Draco's eyes were wide again. "Did you really?"
Harry gave Draco a puzzled look. "If I lied the first time, I'm not going to tell you the truth just because you ask twice."
↑ comment by CronoDAS · 2012-12-08T23:14:56.446Z · LW(p) · GW(p)
Nice quote.
"Really?" is more polite to say than "I find that hard to believe, can you provide confirming evidence" or "[citation needed]", though. Also, sometimes people actually will say "No, I was kidding" if you ask them.
↑ comment by Kindly · 2012-12-08T23:53:06.929Z · LW(p) · GW(p)
Also, sometimes people actually will say "No, I was kidding" if you ask them.
Or "Oops, I accidentally typed an extra zero. Twice."
↑ comment by Qiaochu_Yuan · 2012-12-09T07:40:20.792Z · LW(p) · GW(p)
That is unlikely owing to the placement of the commas.
↑ comment by Kindly · 2012-12-09T15:23:17.172Z · LW(p) · GW(p)
No, that just makes it worse, because 20,00$ could be referring to donating 20 dollars.
↑ comment by Qiaochu_Yuan · 2012-12-09T21:28:03.921Z · LW(p) · GW(p)
Ah, right. I had forgotten that some people use commas where I would expect periods. Adding an extra zero twice is still somewhat unlikely, though. My current hypotheses about the distribution of LW users make it more plausible that the tail of high income can afford fairly large donations.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-08T01:35:02.976Z · LW(p) · GW(p)
Yes.
comment by Rain · 2012-12-08T02:23:34.882Z · LW(p) · GW(p)
I continue to donate $1000 a month, and intend to reduce my retirement savings next year so I can donate more.
comment by ArisKatsaris · 2012-12-08T15:30:29.633Z · LW(p) · GW(p)
Just donated 400 €.
↑ comment by ArisKatsaris · 2013-01-03T03:46:19.708Z · LW(p) · GW(p)
My new year's resolution is tithing, to be split roughly half-in-half between "serious" causes and things like supporting my favorite webcomics/fansubbers/whatever. As part of the former, I decided to add 1000 € to the above donation.
comment by moshez · 2012-12-06T21:04:04.492Z · LW(p) · GW(p)
I am looking forward to the ebooks. I hope you'll provide them in ePub format, for those of us who prefer that. [I was pleased to donate $40, which should soon be matched by my employer as part of the employee-match program, thus getting me double-matched!]
comment by drethelin · 2013-01-10T05:35:39.258Z · LW(p) · GW(p)
Still donating 500 a month.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-10T22:15:14.997Z · LW(p) · GW(p)
Five cheers for this! Those who are steadily donating should get applause every time.
comment by Kutta · 2012-12-07T11:40:55.075Z · LW(p) · GW(p)
I donated 250$.
Update: No, I apparently did not. For some reason the transfer from Google Checkout got rejected, and now PayPal too. Does anyone have an idea what might've gone wrong? I've a Hungarian bank account. My previous SI donations were fine, even with the same credit card if I recall correctly, and I'm sure that my card is still perfectly valid.
↑ comment by philh · 2012-12-07T19:38:05.671Z · LW(p) · GW(p)
I'm having the same problem. I used the card to buy modafinil yesterday, which might raise a red flag in fraud detection software? But if you're having it too, I'd update in the direction of it being a problem on SIAI's end.
Has anyone successfully donated since Kutta posted?
edit - Amazon is declining my card as well.
edit 2 - It's sorted out now, just donated £185.
↑ comment by MichaelAnissimov · 2012-12-08T00:24:52.971Z · LW(p) · GW(p)
I'm looking into this now, can you send me an email at michael@intelligence.org so we can share any further details necessary to work out the problem?
↑ comment by MichaelAnissimov · 2012-12-08T04:06:00.139Z · LW(p) · GW(p)
After investigating the issue, it proved to be a problem on Kutta's side, not ours.
↑ comment by MichaelAnissimov · 2012-12-08T00:43:14.555Z · LW(p) · GW(p)
I just verified that donations in general are working via PayPal and Google Checkout. We'll investigate this specific issue to see where the problem is.
comment by [deleted] · 2013-01-06T19:39:39.658Z · LW(p) · GW(p)
Ok I think I just set up a $1000 monthly.
comment by John_Maxwell (John_Maxwell_IV) · 2012-12-07T10:39:46.414Z · LW(p) · GW(p)
This is great news. Thanks to Edwin Evans, Mihaly Barasz, Rob Zahra, Alexei Andreev, Jeff Bone, Michael Blume, Guy Srinivasan, and Kevin Fischer for providing matching funds!
comment by Scott Alexander (Yvain) · 2012-12-18T06:22:09.170Z · LW(p) · GW(p)
I have some money that I was saving for something like this, but I also just saw Eliezer's (very convincing) request for CFAR donations yesterday and heard a rumor that SIAI was trying to get people to donate to CFAR because they needed it more.
This seems weird to me because I would expect that with SIAI's latest announcement they have shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them, but I'd be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-20T00:11:57.214Z · LW(p) · GW(p)
1) In the long run, for CFAR to succeed, it has to be supported by a CFAR donor base that doesn't funge against SIAI money. I expect/hope that CFAR will have a substantially larger budget in the long run than SIAI. In the long run, then, marginal x-risk minimizers should be donating to SIAI.
2) But since CFAR is at a very young and very vital stage in its development and has very little funding, it needs money right now. And CFAR really really needs to succeed for SIAI to be viable in the long-term.
So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...
...but...
...SIAI has previously supported CFAR, is probably going to make a loan to CFAR in the future, and therefore it doesn't matter as much exactly which organization you give to right now, except that if one maxes out its matching funds you probably want to donate to the other until it also maxes...
...and...
...even the judgment about exactly where a marginal dollar spent is more valuable is, necessarily, extremely uncertain to me. My own judgment favors CFAR at the current margins, but it's a very tough decision. Obviously! SIAI has given money to CFAR. If it had been obvious that this amount should've been shifted in direction A or direction B to minimize x-risk, we would've necessarily been organizationally irrational, or organizationally selfish, about the exact amount. SIAI has been giving CFAR amounts on the lower side of our error bounds because of the hope (uncertainty) that future-CFAR will prove effective at fundraising. Which rationally implies, and does actually imply, that an added dollar of marginal spending is more valuable at CFAR (in my estimates).
The upshot is that you should donate to whichever organization gets you more excited, like Luke said. SIAI is donating/loaning round-number amounts to CFAR, so where you donate $2K does change marginal spending at both organizations - we're not going to be exactly re-fine-tuning the dollar amounts flowing from SIAI to CFAR based on donations of that magnitude. It's a genuine decision on your part, and has a genuine effect. But from my own standpoint, "flip a coin to decide which one" is pretty close to my own current stance. For this to be false would imply that SIAI and I had a substantive x-risk-estimate disagreement which resulted in too much or too little funding (from my perspective) flowing to CFAR. Which is not the case, except insofar as we've been giving too little to CFAR in the uncertain hope that it can scale up fundraising faster than SIAI later. Taking this uncertainty into account, the margins balance. Leaving it out, a marginal absolute dollar of spending at CFAR does more good (somewhat) (in my estimation).
↑ comment by Scott Alexander (Yvain) · 2012-12-25T06:08:05.841Z · LW(p) · GW(p)
Thank you; that helps clarify the issue for me. Since people who know more seem to think it's a tossup and SIAI motivates me more, I gave $250 to them.
↑ comment by wedrifid · 2012-12-20T01:53:58.423Z · LW(p) · GW(p)
And CFAR really really needs to succeed for SIAI to be viable in the long-term.
That's an extremely strong claim. Is that actually your belief? Not merely that CFAR success would be useful to SIAI success? There is no alternate plan for SIAI to be successful that doesn't rely on CFAR?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-20T03:24:29.465Z · LW(p) · GW(p)
I have backup plans, but they tend to look a lot like "Try founding CFAR again."
I don't know of any good way to scale funding or core FAI researchers for SIAI without rationalists. There's other things I could try, and would if necessary try, but I spent years trying various SIAI-things before LW started actually working. Just because I wouldn't give up no matter what, doesn't mean there wouldn't be a fairly large chunk of success-probability sliced off if CFAR failed, and a larger chunk of probability sliced off if I couldn't make any alternative to CFAR work.
I realize a lot of people think it shouldn't be impossible to fund SIAI without all that rationality stuff. They haven't tried it. Lots of stuff sounds easy if you haven't tried it.
↑ comment by wedrifid · 2012-12-20T03:43:45.610Z · LW(p) · GW(p)
Thank you, Eliezer. I'm fascinated by the reasoning and analysis that you're hinting at here. It helps put the decisions you and SIAI have made in perspective.
Could you give a ballpark estimate of how much of the importance of successful rationality spin offs is based on expectations of producing core FAI researchers versus producing FAI funding?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-20T03:59:18.039Z · LW(p) · GW(p)
I've tried less hard to get core FAI researchers than funding. I suspect that given sufficient funding produced by magic, it would be possible to solve the core-FAI-researchers issue by finding the people and talking to them directly - but I haven't tried it!
↑ comment by Halfwit · 2012-12-20T08:48:45.188Z · LW(p) · GW(p)
How much money would you need magicked to allow you to shed fundraising and infrastructure, etc, and just hire and hole up with a dream team of hyper-competent maths wonks? Restated, at which set amount would SIAI be comfortably able to aggressively pursue its long-term research?
↑ comment by hairyfigment · 2013-01-16T07:47:31.327Z · LW(p) · GW(p)
He once mentioned a figure of US $10 million / year. Feels like he's made a similar remark more recently, but it didn't show in my brief search.
↑ comment by Nick_Tarleton · 2013-06-07T21:04:45.526Z · LW(p) · GW(p)
So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...
an added dollar of marginal spending is more valuable at CFAR (in my estimates).
Is this still your view?
↑ comment by lukeprog · 2012-12-19T02:14:38.761Z · LW(p) · GW(p)
[SI has now] shifted from waterline-raising/community-building to more technical areas where CFAR success would be of less help to them
Remember that the original motivation for the waterline-raising/community-building stuff at SI was specifically to support SI's narrower goals involving technical research. Eliezer wrote in 2009 that "after years of bogging down [at SI] I threw up my hands and explicitly recursed on the job of creating rationalists," because Friendly AI is one of those causes that needs people to be "a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts."
So, CFAR's own efforts at waterline-raising and community-building should end up helping SI in the same way Less Wrong did, even though SI won't capture all or even most of that value, and even though CFAR doesn't teach classes on AI risk.
I've certainly found it to be the case that on average, people who get in contact with SI via an interest in rationality tend to be more useful than people who get in contact with SI via an interest in transhumanism or the singularity. (Though there are plenty of exceptions! E.g. Edwin Evans, Rick Schwall, Peter Thiel, Carl Shulman, and Louie Helm came to SI via the singularity materials.)
If someone has pretty good rationality skills, then it usually doesn't take long to persuade them of the basics about AI risk. But if someone is filtered instead for a strong interest in transhumanism or the singularity (and not necessarily rationality), then the conclusions they draw about AI risk, even after argument, often appear damn-near random.
There's also the fact that SI needs unusually good philosophers, and CFAR-style rationality training has some potential to help with that.
I'd be very interested in hearing from an SIAI higher-up whether they really want my money or whether they would prefer I give it to CFAR instead.
My own response to this has generally been that you should give to whichever organization you're most excited to support!
↑ comment by TheOtherDave · 2012-12-19T03:42:08.323Z · LW(p) · GW(p)
My own response to this has generally been that you should give to whichever organization you're most excited to support!
Why is that your response?
More precisely... do you actually believe that I should base my charitable giving on my level of excitement? Or do you assert that despite not believing it for some reason?
↑ comment by lukeprog · 2012-12-19T04:16:45.833Z · LW(p) · GW(p)
Why...?
Oh, right...
Basically, it's because I think both organizations Do Great Good with marginal dollars at this time, but the world is too uncertain to tell whether marginal dollars do more good at CFAR or SI. (X-risk reducers confused by this statement probably have a lower estimate of CFAR's impact on x-risk reduction than I do.) For normal humans who make giving decisions mostly by emotion, giving to the one they're most excited about should cause them to give the maximum amount they're going to give. For weird humans who make giving decisions mostly by multiplication, well, they've already translated "whichever organization you're most excited to support" into "whichever organization maximizes my expected utility [at least, with reference to the utility function which represents my philanthropic goals]."
comment by JGWeissman · 2013-01-04T15:26:42.564Z · LW(p) · GW(p)
I mailed a check for $20,000.
I'm excited about the pivot to research.
comment by Qiaochu_Yuan · 2012-12-25T02:55:31.153Z · LW(p) · GW(p)
I just donated $1,000... to CFAR. Does that still count?
↑ comment by lukeprog · 2012-12-26T01:17:15.201Z · LW(p) · GW(p)
Thanks! That counts for CFAR's drive.
comment by [deleted] · 2012-12-06T23:47:08.181Z · LW(p) · GW(p)
I assume a mailed cheque will work?
This post made me super excited. I was just thinking about donating before I found this. Now I really have to. Thanks for the initiative.
↑ comment by lukeprog · 2012-12-07T00:48:30.726Z · LW(p) · GW(p)
Certainly. Please see the instructions under 'Donate by Check' on the donate page. Thanks very much!
↑ comment by NancyLebovitz · 2012-12-08T12:43:04.769Z · LW(p) · GW(p)
In general, I'd say that people's desire to be anonymous should be respected unless there's a very good reason to override it, and solving a puzzle is not a very good reason.
↑ comment by A1987dM (army1987) · 2012-12-09T11:50:52.003Z · LW(p) · GW(p)
Anyway, he pretty much admitted who he is now.
comment by Benya (Benja) · 2013-01-04T15:09:15.069Z · LW(p) · GW(p)
Donated $150. One more day! Please donate, too!
comment by Furcas · 2012-12-07T04:38:10.353Z · LW(p) · GW(p)
Does agreeing to display my name in the public donor list help the SI in any way?
↑ comment by JoshuaFox · 2012-12-07T07:10:43.663Z · LW(p) · GW(p)
Social proof. Very useful.
↑ comment by Furcas · 2012-12-08T01:32:11.345Z · LW(p) · GW(p)
Okay, thanks.
Another question: Will my donation be matched even if I donate to the Singularity Institute For AI Canada Association?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-17T04:28:12.233Z · LW(p) · GW(p)
No.
↑ comment by Furcas · 2013-01-01T19:59:57.039Z · LW(p) · GW(p)
FYI, the SIAI Canada page on the Singularity Institute website still says this:
SIAI-CA is the Canadian ‘on-ramp’ for supporters of SIAI. We exist to facilitate Canadians in supporting the charitable objectives of SIAI. We do this in two ways: tax relief and oversight.
http://singularity.org/siai-canada/
I know if I hadn't asked Louie before donating to SIAI I would have donated to SIAI Canada, thinking it would have the same consequences except I'd get a tax break. I wonder how many thousands of dollars you've lost this way?
↑ comment by lukeprog · 2013-01-01T21:27:33.278Z · LW(p) · GW(p)
Not much, at least not since I took over SI in November 2011. SIAI-CA executed our recommendation for how to spend the last $5k they've spent since November 2011 — though it can be quite a lot of effort to find a good way for SIAI-CA to spend the money from Canada. Even more importantly, we know who all our biggest supporters in Canada are, so we've explained the situation to them personally and they generally donate directly rather than through SIAI-CA.
↑ comment by Alexei · 2012-12-08T21:38:44.917Z · LW(p) · GW(p)
It helps people like me, who look at it almost like a competition. The more people competing, the merrier.
↑ comment by Rain · 2012-12-08T22:14:02.526Z · LW(p) · GW(p)
Yeah, I wanted to catch Jaan Tallinn on the Top Donors page to prove some random middle-class person could do better charity than the rich types, but he keeps pulling further ahead and I dropped a couple places in the rankings :-/ Gotta work harder!
comment by [deleted] · 2013-01-10T14:19:18.908Z · LW(p) · GW(p)
Not sure if anyone else noticed, but the end date was pushed back until Jan 20. Although personally, I would rather donate to CFAR (and have done so, $500, and another $500 before the fundraiser timeframe.)
↑ comment by gwern · 2012-12-16T02:56:35.040Z · LW(p) · GW(p)
What report is that? A site-search for "140,000" turns up a number of figures but none from EY; the latest Form 990 I know of lists his compensation at ~$104k (pg7, summing both columns) or ~50% less than your number.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-16T06:49:50.902Z · LW(p) · GW(p)
I've sometimes earned more than my SIAI base salary from speaking fees, but I've never earned $140K in any year, and will cheerfully exhibit my tax returns if Luke, Holden, or any other sufficiently reputable entity requests them. I've also got no idea what that "estimated extra compensation" line is about, unless it's health insurance or something - per the wishes of Peter Thiel, SIAI never pays $100k in any year to any employee, including bonuses.
(Note that, as usual when a poster has received many sufficiently extreme downvotes in their history, I designate them a troll and delete their comments at will.)
comment by JMiller · 2012-12-06T20:51:29.974Z · LW(p) · GW(p)
Luke, the link in the third line "Now is your chance to double your impact while helping us raise up to $230,000 to help fund our research program" does not work.
↑ comment by lukeprog · 2012-12-06T21:04:20.293Z · LW(p) · GW(p)
Meant to go to singularity.org/research. Will an editor please fix? I'm working from my phone now.
↑ comment by Vladimir_Nesov · 2012-12-06T21:17:22.811Z · LW(p) · GW(p)
Fixed.
comment by ArisKatsaris · 2012-12-24T00:24:43.038Z · LW(p) · GW(p)
...unless the donation bar is lagging, slightly less than 1/3rd the hoped-for sum has been filled, with only about 11 days remaining. That's rather worrisome.
comment by Qiaochu_Yuan · 2012-12-08T08:45:02.243Z · LW(p) · GW(p)
Do we get some kind of reasonable guarantee that there won't in the future be an even better matching offer (say a tripling of our impact), or is the idea here that the value of an SIAI donation is heavily time discounted?
↑ comment by Benya (Benja) · 2012-12-08T19:15:33.501Z · LW(p) · GW(p)
There doesn't seem to be anything SIAI would gain from running such a program. If big donors are willing to give $N to match donations, if donations are matched dollar-for-dollar then SIAI can reasonably hope to raise $2N in the fundraiser; if donations are matched two-dollars-for-every-dollar, SIAI will only get $(3/2)N. Unless, of course, the big donors would donate more if SIAI sets up the second type of matching program, but why would they?
The only scenario I can see where this would make sense is if SIAI expects small donors to donate less than $(1/2)N in a dollar-for-dollar scheme, so that its total gain from the fundraiser would be below $(3/2)N, but expects to get the full $(3/2)N in a two-dollars-for-every-dollar scheme. But not only does this seem like a very unlikely story, even if it did happen it seems that you should want to donate in the current fundraiser if you're willing to do so at all, since this means that more matching funds would be available in the later two-dollars-for-every-dollar fundraiser for getting the other people to donate who we are postulating aren't willing to donate at dollar-for-dollar.
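Benja's comparison can be checked with a few lines of arithmetic. The $115,000 figure below is just this fundraiser's matching total, used for illustration:

```python
# Totals raised under the two matching schemes Benja compares,
# where N is the pool of matching funds pledged by big donors.
N = 115_000

# 1:1 matching: the pool is exhausted when small donors give N,
# so the fundraiser can raise N (small donors) + N (matching) = 2N.
total_1_to_1 = N + N

# 2:1 matching: the same pool is exhausted once small donors give
# N/2, so the fundraiser tops out at N/2 + N = (3/2)N.
total_2_to_1 = N / 2 + N

assert total_1_to_1 > total_2_to_1
```

With the same matching pool, the dollar-for-dollar scheme caps out at $230,000 versus $172,500 for two-dollars-per-dollar, which is why the steeper match only pays off if it draws in donations that 1:1 matching would not.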
Replies from: Benja↑ comment by Benya (Benja) · 2013-12-24T04:41:13.817Z · LW(p) · GW(p)
The only scenario I can see where this would make sense is if SIAI expects small donors to donate less than $(1/2)N in a dollar-for-dollar scheme, so that its total gain from the fundraiser would be below $(3/2)N, but expects to get the full $(3/2)N in a two-dollars-for-every-dollar scheme. But not only does this seem like a very unlikely story [...]
One year later, the roaring success of MIRI's Winter 2013 Matching Challenge, which is offering 3:1 matching for new large donors (people donating >= $5K who have donated less than $5K in total in the past) -- almost $232K of the $250K maximum donated at the time of writing, with more than three weeks left, whereas the Winter 2012 Fundraiser the parent is commenting on only reached its goal of $115K after a deadline extension, and the Summer 2013 Matching Challenge only reached its $200K goal around the time of the deadline -- means that I pretty much need to eat my hat on the "very unlikely story" comment above. (There's clearly an upward growth curve as well, but it does seem clear that lots of people wanted to take advantage of the 3:1.)
So far I still stand by the rest of the comment, though:
[...] even if it did happen it seems that you should want to donate in the current fundraiser if you're willing to do so [at 1:1 matching], since this means that more matching funds would be available in the later two-dollars-for-every-dollar fundraiser for getting the other people to donate who we are postulating aren't willing to donate at dollar-for-dollar.
↑ comment by Kevin · 2012-12-09T10:10:44.526Z · LW(p) · GW(p)
Given that historically SI has completed all matching drives to 100%, I wouldn't even recommend waiting for a 2x match to donate.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-12-09T10:59:16.661Z · LW(p) · GW(p)
Probably the best of all is to be a matching drive sponsor.
Replies from: Kevin↑ comment by Kevin · 2012-12-09T11:01:45.687Z · LW(p) · GW(p)
Probably the best of all is to be a matching drive sponsor.
I can't argue with that!
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-12-09T11:05:48.519Z · LW(p) · GW(p)
Yep!
To stay honest though, if someone is reading this thread and planning to do this, they should contact SI now with the amount they're willing to match during a future drive... otherwise they're highly liable to fall prey to donor akrasia.
↑ comment by SarahSrinivasan (GuySrinivasan) · 2012-12-10T01:03:38.228Z · LW(p) · GW(p)
I seem to recall reading a study that concluded that the multiplier on the match (above 0.5x) doesn't change the increase in donations much. A cursory search didn't turn it up again, though.
comment by drethelin · 2013-01-01T22:31:40.632Z · LW(p) · GW(p)
How encouraging is it to people to see comments saying people donated? To me it just seems like kinda self-aggrandizing karma whoring. Have you read this thread and been influenced to donate or to donate more?
Replies from: Qiaochu_Yuan, ArisKatsaris↑ comment by Qiaochu_Yuan · 2013-01-10T02:26:38.158Z · LW(p) · GW(p)
I was influenced both to donate and to donate more. Social proof is very powerful. I also would not have posted if I didn't think it would encourage people to donate or donate more.
↑ comment by ArisKatsaris · 2013-01-01T23:10:53.094Z · LW(p) · GW(p)
self aggrandizing karma whoring
If I didn't hope it would help encourage others, I wouldn't post about my donation. I can't think of reasons why knowing of a donation of mine might discourage others from donating, so I believe it will help encourage them, even if minimally so.
Have you read this thread and been influenced to donate or to donate more?
Generalizing to "this and similar threads", I think the answer is yes, as far as I'm concerned.
comment by Halfwit · 2013-01-10T02:08:14.364Z · LW(p) · GW(p)
I highly support changing your name--there's all sorts of bad juju associated with the term "singularity". My advice, keep the new name as bland as possible, avoiding anything with even a remote chance of entering the popular lexicon. The term "singularity" has suffered the same fate as "cybernetics".
Replies from: MugaSofer↑ comment by MugaSofer · 2013-01-10T11:23:49.403Z · LW(p) · GW(p)
I note that you've retracted your post, but I still feel the need to ask: shouldn't the name reflect what they do?
Replies from: Halfwit↑ comment by Halfwit · 2013-01-10T17:45:26.504Z · LW(p) · GW(p)
In terms of minimizing the status loss for academics affiliating with SIAI, a banal minimally-descriptive name may be superior. People often overestimate the value of the piquant. Beige may not excite, but it doesn't offend. Any term which has the potential to become a buzzword, or acquire alternative definitions, should be avoided. The more exciting the term, the higher the chance of appropriation.
This was the point I was trying to make; on rereading it after posting, I realized it was remarkably poorly written and wasn't even clearly conveying what I was thinking when I wrote it. I didn't have time to edit it then, so I retracted.
Replies from: army1987, MugaSofer↑ comment by A1987dM (army1987) · 2013-01-10T17:51:00.336Z · LW(p) · GW(p)
BTW, here's an interesting blog post about considerations relevant to naming stuff.