MIRI's 2013 Summer Matching Challenge

post by lukeprog · 2013-07-23T19:05:56.873Z · LW · GW · Legacy · 122 comments

Contents

    Donate Now!
  Accomplishments in 2013 so far
  Future Plans You Can Help Support

(MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.)

Update 09-15-2013: The fundraising drive has been completed! My thanks to everyone who contributed.

The original post follows below...

Thanks to the generosity of several major donors, every donation to the Machine Intelligence Research Institute made from now until (the end of) August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!  

Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.

This post is also a good place to ask your questions about our activities and plans — just post a comment!

If you have questions about what your dollars will do at MIRI, you can also schedule a quick call with MIRI Deputy Director Louie Helm: louie@intelligence.org (email), 510-717-1477 (phone), louiehelm (Skype).



Early this year we made a transition from movement-building to research, and we've hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on the future's most important problem.

Accomplishments in 2013 so far

Future Plans You Can Help Support

(Other projects are still being surveyed for likely cost and strategic impact.)

We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward.

If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.

$200,000 of total matching funds has been provided by Jaan Tallinn, Loren Merritt, Rick Schwall, and Alexei Andreev.

122 comments

Comments sorted by top scores.

comment by Rain · 2013-07-23T13:09:26.839Z · LW(p) · GW(p)

I continue to donate $1000 per month despite a 20% pay cut.

Replies from: Rain, lukeprog, Gurkenglas
comment by Rain · 2013-07-24T18:04:41.006Z · LW(p) · GW(p)

To those discussing monthly vs. lump sum: Luke recognized the issue a few days ago and counted 6 months' worth of my donations toward this matching drive, much as the original commitment was counted at 12 months (now all paid).

Replies from: lukeprog
comment by lukeprog · 2013-07-24T20:36:00.399Z · LW(p) · GW(p)

Right.

If you prefer to donate monthly but also want to take advantage of matching, just tell us you're pledging to keep up the monthly donation for at least 6 months, and we'll count 6 months' worth of your donations toward the drive.

Replies from: Leonhart
comment by Leonhart · 2013-07-28T21:09:45.426Z · LW(p) · GW(p)

Luke, I've been donating monthly for, um, I think a couple of years now? I so pledge, in addition to anything else I donate. PM me if you need ID to verify this.

Replies from: lukeprog
comment by lukeprog · 2013-07-29T00:02:08.067Z · LW(p) · GW(p)

PMed.

comment by lukeprog · 2013-07-23T19:58:04.946Z · LW(p) · GW(p)

Thanks very much!

comment by Gurkenglas · 2013-07-24T14:43:16.441Z · LW(p) · GW(p)

I advise you, and everyone with a similar precommitment, to take out a bank loan that can be paid off over the next years or decades at $1000 per month, stop regular donations for that period, and donate the loan proceeds immediately so that you can take advantage of the doubling.

Replies from: ArisKatsaris, Gurkenglas
comment by ArisKatsaris · 2013-07-24T15:08:53.654Z · LW(p) · GW(p)

Putting yourself in debt for such a purpose is bad policy.

The "doubling" is a donation drive tactic. It happens every six months or so, and it has an upper limit. Nobody should put themselves in debt in order to donate.

Replies from: Jiro, Gurkenglas
comment by Jiro · 2013-07-24T15:37:57.542Z · LW(p) · GW(p)

I think that's his point: Precommitting to donate $X per month regardless of your personal circumstances is equivalent to taking out a loan that can be paid off at $X per month and donating the proceeds of the loan. The latter course of action is self-evidently bad; the donation of $X per month is bad for the same reasons.

On the other hand, perhaps my sarcasm detector is miscalibrated and he really means it, in which case yeah, taking out a loan to donate is stupid.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-07-24T15:58:36.988Z · LW(p) · GW(p)

I indeed find the two cost-equivalent. I did not doubt that there are people who can precommit to donating monthly; I figured that if someone precommits, he might as well borrow to increase efficiency. Yes, if taking a loan is bad, then precommitting is of course bad too.

I did not realize that these things happen half-yearly. With that in mind, everyone who donates monthly should stop, and instead save up money to donate during the next doubling. (Are the $200,000 caps usually reached?)

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-07-24T16:17:22.707Z · LW(p) · GW(p)

I did not realize that these things happen half-yearly. With that in mind, everyone who donates monthly should stop

Look, perhaps you should consider that you're not in the best position to offer suggestions on this topic, given how little information you have about it.

For example, I remember (I don't have a link handy, alas) someone from MIRI saying that monthly donations are better for them, since they're a more reliable source of money that allows them to plan ahead to some extent.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-07-24T16:23:48.339Z · LW(p) · GW(p)

Then why are the major donors rewarding six-monthly payments?

What sources of information on the topic would you recommend?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-07-24T16:32:43.213Z · LW(p) · GW(p)

Then why are the major donors rewarding six-monthly payments?

I'm guessing it's because they feel it increases donations among the people who don't donate monthly.

What sources of information on the topic would you recommend?

When in doubt about what policy is best, then perhaps ask what the beneficiaries themselves think is best policy? They have probably thought about it longer than you.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-07-24T16:43:42.684Z · LW(p) · GW(p)

I'm guessing it's because they feel it increases donations among the people who don't monthly-donate.

Then shouldn't they run the doublings in different areas/communities each month so as to normalize the resulting donations across time?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-07-24T16:47:36.977Z · LW(p) · GW(p)

I think I'll stop answering questions. I don't see why you're interested in my guesswork, rather than asking such questions to MIRI itself who could answer them more definitively.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-07-24T17:03:47.456Z · LW(p) · GW(p)

Can you suggest a place to ask such questions publicly?

comment by Gurkenglas · 2013-07-24T15:17:43.018Z · LW(p) · GW(p)

Gather everyone with a precommitment like Rain's; take the half who donate least per month and sum their monthly donations; take out a collective loan whose monthly payment equals that sum; have each participant pay a fraction of the monthly interest equal to the ratio of his monthly donation to the sum of all participants' monthly donations; and donate the rest. This will work without problems as long as at least half the participants continue to pay. If that isn't enough, decrease the collective loan until the probability of failure is under 10^-10.

Alternatively, I advise everyone with a precommitment like Rain's who also has a savings account on general principles to empty it into a donation now and refill it over the next years/decades with the monthly donation.
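The interest-splitting arithmetic in the pooled-loan proposal can be sketched as follows. This is purely illustrative: the donor names, loan size, and 0.5% monthly rate are hypothetical numbers of mine, not figures from the thread.

```python
def interest_shares(monthly_donations, monthly_rate, principal):
    """Split one month's interest on a collective loan among participants,
    in proportion to each participant's regular monthly donation."""
    total = sum(monthly_donations.values())
    interest = principal * monthly_rate
    return {name: interest * amount / total
            for name, amount in monthly_donations.items()}

# Three hypothetical donors pledging $1000, $600, and $400 per month;
# the pool takes out a $100,000 loan at 0.5% monthly interest.
donors = {"alice": 1000, "bob": 600, "carol": 400}
shares = interest_shares(donors, monthly_rate=0.005, principal=100_000)
# The $500 of monthly interest is split 50% / 30% / 20%.
```

The scheme's safety margin comes from sizing the loan payment to only the bottom half of pledges, so the pool stays solvent as long as at least half the participants keep paying.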

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-07-24T15:26:17.521Z · LW(p) · GW(p)

What, no selling of kidneys?

Replies from: Gurkenglas
comment by Gurkenglas · 2013-07-24T16:03:46.800Z · LW(p) · GW(p)

Was that referring to requiring too much from the donors, or comparing the effectiveness of my suggestion to that of having your kidney taken out rather than earning money during that time?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-07-24T16:27:04.858Z · LW(p) · GW(p)

I think that telling people to empty their savings accounts is bad advice of a similar kind to asking them to sell a kidney, both in regard to the consequences for the people and the consequences for MIRI.

The people suffer for no good reason.
MIRI gets a bad rep for destroying people's lives.
Everyone loses.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-07-24T16:33:00.779Z · LW(p) · GW(p)

Then shouldn't everyone who donates monthly stop that and feed the money into a savings account on general principles instead?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-07-24T16:40:54.258Z · LW(p) · GW(p)

No. It doesn't follow that you must either be naked or wear a mink coat. It doesn't follow you must either be a complete hermit or a total party animal. It doesn't follow that you need either have an empty savings account, or put everything in it.

Replies from: drnickbone, Gurkenglas
comment by drnickbone · 2013-07-24T17:03:07.768Z · LW(p) · GW(p)

This whole crazy thread just reminds me of Why Our Kind Can't Cooperate

comment by Gurkenglas · 2013-07-24T16:53:13.078Z · LW(p) · GW(p)

I beg to differ.

http://tvtropes.org/pmwiki/pmwiki.php/Main/GoldenMeanFallacy

https://en.wikipedia.org/wiki/Argument_to_moderation

which applies here because of the argument presented here, and here.

Replies from: AlexMennen
comment by AlexMennen · 2013-07-24T17:15:38.631Z · LW(p) · GW(p)

Humans have decreasing marginal utility with money, so even if we model donors as perfectly rational (a bad assumption, of course), it still makes sense to donate some but not all of your money. Just because people sometimes have a bias towards finding a middle ground doesn't mean that finding a middle ground is always wrong.

The xkcd you linked to is a good demonstration of one of the reasons that asking people to go into debt to fund MIRI is a horrible idea.

comment by Gurkenglas · 2013-07-24T14:43:20.262Z · LW(p) · GW(p)

Maybe we should ask the major donors whether they would split part of the $200,000 off this action's upper limit, pledging instead to donate $1 from that fund for every dollar regularly donated per month over the next years/decades? That way, the banks wouldn't receive a cut of the money.

Or we could ask the major donors whether one could loan from them instead, so they receive the loan interest to reward them for their pledges.

comment by iceman · 2013-07-23T06:44:09.463Z · LW(p) · GW(p)

Wrote a cheque for $5,000.

(I put the redacted image of my donation online because someone else decided to start an ad-hoc fundraising effort for MIRI on FIMFiction.)

Replies from: lukeprog
comment by lukeprog · 2013-07-23T19:57:50.916Z · LW(p) · GW(p)

Thanks very much!

I would also like to affirm that thread's claim that "if what you really want is ponies, the Truly Friendly AI will in fact give you ponies." ("Really want", of course, requires lots of unpacking.)

Replies from: Eliezer_Yudkowsky, iceman
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-24T20:55:48.602Z · LW(p) · GW(p)

Not quite affirmable; a CEV-based FAI only gives you ponies if, on average, what everyone would-want is to give a pony to someone who would-want one. (Because an individually based mechanism might e.g. look at babies and determine that what they would-want as individuals is eternal feeding and burping.)

Replies from: Gurkenglas
comment by Gurkenglas · 2013-12-19T01:04:35.320Z · LW(p) · GW(p)

Can you find a more obviously bad example of the implications of individually-based CEV? I find that if a baby would-want to feed and burp, that's what it should get; and if two parents want to spawn another human with CEV rights, they might not want it to initially would-want only to feed and burp.

comment by iceman · 2013-07-23T23:49:27.834Z · LW(p) · GW(p)

Yes, though I find it improbable that they'd Really Want ponies.

(Devil's advocate: there are people who participate in the fandom daily, and have big chunks of their identity tied up in being a brony. If there were actually a population where people would Really Want ponies, this would be the one.)

comment by Furcas · 2013-07-23T02:59:27.573Z · LW(p) · GW(p)

Donated 500 USD (~530 CAD).

Replies from: lukeprog
comment by lukeprog · 2013-07-23T03:18:06.325Z · LW(p) · GW(p)

Thanks very much!

comment by scotherns · 2013-07-23T11:17:19.685Z · LW(p) · GW(p)

Donated $50 (on top of my automated monthly donation).

Replies from: lukeprog
comment by lukeprog · 2013-07-23T19:58:10.558Z · LW(p) · GW(p)

Thanks!

comment by David Althaus (wallowinmaya) · 2013-07-23T20:20:36.602Z · LW(p) · GW(p)

Donated $500.

Replies from: lukeprog
comment by lukeprog · 2013-07-24T03:09:44.719Z · LW(p) · GW(p)

Thanks very much!

comment by Eneasz · 2013-07-24T03:04:26.449Z · LW(p) · GW(p)

$200, and finally signed up for monthly donations as well.

Replies from: lukeprog
comment by lukeprog · 2013-07-24T03:09:55.623Z · LW(p) · GW(p)

Awesome, thanks!

comment by BT_Uytya · 2013-07-24T06:43:02.288Z · LW(p) · GW(p)

$10.00

I'm a student and this is my second PayPal transaction ever, so I was a bit scared to donate more.

Hopefully my example will inspire others. $10.00 isn't very much, but come on, it's not like it is worse than not donating anything at all.

Replies from: somervta, OnTheOtherHandle, army1987, lukeprog, Adele_L
comment by somervta · 2013-08-01T12:54:01.594Z · LW(p) · GW(p)

It did. Also a student, just made my first donation. Whaddya know, public announcement of donations really does motivate!

comment by OnTheOtherHandle · 2013-07-25T07:10:18.020Z · LW(p) · GW(p)

This does make me feel better - thanks. I'm just entering college and don't even have a bank account yet, but your post inspired me to get one fast so I can donate whatever I can afford within the matching window :)

comment by A1987dM (army1987) · 2013-07-26T19:24:05.219Z · LW(p) · GW(p)

$10.00 isn't very much, but come on, it's not like it is worse than not donating anything at all.

http://xkcd.com/871/

:-)

comment by lukeprog · 2013-07-24T20:36:49.943Z · LW(p) · GW(p)

Thanks!

comment by Adele_L · 2013-07-24T16:30:21.438Z · LW(p) · GW(p)

it's not like it is worse than not donating anything at all.

Could be, if the transaction/processing/etc. fees are more. I think $10 is enough to be a net positive, but I'm not sure where the threshold is.

Replies from: malo
comment by Malo (malo) · 2013-07-24T16:53:47.579Z · LW(p) · GW(p)

Transaction fees for non-profits such as MIRI on PayPal are 2.2% + $0.30, and the processing is automated with our donor database solution so it's definitely net positive :)
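Given those figures, the net amount MIRI receives is easy to compute; here is a minimal sketch (the 2.2% + $0.30 schedule is as quoted above, but the function name and rounding are mine):

```python
def net_donation(amount, percent_fee=0.022, fixed_fee=0.30):
    """Amount received after the quoted PayPal nonprofit fee is deducted."""
    return round(amount * (1 - percent_fee) - fixed_fee, 2)

# A $10 donation nets $9.48 under this schedule, so even small
# donations come through almost entirely intact.
```

By this formula a donation has to exceed roughly 31 cents before anything arrives at all, which is presumably the threshold being asked about above.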

comment by ArisKatsaris · 2013-08-05T18:50:08.626Z · LW(p) · GW(p)

$1,200 donated.

I'd like to remark on something that annoys me: your "donation meter" (at least the one on your site, if not the one in the post above) ought either to be updated daily or, at the very least, note when it was last updated. I find the phrase "raised to date" frustrating and annoying when I can't trust that the "to date" is actually current.

Replies from: lukeprog, lukeprog, Kawoomba
comment by lukeprog · 2013-08-07T18:08:34.526Z · LW(p) · GW(p)

Donation meters updated, courtesy of Malo Bourgon.

comment by lukeprog · 2013-08-05T22:47:34.670Z · LW(p) · GW(p)

Thanks very much!

I'll ask our web guys if we can add a 'last updated' thing somewhere.

comment by Kawoomba · 2013-08-05T19:21:12.456Z · LW(p) · GW(p)

Well, one way to be sure would be to donate the remainder.

comment by [deleted] · 2013-08-07T18:18:41.989Z · LW(p) · GW(p)

yadda yadda, continuing to donate 1/3 of my income, yadda yadda

feed me karma

Replies from: lukeprog
comment by lukeprog · 2013-08-07T18:50:33.469Z · LW(p) · GW(p)

:)

comment by wmorgan · 2013-07-26T17:47:36.194Z · LW(p) · GW(p)

$3,000.00

Replies from: lukeprog
comment by lukeprog · 2013-07-26T19:10:45.976Z · LW(p) · GW(p)

Thanks very much!

comment by Rain · 2013-07-28T23:50:43.499Z · LW(p) · GW(p)

I'd like to say that PMs from private_messaging disparaging this drive and my donations will NOT deter me from funding the mission I feel will help lead to the best possible future.

comment by Benquo · 2013-07-31T01:29:04.043Z · LW(p) · GW(p)

$1,000 and my employer will match it.

Replies from: lukeprog
comment by lukeprog · 2013-07-31T02:34:12.099Z · LW(p) · GW(p)

Thanks very much!

comment by player_03 · 2013-07-26T16:46:38.783Z · LW(p) · GW(p)

I donated $1000 and then went and bought Facing the Intelligence Explosion for the bare minimum price. (Just wanted to put that out there.)

I've also left myself a reminder to consider another donation a few days before this runs out. It'll depend on my financial situation, but I should be able to manage it.

Replies from: player_03, lukeprog
comment by player_03 · 2013-08-11T00:05:41.101Z · LW(p) · GW(p)

I've donated a second $1000.

Replies from: lukeprog
comment by lukeprog · 2013-08-11T00:24:37.236Z · LW(p) · GW(p)

Thanks again!!

comment by lukeprog · 2013-07-26T19:11:03.934Z · LW(p) · GW(p)

Thanks very much!

comment by So8res · 2013-07-24T17:20:04.913Z · LW(p) · GW(p)

I donated. My employers match charitable donations, though not always in a timely fashion. I'm hoping that their contribution can be further matched.

Replies from: lukeprog
comment by lukeprog · 2013-07-24T20:39:24.819Z · LW(p) · GW(p)

Thanks! Who is your employer? We may need to send them some forms. We already have donation matching set up with Google, Microsoft, Boeing, Adobe, Fannie Mae, and several other companies through Network for Good and America's Charities.

You can also contact me privately via email.

Replies from: So8res
comment by So8res · 2013-07-24T22:38:07.115Z · LW(p) · GW(p)

Google.

comment by So8res · 2013-08-04T20:36:52.035Z · LW(p) · GW(p)

Wow, MIRI is more underfunded than I thought. I donated again, after freeing up some cash.

Replies from: lukeprog
comment by lukeprog · 2013-08-04T21:31:16.524Z · LW(p) · GW(p)

Awesome, thanks!

comment by Ozymandias_King · 2013-07-30T19:55:29.648Z · LW(p) · GW(p)

Donated $1400

Replies from: lukeprog
comment by lukeprog · 2013-07-30T20:25:44.142Z · LW(p) · GW(p)

Thanks very much!

comment by Larks · 2013-07-25T12:54:36.421Z · LW(p) · GW(p)

Currently between jobs; donated $100 anyway, as the world is not a story and will not wait for my montage to finish.

Replies from: lukeprog
comment by lukeprog · 2013-07-25T19:50:59.660Z · LW(p) · GW(p)

Thanks!

comment by JGWeissman · 2013-08-13T14:39:19.894Z · LW(p) · GW(p)

I donated $5000.

Replies from: lukeprog
comment by lukeprog · 2013-08-13T17:55:36.895Z · LW(p) · GW(p)

Thanks very much!

comment by kgalias · 2013-07-30T12:53:46.543Z · LW(p) · GW(p)

Donated $50.

Replies from: lukeprog
comment by lukeprog · 2013-07-30T17:37:22.707Z · LW(p) · GW(p)

Thanks!

comment by Kawoomba · 2013-08-08T18:35:34.092Z · LW(p) · GW(p)

$50.

Replies from: lukeprog
comment by lukeprog · 2013-08-08T18:46:53.896Z · LW(p) · GW(p)

Thanks!

comment by KnaveOfAllTrades · 2013-08-15T21:43:46.246Z · LW(p) · GW(p)

Donated $50.

Replies from: lukeprog
comment by lukeprog · 2013-08-15T23:38:57.873Z · LW(p) · GW(p)

Thanks!

comment by khafra · 2013-08-09T22:10:38.030Z · LW(p) · GW(p)

Donated 0.9766578425 bitcoins, a number I chose since that's Chaitin's Omega for the shortest FAI.

Replies from: Eliezer_Yudkowsky, nshepperd, lukeprog
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-10T03:27:23.967Z · LW(p) · GW(p)

(Consults Inverse Chaitin function in Wolfram Alpha.)

Actually, is there a definition of Chaitin's Omega for particular programs? I thought it was just for universal Turing machines, or program classes with a measure on them anyway.

Replies from: khafra, endoself, khafra
comment by khafra · 2013-08-11T02:57:41.795Z · LW(p) · GW(p)

Whoops, that's right. I, ah, may have just unleashed a trolly AI.

comment by endoself · 2013-10-18T23:16:23.997Z · LW(p) · GW(p)

Yes, you can take the probability that they will halt given a random input. This is analogous to the case of a universal Turing machine, since the way we ask it to simulate a random Turing machine is by giving it a random input string.
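The construction described here can be written out explicitly; this is the standard definition (my addition, not from the thread) of the halting probability of a particular prefix-free machine $M$:

```latex
\Omega_M = \sum_{x \,:\, M(x)\ \text{halts}} 2^{-|x|}
```

where the sum ranges over the (prefix-free) input strings $x$ on which $M$ halts; choosing $M$ to be a universal prefix machine recovers Chaitin's usual $\Omega$.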

comment by khafra · 2013-10-18T22:41:09.754Z · LW(p) · GW(p)

Dangit, I should've said "the FAI is Turing-complete, you can carry out arbitrary computations simply by running it in carefully selected universes."

With a five orders of magnitude improvement in timing, I could be witty.

comment by nshepperd · 2013-08-10T03:53:36.610Z · LW(p) · GW(p)

Is that the probability that the shortest FAI halts given random input?

comment by lukeprog · 2013-08-10T01:37:13.721Z · LW(p) · GW(p)

Thanks!

comment by LucasSloan · 2013-07-29T21:06:06.679Z · LW(p) · GW(p)

Donated $50.

Replies from: lukeprog
comment by lukeprog · 2013-07-29T21:17:31.199Z · LW(p) · GW(p)

Thanks!

comment by Halfwitz · 2013-07-26T23:50:44.512Z · LW(p) · GW(p)

Just curious, whatever happened to EY's rationality books? You invested months of effort into them. Did you pull a sunk cost reversal? Or is publishing them not on the schedule until next year?

Replies from: Eliezer_Yudkowsky, lukeprog
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-27T04:46:23.454Z · LW(p) · GW(p)

The drafts came out unexciting according to reader reports. I suspect that magical writing energy ['magic' = not understood] was diverted from the rationality book into the first 63 chapters of HPMOR, which I was doing in my 'off time' while writing the book, and which does have Yudkowskian magic according to readers. HPMOR and CFAR between them used up a lot of the marginal utility I thought we would get from the book, which diminishes the marginal utility of completing it.

Replies from: shminux, Halfwitz
comment by shminux · 2013-07-27T05:38:13.228Z · LW(p) · GW(p)

Shoulda hired Yvain to retell them :)

Replies from: Eliezer_Yudkowsky, Halfwitz
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-27T22:24:25.737Z · LW(p) · GW(p)

We tried that experiment, but Yvain was heading off to a new job and his first stab didn't seem to be a quick fix.

comment by Halfwitz · 2013-07-27T05:44:42.014Z · LW(p) · GW(p)

That makes a lot of sense actually. I can't think of anyone who could do a better job.

comment by Halfwitz · 2013-07-27T05:49:12.458Z · LW(p) · GW(p)

You should consider posting the drafts somewhere. At the very least, we'll get new material to add to the wiki. Wikis don't need 'magic.'

Replies from: Kawoomba
comment by Kawoomba · 2013-07-27T06:59:39.818Z · LW(p) · GW(p)

It's better not to publish than to publish something unpolished. "This is just a draft" wouldn't sufficiently counteract the impression of "I read something of perceived-lower quality, and it's from EY."

Publish unpolished and perish, if you will.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-27T07:58:42.732Z · LW(p) · GW(p)

I wasn't especially happy with the reception / effects of publishing the unpolished TDT draft.

comment by lukeprog · 2013-07-27T04:24:48.774Z · LW(p) · GW(p)

They aren't a priority for this year. We briefly contracted with two different writers who might have been able to finish the books without pulling Eliezer away from his other priorities, but that didn't work. We're still thinking about what is best to do with the drafts.

comment by Simon Fischer (SimonF) · 2013-08-15T11:43:21.507Z · LW(p) · GW(p)

$66, with some help from a friend.

Replies from: lukeprog
comment by lukeprog · 2013-08-15T17:11:28.950Z · LW(p) · GW(p)

Thanks!

comment by Rukifellth · 2013-07-24T22:22:40.369Z · LW(p) · GW(p)

I've only just now gotten a job, and may owe my dad too much money to contribute much to this donation drive, but I'll see what I can do. If things go as planned, I might be able to give $700 by the deadline.

Also, isn't three weeks something of a short window?

Replies from: lukeprog
comment by lukeprog · 2013-07-25T00:25:45.028Z · LW(p) · GW(p)

Also, isn't three weeks something of a short window?

We announced it to our blog on July 8th, and to our newsletter a bit after that. This is just the first time we mentioned it on LW.

Replies from: Rukifellth, ArisKatsaris, Rukifellth
comment by Rukifellth · 2013-08-10T03:08:04.256Z · LW(p) · GW(p)

Paycheck came in, donated the 700!

Replies from: lukeprog
comment by lukeprog · 2013-08-10T03:31:19.084Z · LW(p) · GW(p)

Thanks!

Replies from: Kawoomba
comment by Kawoomba · 2013-08-10T10:51:33.940Z · LW(p) · GW(p)

You mean "Thanks very much!", to stay consistent.

You're welcome!

comment by ArisKatsaris · 2013-07-25T15:16:29.302Z · LW(p) · GW(p)

I'm still surprised that it took you two weeks to mention it on Less Wrong. Was this delay by neglect or by design?

Replies from: lukeprog
comment by lukeprog · 2013-07-25T19:54:05.324Z · LW(p) · GW(p)

In part, we wanted to learn something about the degree to which donors are following the blog, following our newsletter, or following Less Wrong. I also wanted to be able to link from this post to a forthcoming interview with Benja Fallenstein that explains in more detail what we actually do at the workshops and why, but that was taking too long to complete, so I decided to just hurry up and post.

Replies from: somervta, Benja
comment by somervta · 2013-08-01T12:58:56.042Z · LW(p) · GW(p)

Data point: I was following the blog, which is where I first heard about the drive, but it was the comments on this thread which got me to finally donate.

comment by Benya (Benja) · 2013-08-10T19:39:56.977Z · LW(p) · GW(p)

For people stumbling upon this in the future: That interview has now been published. (Sorry to have been the cause of that delay :-/)

comment by Rukifellth · 2013-07-25T01:29:11.420Z · LW(p) · GW(p)

I see, my apologies.

comment by JonahS (JonahSinick) · 2013-07-22T22:09:28.866Z · LW(p) · GW(p)

The links to Eliezer's Open Problems in FAI papers are broken.

Replies from: Louie
comment by Louie · 2013-07-22T22:58:21.714Z · LW(p) · GW(p)

Fixed. Thanks.

comment by amcknight · 2013-07-28T20:54:13.546Z · LW(p) · GW(p)

For the goal of eventually creating FAI, it seems work can be roughly divided into making the first AGI (1) have humane values and (2) keep those values. Current attention seems to be focused on the 2nd category of problems. The work I've seen in the first category: CEV (9 years old!), Paul Christiano's man-in-a-box indirect normativity, Luke's decision neuroscience, Daniel Dewey's value learning... I really like these approaches but they are only very early starting points compared to what will eventually be required.

Do you have any plans to tackle the humane values problem? Do MIRI-folk have strong opinions on which direction is most promising? My worry is that if this problem really is as intractable as it seems, then working on problem (2) is not helpful, and our only option might be to prevent AGI from being developed through global regulation and other very difficult means.

Replies from: lukeprog
comment by lukeprog · 2013-07-28T22:29:33.177Z · LW(p) · GW(p)

Do you have any plans to tackle the humane values problem?

Yes. The next open problem description in Eliezer's writing queue is in this category.

comment by Rukifellth · 2013-08-11T03:55:44.543Z · LW(p) · GW(p)

I think small donors should also state their donation amounts of $50-100. Having counted the medium and large donations in this thread to a rough total of $11,000, it seems unlikely that the goal will be reached with just those, and I have a feeling there would be a "breaking the ice" effect if small donors spoke up about their chip-ins. Right now the medium and large donors represented in this thread eclipse the small ones.

comment by ShardPhoenix · 2013-08-03T09:11:58.478Z · LW(p) · GW(p)

I was going to donate (a not-huge amount) but I can't, because PayPal won't accept my credit card. Don't know why. I did have a PayPal account that got blocked years ago for some unknown reason that I've never bothered to fix, since it requires faxing documentation or some such nonsense.

Replies from: lukeprog
comment by lukeprog · 2013-08-03T09:15:34.470Z · LW(p) · GW(p)

Thanks for letting me know. Will you please email malo@intelligence.org and see if he can help?

Replies from: ShardPhoenix
comment by ShardPhoenix · 2013-08-03T09:23:39.156Z · LW(p) · GW(p)

Ok.

comment by Rukifellth · 2013-08-07T22:46:03.224Z · LW(p) · GW(p)

I wonder: if this community has the allegiance of at least 100 rationalists in the 80th percentile for rationality, how much money could be raised if all of them tried to form separate start-ups as feeder companies for MIRI?

Replies from: Randaly
comment by Randaly · 2013-08-08T01:01:30.601Z · LW(p) · GW(p)

Two attempts to do this are Quixey and Metamed. Quixey is notable for being the only for-profit institution to support MIRI; both groups' individual employees have also donated notable sums.

Replies from: somervta, Rukifellth
comment by somervta · 2013-08-08T02:05:31.511Z · LW(p) · GW(p)

Your Quixey link goes to metamed.

comment by Rukifellth · 2013-08-08T14:35:17.207Z · LW(p) · GW(p)

I grinned at how the two at the bottom seem to have donated just enough to be mentioned.

Quixey hasn't been able to pump in as much as I expected though.

Replies from: Benja
comment by Benya (Benja) · 2013-08-10T13:46:26.015Z · LW(p) · GW(p)

There are two donors who have donated $5,000 (just enough to be mentioned), three who have donated $10,000, one who has donated $15,000, and one who has donated $25,000, which suggests "people like donating amounts that are multiples of $5,000" as a strong competing hypothesis.

Replies from: Rukifellth
comment by Rukifellth · 2013-08-11T03:09:31.050Z · LW(p) · GW(p)

Touché