Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far

post by Louie · 2011-12-27T21:24:30.416Z · LW · GW · Legacy · 47 comments

Contents

  ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER
  ACCOMPLISHMENTS IN 2011
  FUTURE PLANS YOU CAN HELP SUPPORT
    If you'd like to support our work: please donate now!

** cross-posted from http://singinst.org/2011winterfundraiser/ **

Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work!  -Louie


ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER

Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots, show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anyone else, starting a full decade before the Singularity landed on the cover of TIME magazine.


ACCOMPLISHMENTS IN 2011

2011 was our biggest year yet. Since the year began, we have:

 

FUTURE PLANS YOU CAN HELP SUPPORT

In the coming year, we plan to do the following:

 


Now is your last chance to make a tax-deductible donation in 2011.

If you'd like to support our work: please donate now!

47 comments

Comments sorted by top scores.

comment by orthonormal · 2011-12-29T23:15:41.368Z · LW(p) · GW(p)

Just emptied out my altruism chip jar and donated $1850.

comment by juliawise · 2011-12-27T23:23:25.077Z · LW(p) · GW(p)

Be an informed donor. I advise reading the GiveWell interview with SIAI from last spring.

To me, it's not a good sign that SIAI said they had no immediate plans for what they would do with new funding.

Replies from: lukeprog, XiXiDu, Grognor, Louie
comment by lukeprog · 2011-12-27T23:54:20.445Z · LW(p) · GW(p)

Jasen Murray's answers to Holden's questions were problematic and did not represent the Singularity Institute's positions well. That is an old interview, and since then we've done many things to explain what we plan to do with new funding. For example, we published a strategic plan and I gave this video interview. Moreover, the donation page linked in the OP has the most up-to-date information on what we plan to do with new funding: see Future Plans You Can Help Support.

Replies from: lincolnquirk, juliawise, Jasen
comment by lincolnquirk · 2011-12-28T05:41:29.822Z · LW(p) · GW(p)

FWIW, the "Future Plans" list seems to me somewhat understating the value of a donation. I realize it's fairly accurate in that it represents the activities of SI. Yet it seems like it could be presented better.

For example, the first item is "hold the Summit". But I happen to know that the Summit generally breaks even or makes a little money, so my marginal dollar will not make or break the Summit. Similarly, a website redesign, while probably important, isn't exciting enough to be listed as the second item. The third item, publish the open problems document, is a good one, though you should make it seem more exciting.

I think the donation drive page should thoroughly make the case that SI is the best use of someone's charity dollars -- that it's got a great team, great leadership, and is executing a plan with the highest probability of working at every step. That page should probably exist on its own, assuming the reader hasn't read any of the rest of the site, with arguments for why working explicitly on rationality is worthwhile; why transparency matters; why outreach to other researchers matters; what the researchers are currently spending time on and why those are the correct things for them to be working on; and so on. It can be long: long-form copy is known to work, and this seems like a correct application for it.

In fact, since you probably have other things to do, I'll do a little bit of copywriting myself to try to discover if this is really a good idea. I'll post some stuff here tomorrow after I've worked on it a bit.

Replies from: lukeprog
comment by lukeprog · 2011-12-28T07:25:15.580Z · LW(p) · GW(p)

I shall not complain. :)

Replies from: lincolnquirk
comment by lincolnquirk · 2011-12-28T08:08:43.593Z · LW(p) · GW(p)

OK, here's my crack: http://techhouse.org/~lincoln/singinst-copy.txt

Totally unedited. Please give feedback. If it's good, I can spend a couple more hours on it. If you're not going to use it, please don't tell me it's good, because I have lots of other work to do.

Replies from: lukeprog, fubarobfusco
comment by lukeprog · 2011-12-28T14:54:29.953Z · LW(p) · GW(p)

It's good enough that if we use it, we will do the editing. Thanks!

comment by fubarobfusco · 2011-12-29T05:16:08.742Z · LW(p) · GW(p)

The connection between AI and rationality could be made stronger.

Indeed, that's been my impression for a little while. I'm unconvinced that AI is the #1 existential risk. The set of problems descending from the fact that known life resides in a single biosphere — ranging from radical climate change, to asteroid collisions, to engineered pathogens — seems to be right up there. I want all AI researchers to be familiar with FAI concerns; but there are more people in the world whose decisions have any effect at all on climate change risks — and maybe even on pathogen research risks! — than on AI risks.

But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.

comment by juliawise · 2011-12-28T03:43:36.924Z · LW(p) · GW(p)

Thanks for pointing out the newer info. The different expansion plans seem sensible.

comment by Jasen · 2012-01-01T02:59:34.125Z · LW(p) · GW(p)

I'll chime in to agree with both lukeprog in pointing out that the interview is very outdated and with Holden in correcting Louie's account of the circumstances surrounding it.

comment by XiXiDu · 2011-12-28T10:11:58.798Z · LW(p) · GW(p)

Be an informed donor. I advise reading the GiveWell interview with SIAI from last spring.

Holden also talked to Jaan Tallinn. His best points, in my opinion:

My reasoning is that it seems to me that if they have unique insights into the problems around AGI, then along the way they ought to be able to develop and publish/market innovations in benign areas, such as speech recognition and language translation programs, which could benefit them greatly both directly (profits) and indirectly (prestige, affiliations) - as well as being a very strong challenge to themselves and goal to hold themselves accountable to, which I think is worth quite a bit in and of itself.

...

I'm largely struggling for a way to evaluate the SIAI team. Certainly they've written some things I like, but I don't see much in the way of technical credentials or accomplishments of the kind I'd expect from people who are aiming to create useful innovations in the field of artificial intelligence.

...

I think that if you're aiming to develop knowledge that won't be useful until very very far in the future, you're probably wasting your time, if for no other reason than this: by the time your knowledge is relevant, someone will probably have developed a tool (such as a narrow AI) so much more efficient in generating this knowledge that it renders your work moot.

...

Instead, in order to build a program that is better at writing source code for AGIs than we are, it seems like you'd likely need to fundamentally understand and formalize what general intelligence consists of. How else can you tell the original program how to evaluate the "goodness" of different possible modifications it might make to its source code?

...

Another note is that even if the real world is more like chess than I think ... the actual story of the development of superhuman chess intelligences as I understand it is much closer to "humans writing the right algorithm themselves, and implementing it in hardware that can do things they can't" than to "a learning algorithm teaching itself chess intelligence starting with nothing but the rules."

...

...designing a dumber-than-humans computer to modify its source code all on its own until it becomes smarter than humans. I don't see how the latter would be possible for a general intelligence (for a specialized intelligence it could be done via trial-and-error in a simulated environment).

...

I feel like once we basically understand how the human predictive algorithm works, it may not be possible to improve on that algorithm (without massive and time-costly experimentation) no matter what the level of intelligence of the entity trying to improve on it. (The reason I gave: The human one has been developed by trial-and-error over millions of years in the real world, a method that won't be available to the GMAGI. So there's no guarantee that a greater intelligence could find a way to improve this algorithm without such extended trial-and-error)...

...

I don't think of the GMAGI I'm describing as necessarily narrow - just as being such that assigning it to improve its own prediction algorithm is less productive than assigning it directly to figuring out the questions the programmer wants (like "how do I develop superweapons"). There are many ways this could be the case.

...

I don't think "programming" is the main challenge in improving one's own source code. As stated above, I think the main challenge is improving on a prediction algorithm that was formed using massive trial-and-error, without having the benefit of the same trial-anderror process.

Replies from: Vladimir_Nesov, cousin_it, wedrifid
comment by Vladimir_Nesov · 2011-12-28T18:58:42.475Z · LW(p) · GW(p)

(Most of these considerations don't apply to developments in pure mathematics, which is my best guess at a fruitful mode of attacking the FAI goals problem. The implementation-as-AGI aspect is a separate problem, likely of a different character, but I expect we need to obtain a basic theoretical understanding of FAI goals first to know what kinds of AGI progress are useful. Jumping to development of language translation software is way off-track.)

comment by cousin_it · 2011-12-28T16:15:24.286Z · LW(p) · GW(p)

Thanks a lot for posting this link. The first point was especially good.

comment by wedrifid · 2011-12-28T14:02:48.900Z · LW(p) · GW(p)

I feel like once we basically understand how the human predictive algorithm works, it may not be possible to improve on that algorithm (without massive and time-costly experimentation) no matter what the level of intelligence of the entity trying to improve on it. (The reason I gave: The human one has been developed by trial-and-error over millions of years in the real world, a method that won't be available to the GMAGI. So there's no guarantee that a greater intelligence could find a way to improve this algorithm without such extended trial-and-error)...

The "I feel" opening is telling. It does seem like the only way people can maintain this confusion beyond 10 seconds of thought is by keeping in the realm of intuition. In fact among the first improvements that could be made to the human predictive algorithm is to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-28T15:35:40.222Z · LW(p) · GW(p)

Given his influence, he seems to be worth the time it takes to try to explain to him how he is wrong?

It does seem like the only way people can maintain this confusion beyond 10 seconds of thought...

The only way to approach general intelligence may be by emulating human algorithms. The opinion that we are capable of inventing an artificial and simple algorithm exhibiting general intelligence is not a mainstream opinion among AI and machine learning researchers. And even if one assumes that all those scientists are not nearly as smart and rational as SI folks, they have far more real-world experience with the field of AI and its difficulties.

I actually share the perception that we have no reason to suspect that we could reach a level above ours without massive and time-costly experimentation (removing our biases merely sounds easy when formulated in English).

The "I feel" opening is telling.

I think that you might be attributing too much to an expression uttered in an informal conversation.

In fact among the first improvements that could be made to the human predictive algorithm is to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.

What do you mean by "feelings" and "preferences". The use of intuition seems to be universal, even within the field of mathematics. I don't see how computational bounded agents could get around "feelings" when making predictions about subjects that are only vaguely understood and defined. Framing the problem in technical terms like "predictive algorithms" doesn't change anything about the fact that making predictions about subjects that are poorly understood is error prone.

Replies from: wedrifid
comment by wedrifid · 2011-12-28T15:40:06.171Z · LW(p) · GW(p)

Given his influence he seems to be worth the time that it takes to try to explain to him how he is wrong?

Yes. He just doesn't seem to be someone whose opinion on artificial intelligence should be considered particularly important. He's just a layman making the typical layman guesses and mistakes. I'm far more interested in what he has to say on warps in spacetime!

comment by Grognor · 2011-12-28T00:37:05.013Z · LW(p) · GW(p)

Being an informed donor requires more than an outdated, non-representative interview. This examination has far more high-quality information and, according to its creator, will be updated soon (although he is apparently behind the schedule he set for himself).

comment by Louie · 2011-12-28T03:02:06.527Z · LW(p) · GW(p)

I agree with Grognor -- that interview is beyond unhelpful. Even calling it an interview of SIAI is incredibly misleading (I would say a complete lie). Holden interviewed the one visitor at SI last summer who wouldn't have known anything about the organization's funding needs. Jasen was running a student summer program -- not SIAI. I would liken it to Holden interviewing a random Boy Scout somewhere and then publishing a report complaining that he couldn't understand the organizational funding needs of the Boy Scouts of America.

Also, keep in mind that GiveWell is certainly a good service (and I support them) but their process is limited and is unable to evaluate the value of research. In fact, if an opportunity to donate as good as Singularity Institute existed, GiveWell's methodology would blind them to the possibility of discovering it.

Carl Shulman pointed out how absurd this was: If GiveWell had existed 100 years ago, they would have argued against funding the eradication of smallpox. Their process forces them to reject the possibility that an intervention could be that effective.

I'm curious about the new GiveWell Labs initiative though. Singularity Institute does meet all of that program's criteria for inclusion... perhaps that's why they started this program... so that they aren't forced to overlook so many extraordinary donation opportunities forever.

Replies from: juliawise, CarlShulman
comment by juliawise · 2011-12-28T03:23:45.450Z · LW(p) · GW(p)

Holden seems to have spoken with Jasen "and others", so at least two people. I don't think it's fair to say that speaking with 1/3 of the people in an organization is as unrepresentative as speaking with 1/3,000,000 of the Boy Scouts. And since Holden sent SIAI his notes and got their feedback before publishing, they had a second chance to correct any misstatements made by the guy they gave him to interview.

So calling this interview "a complete lie" seems very unfair.

I agree that GiveWell's process is limited, and I'm interested in the GiveWell Labs project.

Replies from: Louie
comment by Louie · 2011-12-28T05:14:02.061Z · LW(p) · GW(p)

A few corrections.

  • I know that Holden interviewed two other supporters of ours... but I don't think he interviewed 2 other employees. If he did, why did he only publish the unhelpful notes from the one employee he spoke to who didn't know anything?

  • SIAI didn't give Jasen to GiveWell to be interviewed -- Holden chose him unilaterally -- not because he was a good choice, but because Jasen is from New York (just like Holden).

  • I'm unaware of Holden sending his notes to anyone at SIAI prior to publication. Who did he send them to? I never saw them.

  • My guess is Holden sent his notes back to Jasen and called that "sending them to SIAI for feedback". In other words, no one at SIAI who is a leader, or a board member, or someone who understands the plans/finances of the organization saw the notes prior to publication. If Holden had sent the notes to any of the board members of Singularity Institute, they would have sent him tons of corrections.

  • To clarify, I didn't say the interview itself was a lie. I said calling it an interview with SIAI was a lie. I stick by that characterization.

Replies from: HoldenKarnofsky
comment by HoldenKarnofsky · 2011-12-28T15:04:16.252Z · LW(p) · GW(p)

Hi, here are the details of whom I spoke with and why:

  • I originally emailed Michael Vassar, letting him know I was going to be in the Bay Area and asking whether there was anyone appropriate for me to meet with. He set me up with Jasen Murray.
  • Justin Shovelain and an SIAI donor were also present when I spoke with Jasen. There may have been one or two others; I don't recall.
  • After we met, I sent the notes to Jasen for review. He sent back comments and also asked me to run it by Amy Willey and Michael Vassar, who each provided some corrections via email that I incorporated.

A couple of other comments:

  • If SIAI wants to set up another conversation for more funding discussion, I'd be happy to do that and to post new notes.
  • In general, we're always happy to post corrections or updates on any content we post, including how that content is framed and presented. The best way to get our attention is to email us at info@givewell.org

And a tangential comment/question for Louie: I do not understand why you link to my two LW posts using the anchor text you use. These posts are not about GiveWell's process. They both argue that standard Bayesian inference indicates against the literal use of non-robust expected value estimates, particularly in "Pascal's Mugging" type scenarios. Michael Vassar's response to the first of these was that I was attacking a straw man. There are unresolved disagreements about some of the specific modeling assumptions and implications of these posts, but I don't see any way in which they imply a "limited process" or "blinding to the possibility of SIAI's being a good giving opportunity." I do agree that SIAI hasn't been a fit for our standard process (and is more suited to GiveWell Labs) but I don't see anything in these posts that illustrates that - what do you have in mind here?

Replies from: CarlShulman, Louie
comment by CarlShulman · 2012-01-04T07:32:50.675Z · LW(p) · GW(p)

Hi Holden,

I just read this thread today. I made a clarification upthread about the description of my comment above, under Louie's. Also, I'd like to register that I thought your characterization of that interview as such was fine, even without the clarifications you make here.

They both argue that standard Bayesian inference indicates against the literal use of non-robust expected value estimates, particularly in "Pascal's Mugging" type scenarios.

As a technical point, I don't think these posts address "Pascal's Mugging" scenarios in any meaningful way.

Bayesian adjustment is a standard part of Pascal's Mugging. The problem is that Solomonoff complexity priors have fat tails, because describing fundamental laws of physics that allow large payoffs is not radically more complex than laws that only allow small payoffs. It doesn't take an extra 10^1000 bits to describe a world where an action generates 2^(10^1000) times as much of some good, e.g. happy puppies. So we can't rule out black swans a priori in that framework (without something like an anthropic assumption that amounts to the Doomsday Argument).
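
Spelled out as a minimal formalization (with K(h) the description length of hypothesis h and V(h) the payoff it allows; this notation is introduced here for illustration and is not from the original posts):

    \[
      \mathbb{E}[V] \;=\; \sum_{h} 2^{-K(h)}\, V(h)
    \]
    % If there are hypotheses h_n whose description length K(h_n) grows only
    % linearly in n while their payoff V(h_n) grows like 2^(2^n), the terms
    % 2^{-K(h_n)} V(h_n) grow without bound and the sum diverges: the prior's
    % tail is too fat for the expectation to be dominated by "ordinary"
    % hypotheses.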

The only thing in your posts that could help with Pascal's Mugging is the assumption of infinite certainty in a distribution without relevantly fat tails or black swans, like a normal or log-normal distribution. But that would be an extreme move, taking coherent worlds of equal simplicity and massively penalizing the ones with high payoffs, so that no evidence that could fit in a human brain could convince us we were in the high-payoff worlds. Without some justification, that seems to amount to assuming the problem away, not addressing it.

Disclaimer 1: This is about expected value measured in the currency of "goods" like happy puppies, rather than expected utility, since agents can have bounded utility, e.g. simply not caring much more about saving a billion billion puppies than about saving a billion. This seems fairly true of most people, at least emotionally.

Disclaimer 2: Occam's razor priors give high value to Pascal's Mugging cases, but they also give higher expectations to all other actions. For instance, the chance that space colonization will let huge populations be created increases the expected value of reducing existential risk by many orders of magnitude to total utilitarians. But it also greatly increases the expected payoffs of anything else that reduces existential risk by even a little. So if vaccinating African kids is expected to improve the odds of human survival going forward (not obvious but plausible) then its expected value will be driven to within sight of focused existential risk reductions, e.g. vaccination might be a billionth the cost-effectiveness of focused risk-reduction efforts but probably not smaller by a factor of 10^20. By the same token, different focused existential risk interventions will compete against one another, so one will not want to support the relatively ineffective ones.
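
As a toy version of that comparison (the symbols below are illustrative and not from the original comment):

    \[
      \frac{\mathrm{EV}_{\text{vacc}}}{\mathrm{EV}_{\text{x-risk}}}
        \;=\; \frac{\Delta p_{\text{vacc}} \cdot V}{\Delta p_{\text{x-risk}} \cdot V}
        \;=\; \frac{\Delta p_{\text{vacc}}}{\Delta p_{\text{x-risk}}}
    \]
    % V, the value of long-term survival, multiplies both interventions and
    % cancels, so the gap between them is set by the ratio of their effects
    % on survival odds (perhaps around 10^-9 here), not by a further
    % astronomical factor like 10^20.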

Replies from: HoldenKarnofsky
comment by HoldenKarnofsky · 2012-01-17T20:04:23.623Z · LW(p) · GW(p)

Carl, it looks like we have a pretty substantial disagreement about key properties of the appropriate prior distribution over expected value of one's actions.

I am not sure whether you are literally endorsing a particular distribution (I am not sure whether "Solomonoff complexity prior" is sufficiently well-defined or, if so, whether you are endorsing that or a varied/adjusted version). I myself have not endorsed a particular distribution. So it seems like the right way to resolve our disagreement is for at least one of us to be more specific about what properties are core to our argument and why we believe any reasonable prior ought to have these properties. I'm not sure when I will be able to do this on my end and will likely contact you by email when I do.

What I do not agree with is the implication that my analysis is irrelevant to Pascal's Mugging. It may be irrelevant for people who endorse the sorts of priors you endorse. But not everyone agrees with you about what the proper prior looks like, and many people who are closer to me on what the appropriate prior looks like still seem unaware of the implications for Pascal's Mugging. If nothing else, my analysis highlights a relationship between one's prior distribution and Pascal's Mugging that I believe many others weren't aware of. Whether it is a decisive refutation of Pascal's Mugging is unresolved (and depends on the disagreement I refer to above).

comment by Louie · 2011-12-28T21:44:59.055Z · LW(p) · GW(p)

Thanks for the helpful comments! I was uninformed about all those details above.

These posts are not about GiveWell's process.

One of the posts has the sub-heading "The GiveWell approach", and all of the analysis in both posts uses examples of charities you're comparing. I agree you weren't just talking about the GiveWell process... you were talking about a larger philosophy of science that informs things like the GiveWell process.

I recognize that you're making sophisticated arguments for your points, especially for the assumptions that you claim simply must be true to satisfy your intuition that charities should be rewarded for transparency and punished otherwise. Those assumptions seem wise from a "getting things done" point of view for an org like GiveWell -- even though there is no mathematical reason they should be true, only a human-level tit-for-tat shame/enforcement mechanism that you hope eventually makes them circularly "true" through repeated application. Seems fair enough.

But adding regression adjustments to cancel out the effectiveness of any charity which looks too effective to be believed (based on the common sense of the evaluator) seems like a pretty big finger on the scale. Why do so much analysis in the beginning if the last step of the algorithm is just "re-adjust effectiveness and expected value to equal what feels right"? Your adjustment factor amounts to a kind of Egalitarian Effectiveness Assumption: we are all created equal at turning money into goodness. Or perhaps it's more of a negative statement, like "None of us is any better than the best of us at turning money into goodness" -- where the upper limit on "the best" is something like 1000x or whatever the evaluator has encountered in the past. Any claim made above that limit gets adjusted back down -- those guys were trying to Pascal's Mug us! That's the way in which there's a blinding effect. You disbelieve the claims of any group that claims to be more effective per capita than you think is possible.

Replies from: HoldenKarnofsky
comment by HoldenKarnofsky · 2011-12-29T00:37:24.706Z · LW(p) · GW(p)

Louie, I think you're mischaracterizing these posts and their implications. The argument is much closer to "extraordinary claims require extraordinary evidence" than it is to "extraordinary claims should simply be disregarded." And I have outlined (in the conversation with SIAI) ways in which I believe SIAI could generate the evidence needed for me to put greater weight on its claims.

I wrote more in my comment followup on the first post about why an aversion to arguments that seem similar to "Pascal's Mugging" does not entail an aversion to supporting x-risk charities. (As mentioned in that comment, it appears that important SIAI staff share such an aversion, whether or not they agree with my formal defense of it.)

I also think the message of these posts is consistent with the best available models of how the world works - it isn't just about trying to set incentives. That's probably a conversation for another time - there seems to be a lot of confusion on these posts (especially the second) and I will probably post some clarification at a later date.

comment by CarlShulman · 2012-01-04T06:21:18.845Z · LW(p) · GW(p)

Carl Shulman pointed out how absurd this was: If GiveWell had existed 100 years ago, they would have argued against funding the eradication of smallpox. Their process forces them to reject the possibility that an intervention could be that effective

To clarify what I said in those comments:

Holden had a few posts that 1) made the standard point that one should use both prior and evidence to generate one's posterior estimate of a quantity like charity effectiveness, 2) used example prior distributions that assigned vanishingly low probability to outcomes far from the median, albeit disclaiming that those distributions were essential.

I naturally agree with 1), but took issue with 2). A normal distribution for charity effectiveness is devastatingly falsified by the historical data, and even a log-normal distribution has wacky implications, like ruling out long-term human survival a priori. So I think a reasonable prior distribution will have a fatter tail. I think it's problematic to use false examples, lest they get lodged in memory without metadata, especially when they might receive some halo effect from 1).
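
To make the thin-tail versus fat-tail contrast concrete, here is a minimal numeric sketch; the particular distributions and parameters are illustrative assumptions, not figures from the posts under discussion:

    from scipy.stats import lognorm, pareto

    # How much probability mass does each prior put on a charity being at
    # least a million times as effective as the median?
    threshold = 1e6

    # Thin-tailed prior: log-normal with median 1 and sigma = 2 (in natural-log
    # units), parameters chosen only to make the tail behaviour visible.
    p_thin = lognorm(s=2.0, scale=1.0).sf(threshold)

    # Fat-tailed alternative: Pareto (power law) with tail index 1.1.
    p_fat = pareto(b=1.1, scale=1.0).sf(threshold)

    print(f"log-normal mass above 10^6: {p_thin:.1e}")  # roughly 2.5e-12
    print(f"Pareto mass above 10^6:     {p_fat:.1e}")   # roughly 2.5e-07

Which tail one assumes largely decides in advance whether any realistic amount of evidence can support an extreme-effectiveness claim, which is the sense in which the choice of example prior does real work.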

I said that this methodology and the example priors would have more or less ruled out big historical successes, not that GiveWell would not have endorsed smallpox eradication. Indeed, with smallpox I was trying to point out something that Holden would consider a problematic implication of a thin-tailed prior. With respect to existential risks, I likewise said that I thought Holden assigned a higher prior to x-risk interventions than could be reconciled with a log-normal prior, since he could be convinced by sufficient evidence (like living to see humanity colonize the galaxy, and witnessing other civilizations that perished). These were criticisms that those priors were too narrow even for Holden, not that GiveWell would use those specific wacky priors.

Separately, I do think Holden's actual intuitions are too conservative, e.g. in assigning overly low probability to eventual large-scale space colonization and large populations, and giving too much weight to a feeling of absurdity. So I would like readers to distinguish between the use of priors in general and Holden's specific intuitions that big payoffs from x-risk reduction (and AI risk specifically) face a massive prior absurdity penalty, with the key anti-x-risk work being done by the latter (which they may not share).

comment by shminux · 2011-12-27T21:48:33.626Z · LW(p) · GW(p)

fundraiser only 20% filled so far

I was checking for the usual swag and membership tiers on the donation page and found nothing. Surely people would go for the t-shirts/hoodies/caps/posters/membership cards, being mentioned on the SI site, etc.

Replies from: Kevin, lukeprog
comment by Kevin · 2011-12-28T05:35:00.626Z · LW(p) · GW(p)

In the meantime, until we get that set up, I'll mail a Singinst t-shirt to anyone who donates $100 or more and emails me.

It's this design on the front, and the Singularity Institute logo on the back. http://www.imaginaryfoundation.com/index.php?pagemode=detail&type=Mens%20Sale&uid=C190B0

Replies from: Rain, magfrump, NancyLebovitz
comment by Rain · 2012-04-11T22:24:02.382Z · LW(p) · GW(p)

I never got my t-shirt.

comment by magfrump · 2011-12-28T09:55:58.026Z · LW(p) · GW(p)

How would one go about e-mailing you?

Unless you just meant sending a private message.

Replies from: Kevin
comment by Kevin · 2011-12-29T08:28:40.229Z · LW(p) · GW(p)

kfischer @$ gmail *@ com

comment by NancyLebovitz · 2011-12-28T06:03:33.256Z · LW(p) · GW(p)

I suggest making it easier to get bigger images of your designs -- they're detailed enough that what you've got on your site, or even "view image" plus enlarging, doesn't show them adequately.

Replies from: Kevin
comment by Kevin · 2011-12-28T06:08:18.435Z · LW(p) · GW(p)

It's not our site; the Imaginary Foundation is kind of like a fake bizarro version of the Singularity Institute that's actually mostly a t-shirt company.

Replies from: curiousepic
comment by curiousepic · 2011-12-29T15:12:22.282Z · LW(p) · GW(p)

Out of curiosity, did SI talk to Imaginary Foundation and set up these shirts or are you modifying them personally, or what's the deal?

Personally I'd like a simple shirt with just the SI logo. As much as I enjoy most of the Imaginary Foundation's designs, this particular shirt has a "Three Wolf Moon" vibe.

Replies from: Kevin
comment by Kevin · 2011-12-29T21:08:30.305Z · LW(p) · GW(p)

Yes, the Director of the Imaginary Foundation is, perhaps unsurprisingly, a long-time movement Singularitarian.

comment by lukeprog · 2011-12-27T21:58:29.943Z · LW(p) · GW(p)

Agreed. That's in the works, for the new website.

Replies from: Baughn
comment by Baughn · 2011-12-28T01:43:39.050Z · LW(p) · GW(p)

Should I assume that website will also include links to the information from your other comments on the donation page?

Replies from: lukeprog
comment by lukeprog · 2011-12-28T01:48:51.096Z · LW(p) · GW(p)

Sorry, what do you mean?

Replies from: Baughn
comment by Baughn · 2011-12-28T01:56:08.441Z · LW(p) · GW(p)

Cross-referencing. If you visit just the donation page, there are no prominent links to 'what this would be used for'-style information, i.e. what you put in your other comment. Obviously a minor issue at most, but you know how those work.

Though with that said, I've been wondering about that particular point. Website development, of all things... there are probably dozens of people around here with the skills to do that, myself included, so it seems like the perfect option for in-kind donations. Do you know who I'd need to talk to about that, and whether or not there's any point? I can think of a few reasons you'd want to keep it in-house, not least confidentiality, but I don't know which ones might apply.

Replies from: lukeprog
comment by lukeprog · 2011-12-28T02:00:35.901Z · LW(p) · GW(p)

Yes, the new donate page has links to explanations of what the money gets used for.

We are already a long way down the road to the new website with a professional designer, but we have lots of other design and web development work that we love to give to volunteers when they are willing. If you're interested in donating in kind that way, please contact luke [at] singularity.org.

comment by Dr_Manhattan · 2011-12-29T13:41:35.560Z · LW(p) · GW(p)

Weird - I am a somewhat regular donor and did not hear about the drive until this post. Checked my email, nothing there.

I happened to have donated last week, and did it again for the drive.

"There will be a lot more whales if there is a Future"

comment by daenerys · 2011-12-28T03:28:40.743Z · LW(p) · GW(p)

Those of us who are poor are less likely to straight-up donate. But there are things such as the Singularity Institute credit card, which donates $50 when opened and something like 1% of purchases.

Personally, I would also donate to get more chapters of HPMoR up because I would consider it more similar to "buying a book" and not "giving money away." I remember there was an HPMoR drive before, and it seemed to work well.

Replies from: shminux
comment by shminux · 2011-12-28T05:04:09.290Z · LW(p) · GW(p)

Personally, I would also donate to get more chapters of HPMoR up because I would consider it more similar to "buying a book" and not "giving money away."

JKR would have to approve that (unless it's just an accelerated release, like last time). Maybe EY can ask her nicely. Who knows, she might even decide to donate; he can be quite persuasive, I hear.

Replies from: daenerys
comment by daenerys · 2011-12-28T05:11:25.896Z · LW(p) · GW(p)

You are correct.

I would go for accelerated release, rather than trying to untangle a quagmire of copyright issues, though.

comment by Baughn · 2011-12-28T01:44:44.103Z · LW(p) · GW(p)

It wouldn't stop me from donating, but it's somewhat annoying that donations to US charities are not tax-deductible in Ireland. Before I spend time and money trying to find a workaround - can anyone else think of a solution?

Replies from: orthonormal
comment by orthonormal · 2011-12-28T02:39:28.111Z · LW(p) · GW(p)

Well, there's always the Future of Humanity Institute; I go back and forth on the relative merits of SIAI and FHI.

comment by codythegreen · 2011-12-28T01:27:28.835Z · LW(p) · GW(p)

I was planning on giving a donation at tax return time.