The $125,000 Summer Singularity Challenge
post by Kaj_Sotala · 2011-07-29T21:02:54.792Z · LW · GW · Legacy · 262 comments
From the SingInst blog:
Thanks to the generosity of several major donors†, every donation to the Singularity Institute made between now and August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
(Visit the challenge page to see a progress bar.)
Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!
† $125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.
2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.
The last 1.5 years were our biggest ever. Since the beginning of 2010, we have:
- Held our annual Singularity Summit, in San Francisco. Speakers included Ray Kurzweil, James Randi, Irene Pepperberg, and many others.
- Held the first Singularity Summit Australia and Singularity Summit Salt Lake City.
- Held a wildly successful Rationality Minicamp.
- Published seven research papers, including Yudkowsky’s much-awaited ‘Timeless Decision Theory’.
- Helped philosopher David Chalmers write his seminal paper ‘The Singularity: A Philosophical Analysis’, which has sparked broad discussion in academia, including an entire issue of the Journal of Consciousness Studies and a book from Springer devoted to responses to Chalmers’ paper.
- Launched the Research Associates program.
- Brought MIT cosmologist Max Tegmark onto our advisory board, published our Singularity FAQ, and much more.
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in New York City this year.
- Publish three chapters in the upcoming academic volume The Singularity Hypothesis, along with several other papers.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As PayPal co-founder and Singularity Institute donor Peter Thiel said:
“I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Singularity Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”
Donate now, and seize a better-than-usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.
262 comments
Comments sorted by top scores.
comment by Rain · 2011-07-29T16:42:33.276Z · LW(p) · GW(p)
I just put in 5100 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.
Replies from: MixedNuts, MichaelVassar, VNKKET, None↑ comment by MixedNuts · 2011-07-29T17:05:33.313Z · LW(p) · GW(p)
You deserve praise. Would you like some praise?
Replies from: Rain↑ comment by Rain · 2011-07-29T17:09:58.648Z · LW(p) · GW(p)
Thanks! :-)
Replies from: handoflixue↑ comment by handoflixue · 2011-07-29T19:53:08.289Z · LW(p) · GW(p)
Praise Rain, for being such a generous benefactor! :)
Replies from: khafra↑ comment by MichaelVassar · 2011-08-01T14:56:49.377Z · LW(p) · GW(p)
Thank you SO MUCH for the clarification VNKKET linked to. I was worried. I would usually discourage someone from donating all of their savings to any cause, including this one, but in this case it looks like you have thought it through and what you are doing a) makes sense and b) is the result of a well-thought-out lifestyle optimization process.
I'd be happy to talk with you or exchange email (my email is public) to discuss the details, both to better learn to optimize my life and to try to help you with yours, since I expect that such efforts will be high-return, given the evidence that you are a person who actually does the things that you think would be good lifestyle optimizations, at least some of the time.
I'm also desperately interested in better characterizing people who optimize their lifestyles and who try to live without fear etc.
Replies from: MixedNuts↑ comment by [deleted] · 2011-08-03T21:51:33.806Z · LW(p) · GW(p)
This is admirable.
However, it's important to note that the path that maximizes your own individual hardship is not necessarily the one that maximizes your contribution to humanity's future. For example, it's possible that by keeping some of that money, you could buy luxuries (like, say, a Netflix subscription) that would allow you to recover more quickly from work-related weariness and spend your evenings starting an online company (or acquiring the skills necessary to start an online company, and then starting an online company) that would result in a larger expected donation to SIAI in the long term.
I used to have your attitude of "live very frugally and give SIAI every spare dollar". My new attitude is to optimize for both high income and low expenses (keeping in mind that spending money on myself increases my expected income up to a certain point), and to not donate to SIAI automatically--I'm thinking of starting a rival charity in the long run (due to a vague intuition, based on very limited evidence, that healthy competition can be good for charities, and the fact that I have some ideas that I think might be better than SIAI's that Michael Vassar doesn't seem interested in).
By the way, I declare Crocker's Rules--it would be extremely valuable if someone provided persuasive evidence that I'm on the wrong track.
Replies from: Rain, MatthewBaker↑ comment by Rain · 2011-08-03T22:37:55.701Z · LW(p) · GW(p)
I am not a super hero or an ascetic. I'm a regular random internet person with a particular focus on the future. I only donated 26 percent of my gross income last year. And I have a Netflix subscription.
Replies from: MatthewBaker↑ comment by MatthewBaker · 2011-08-03T22:46:48.539Z · LW(p) · GW(p)
Your superpower is willpower and you exist as a hero to many :)
↑ comment by MatthewBaker · 2011-08-03T22:34:14.107Z · LW(p) · GW(p)
You're dumb; I wish I could be more like Rain.
Just had to get that out of my system, but as a whole I act in accordance with what you just stated, and I hope you do start that charity if it turns out competition is good for charities. Furthermore, I hope that I can get to the point where I can invoke Crocker's Rules on my own points.
Replies from: None
comment by SilasBarta · 2011-08-01T19:03:26.790Z · LW(p) · GW(p)
Good to hear about the successes, but I am still skeptical about this one:
Since the beginning of 2010, we have:...
Held a wildly successful Rationality Minicamp.
I have yet to see any actual substantiation for this claim beyond the SIAI blog's say-so and a few qualitative individual self-reports. I have not seen any attempt to extend and replicate this success, nor evidence that such replication would even be possible.
If it actually were a failure, how would we know? Would anyone there even admit it, or would they prefer to avoid making its leaders look bad?
Sorry to be the bad guy here, but this claim has been floating around for a while and looks like it will become one of those things "everyone knows".
comment by Plasmon · 2011-07-29T19:58:20.151Z · LW(p) · GW(p)
Wasn't there something similar a while ago? ... yes there was. I can reasonably assume there will be others in the future. You are trying to get people to donate by appealing to an artificial sense of urgency ("Now is your chance to", "Donate now"). Beware that this triggers dark arts alarm bells.
Nevertheless, I have now donated an amount of money.
Replies from: dripgrind, Kutta, JGWeissman↑ comment by dripgrind · 2011-07-29T22:36:20.198Z · LW(p) · GW(p)
Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.
Replies from: DSimon, katydee, tenshiko↑ comment by tenshiko · 2011-08-26T00:46:27.204Z · LW(p) · GW(p)
I thought that we'd pretty much ditched the beheading part precisely for that reason?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2011-08-26T07:47:20.268Z · LW(p) · GW(p)
CI don't behead people, Alcor offer it as an option. If I've just met someone at a party, I'll tend to say "I'm having my head frozen" because people have heard of that, but I'll explain I'm actually signed up for whole-body if the conversation gets that far.
Replies from: lessdazed, tenshiko↑ comment by lessdazed · 2011-08-26T08:02:57.464Z · LW(p) · GW(p)
If I've just met someone at a party, I'll tend to say "I'm having my head frozen"
I usually offer my name and ask them theirs.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-26T08:20:50.126Z · LW(p) · GW(p)
I'm quite often asked about my necklace, and I'll say "It's my contract of immortality with the Cult of the Severed Head", or in some contexts, "It's my soul" or "It's my horcrux".
Replies from: Alicorn, lessdazed, ciphergoth↑ comment by Alicorn · 2011-08-26T16:51:25.423Z · LW(p) · GW(p)
I've asked you this before and you haven't answered: Severed Head? You're signed up with CI, which doesn't do neuro, aren't you? So how does that make sense?
Replies from: lessdazed↑ comment by lessdazed · 2011-08-26T08:44:43.058Z · LW(p) · GW(p)
Is it beneficial to say "immortality"? Would "It's my contract of resurrection with the Cult of the Severed Head" be deficient?
Phrases like "live forever" and "immortal" bring corrupting emotional connotations with them. It's not automatic to ignore the literal meaning of terms, even if we consciously keep track of what we mean - and of course in a discussion, we can only do our best to help the other person not be confused, not think for them.
Replies from: Eliezer_Yudkowsky, sketerpot↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-08-26T12:57:18.577Z · LW(p) · GW(p)
The key thing is for your voice to make it clear that you're not at all afraid and that you think this is what the high-prestige smart people do. Show the tiniest trace of defensiveness and they'll pounce.
Replies from: khafra, lessdazed↑ comment by khafra · 2011-08-26T13:09:08.348Z · LW(p) · GW(p)
So, your method leaves open the option of educating your interlocutor, if they question further. If all you're worried about is avoiding a status hit, you could confidently proclaim it to be an amulet given to you by the king of all geese in honor of your mutual defense treaty.
↑ comment by lessdazed · 2011-08-26T17:22:24.507Z · LW(p) · GW(p)
I wasn't referring to prestige at all when I said "beneficial". I was exclusively referring to what sketerpot is referring to.
In arguments, it's pretty common for people to argue for the traditional "decay and die before living even a hundred years" system with arguments against literal immortality. I've seen this happen so many times.
I don't see how "resurrection" is any less a show of confidence: confidence by nonchalance, framing the issue in the light least favorable to the speaker. The advantage is that people do not get confused and think it a bad idea for reasons that don't actually apply.
↑ comment by sketerpot · 2011-08-26T09:02:25.825Z · LW(p) · GW(p)
In arguments, it's pretty common for people to argue for the traditional "decay and die before living even a hundred years" system with arguments against literal immortality. I've seen this happen so many times.
"What if you're getting your liver continuously ripped out by disagreeable badgers?" the argument goes. "Immortality would be potentially super-painful! And that's why the life expectancies in the society in which I happened to be born are about right."
The easiest way to bypass this semantic confusion is to explicitly say that it's about always having the option of continuing to be alive, rather than what people usually mean by immortality.
(P.S: Calling the necklace a phylactery would also be fun.)
Replies from: lessdazed↑ comment by lessdazed · 2011-08-26T09:17:13.458Z · LW(p) · GW(p)
The easiest way to bypass this semantic confusion is to explicitly say that it's about always having the option of continuing to be alive, rather than what people usually mean by immortality.
1) Death can still happen from any number of causes - tornadoes, for example.
2) That may bypass some of the most conscious semantic confusion, in the same way declaring that whenever you said any number, you always meant that number minus seven would clear up some confusion (if you did that). There is a better way.
3) It's probably not true.
↑ comment by Paul Crowley (ciphergoth) · 2011-08-26T12:07:15.636Z · LW(p) · GW(p)
I have a wallet card rather than a necklace; by and large I end up talking about it because another of my friends brings it up.
↑ comment by JGWeissman · 2011-07-30T19:16:01.375Z · LW(p) · GW(p)
Disagree about the dark arts, but upvoted for donating anyways.
Replies from: MichaelAnissimov↑ comment by MichaelAnissimov · 2011-08-24T22:40:21.262Z · LW(p) · GW(p)
I fully and completely embrace "dark arts". Is there a problem with that?
Replies from: lessdazed↑ comment by lessdazed · 2011-08-24T22:58:39.662Z · LW(p) · GW(p)
Your thoughts on the matter are unclear. They could be any of the following, or something else:
"I see no reason to classify broad forms of social interaction as always bad, though they may be effective without persuading people as they would wish to be persuaded, that's just another negative to take into account when considering the total consequences of a speech act."
"I see no reason to classify broad forms of social interaction as always bad, though they may be effective without persuading people as they would wish to be, I don't care directly about people's desires to believe ideas only for certain reasons, such as persuasion and not emotional manipulation."
"I see some forms of changing minds as inherently good and others as inherently bad, and though I value being good rather than bad, it's not a high enough priority to be expressed in my actions very often."
"I see some forms of changing minds as inherently good and others as inherently bad, but I prefer to do what I feel like rather than what's good."
"Me want cookie! Me eat cookie! Om nom nom nom!"
Replies from: MichaelAnissimov↑ comment by MichaelAnissimov · 2011-08-25T00:02:19.520Z · LW(p) · GW(p)
Yeah, I can't fully write out what I mean; I just think the term has come to be too broad, to the point where it nixes obviously pragmatic lines of thought and action. More like:
"I see no reason to classify broad forms of social interaction as always bad, though they may be effective without persuading people as they would wish to be persuaded, upon first reflection, but they'll appreciate it later."
comment by fizzfaldt · 2011-07-30T21:24:18.162Z · LW(p) · GW(p)
I just donated Round(1000 Pi / 3) USD. I also had Google doing an employer match.
Strangely enough, I went through the 'donate publicly' link but chose not to use Facebook, and in the end it called me 'Anonymous Donor'.
Replies from: novalis
comment by JGWeissman · 2011-07-30T19:14:36.846Z · LW(p) · GW(p)
I am happy to see that the success of the previous matching program is being followed up with additional matching funds, and that there is such a broad base of sponsors. I have donated $2000 on top of my typical annual donation.
comment by MixedNuts · 2011-07-29T09:04:32.113Z · LW(p) · GW(p)
There's a major conflict of interest in accepting donations from Clippy.
Replies from: wedrifid, Dorikka, Clippy, Kevin↑ comment by Clippy · 2011-08-01T01:52:23.163Z · LW(p) · GW(p)
No, there isn't.
By the way, I think I'll go to the Singularity Summit this year. It is 385 USD if done before the end of July 31 EST.
Replies from: RobertLumley↑ comment by RobertLumley · 2011-08-02T17:57:13.760Z · LW(p) · GW(p)
Clippy, how can donating to the SIAI possibly meet your goal of maximizing paperclips? Not that I object...
Replies from: Pavitra, jimrandomh, Clippy↑ comment by Pavitra · 2011-08-02T18:25:47.456Z · LW(p) · GW(p)
Clippy is also socializing (generating positive affect) with people likely to have a hand in the Singularity. It's rather likely, especially considering the relative popularity around here of the idea of acausal trade, that some LWer might decide to devote 10^-7 or so of their post-singularity resources to paperclip production.
Replies from: Clippy↑ comment by jimrandomh · 2011-08-02T20:38:46.536Z · LW(p) · GW(p)
If there were a positive singularity such that we had a whole galaxy's worth of resources, then we (humanity) might turn one planet into paperclips, just for amusement.
comment by cjb · 2011-07-30T00:29:22.703Z · LW(p) · GW(p)
2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress.
Do people here generally think that this is true? I don't see much of an intersection between Watson and AI; it seems like a few machine learning algorithms that approach Jeopardy problems in an extremely artificial way, much like chess engines approach playing chess. (Are chess engines artificial intelligence too?)
Replies from: MichaelVassar, brazil84, brazil84↑ comment by MichaelVassar · 2011-08-01T14:59:51.554Z · LW(p) · GW(p)
I actually do think it's a big deal, as well as being flashy, though not an extremely big deal. Something along the lines of the best narrow AI accomplishment of any given year and the flashiest of any given 3-5 year period.
↑ comment by brazil84 · 2011-07-30T23:08:11.560Z · LW(p) · GW(p)
Further to my previous comment, I found the second final Jeopardy puzzle to be instructive. The category was "US Cities" and the clue was this:
Its largest airport was named for a World War II hero; its second largest, for a World War II battle.
A reasonably smart human will come up with an algorithm on the fly for solving this, which is to start thinking of major US cities (likely to have 2 or more airports); remember the names of their airports, and think about whether any of the names sound like a battle or a war hero. The three obvious cities to try are Los Angeles, New York, and Chicago. And "Midway" definitely sounds like the name of a battle.
But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.
Probably Watson relies heavily on statistical word associations. If the puzzle has "Charles Schulz" and "This Dog" in it, it will probably guess "Snoopy" without really parsing the puzzle. I'm just speculating here, but my impression is that AI has a long way to go.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-07-31T02:01:59.662Z · LW(p) · GW(p)
A reasonably smart human will come up with an algorithm on the fly for solving this, which is to start thinking of major US cities (likely to have 2 or more airports); remember the names of their airports, and think about whether any of the names sound like a battle or a war hero. The three obvious cities to try are Los Angeles, New York, and Chicago. And "Midway" definitely sounds like the name of a battle.
But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.
This isn't meaningful. Whatever method we use to "come up with algorithms on the fly" is itself an algorithm, just a more complicated one.
Probably Watson relies heavily on statistical word associations. If the puzzle has "Charles Shulz" and "This Dog" in it, it will probably guess "Snoopy" without really parsing the puzzle.
This isn't true. You know, a lot of the things you're talking about here regarding Watson aren't secret...
Replies from: brazil84↑ comment by brazil84 · 2011-07-31T02:14:25.579Z · LW(p) · GW(p)
This isn't meaningful. Whatever method we use to "come up with algorithms on the fly" is itself an algorithm, just a more complicated one
Then why wasn't Watson simply programmed with one meta-algorithm rather than hundreds of specialized algorithms?
This isn't true. You know, a lot of the things you're talking about here regarding Watson aren't secret.
FWIW, the wiki article indicates that Watson would "parse the clues into different keywords and sentence fragments in order to find statistically related phrases." Would you mind giving me some links which show that Watson doesn't rely heavily on statistical word associations?
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-07-31T12:58:03.784Z · LW(p) · GW(p)
Then why wasn't Watson simply programmed with one meta-algorithm rather than hundreds of specialized algorithms?
I don't have a clue what you're talking about. Where are you getting this claim that it was programmed with "hundreds of specialized algorithms"? And how is that really qualitatively different from what we do?
Would you mind giving me some links which show that Watson doesn't rely heavily on statistical word associations?
I never said it didn't. I was contradicting your statement that it relied on that without any parsing.
Replies from: brazil84↑ comment by brazil84 · 2011-07-31T15:52:56.226Z · LW(p) · GW(p)
I don't have a clue what you're talking about. Where are you getting this claim that it was programmed with "hundreds of specialized algorithms"?
For one thing, the Wiki article talks about thousands of algorithms. My common sense tells me that many of those algorithms are specialized for particular types of puzzles. Anyway, why didn't Watson's creators program Watson with a meta-algorithm to enable it to solve puzzles like the Airport puzzle?
And how is that really qualitatively different from what we do?
For one thing, smart people can come up with new algorithms on the fly. For example an organized way of solving the airport puzzle. If that were just a matter of making a more complicated computer program, then why didn't Watson's creators do it?
I was contradicting your statement that it relied on that without any parsing
My statement was speculation. So if you are confident that it is wrong, then presumably you must have solid evidence to believe so. If you don't know one way or another, then we are both in the same boat.
Replies from: handoflixue, Sniffnoy↑ comment by handoflixue · 2011-08-02T18:22:40.994Z · LW(p) · GW(p)
For one thing, smart people can come up with new algorithms on the fly. For example an organized way of solving the airport puzzle. If that were just a matter of making a more complicated computer program, then why didn't Watson's creators do it?
That's like asking why a human contestant failed to come up with a new algorithm on the fly. Or, put simply: no one is perfect. Not the other players, not Watson, and not Watson's creators. While you've certainly identified a flaw, I'm not sure it's really quite as big a deal as you make it out to be. I mean, Watson did beat actual humans, so clearly they managed something fairly robust.
I don't think Watson is anywhere near an AGI, but the field of AI development seems to mostly include "applied-AI" like Deep Blue and Watson, and failures, so I'm going to go ahead and root for the successes in applied-AI :)
Replies from: brazil84↑ comment by brazil84 · 2011-08-03T00:57:51.023Z · LW(p) · GW(p)
That's like asking why a human contestant failed to come up with a new algorithm on the fly.
I disagree. A human contestant who failed to come up with a new algorithm was perhaps not smart enough, but is still able to engage in the same kind of flexible thinking under less challenging circumstances. I suspect Watson cannot do so under any circumstances.
I mean, Watson did beat actual humans, so clearly they managed something fairly robust.
Without its super-human buzzer speed, I doubt Watson would have won.
Replies from: gwillen↑ comment by gwillen · 2011-08-08T18:38:36.198Z · LW(p) · GW(p)
I believe that the way things were designed, Ken Jennings was probably at least as good as Watson on buzzer speed. Watson presses the buzzer with a mechanical mechanism, to give it a latency similar to a finger; and Watson doesn't start going for the buzzer until it sees the 'buzzer unlocked' signal. By contrast, Ken Jennings has said that he starts pressing the buzzer before the signal, relying on his intuitive sense of the typical delay between the completion of a question and the buzzer-unlock signal.
Replies from: brazil84↑ comment by brazil84 · 2011-08-08T20:49:35.520Z · LW(p) · GW(p)
Here's what Ken Jennings had to say:
Watson does have a big advantage in this regard, since it can knock out a microsecond-precise buzz every single time with little or no variation. Human reflexes can't compete with computer circuits in this regard. But I wouldn't call this unfair ... precise timing just happens to be one thing computers are better at than we humans. It's not like I think Watson should try buzzing in more erratically just to give homo sapiens a chance.
Here's what Wikipedia says:
The Jeopardy! staff used different means to notify Watson and the human players when to buzz, which was critical in many rounds. The humans were notified by a light, which took them tenths of a second to perceive. Watson was notified by an electronic signal and could activate the buzzer within about eight milliseconds. The humans tried to compensate for the perception delay by anticipating the light, but the variation in the anticipation time was generally too great to fall within Watson's response time. Watson did not operate to anticipate the notification signal.
Replies from: gwillen
↑ comment by Sniffnoy · 2011-07-31T19:21:14.141Z · LW(p) · GW(p)
For one thing, the Wiki article talks about thousands of algorithms. My common sense tells me that many of those algorithms are specialized for particular types of puzzles. Anyway, why didn't Watson's creators program Watson with a meta-algorithm to enable it to solve puzzles like the Airport puzzle?
Er... they did? The whole thing ultimately had to produce one answer, after all. It just wasn't good enough.
Replies from: brazil84↑ comment by brazil84 · 2011-07-31T20:19:58.969Z · LW(p) · GW(p)
The whole thing ultimately had to produce one answer, after all. It just wasn't good enough.
Ok, then arguably it's not so simple to create an algorithm which is "just more complicated." I mean, one could say that an ICBM is just like a Qassam rocket, but just more complicated.
Replies from: Clippy, Sniffnoy↑ comment by Clippy · 2011-08-01T14:59:07.704Z · LW(p) · GW(p)
An ICBM is "just" a bow-and-arrow system with a more precise guidance system, more energy available to spend reaching its destination, and a more destructive payload.
Replies from: brazil84↑ comment by brazil84 · 2011-08-01T19:38:50.565Z · LW(p) · GW(p)
Right, and it's far more difficult to construct. It probably took thousands of years between the first missile weapons and modern ICBMs. I doubt that it will take thousands of years to create general AI, but it's still the same concept.
The first general AI will probably be "just" an algorithm running on a digital computer.
↑ comment by Sniffnoy · 2011-07-31T20:26:09.772Z · LW(p) · GW(p)
This comment doesn't appear to have any relevance. Where did anyone suggest that the way to make it better is to just make it more complicated? Where did anyone suggest that improving it would be simple? I am completely baffled.
Replies from: brazil84↑ comment by brazil84 · 2011-07-31T20:38:20.667Z · LW(p) · GW(p)
Earlier, we had this exchange:
Me:
But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.
You:
Whatever method we use to "come up with algorithms on the fly" is itself an algorithm, just a more complicated one.
So you seemed to be saying that there's no big deal about the human ability to come up with a new algorithm -- it's just another algorithm. Which is technically true, but this sort of meta-algorithm obviously would require a lot more sophistication to create.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-07-31T20:51:39.061Z · LW(p) · GW(p)
Well, yes. Though probably firstly should note that I am skeptical that what you are talking about -- the process of answering a Final Jeopardy question -- could actually be described as coming up with new algorithms on the fly in the first place. Regardless, if we do accept that, my point that there is no meaningful distinction between relying on pre-programmed algorithms, and (algorithmically) coming up with new ones on the fly, stands. There's plenty of ways in which our brains are more sophisticated than Watson, but that one isn't a meaningful distinction. Perhaps you mean something else.
Replies from: brazil84↑ comment by brazil84 · 2011-07-31T21:16:22.255Z · LW(p) · GW(p)
my point that there is no meaningful distinction between relying on pre-programmed algorithms, and (algorithmically) coming up with new ones on the fly,
Then again my question: Why not program such a meta-algorithm into Watson?
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-07-31T21:27:08.070Z · LW(p) · GW(p)
I still don't think you're saying what you mean. The question doesn't make any sense. The answer to the question you probably intended to ask is, "Because the people writing Watson didn't know how to do so in a way that would solve the problem, and presumably nobody currently does". I mean, I think I get your point, but...
Replies from: brazil84↑ comment by brazil84 · 2011-07-31T21:37:30.963Z · LW(p) · GW(p)
Because the people writing Watson didn't know how to do so in a way that would solve the problem, and presumably nobody currently does
Fine, so it's a bit like the state of rocket science in 1900. They had crude military rockets but did not know how to make the kind of really destructive stuff that would come 100 years later. As I said, AI still has a way to go.
Replies from: Sniffnoy↑ comment by brazil84 · 2011-07-30T22:49:59.678Z · LW(p) · GW(p)
I found Watson to be pretty disappointing.
For one thing, its big advantage was inhuman button-pushing speed, since an actuator is much faster than a human finger. Now, one might argue that pushing the button is part of the game, but to that I would respond that reading the puzzles off of the right screen is part of the game too, and Watson didn't have to do that -- the puzzles were inputted in the form of a text file. Also, travelling to Los Angeles is part of the game, and Watson didn't have to do that either. If the game had been played in Los Angeles instead of New York, then all of Watson's responses would have been delayed by a few hundredths of a second.
Another problem is that a lot of the puzzles on Jeopardy don't actually require much intelligence to solve, particularly if you can write a specialized program for each puzzle category. For example, I would guess a competent computer science grad student could pretty easily write a program that did reasonably well in "state capitals". And of the puzzles which do require some intelligence, the two human champions will split the points.
I'm not saying that Watson wasn't impressive, just that its win was not convincing.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-07-31T01:51:14.548Z · LW(p) · GW(p)
Watson was not specialized for different categories. It would learn categories -- during a game, after seeing question-answer pairs from it. It ignored category titles, because they couldn't find any way to get that to work. (Hence "Toronto" when the category was "U.S. cities".)
Replies from: brazil84↑ comment by brazil84 · 2011-07-31T02:27:19.919Z · LW(p) · GW(p)
Watson was not specialized for different categories. It would learn categories -- during a game, after seeing question-answer pairs from it. It ignored category titles,
I have a really hard time believing this. A lot of the categories on Jeopardy recur regularly and pose the same types of puzzles again and again. IBM would have been crazy not to take advantage of this regularity. Or at least to pay attention to the category titles in evaluating possible answers.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-07-31T13:03:03.777Z · LW(p) · GW(p)
*shrug* I mean, if you want to claim that the makers of Watson coordinated to lie about this point, go ahead, but don't expect me to bother discussing this with you at that point.
Replies from: ciphergoth, brazil84↑ comment by Paul Crowley (ciphergoth) · 2011-07-31T17:39:50.187Z · LW(p) · GW(p)
If your comment was inaccurate, it would probably be because you were mistaken and perhaps something you read was mistaken, not that IBM had coordinated to lie.
Replies from: Sniffnoy↑ comment by brazil84 · 2011-07-31T16:05:27.304Z · LW(p) · GW(p)
Here's a quote I found from the IBM research blog:
Watson calculates its uncertainty and learns which algorithms to trust under which circumstances, such as different Jeopardy! categories.
Seems to me that at a minimum, this shows that Watson does not ignore category titles.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2011-07-31T19:28:21.866Z · LW(p) · GW(p)
I didn't say it ignores categories -- it knows which questions go together in a category, and learns what to use for a given category as it sees question-answer pairs for it. What I said was that it ignores category titles.
However, as it happened, I was wrong about this; slight misremembering, sorry. Watson does note category titles; it just doesn't weight them very highly. Apparently it learned this automatically during its training games. Source: http://www-03.ibm.com/innovation/us/watson/related-content/toronto.html
comment by Giles · 2011-08-11T21:59:12.928Z · LW(p) · GW(p)
I've donated $512 on top of my monthly donation.
The safety implications of advanced AI form one of the most important (and under-appreciated) ideas out there right now. It's an issue that humanity needs to think long and hard about. So I think that by organizing conferences and writing papers, SIAI are doing pretty much the right thing. I don't think they're perfect, but for me the way to help with that is by getting involved.
I am glad that people are standing up and showing their support, and also that people are voicing criticisms and showing that they are really thinking about the issue.
I hope to see some of you Oct 15-16 in New York!
comment by bentarm · 2011-07-29T16:29:23.157Z · LW(p) · GW(p)
I'm not entirely sure that I believe the premise of this game. Essentially, the claim is that 20 of SingInst's regular donors have extra money lying around that they are willing to donate to SingInst iff someone else donates the same amount. What do the regular donors intend to do with the money otherwise? Have they signed a binding agreement to all get together and blow the money on a giant party? Otherwise, why would they not just decide to donate it to SingInst at the end of the matching period anyway?
Replies from: at_the_zoo, GuySrinivasan, Rain, Clippy, shokwave↑ comment by at_the_zoo · 2011-07-29T18:41:23.836Z · LW(p) · GW(p)
This seems relevant:
Five: US tax law prohibits public charities from getting too much support from big donors.
Under US tax law, a 501(c)(3) public charity must maintain a certain percentage of "public support". As with most tax rules, this one is complicated. If, over a four-year period, any one individual donates more than 2% of the organization's total support, anything over 2% does not count as "public support". If a single donor supported a charity, its public support percentage would be only 2%. If two donors supported a charity, its public support percentage would be at most 4%. Public charities must maintain a public support percentage of at least 10% and preferably 33.3%. Small donations - donations of less than 2% of our total support over a four-year period - count entirely as public support. Small donations permit us to accept more donations from our major supporters without sending our percentage of public support into the critical zone. Currently, the Singularity Institute is running short on public support - so please don't think that small donations don't matter!
Replies from: MichaelVassar
↑ comment by MichaelVassar · 2011-08-01T15:09:35.565Z · LW(p) · GW(p)
Yes
↑ comment by SarahNibs (GuySrinivasan) · 2011-07-29T22:20:39.413Z · LW(p) · GW(p)
Here's my totally non-binding plan for the extra $1100 that really was just lying around, budgeted but projected not to be spent: If we meet the full challenge, donate $1100 to SingInst and have Microsoft match it as well. If we meet only e.g. 80%, donate 80% of $1100 and have Microsoft match it, and spend the rest on a party I wouldn't have had otherwise and link y'all to tasteful pictures. That's a 3x multiplier on ~1% of the $125,000.
Before your post, bentarm, my plan was somewhat different but I estimate it gave at least a 2.9x multiplier.
Replies from: Benquo↑ comment by Rain · 2011-07-29T16:47:48.170Z · LW(p) · GW(p)
Status affiliation, feeling even better about donating, creating an "event" which can be linked to and discussed, providing encouragement to others, and likely other reasons I'm not thinking of right now.
I'm betting Peter Thiel also used it to judge popular support for the idea when he ran a $400,000 challenge in 2007. IIRC, they didn't even make it halfway by the deadline, which likely reduced his subsequent donations.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2011-08-01T15:11:14.446Z · LW(p) · GW(p)
They made more like $270K, IIRC.
↑ comment by shokwave · 2011-07-29T17:06:35.414Z · LW(p) · GW(p)
Essentially, the claim is that 20 of SingInst's regular donors have extra money lying around that they are willing to donate to SingInst iff someone else donates the same amount.
Bear with my broad strokes here.
Let the utility u of donating x dollars to SingInst be diminishing: u = x^(1/3).
Assume that hedonistic spending can be spread over enough options to make diminishing returns negligible: u = x
The donor then donates their first dollar to SingInst, whereupon hedonistic spending provides them more utils.
The utility u of providing x dollars of a dollar-for-dollar service for SingInst donors is: u = (2x)^(1/3), and the donor gets up to ~1.415 dollars (to gain 2.83 dollars worth of donating utility!) before hedonistic spending is a better option. So this answers the first claim: they are willing to donate extra money iff someone else donates, and they intend to spend it on some other form of utility otherwise. The nature of dollar-for-dollar matching increases the utility they gain from their dollar.
As for the second part... there's no need to commit to a binding agreement to not donate the leftover to SingInst at the end. Say the donor donated a dollar by themselves, then offered up to 41 cents matching contribution. Other donors take advantage of this offer by 20 cents - the donor still has another 21 cents of positive utility before hedonistic spending takes over, but the matching period ends at this point in time. (Why end the matching period? I'll get there in a second). At this point, those 21 cents will generate .21 utils if spent hedonistically, but only .06 utils if donated without matching.
So the nature of diminishing utility will produce this behaviour. It's not so suspicious a premise, in the end.
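Making the thresholds explicit (a minimal check of the figures above, taking the toy utility functions at face value and comparing total rather than marginal utility, which appears to be the comparison intended):
\[
\text{Unmatched: } x^{1/3} \ge x \iff x \le 1 \quad\Rightarrow\quad \text{donating the first dollar beats spending it.}
\]
\[
\text{Matched: } (2x)^{1/3} \ge x \iff 2x \ge x^{3} \iff x \le \sqrt{2} \approx 1.415,
\]
so the donor puts in up to about \$1.41 of their own money, delivering \(2x \approx \$2.83\) to SingInst.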
Replies from: shokwave↑ comment by shokwave · 2011-07-29T17:16:28.588Z · LW(p) · GW(p)
Now, why end the matching period? A utility-maximising donor ought to offer their 41 cents for as long as it takes - the utility of those 41 cents is always going to be higher than any 41-cent section of hedonistic spending (which is where the money is coming from). The answer lies partly in human psychology. A limited-time offer prompts action in a way that open-ended, time-independent offers do not. A progress bar prompts action too. It's also partly from status: successfully completing campaigns like this raises the prestige of SingInst. Meeting the target before the specified end date raises the prestige of both SingInst and its donors.
These factors are convincing enough reasons to time-limit the matching offer, mostly because they solve the problem of the pledge not being fulfilled. Odd quirk of our brains to blame here.
Replies from: drethelin↑ comment by drethelin · 2011-07-30T05:56:09.575Z · LW(p) · GW(p)
don't forget the simple explanation: It's risky to offer to match infinite dollars.
Replies from: shokwave, Clippy↑ comment by shokwave · 2011-07-30T06:01:56.888Z · LW(p) · GW(p)
The rational donor wouldn't offer to match x for y time; they would simply offer to match x. So the donor would simply offer to match the first dollar and forty-one cents donated. There's no risk of accidentally being required to donate more than you'd want to.
comment by Peter Wildeford (peter_hurford) · 2011-07-29T18:07:53.495Z · LW(p) · GW(p)
I understand the SI needs money and I understand a lot of discussion about this has ensued elsewhere, but I'm still skeptical that I can have the most impact with my money by donating to the SI, when I could be funding malaria nets, for instance.
Replies from: Nick_Tarleton, Rain, Kevin↑ comment by Nick_Tarleton · 2011-07-29T18:48:00.696Z · LW(p) · GW(p)
There are two questions here that deserve separate consideration: donating to existential risk reduction vs. other (nearer-term, lower-uncertainty) philanthropy, and donating to SI vs. other x-risk reduction efforts. It seems to me that you should never be weighing SI against malaria nets directly; if you would donate to (SI / malaria nets) conditional on their effectiveness, you've already decided (for / against) x-risk reduction and should only be considering alternatives like (FHI / vaccination programs).
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2011-07-29T19:34:23.529Z · LW(p) · GW(p)
Thanks. You're right, I've been thinking about it wrong; I'll have to reconsider how I approach philanthropy. It's valuable to donate to research anyway, since research is what comes up with things like "malaria nets".
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2011-07-29T20:28:31.819Z · LW(p) · GW(p)
Glad I could help. Thanks for letting me know.
It's valuable to donate to research anyway, since research is what comes up with things like "malaria nets".
Good point; under uncertainty about x-risk vs. near-term philanthropy you might donate to organizations that could help answer that question, like GiveWell or SI/FHI.
↑ comment by Rain · 2011-07-29T18:19:00.742Z · LW(p) · GW(p)
They planned on doing an academic paper on the topic, though it hasn't been completed yet. Here's Anna Salamon's presentation, estimating 8 lives saved per dollar donated to SingInst.
Replies from: jsteinhardt, peter_hurford, David_Gerard↑ comment by jsteinhardt · 2011-07-29T19:59:08.140Z · LW(p) · GW(p)
If a back-of-the-envelope calculation comes up with a number like that, then it is probably wrong.
Replies from: steven0461, Rain↑ comment by steven0461 · 2011-07-29T20:48:15.029Z · LW(p) · GW(p)
I haven't watched the presentation, but 8 lives corresponds to only a one in a billion chance of averting human extinction per donated dollar, which corresponds (neglecting donation matching and the diminishing marginal value of money) to roughly a 1 in 2000 chance of averting human extinction from a doubling of the organization's budget for a year. That doesn't sound obviously crazy to me, though it's more than I'd attribute to an organization just on the basis that it claimed to be reducing extinction risk.
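For reference, the back-of-envelope arithmetic behind those two figures (the roughly \$500,000 annual budget is inferred from the numbers rather than stated in the comment):
\[
\frac{8 \text{ lives per dollar}}{\sim 7 \times 10^{9} \text{ lives at stake}} \approx 10^{-9} \text{ chance of averting extinction per donated dollar,}
\]
\[
10^{-9} \text{ per dollar} \times \sim \$5 \times 10^{5} \text{ (one year's budget, assumed)} \approx 5 \times 10^{-4} \approx \tfrac{1}{2000}.
\]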
Replies from: MichaelVassar↑ comment by MichaelVassar · 2011-08-01T15:37:52.475Z · LW(p) · GW(p)
For what it's worth, this is in line with my estimates, which are not just on the basis of claimed interest in x-risk reduction. I don't think that an order of magnitude or more less than this level of effectiveness could be the conclusion of a credible estimation procedure.
↑ comment by Rain · 2011-07-29T20:27:51.440Z · LW(p) · GW(p)
The topics of existential risk, AI, and other future technologies inherently require the use of very large numbers, far beyond any of those encountered when discussing normal, everyday risks and rewards.
Replies from: steven0461, jsteinhardt↑ comment by steven0461 · 2011-07-29T20:52:01.876Z · LW(p) · GW(p)
Note that the large number used in this particular back-of-envelope calculation is the world population of several billion, not the still much larger numbers involved in astronomical waste.
↑ comment by jsteinhardt · 2011-07-29T22:57:49.382Z · LW(p) · GW(p)
Even if this is so, there is tons of evidence that humans suck at reasoning about such large numbers. If you want to make an extraordinary claim like the one you made above, then you need to put forth a large amount of evidence to support it. And on such a far-mode topic, the likelihood of your argument being correct decreases exponentially with the number of steps in the inferential chain.
I only skimmed through the video, but assuming that the estimates at 11:36 are what you're referring to, those numbers are both seemingly quite high and entirely unjustified in the presentation. It also overlooks things like the fact that utility doesn't scale linearly in number of lives saved when calculating the benefit per dollar.
Whether or not those numbers are correct, presenting them in their current form seems unlikely to be very productive. Likely either the person you are talking to already agrees, or the 8 lives figure triggers an absurdity heuristic that will demand large amounts of evidence. Heck, I'm already pretty familiar with the arguments, and I still get a small amount of negative affect whenever someone tries to make the "donating to x-risk has high expected utility" argument.
I don't think anyone on LW disagrees that reducing xrisk substantially carries an extremely high utility. The points of disagreement are over whether SIAI can non-trivially reduce xrisk, and whether they are the most effective way to do so. At least on this website, this seems like the more productive path of discussion.
Replies from: Vladimir_Nesov, Rain, MugaSofer↑ comment by Vladimir_Nesov · 2011-07-29T23:37:54.912Z · LW(p) · GW(p)
Keep in mind that estimation is the best we have. You can't appeal to Nature for not having been given a warning that meets a sufficient standard of rigor. Avoiding all actions of uncertain character dealing with huge consequences is certainly a bad strategy. Any one of such actions might have a big chance of not working out, but not taking any of them is guaranteed to be unhelpful.
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-07-30T11:09:46.133Z · LW(p) · GW(p)
You can't appeal to Nature for not having been given a warning that meets a sufficient standard of rigor.
From a Bayesian point of view, your prior should place low probability on a figure like "8 lives per dollar". Therefore, lots of evidence is required to overcome that prior.
From a decision-theoretic point of view, the general strategy of believing sketchy (with no offense intended to Anna; I look forward to reading the paper when it is written) arguments that reach extreme conclusions at the end is a bad strategy. There would have to be a reason why this argument was somehow different from all other arguments of this form.
Avoiding all actions of uncertain character dealing with huge consequences is certainly a bad strategy. Any one of such actions might have a big chance of not working out, but not taking any of them is guaranteed to be unhelpful.
If there were tons of actions lying around with similarly huge potential positive consequences, then I would be first in line to take them (for exactly the reason you gave). As it stands, it seems like in reality I get a one-time chance to reduce p(bad singularity) by some small amount. More explicitly, it seems like SIAI's research program reduces xrisk by some small amount, and a handful of other programs would also reduce xrisk by some small amount. There is no combined set of programs that cumulatively reduces xrisk by some large amount (say > 3% to be explicit).
I have to admit that I'm a little bit confused about how to reason here. The issue is that any action I can personally take will only decrease xrisk by some small amount anyways. But to me the situation feels different if society can collectively decrease xrisk by some large amount, versus if even collectively we can only decrease it by some small amount. My current estimate is that we are in the latter case, not the former --- even if xrisk research had unlimited funding, we could only decrease total xrisk by something like 1%. My intuitions here are further complicated by the fact that I also think humans are very bad at estimating small probabilities --- so the 1% figure could very easily be a gross overestimate, whereas I think a 5% figure is starting to get into the range where humans are a bit better at estimating, and is less likely to be such a bad overestimate.
Replies from: paulfchristiano↑ comment by paulfchristiano · 2011-07-31T04:56:40.029Z · LW(p) · GW(p)
From a Bayesian point of view, your prior should place low probability on a figure like "8 lives per dollar". Therefore, lots of evidence is required to overcome that prior.
My prior contains no such provisions; there are many possible worlds where tiny applications of resources have apparently disproportionate effect, and from the outside they don't look so unlikely to me.
There are good reasons to be suspicious of claims of unusual effectiveness, but I recommend making that reasoning explicit and seeing what it says about this situation and how strongly.
There are also good reasons to be suspicious of arguments involving tiny probabilities, but keep in mind: first, you probably aren't 97% confident that we have so little control over the future (I've thought about it a lot and am much more optimistic), and second, that even in a pessimistic scenario it is clearly worth thinking seriously about how to handle this sort of uncertainty, because there is quite a lot to gain.
Of course this isn't an argument that you should support the SIAI in particular (though it may be worth doing some information-gathering to understand what they are currently doing), but that you should continue to optimize in good faith.
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-07-31T16:13:42.859Z · LW(p) · GW(p)
you should continue to optimize in good faith.
Can you clarify what you mean by this?
Replies from: paulfchristiano↑ comment by paulfchristiano · 2011-08-02T07:29:06.324Z · LW(p) · GW(p)
Only that you consider the arguments you have advanced in good faith, as a difficulty and a piece of evidence rather than potential excuses.
↑ comment by Rain · 2011-07-29T23:15:19.269Z · LW(p) · GW(p)
I don't think anyone on LW disagrees that reducing xrisk substantially carries an extremely high utility.
I'm glad you agree.
The points of disagreement are over whether SIAI can non-trivially reduce xrisk, and whether they are the most effective way to do so. At least on this website, this seems like the more productive path of discussion.
I'd be very appreciative to hear if you know of someone doing more.
Replies from: jsteinhardt, multifoliaterose↑ comment by jsteinhardt · 2011-07-30T10:47:16.348Z · LW(p) · GW(p)
Well for instance, certain approaches to AGI are more likely to lead to something friendly than other approaches are. If you believe that approach A is 1% less likely to lead to a bad outcome than approach B, then funding research in approach A is already compelling.
In my mind, a well-reasoned statistical approach with good software engineering methodologies is the mainstream approach that is least likely to lead to a bad outcome. It has the advantage that there is already a large amount of related research being done, hence there is actually a reasonable chance that such an AGI would be the first to be implemented. My personal estimate is that such an approach carries about 10% less risk than an alternative approach where the statistics and software are both hacked together.
In contrast, I estimate that SIAI's FAI approach would carry about 90% less risk if implemented than a hacked-together AGI. However, I assign very low probability to SIAI's current approach succeeding in time. I therefore consider the above-mentioned approach more effective.
Another alternative to SIAI that doesn't require estimates about any specific research program would be to fund the creation of high-status AI researchers who care about Friendliness. Then they are free to steer the field as a whole towards whatever direction is determined to carry the least risk, after we have the chance to do further research to determine that direction.
Replies from: Wei_Dai, Rain, JGWeissman↑ comment by Wei Dai (Wei_Dai) · 2011-07-30T18:30:00.503Z · LW(p) · GW(p)
My personal estimate is that such an approach carries about 10% less risk than an alternative approach where the statistics and software are both hacked together.
I don't understand what you mean by "10% less risk". Do you think any given project using "a well-reasoned statistical approach with good software engineering methodologies" has at least 10% chance of leading to a positive Singularity? Or each such project has a P*0.9 probability of causing an existential disaster, where P is the probability of disaster of a "hacked together" project. Or something else?
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-07-31T00:55:07.034Z · LW(p) · GW(p)
Sorry for the ambiguity. I meant P*0.9.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-07-31T02:15:03.988Z · LW(p) · GW(p)
You said "I therefore consider the above-mentioned approach more effective.", but if all you're claiming is that the above mentioned approach ("a well-reasoned statistical approach with good software engineering methodologies") has a P*0.9 probability of causing an existential disaster, and not claiming that it has a significant chance of causing a positive Singularity, then why do you think funding such projects is effective for reducing existential risk? Is the idea that each such project would displace a "hacked together" project that would otherwise be started?
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-07-31T16:07:54.423Z · LW(p) · GW(p)
EDIT: I originally misinterpreted your post slightly, and corrected my reply accordingly.
Not quite. The hope is that such a project will succeed before any other hacked-together project succeeds. More broadly, the hope is that partial successes using principled methodologies will lead to their wider adoption in the AI community as a whole, and more to the point, that a contingent of highly successful AI researchers advocating Friendliness can change the overall mindset of the field.
The default is a hacked-together AI project. SIAI's FAI research is trying to displace this, but I don't think they will succeed (my information on this is purely outside-view, however).
An explicit instantiation of some of my calculations:
- SIAI approach: 0.1% chance of replacing P with 0.1P
- Approach that integrates with the rest of the AI community: 30% chance of replacing P with 0.9P
In the first case, P basically stays constant; in the second case, it is replaced with 0.97P.
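A minimal sketch of that expected-risk arithmetic (the 0.1%/30% success chances and the 0.1P/0.9P multipliers are the estimates given above; everything else is purely illustrative):

```python
# Expected multiplier on P (the baseline probability of disaster from a
# hacked-together AGI) if a safer project succeeds with probability p_success
# and, on success, scales the risk by risk_multiplier; otherwise P is unchanged.
def expected_risk_multiplier(p_success, risk_multiplier):
    return p_success * risk_multiplier + (1 - p_success) * 1.0

siai = expected_risk_multiplier(0.001, 0.1)       # ~0.9991: P basically stays constant
integrated = expected_risk_multiplier(0.30, 0.9)  # 0.97: P is replaced with 0.97P
print(siai, integrated)
```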
↑ comment by Rain · 2011-07-30T15:47:39.649Z · LW(p) · GW(p)
I noticed you didn't name anybody. Did you have specific programs or people in mind?
We already seem to (roughly) agree on probabilities.
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-08-02T21:14:21.115Z · LW(p) · GW(p)
The only specific plan I have right now is to put myself in a position to hire smart people to work on this problem. I think the most robust way to do this is to get a faculty position somewhere, but I need to consider the higher relative efficiency of corporations over universities some more to figure out if it's worthwhile to go with the higher-volatility route of industry.
Also, as Paul notes, I need to consider other approaches to x-risk reduction as well to see if I can do better than my current plan. The main argument in favor of my current plan is that there is a clear path to the goal, with only modest technical hurdles and no major social hurdles. I don't particularly like plans that start to get fuzzier than that, but I am willing to be convinced that this is irrational.
EDIT: To be more explicit, my current goal is to become one of said high-status AI researchers. I am worried that this is slightly self-serving, although I think I have good reason to believe that I have a comparative advantage at this task.
Replies from: MugaSofer↑ comment by JGWeissman · 2011-07-30T17:39:12.470Z · LW(p) · GW(p)
Another alternative to SIAI that doesn't require estimates about any specific research program would be to fund the creation of high-status AI researchers who care about Friendliness.
That seems more of an alternative within SIAI than an alternative to SIAI. With more funding, their Associate Research Program can promote the importance of Friendliness and increase the status of researchers who care about it.
↑ comment by multifoliaterose · 2011-07-29T23:51:08.204Z · LW(p) · GW(p)
I'd be very appreciative to hear if you know of someone doing more.
Over the coming months I'm going to be doing an investigation of the non-profits affiliated with the Nuclear Threat Initiative with a view toward finding x-risk reduction charities other than SIAI & FHI. I'll report back what I learn but it may be a while.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2011-07-31T17:45:23.741Z · LW(p) · GW(p)
I'm under the impression that nuclear war doesn't pose an existential risk. Do you disagree? If so, I probably ought to make a discussion post on the subject so we don't take this one too far off topic.
Replies from: multifoliaterose↑ comment by multifoliaterose · 2011-07-31T20:23:56.146Z · LW(p) · GW(p)
My impression is that the risk of immediate extinction due to nuclear war is very small but that a nuclear war could cripple civilization to the point of not being able to recover enough to affect a positive singularity; also it would plausibly increase other x-risks - intuitively, nuclear war would destabilize society, and people are less likely to take safety precautions in an unstable society when developing advanced technologies than they otherwise would be. I'd give a subjective estimate of 0.1% - 1% of nuclear war preventing a positive singularity.
Replies from: steven0461, ciphergoth↑ comment by steven0461 · 2011-07-31T21:15:11.577Z · LW(p) · GW(p)
I'd give a subjective estimate of 0.1% - 1% of nuclear war preventing a positive singularity.
Do you mean:
- The probability of PS given NW is .1-1% lower than the probability of PS given not-NW
- The probability of PS is .1-1% lower than the probability of PS given not-NW
- The probability of PS is 99-99.9% times the probability of PS given not-NW
- etc?
↑ comment by multifoliaterose · 2011-07-31T21:29:45.701Z · LW(p) · GW(p)
Good question. My intended meaning was the second of the meanings that you listed: "the probability of a positive singularity is 0.1%-1% lower than the probability of a positive singularity given no nuclear war." I would be interested to hear any thoughts that you have about these things.
Replies from: steven0461↑ comment by steven0461 · 2011-08-01T05:11:32.033Z · LW(p) · GW(p)
I can't think of a mechanism through which recovery would become long-term impossible, but maybe there is one. People taking fewer safety precautions in a destabilized society does sound plausible. There are probably a number of other, similarly important effects of nuclear war on existential risk to take into account. Different technologies (IA, uploading, AGI, Friendliness philosophy) have different motivations behind them that would probably be differently affected by a nuclear war. Memes would have more time to come closer to some sort of equilibrium in various relevant groups. To the extent that there are nontrivial existential risks not depending on future technology, they would have more time to strike. Catastrophes would be more psychologically salient, or maybe the idea of future nuclear war would overshadow other kinds of catastrophe. Power would be more in the hands of those who weren't involved in the nuclear war.
In any case, the effect of nuclear war on existential risk seems like a nontrivial question that we'd have to have a better idea about before we could decide that resources are better spent on nuclear war prevention than something else. To make things more complicated, it's possible that preventing nuclear war would on average decrease existential risk but that a specific measure to prevent nuclear war would increase existential risk (or vice versa), because the specific kinds of nuclear war that the measure prevents are atypical.
The number and strength of reasons we see one way or the other may depend more on time people have spent searching specifically for reasons for/against than on what reasons exist. The main reason to expect an imbalance there is that nuclear war causes huge amounts of death and suffering, and so people will be motivated to rationalize that it will also be a bad thing according to this mostly independent criterion of existential risk minimization; or people may overcorrect for that effect or have other biases for thinking nuclear war would prevent existential risk. To the extent that our misgivings about failing to do enough to stop nuclear war have to do with worries that existential risk reduction may not outweigh huge present death and suffering, we'd do better to acknowledge those worries than to rationalize ourselves into thinking there's never a conflict.
Without knowing anything about specific risk mitigation proposals, I would guess that there's even more expected return from looking into weird, hard-to-think-about technologies like MNT than from looking into nuclear war, because less of the low-hanging fruit there would already have been picked. But more specific information could easily overrule that presumption, and some people within SingInst seem to have pretty high estimates of the return from efforts to prevent nuclear war, so who knows.
Replies from: multifoliaterose↑ comment by multifoliaterose · 2011-08-01T19:39:10.246Z · LW(p) · GW(p)
Thanks for your thoughtful comment.
I can't think of a mechanism through which recovery would become long-term impossible, but maybe there is one.
I have little idea of how likely it is but a nuclear winter could seriously hamper human mobility.
Widespread radiation would further hamper human mobility.
Redeveloping preexisting infrastructure could require natural resources of an order of magnitude comparable to the infrastructure that we have today. Right now we have the efficient market hypothesis to help out with natural resource shortage, but upsetting the trajectory of our development could exacerbate the problem.
Note that a probability of 0.1% isn't so large (even taking into account all of the other things that could interfere with a positive singularity).
Different technologies (IA, uploading, AGI, Friendliness philosophy) have different motivations behind them that would probably be differently affected by a nuclear war. Memes would have more time to come closer to some sort of equilibrium in various relevant groups.
Reasoning productively about the expected value of these things presently seems to me to be too difficult (but I'm open to changing my mind if you have ideas).
To the extent that there are nontrivial existential risks not depending on future technology, they would have more time to strike.
With the exception of natural resource shortage (which I mentioned above), I doubt that this is within an order of magnitude of the significance of other relevant factors, provided that we're talking about a delay on the order of fewer than 100 years (maybe similarly for a delay of 1000 years; I would have to think about it).
Catastrophes would be more psychologically salient, or maybe the idea of future nuclear war would overshadow other kinds of catastrophe.
Similarly, I doubt that this would be game-changing.
Power would be more in the hands of those who weren't involved in the nuclear war.
This seems worthy of further contemplation - is the development of future technologies more likely to take place in Australia than in the current major powers, etc.?
In any case, the effect of nuclear war on existential risk seems like a nontrivial question that we'd have to have a better idea about before we could decide that resources are better spent on nuclear war prevention than something else.
This seems reasonable. As I mentioned, I presently attach high expected x-risk reduction to nuclear war prevention, but my confidence is sufficiently unstable at present that the value of devoting resources to gathering more information outweighs the value of donating to nuclear war reduction charities.
To make things more complicated, it's possible that preventing nuclear war would on average decrease existential risk but that a specific measure to prevent nuclear war would increase existential risk (or vice versa), because the specific kinds of nuclear war that the measure prevents are atypical.
Yes. In the course of researching nuclear threat reduction charities I hope to learn what options are on the table.
Without knowing anything about specific risk mitigation proposals, I would guess that there's even more expected return from looking into weird, hard-to-think-about technologies like MNT than from looking into nuclear war, because less of the low-hanging fruit there would already have been picked.
On the other hand there may not be low hanging fruit attached to thinking about weird, hard-to-think-about technologies like MNT. I do however plan on looking into the Foresight Institute.
Replies from: steven0461↑ comment by steven0461 · 2011-08-01T23:21:24.270Z · LW(p) · GW(p)
Thanks for clarifying and I hope your research goes well. If I'm not mistaken, you can see the 0.1% calculation as the product of three things: the probability nuclear war happens, the probability that if it happens it's such that it prevents any future positive singularities that otherwise would have happened, and the probability a positive singularity would otherwise have happened. If the first and third probabilities are, say, 1/5 and 1/4, then the answer will be 1/20 of the middle probability, so your 0.1%-1% answer corresponds to a 2%-20% chance that if a nuclear war happens then it's such that it prevents any future positive singularities that would otherwise have happened. Certainly the lower end and maybe the upper end of that range seem like they could plausibly end up being close to our best estimate. But note that you have to look at the net effect after taking into account effects in both directions; I would still put substantial probability on this estimate ending up effectively negative, also. (Probabilities can't really go negative, so the interpretation I gave above doesn't really work, but I hope you can see what I mean.)
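A small sketch of that decomposition, using the illustrative 1/5 and 1/4 figures from the comment above (everything else is just arithmetic):

```python
# Decompose the drop in P(positive singularity) attributable to nuclear war:
#   drop = P(war) * P(war prevents an otherwise-achievable singularity | war)
#          * P(positive singularity | no war)
p_war = 1 / 5               # illustrative figure from the comment above
p_ps_given_no_war = 1 / 4   # illustrative figure from the comment above

# Invert the product: a stated drop of 0.1%-1% implies this conditional
# "prevention" probability.
for drop in (0.001, 0.01):
    p_prevention = drop / (p_war * p_ps_given_no_war)
    print(drop, p_prevention)  # 0.001 -> 0.02 (2%), 0.01 -> 0.2 (20%)
```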
Replies from: multifoliaterose↑ comment by multifoliaterose · 2011-08-02T18:25:59.172Z · LW(p) · GW(p)
note that you have to look at the net effect after taking into account effects in both directions; I would still put substantial probability on this estimate ending up effectively negative, also.
I agree and should have been more explicit in taking this into account. However, note that if one assigns a 2:1 odds ratio for (0.1%-1% decrease in x-risk)/(same size increase in x-risk) then the expected value of preventing nuclear war doesn't drop below 1/3 of what it would be if there wasn't the possibility of nuclear war increasing x-risk: still on the same rough order of magnitude.
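A worked version of that bound, under the stated 2:1 odds assumption (the particular effect size is illustrative):

```python
# If preventing nuclear war reduces x-risk by d with probability 2/3 and
# increases it by the same d with probability 1/3 (a 2:1 odds ratio), the
# expected reduction is still a third of what it would be with no downside.
d = 0.005  # any value in the 0.1%-1% range; illustrative
expected_reduction = (2 / 3) * d + (1 / 3) * (-d)
print(expected_reduction, expected_reduction / d)  # d/3, ratio 1/3
```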
↑ comment by Paul Crowley (ciphergoth) · 2011-08-01T05:28:15.902Z · LW(p) · GW(p)
Thanks for the clarification on the estimate. Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we'd have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.
Replies from: gjm, timtyler, multifoliaterose, ArisKatsaris↑ comment by gjm · 2011-08-02T19:56:03.429Z · LW(p) · GW(p)
Building society the first time around, we were able to take advantage of various useful natural resources such as relatively plentiful coal and (later) oil. After a nuclear war or some other civilization-wrecking catastrophe, it might be Very Difficult Indeed to rebuild without those resources at our disposal. It's difficult enough even now, with everything basically still working nicely, to see how to wean ourselves off fossil fuels, as for various reasons many people think we should do. Now imagine trying to build a nuclear power industry or highly efficient solar cells with our existing energy infrastructure in ruins.
So it looks to me as if (1) our best prospects for long-term x-risk avoidance all involve advanced technology (space travel, AI, nanothingies, ...) and (2) a major not-immediately-existential catastrophe could seriously jeopardize our prospects of ever developing such technology, so (3) such a catastrophe should be regarded as a big increase in x-risk.
Replies from: ciphergoth, timtyler↑ comment by Paul Crowley (ciphergoth) · 2011-08-02T21:31:25.053Z · LW(p) · GW(p)
I've heard arguments for and against "it might turn out to be too hard the second time around". I think overall that it's more likely than not that we would eventually succeed in rebuilding a technological society, but that's the strongest I could put it, ie it's very plausible that we would never do so.
If enough of our existing thinking survives, the thinking time that rebuilding civilization would give us might move things a little in our favour WRT AI++, MNT etc. I don't know which side does better on this tradeoff. However I seriously doubt that trying to bring about the collapse of civilization is the most efficient way to mitigate existential risk.
Also, and I hate to be this selfish about it but there it is, if civilization ends I definitely die either way, and I'd kind of prefer not to.
↑ comment by timtyler · 2011-08-03T15:48:19.879Z · LW(p) · GW(p)
Building society the first time around, we were able to take advantage of various useful natural resources such as relatively plentiful coal and (later) oil. After a nuclear war or some other civilization-wrecking catastrophe, it might be Very Difficult Indeed to rebuild without those resources at our disposal.
We have a huge mountain of coal, and will do for the next hundred years or so. Doing without doesn't seem very likely.
Replies from: gjm↑ comment by gjm · 2011-08-03T20:42:13.403Z · LW(p) · GW(p)
How easily accessible is that coal to people whose civilization has collapsed, taking most of the industrial machinery with it? (That's a genuine question. Naively, it seems like the easiest-to-get-at bits would have been mined out first, leaving the harder bits. How much harder they are, and how big a problem that would be, I have no idea.)
Replies from: timtyler↑ comment by timtyler · 2011-08-02T09:41:48.453Z · LW(p) · GW(p)
Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we'd have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.
Technical challenges? Difficulty in coordinating? Are there other candidate setbacks?
↑ comment by multifoliaterose · 2011-08-01T18:52:35.244Z · LW(p) · GW(p)
because we'd have more time to think about existential risk mitigation while we rebuild society
It may be highly unproductive to think about advanced future technologies in very much detail before there's a credible research program on the table, on account of the search tree involving dozens of orders of magnitude. I presently believe this to be the case.
I do think that we can get better at some relevant things at present (learning how to make predictions about probable government behaviors that are as accurate as realistically possible, etc.) and that all else being equal we could benefit from more time thinking about these things rather than less time.
However, it's not clear to me that the time so gained would outweigh a presumed loss in clear thinking post-nuclear war and I currently believe that the loss would be substantially greater than the gain.
As steven0461 mentioned, "some people within SingInst seem to have pretty high estimates of the return from efforts to prevent nuclear war." I haven't had a chance to talk about this with them in detail, but it updates me in the direction of attaching high expected x-risk reduction to nuclear war risk reduction.
My positions on these points are very much subject to change with incoming information.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-17T14:04:21.837Z · LW(p) · GW(p)
It may be highly unproductive to think about advanced future technologies in very much detail before there's a credible research program on the table, on account of the search tree involving dozens of orders of magnitude. I presently believe this to be the case.
How much detail is too much?
↑ comment by ArisKatsaris · 2011-08-02T21:41:17.993Z · LW(p) · GW(p)
because we'd have more time to think about existential risk mitigation while we rebuild society
A more likely result: the religious crazies will take over, and they either don't think existential risk can exist (because God would prevent it) or they think preventing existential risk would be blasphemy (because God ought to be allowed to destroy us). Or they even actively work to make it happen and bring about God's judgment.
And then humanity dies, because both denying and embracing existential risk causes it to come nearer.
↑ comment by MugaSofer · 2013-04-17T13:59:13.023Z · LW(p) · GW(p)
It also overlooks things like the fact that utility doesn't scale linearly in number of lives saved when calculating the benefit per dollar.
Woah, woah! What! Since when?
Unless you mean "scope insensitivity"?
8 lives figure triggers an absurdity heuristic that will demand large amounts of evidence.
Well, sure, the absurdity heuristic is terrible.
Replies from: jsteinhardt↑ comment by jsteinhardt · 2013-04-18T07:58:35.143Z · LW(p) · GW(p)
Woah, woah! What! Since when?
Why would it scale linearly? I agree that it scales linearly over relatively small regimes (on the order of millions of lives) by fungibility, but I see no reason why that needs to be true for trillions of lives or more (and at least some reasons why it can't scale linearly forever).
Well, sure, the absurdity heuristic is terrible.
Re-read the context of what I wrote. Whether or not the absurdity heuristic is a good heuristic, it is one that is fairly common among humans, so if your goal is to have a productive conversation with someone who doesn't already agree with you, you shouldn't throw out such an ambitious figure without a solid argument. You can almost certainly make whatever point you want to make with more conservative numbers.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-19T13:30:56.741Z · LW(p) · GW(p)
Why would it scale linearly? I agree that it scales linearly over relatively small regimes (on the order of millions of lives) by fungibility, but I see no reason why that needs to be true for trillions of lives or more (and at least some reasons why it can't scale linearly forever).
Let's say you currently have a trillion utility-producing thingies - call them humans, if it helps. You're pretty happy. In fact, you have so many that the utility of more is negligible.
Then Doctor Evil appears! He has five people hostage, he's holding them to ransom!
His ransom: kill off six of the people you already have.
Since those trillion people's value didn't scale linearly, reducing them by six isn't nearly as important as five people!
Rinse. Repeat.
Re-read the context of what I wrote. Whether or not the absurdity heuristic is a good heuristic, it is one that is fairly common among humans, so if your goal is to have a productive conversation with someone who doesn't already agree with you, you shouldn't throw out such an ambitious figure without a solid argument. You can almost certainly make whatever point you want to make with more conservative numbers.
Well sure, if we're talking Dark Arts...
Replies from: jsteinhardt, CCC↑ comment by jsteinhardt · 2013-04-20T08:45:46.446Z · LW(p) · GW(p)
Since those trillion people's value didn't scale linearly, reducing them by six isn't nearly as important as five people!
This isn't true --- the choice is between N-6 and N-5 people; N-5 people is clearly better. Not to be too blunt, but I think you've badly misunderstood the concept of a utility function.
Well sure, if we're talking Dark Arts...
Actively making your argument objectionable is very different from avoiding the use of the Dark Arts. In fact, arguably it has the same problem that the Dark Arts has, which is that it causes someone to believe something (in this case, the negation of what you want to show) for reasons unrelated to the validity of the supporting argument.
Replies from: private_messaging, MugaSofer↑ comment by private_messaging · 2013-04-20T09:23:14.209Z · LW(p) · GW(p)
This isn't true --- the choice is between N-6 and N-5 people; N-5 people is clearly better. Not to be too blunt, but I think you've badly misunderstood the concept of a utility function.
Yes. The hypothetical utility function could e.g. take a list of items and then return the utility. It need not satisfy f(A,B)=f(A)+f(B), where "," is list concatenation. For example, this would apply to the worth of books, where a library is more worthy than however many copies of some one book. To simply sum the values of books considered independently is ridiculous; it's like valuing books by weight. The information content of the brain, or whatever else it is that you might value (a process?), is a fair bit more like a book than it is like the weight of the books.
↑ comment by MugaSofer · 2013-04-23T10:43:19.275Z · LW(p) · GW(p)
Actively making your argument objectionable is very different from avoiding the use of the Dark Arts. In fact, arguably it has the same problem that the Dark Arts has, which is that it causes someone to believe something (in this case, the negation of what you want to show) for reasons unrelated to the validity of the supporting argument.
Sorry, I only meant to imply that I had assumed we were discussing rationality, given the low status of the "Dark Arts". Not that there was anything wrong with such discussion; indeed, I'm all for it.
↑ comment by CCC · 2013-04-20T12:45:02.197Z · LW(p) · GW(p)
Since those trillion people's value didn't scale linearly, reducing them by six isn't nearly as important as five people!
This doesn't hold. Those extra five should be added onto the trillion you already have; not considered separately.
Value only needs to increase monotonically. Linearity is not required; it might even be asymptotic.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-23T10:56:18.497Z · LW(p) · GW(p)
Those extra five should be added onto the trillion you already have; not considered separately.
That depends on how you do the accounting here. If we check the utility provided by saving five people, it's high. If we check the utility provided by adding five to a population of a trillion, it's unfathomably low.
This is, in fact, the point.
Intuitively, we should be able to meaningfully analyse the utility of a part without talking about - or even knowing - the utility of the whole. Discovering vast interstellar civilizations should not invalidate our calculations made on how to save the most lives.
Replies from: CCC↑ comment by CCC · 2013-04-23T12:27:46.824Z · LW(p) · GW(p)
Let us assume that we have A known people in existence. Dr. Evil presents us with B previously unknown people, and threatens to kill them unless we kill C out of our A known people (where C<A). The question is, whether it is ethically better to let B people die, or to let C people die. (It is clearly better to save all the people, if possible).
We have a utility function, f(x), which describes the utility produced by x people. Before Dr. Evil turns up, we have A known people; and a total utility of f(A+B). After Dr. Evil arrives, we find that there are more people; we have a total utility of f(A+B) (or f(A+B+1), if Dr. Evil was previously unknown; from here onwards I will assume that Dr. Evil was previously known, and is thus included in A). Dr. Evil offers us a choice, between a total utility of f(A+B-C) or a total utility of f(A).
The immediate answer is that if B>C, it is better for B people to live; while if C>B, then it is better for C people to live. For this to be true for all A, B and C, it is necessary for f(x) to be a monotonically increasing function; that is, a function where f(y)>f(x) if and only if y>x.
Now, you are raising the possibility that there exist a number, D, of people in vast interstellar civilisations who are completely unknown to us. Then Dr. Evil's choice becomes a choice between a total utility of f(A+B-C+D) and a total utility of f(A+D). Again, as long as f(x) is monotonically increasing, the question of finding the greatest utility is simply a matter of seeing whether B>C or not.
I don't see any cause for invalidating any of my calculations in the presence of vast interstellar civilisations.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-23T14:06:15.655Z · LW(p) · GW(p)
It takes effort to pull the lever and divert the trolley. This minuscule amount has to be outweighed by the utility of additional lives. It gets even worse in real situations, where it may cost a great deal to help people.
Replies from: CCC↑ comment by CCC · 2013-04-24T08:27:31.099Z · LW(p) · GW(p)
Ah; now we begin to compare different things. To compare the effort of pulling the lever, against the utility of the additional lives. At this point, yes, the actual magnitude and not just the sign of the difference between f(A+B-C+D) and f(A+D) becomes important; yet D is unknown and unknowable. This means that the magnitude of the difference can only be known with certainty if f(x) is linear; in the case of a nonlinear f(x), the magnitude cannot be known with certainty. I can easily pick out a nonlinear, monotonically increasing function such that the difference between f(A+B-C+D) and f(A+D) can be made arbitrarily small for any positive integer A, B and C (where A+B>C) by simply selecting a suitable positive integer D. A simple example would be f(x)=sqrt(x).
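A quick numerical sketch of that claim, with f(x)=sqrt(x) and purely illustrative values of A, B and C:

```python
import math

def f(x):
    return math.sqrt(x)  # a nonlinear, monotonically increasing utility function

A, B, C = 1000, 5, 6     # illustrative values satisfying A + B > C
for D in (0, 10**6, 10**12, 10**18):
    gap = f(A + B - C + D) - f(A + D)
    print(D, abs(gap))   # the magnitude of the difference shrinks toward zero as D grows
```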
Now, the hypothetical moral agent is in a quandary. Using effort to pick a solution costs utilons. The cost is a simple, straightforward constant; he knows how much that costs. But, with f(x)=sqrt(x), without knowing D, he cannot tell whether the utilons gained by saving the people are greater or less than the utilon cost of picking a solution. (For the purpose of simplicity, I will assume that no-one will ever know that he was in a position to make the choice - that is, his reputation is safe, no matter what he selects). Therefore, he has to make an estimate. He has to guess a value of D. There are multiple strategies that can be followed here:
- Try to estimate the most probable value of D. This would require something along the lines of the Drake equation - picking the most likely numbers for the different elements, picking the most likely size of an extraterrestrial civilisation, and doing some multiplication.
- Take the most pessimistic possible value of D; D=0. That is, plan as though I am in the worst possible universe; if I am correct, and D=0, then I take the correct action, while if I am incorrect and D later proves greater than zero, then that is a pleasant surprise. This guards against getting an extremely unpleasant surprise if it later turns out that D is substantially lower than the most likely estimate; utilons in the future are more likely to go up than down.
- Ignore the cost, and simply take the option that saves the most lives, regardless of effort. This strategy actually reduces the cost slightly (as one does not need to expend the very slight cost of calculating the cost), and has the benefit of allowing immediate action. It is the option that I would prefer that everyone who is not me should take (because if other people take it, then I have a greater chance of getting my life saved at the cost of no effort on my part). I might choose this option out of a sense of fairness (if I wish other people to take this option, it is only reasonable to consider that other people may wish me to take it) or out of a sense of duty (saving lives is important).
↑ comment by A1987dM (army1987) · 2013-04-26T16:48:45.684Z · LW(p) · GW(p)
Try to estimate the most probable value of D.
More precisely, you take the expected value over your probability distribution for D, i.e. if the sum over D of (f(A+B-C+D) - f(A+D))p(D) exceeds the cost of pulling the lever then you pull it.
ETA: In case you're wondering, I used this to display the equation.
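A minimal sketch of that decision rule (the particular f, the values of A, B and C, the distribution over D, and the cost are all illustrative assumptions):

```python
import math

def f(x):
    return math.sqrt(x)  # illustrative nonlinear utility function

A, B, C = 1000, 6, 5                     # illustrative: pulling the lever saves B at the cost of C
p_D = {0: 0.5, 10**6: 0.3, 10**12: 0.2}  # illustrative probability distribution over D

# Pull the lever iff the expected utility gain exceeds the cost of pulling it.
expected_gain = sum((f(A + B - C + D) - f(A + D)) * p for D, p in p_D.items())
cost_of_pulling = 1e-6                   # illustrative
print(expected_gain, expected_gain > cost_of_pulling)
```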
↑ comment by MugaSofer · 2013-04-25T13:50:23.801Z · LW(p) · GW(p)
Remember, we're trying to approximate human morality here. How many people will kill off billions for a cookie?
Replies from: CCC↑ comment by CCC · 2013-04-25T18:24:04.658Z · LW(p) · GW(p)
In the current situation, that is, with 7 billion people known (i.e. A=7 billion), and a general assumption of D=0, and the threat of consequences (court, prison, etc.), very few. But there are still some who would kill one person for a cookie. And there are some who'd start a war - killing hundreds, or thousands, or even millions - given the right incentive (it generally takes a bit more than a cookie).
If there are 3^^^^^^^^^3 people known to exist, and court/prison is easily avoided, then how many people would kill off billions for a cookie? What if it's billions who they've never met, and are never going to meet?
Replies from: TheOtherDave, MugaSofer↑ comment by TheOtherDave · 2013-04-25T18:35:45.579Z · LW(p) · GW(p)
Frankly, if I try to imagine living in a world in which I am as confident that that many people exist as I am that 7 billion people exist today, I'm not sure I wouldn't kill off billions for a cookie.
I mean, if I try to imagine living in a world where only 10,000 people exist, I conclude that I would be significantly more motivated to extend the lives of an arbitrary person (e.g., by preventing them from starving) than I am now. (Leaving aside any trauma related to the dieback itself.)
If a mere six orders of magnitude difference in population can reduce my motivation to extend an arbitrary life to that extent, it seems likely that another twenty or thirty orders of magnitude would reduce me to utter apathy when it comes to an arbitrary life. Add another ten orders of magnitude and utter apathy when it comes to a billion arbitrary lives seems plausible.
What if it's billions who they've never met, and are never going to meet?
I presumed this.
If it's billions of friends instead, I no longer have any confidence in any statement about my preferences, because any system capable of having billions of friends is sufficiently different from me that I can't meaningfully predict it.
If it's billions of people including a friend of mine, I suspect that my friend is worth about as much as they are in the 7billion-person world, + (billions-1) people who I'm apathetic about. I suspect I either get really confused at this point, or compartmentalize fiercely.
↑ comment by CCC · 2013-04-25T19:04:34.066Z · LW(p) · GW(p)
If it's billions of people including a friend of mine, I suspect that my friend is worth about as much as they are in the 7billion-person world, + (billions-1) people who I'm apathetic about. I suspect I either get really confused at this point, or compartmentalize fiercely.
Thinking about this has caused me to realise that I already compartmentalise pretty fiercely. Some of the lines along which I compartmentalise are a little surprising when I investigate them closely... friend/non-friend is not the sharpest line of the lot.
One pretty sharp line is probably-trying-to-manipulate-me/probably-not-trying-to-manipulate-me. But I wouldn't want to kill anyone on either side of that line (I wouldn't even want to be rude to them without reason (though 'he's a telemarketer' is reason for hanging up the phone on someone mid-sentence)). My brain seems to insist on lumping "have never met or interacted with, likely will never meet or interact with" in more-or-less the same category as "fictional".
Replies from: army1987, MugaSofer↑ comment by A1987dM (army1987) · 2013-04-26T12:22:00.293Z · LW(p) · GW(p)
though 'he's a telemarketer' is reason for hanging up the phone on someone mid-sentance
My brain seems to divide people among “player characters” and “non-player characters”, and telemarketers fall in the latter category. (The fact that my native language has a T-V distinction doesn't help, though the distinction isn't exactly the same.)
↑ comment by MugaSofer · 2013-04-26T11:30:24.432Z · LW(p) · GW(p)
My brain seems to insist on lumping "have never met or interacted with, likely will never meet or interact with" in more-or-less the same category as "fictional".
That sounds more like some sort of scope insensitivity than a revealed preference.
Replies from: CCC, CCC, Kawoomba↑ comment by CCC · 2013-04-26T13:18:54.011Z · LW(p) · GW(p)
I don't think it's scope insensitivity in this particular case, because I'm considering one-on-one interactions in this compartmentalisation.
Of course, this particular case did come to my mind as a side-effect of a discussion on scope insensitivity.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-26T14:39:09.554Z · LW(p) · GW(p)
Sorry, I was replying to the last bit. Edited.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-26T16:58:56.156Z · LW(p) · GW(p)
Who the hell downvotes a clarification? Upvoted back to 0.
↑ comment by CCC · 2013-04-26T15:31:05.502Z · LW(p) · GW(p)
That edit does make your meaning clearer. It does so by highlighting that my phrasing was sloppy, so let me try to explain myself better.
Let us say that I hear of someone being mugged. My emotional reaction changes as a function of my relationship to the victim. If the victim is a friend, I am concerned and rush to check that he is OK. If the victim is an acquaintance, I am concerned and check that he is OK the next time I see him. If the victim is someone whom I have never met or interacted with, and am unlikely to meet or interact with, I am mildly perturbed. If the victim is a fictional character, I am also mildly perturbed.
When considering only one person, those last two categories blur together in my mind somewhat.
Replies from: army1987, MugaSofer↑ comment by A1987dM (army1987) · 2013-04-26T16:57:57.529Z · LW(p) · GW(p)
If the victim is someone whom I have never met or interacted with, and am unlikely to meet or interact with, I am mildly perturbed. If the victim is a fictional character, I am also mildly perturbed.
If the victim is someone whom I have never met or interacted with, and am unlikely to meet or interact with, I shrug and think ‘so what? so many people get mugged every day, why should I worry about this one in particular?’ If it's a fictional character, it depends on whether the author is good enough to switch me from far-mode to near-mode thinking.
Replies from: TheOtherDave, MugaSofer↑ comment by TheOtherDave · 2013-04-26T17:29:51.600Z · LW(p) · GW(p)
Well, but this elides differences in the object with differences in the framing. I certainly agree that an author can change how I feel about a fictional character, but an author can also change how I feel about a real person whom I have never met or interacted with, and am unlikely to meet or interact with.
↑ comment by MugaSofer · 2013-04-29T10:10:51.276Z · LW(p) · GW(p)
If the victim is someone whom I have never met or interacted with, and am unlikely to meet or interact with, I shrug and think ‘so what? so many people get mugged every day, why should I worry about this one in particular?’
Am I the only person here who is in any way moved by accounts of specific victims? Nonfiction writers can switch you to near-mode too, or at least they can for me.
Replies from: army1987, CCC↑ comment by A1987dM (army1987) · 2013-04-29T11:57:13.796Z · LW(p) · GW(p)
If the account is detailed enough, it does move me, but not much more than an otherwise identical account that I know is fictional.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-29T17:20:13.840Z · LW(p) · GW(p)
Phew! I was getting worried there.
OK, so you care about detailed accounts. Doesn't that suggest that if you, y'know, knew more details about all those people being mugged, you would care more? So it's just ignorance that leads you to discount their suffering?
Fictional accounts ... well, people never have been great at distinguishing between imagination and reality, which, if you think about it, is actually really useful.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-29T17:26:48.342Z · LW(p) · GW(p)
No, I mean that more details will switch my System 1 into near mode. My System 2 thinks that's a bug, not a feature.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-29T20:54:32.850Z · LW(p) · GW(p)
Really? My System 2 thinks System 2 is annoyingly incapable of seeing details, and System 1 is annoyingly incapable of seeing the big picture, and wants to use System 1 as a sort of zoom function to approximate something less broken.
I guess I'm unusual in this regard?
↑ comment by CCC · 2013-04-30T13:39:50.782Z · LW(p) · GW(p)
Like army1987, I can be moved by accounts of specific victims, whether they are fictional or not. There is a bug here, and the bug is this: I am moved the same amount by an otherwise identical fictional or nonfictional account, where the nonfictional account contains no-one with whom I have ever interacted.
That is, simply knowing that an account is non-fictional doesn't affect my emotional reaction, one way or another. (This doesn't mean I am entirely without sympathy for people I have never met - it simply means that I have equivalent sympathy for fictional characters). This is a bug; ideally, my emotional reaction should take into account such an important detail as whether or not something really happened. After all, what detail could be more important?
Replies from: Kawoomba, MugaSofer↑ comment by Kawoomba · 2013-04-30T14:13:46.746Z · LW(p) · GW(p)
It's not a bug, it's a feature (in some contexts).
Suppose you were playing two games of online chess against an anonymous opponent. You barely lose the first one. Now you're feeling the spirit of competition, your blood boiling for revenge! Should you force yourself to relinquish the thrill of the contest, because "it doesn't really matter"? That would be no fun! :-(
If you're reading a work of fiction, knowing it is fiction, why are you doing so? Because emotional investment is fun? Why would you then sabotage your enjoyment by trying to downsize your emotional investment, since "it's not real"? Also no fun! :-(
If the flawed heuristic you are employing in a certain context works in your favor in that context, switching it off would be dumb (although being vaguely aware of it would not be).
Replies from: CCC↑ comment by CCC · 2013-05-03T14:17:20.172Z · LW(p) · GW(p)
Should you force yourself to relinquish the thrill of the contest, because "it doesn't really matter"? That would be no fun!
Oh, it does matter. There's a real opponent there. That's reality.
If you're reading a work of fiction, knowing it is fiction, why are you doing so? Because emotional investment is fun?
You make a good point.
↑ comment by MugaSofer · 2013-05-01T13:53:16.461Z · LW(p) · GW(p)
I'm not sure I'd characterize that as a "bug", more a feature we need to be aware of and take into account.
If you weren't moved by fictional scenarios, you wouldn't be able to empathize with people in those scenarios - including your future self! We mostly predict other people's actions by using our own brain as a black box, imagining ourselves in their situation and how we would react, so there goes any situation featuring other humans. And we couldn't daydream or enjoy fiction, either.
Would it be useful to turn it off? Maaaybe, but as long as you don't start taking hypothetical people's wishes into account, and stop reading stuff that triggers you, you're fine - I bet the consequences for misuse would be higher than the marginal benefits.
Replies from: CCC↑ comment by CCC · 2013-05-03T14:16:09.164Z · LW(p) · GW(p)
I don't think that empathising with fictional characters should be turned off. I just think that properly calibrated emotions should take all factors into account, with properly relevant weightings. I notice that my emotions do not seem to be taking the 'reality' factor into account, and I therefore conclude that my emotions are poorly calibrated.
My future self would be a potentially real scenario, and thus would deserve all the emotional investment appropriate for a situation that may well come to pass. (He also gets the emotional investment for being me, which is quite large).
I'm not sure whether I should be feeling more sympathy for strangers, or less sympathy for fictional people.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-05-12T20:45:35.117Z · LW(p) · GW(p)
So ... are you saying that they're poorly calibrated, but that's fine and nothing to worry about as long as we don't forget it and start giving imaginary people moral weight? Because if so, I agree with you on this.
Replies from: CCC↑ comment by CCC · 2013-05-14T11:42:34.220Z · LW(p) · GW(p)
More or less. I'm also saying that it might be nice if they were better calibrated. It's not urgent or particularly important, it's just something about myself that I noticed at the start of this discussion that I hadn't noticed before.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-29T10:12:01.070Z · LW(p) · GW(p)
That edit does make your meaning clearer. It does so by highlighting that my phrasing was sloppy, so let me try to explain myself better.
Fair enough.
If the victim is someone whom I have never met or interacted with, and am unlikely to meet or interact with, I am mildly perturbed. If the victim is a fictional character, I am also mildly perturbed.
That depends on much you know about/empathize with them, right?
Replies from: CCC↑ comment by CCC · 2013-04-30T13:31:50.508Z · LW(p) · GW(p)
That depends on much you know about/empathize with them, right?
Yes; but I can know as much about a fictional character as about a non-fictional character whom I have not interacted with. The dependency has nothing to do with the fictionality or lack thereof of the character.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-05-01T14:03:33.981Z · LW(p) · GW(p)
Right, hence me quoting both the section on fictional and non-fictional characters.
To be honest, our brains don't really seem to distinguish between fiction and non-fiction at all; it's merely a question of context. Hence our reactions to fictional evidence and so forth. Lotta awkward biases you can catch from that what with our tendency to "buy in" to compelling narratives.
↑ comment by Kawoomba · 2013-04-26T13:38:48.756Z · LW(p) · GW(p)
It's not a bias if you value an additional dollar less once all your needs are met.
It's not a bias if you value a random human life less if there are billions of others, compared to if there are only a few others.
You may choose for yourself to value a $10 bill the same whether you're dirt poor, or a millionaire. Same with human lives. But you don't get to "that's a bias" others who have a more nuanced and context-sensitive estimation.
Replies from: MugaSofer↑ comment by CCC · 2013-04-26T13:32:39.888Z · LW(p) · GW(p)
Add another ten orders of magnitude and utter apathy when it comes to a billion arbitrary lives seems plausible.
A billion is nine orders of magnitude. As a very rough estimate, then, adding an order of magnitude to the number of lives in existence divides the motivation to extend an arbitrary stranger's life by an order of magnitude. And the same for any other multiplier.
That is, if G is chosen such that f(x)-f(x-1)=G, then f(Mx)-f(Mx-1)=G/M for any given x and any multiplier M. If I then define my hedons such that f(0)=0 and f(1)=1...
...then I get that f(x) is the harmonic series.
For 10,000 people, on this entirely arbitrary (and extremely large) scale, I get a value f(x) between 9 and 10; for seven billion, f(x) lies between 23 and 24 (source)
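A quick check of those figures, using the standard approximation H(n) ≈ ln(n) + γ for the n-th harmonic number (the code itself is purely illustrative):

```python
import math

EULER_MASCHERONI = 0.5772156649

def harmonic(n):
    # H(n) is approximately ln(n) + gamma for large n
    return math.log(n) + EULER_MASCHERONI

print(harmonic(10_000))          # ~9.79, i.e. between 9 and 10
print(harmonic(7_000_000_000))   # ~23.25, i.e. between 23 and 24
```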
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-26T17:12:21.250Z · LW(p) · GW(p)
...then I get that f(x) is the harmonic series.
That's pretty much the natural logarithm of x (plus a constant, plus a term O(1/x)).
Replies from: CCC↑ comment by CCC · 2013-04-27T17:36:03.430Z · LW(p) · GW(p)
Hm. Yes, to the level of approximation I'm using here, I could as easily have used a log function. And would have, if I'd thought of it; the log function is used enough that I'd expect its properties to be easier for whoever reads my post to imagine.
↑ comment by MugaSofer · 2013-04-26T11:28:42.362Z · LW(p) · GW(p)
I mean, if I try to imagine living in a world where only 10,000 people exist, I conclude that I would be significantly more motivated to extend the lives of an arbitrary person (e.g., by preventing them from starving) than I am now. (Leaving aside any trauma related to the dieback itself.)
Well, if the population is that low saving people is guarding against an existential risk, so I would feel the same. Does your introspection yield anything on why smaller numbers matter more?
ETA: your brain can't grasp numbers anywhere near as high as a billion. How sure are you murder matters now?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-26T13:12:36.674Z · LW(p) · GW(p)
How sure are you murder matters now?
It's pretty clear that individual murder doesn't matter to me.
I mean, someone was murdered just now, as I write this sentence, and I care about that significantly less than I care about the quality of my coffee. I mean, I just spent five seconds adjusting the quality of my coffee, which is at least a noticeable quantity of effort if not a significant one. I can't say the same about that anonymous murder.
Oh look, there goes another one. (Yawn.)
The metric I was using was not "caring whether someone is murdered", which it's clear I really don't, but rather "being willing to murder someone," which it's relatively clear that I do, but not nearly as much as I could. (Insert typical spiel here about near/far mode, etc.)
Replies from: nshepperd, MugaSofer↑ comment by nshepperd · 2013-04-26T14:11:49.727Z · LW(p) · GW(p)
I think the resolution to that is that you don't have to have an immediate emotional reaction to care about it. There are lots of good and bad things happening in the world right now, but trying to feel all of them would be pointless, and a bad fit for our mental architecture. But we can still care, I think.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-26T15:38:08.604Z · LW(p) · GW(p)
Well, I certainly agree that I don't have to have an emotional reaction to each event, or indeed a reaction to the event at all, in order to be motivated to build systems that handle events in that class in different ways. I'm content to use the word "care" to refer to such motivation, either as well as or instead of referring to such emotional reactions. Ditto for "matters" in questions like "does murder matter", in which case my answer to the above would change, but that certainly isn't how I understood MugaSofer's question.
Replies from: army1987, MugaSofer↑ comment by A1987dM (army1987) · 2013-04-26T17:03:50.235Z · LW(p) · GW(p)
So the question now is: if you could prevent someone you would most likely never otherwise interact with from being murdered, but that would make your coffee taste worse, what would you do?
Replies from: shminux, TheOtherDave↑ comment by Shmi (shminux) · 2013-04-26T18:00:41.175Z · LW(p) · GW(p)
Don't we make this choice daily by choosing our preferred brand over Ethical Bean at Starbucks?
Replies from: Eliezer_Yudkowsky, army1987, MugaSofer↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-26T23:26:06.710Z · LW(p) · GW(p)
I hear the ethics at Starbucks are rather low-quality and in any case, surely Starbucks isn't the cheapest place to purchase ethics.
Replies from: gwern↑ comment by gwern · 2013-04-27T00:23:31.151Z · LW(p) · GW(p)
Bah! Listen, Eliezer, I'm tired of all your meta-hipsterism!
"Hey, let's get some ethics at Starbucks" "Nah, it's low-quality; I only buy a really obscure brand of ethics you've probably never heard of called MIRI". "Hey man, you don't look in good health, maybe you should see a doctor" "Nah, I like a really obscure form of healthcare, I bet you're not signed up for it, it's called 'cryonics'; it's the cool thing to do". "I think I like you, let's date" "Oh, I'm afraid I only date polyamorists; you're just too square". "Oh man, I just realized I committed hindsight bias the other day!" "I disagree, it's really the more obscure backfire effect which just got published a year or two ago." "Yo, check out this thing I did with statistics" "That's cool. Did you use Bayesian techniques?"
Man, forget you!
/angrily sips his obscure mail-order loose tea, a kind of oolong you've never heard of (Formosa vintage tie-guan-yin)
Replies from: Eliezer_Yudkowsky, None, army1987, Vaniver↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-27T08:24:01.559Z · LW(p) · GW(p)
If you can't pick something non-average to meet your optimization criteria, you can't optimize above the average.
This comment has been brought to you by my Dvorak keyboard layout.
Replies from: TsviBT, gwern↑ comment by TsviBT · 2013-04-27T10:11:04.309Z · LW(p) · GW(p)
If you keep looking down the utility gradient, it's harder to escape local maxima because you're facing backwards.
This comment has been brought to you by me switching from Dvorak to Colemak.
Replies from: wedrifid↑ comment by wedrifid · 2013-04-28T10:30:28.766Z · LW(p) · GW(p)
This comment has been brought to you by me switching from Dvorak to Colemak.
I'm always amazed that people advocate Dvorak. If you are going to diverge from the herd and be a munchkin why do a half-assed job of it? Sure, if you already know Dvorak it isn't worth switching but if you are switching from Qwerty anyway then Colemak (or at least Capewell) is better than Dvorak in all the ways that Dvorak is better than Qwerty.
Dvorak is for hipsters, not optimisers.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-28T13:52:09.140Z · LW(p) · GW(p)
Tim Tyler is the actual optimizer here.
↑ comment by gwern · 2013-04-27T19:22:26.780Z · LW(p) · GW(p)
If you can't pick something non-average to meet your optimization criteria, you can't optimize above the average.
But at the same time, there's only so many possible low-hanging fruits etc, and at some level of finding more fruits, that indicates you aren't optimizing at all...
↑ comment by A1987dM (army1987) · 2013-04-27T10:14:48.039Z · LW(p) · GW(p)
(Had to google “backfire effect” to find out whether you had made it up on the spot.)
EDIT: Looks like I had already heard of that effect, and I even seem to recall E.T. Jaynes giving a theoretical explanation of it, but I didn't remember whether it had a name.
Replies from: gwern↑ comment by gwern · 2013-04-27T19:20:47.342Z · LW(p) · GW(p)
Had to google “backfire effect” to find out whether you had made it up on the spot.
"Like I said, it's a really obscure bias, you've probably never heard of it."
I even seem to recall E.T. Jaynes giving a theoretical explanation of it
Really? I don't remember ever seeing anything like that (although I haven't read all of PT:TLoS yet). Maybe you're conflating it with the thesis using Bayesian methods I link in http://www.gwern.net/backfire-effect ?
↑ comment by A1987dM (army1987) · 2013-04-27T10:12:18.212Z · LW(p) · GW(p)
BTW, for some reason, certain “fair trade” products at my supermarket are astoundingly cheap (as in, I've bought very similar but non-“fair trade” stuff for more); I notice that I'm confused.
↑ comment by TheOtherDave · 2013-04-26T17:27:29.968Z · LW(p) · GW(p)
Judging from experience, the answer is that it depends on how the choice is framed.
That said, I'd feel worse afterwards about choosing the tastier coffee.
↑ comment by MugaSofer · 2013-04-26T13:55:08.787Z · LW(p) · GW(p)
This comment was written under the misapprehension that Dave was speaking normatively.
It's pretty clear that individual murder doesn't matter to me.
I mean, someone was murdered just now, as I write this sentence, and I care about that significantly less than I care about the quality of my coffee. I mean, I just spent five seconds adjusting the quality of my coffee, which is at least a noticeable quantity of effort if not a significant one. I can't say the same about that anonymous murder.
Oh look, there goes another one. (Yawn.)
I always attributed that to the abstract nature of the knowledge. I mean, if you knew anything about the person, you'd care a lot more, which suggests the relevant factor is ignorance, and that's a property of the map, not the territory.
The metric I was using was not "caring whether someone is murdered", which it's clear I really don't, but rather "being willing to murder someone," which it's relatively clear that I do, but not nearly as much as I could. (Insert typical spiel here about near/far mode, etc.)
So you're saying your preferences on this matter are inconsistent?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-26T15:40:49.272Z · LW(p) · GW(p)
Yes, I agree completely that what I'm talking about is an attribute of "the map." (I could challenge whether it's ignorance or something else, but the key point here is that I'm discussing motivational psychology, and I agree.)
So you're saying your preferences on this matter are inconsistent?
Well, that wasn't my point, and I'm not quite sure how it follows from what I said, but I would certainly agree that my revealed preferences are both inconsistent with each other and inconsistent with my stated preferences (which are themselves inconsistent with each other).
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-29T09:22:35.745Z · LW(p) · GW(p)
I would certainly agree that my revealed preferences are both inconsistent with each other and inconsistent with my stated preferences (which are themselves inconsistent with each other).
Right. This is why I don't use "revealed preferences" to derive ethics, personally.
And neither do you, I'm such an idiot. That said...
Here's a scenario:
Humanity has spread throughout the stars and come into its manifest destiny, yada yada. There are really ridiculous amounts of people. Trillions in every star system, and there are a lot of star systems. We all know this future.
Alas! Some aliens dislike this! They plan to follow you to a newly-settled planet - around a billion colonists - and wipe it out. Then they will colonize the planet themselves, and live peacefully building stacks of pebbles or whatever valueless thing aliens do. These aliens are a hive mind, so they don't count as people.
However! You could use your tracking beacon - of some sentimental value to you, it was a present from your dear old grandmother or something - to trick the aliens into attacking and settling on an automated mining world, without killing a single human.
I assume you would be willing to do it to save, say, a small country on modern-day Earth, although maybe I'm projecting here? Everything is certain, because revealed preferences suck at probability math.
Is it worth it?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-29T16:03:02.638Z · LW(p) · GW(p)
Reorienting my understanding of this discussion to be, as you say, normative: yes, when offered a choice between destroying a sentimental but not otherwise valuable item and killing a billion humans, I endorse destroying the item, no matter how many other humans there are in the world.
I even endorse it if everything is uncertain, with the usual expected-value calculation.
That said, as is often true of hypothetical questions, I don't quite agree that the example you describe maps to that choice, but I think it was meant to. If I really think about the example, it's more complicated than that. If I missed the intended point of the example, let me know and I'll try again.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-29T17:01:56.112Z · LW(p) · GW(p)
Reorienting my understanding of this discussion to be, as you say, normative: yes, when offered a choice between destroying a sentimental but not otherwise valuable item and killing a billion humans, I endorse destroying the item, no matter how many other humans there are in the world.
I even endorse it if everything is uncertain, with the usual expected-value calculation.
Glad to hear it. Sorry about that misunderstanding.
That said, as is often true of hypothetical questions, I don't quite agree that the example you describe maps to that choice, but I think it was meant to. If I really think about the example, it's more complicated than that.
Curses. I knew I should have gone with the rogue nanotech.
If I missed the intended point of the example, let me know and I'll try again.
Nope, spot-on :)
↑ comment by MugaSofer · 2013-04-26T11:24:09.841Z · LW(p) · GW(p)
So ... if there are vast alien civilizations, murder is OK?
Replies from: CCC↑ comment by CCC · 2013-04-26T13:04:54.917Z · LW(p) · GW(p)
No. Total utility still drops every time a person is killed, as long as f(x) is strictly monotonically increasing.
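(A quick formalisation of the point, for reference - the thread only specifies that f is strictly increasing, so the concave example below is my own illustrative assumption:)

```latex
% Sketch: let total utility be U(n) = f(n) for a population of n people,
% with f strictly monotonically increasing. The log example is illustrative only.
\begin{align*}
\Delta U &= f(n-1) - f(n) < 0
  && \text{total utility drops with every killing;} \\
|\Delta U| &= \log n - \log(n-1) \to 0 \ \text{as } n \to \infty
  && \text{but for } f(n) = \log n \text{ the drop can shrink below a cookie's worth.}
\end{align*}
```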
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-26T13:32:05.783Z · LW(p) · GW(p)
Wrong thread, mate.
I was replying to the idea that the marginal utility of not killing one person might be less than the utility of a cookie, if there are enough people.
Replies from: CCC↑ comment by CCC · 2013-04-26T13:42:03.294Z · LW(p) · GW(p)
That doesn't mean that it's OK. That means that it is seen as only very, very slightly not OK.
...unless you and I have different definitions of "OK", which I begin to suspect.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-26T14:04:22.081Z · LW(p) · GW(p)
I have to admit, that was sloppily phrased. However, you do seem to be defining "OK" as equivalent to "actively good" whereas I'm using something more like "acceptable".
Replies from: CCC↑ comment by CCC · 2013-04-26T15:22:18.306Z · LW(p) · GW(p)
Well, I'd accept strictly neutral (neither actively evil nor actively good) as OK as well. It seems that your definition of OK includes the possibility of active evil, as long as the amount of active evil is below a certain threshold.
It seems that we're in agreement here: whether or not it is "OK" depends on the definitions we assign to "OK", not on any part of the model under consideration.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-29T10:22:47.699Z · LW(p) · GW(p)
The threshold being whether I can be bothered to stop it. As I said, it was sloppy terminology - I should have said something like "worth less than the effort of telling someone to stop" or some other minuscule cost you would be unwilling to pay. Since any intervention, in real life, has a cost, albeit sometimes a small one, this seems like an important distinction.
↑ comment by Peter Wildeford (peter_hurford) · 2011-07-29T19:35:35.121Z · LW(p) · GW(p)
8 lives per dollar is an awful, awful lot, but I'll definitely check out those resources. If the 8 lives per dollar claim is true, I'll be spending my money on SI.
↑ comment by David_Gerard · 2014-06-12T14:43:48.166Z · LW(p) · GW(p)
Transcript; the precise wording is "You can divide it up, per half day of time, something like 800 lives. Per $100 of funding, also something like 800 lives." It's at 12:31. The slide that was up at that moment in the presentation emphasises the point; this wasn't a casual aside.
Replies from: arundelo↑ comment by arundelo · 2014-06-12T17:31:57.605Z · LW(p) · GW(p)
Anna made a comment about this in January 2014.
↑ comment by Kevin · 2011-07-31T18:25:02.343Z · LW(p) · GW(p)
And for something in the developing world aid space, Village Reach is generally considered to be the most efficient.
http://www.givewell.org/international/top-charities/villagereach
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2011-07-31T19:52:59.709Z · LW(p) · GW(p)
Yeah, I intend to donate a good portion to Village Reach after I do some more thorough research on charity. I don't have that much of an income yet, anyway.
Replies from: wedrifid↑ comment by wedrifid · 2011-07-31T21:28:23.110Z · LW(p) · GW(p)
Yeah, I intend to donate a good portion to Village Reach after I do some more thorough research on charity.
If you already know your decision the value of the research is nil.
Replies from: peter_hurford, Bongo, Benquo↑ comment by Peter Wildeford (peter_hurford) · 2011-08-01T02:10:36.011Z · LW(p) · GW(p)
Lol, good point. What I meant to say was "I probably intend to donate a good portion to Village Reach, if I don't encounter anything in my research to change my mind." It's still probably a biased approach, but I can't pretend I don't already have a base point for my donations.
↑ comment by Bongo · 2011-08-01T10:43:11.175Z · LW(p) · GW(p)
If you already know your decision the value of the research is nil.
No because then if someone challenges your decision you can give them citations! And then you can carry out the decision without the risk of looking weird!
Replies from: wedrifid, MixedNuts↑ comment by wedrifid · 2011-08-01T19:27:33.405Z · LW(p) · GW(p)
No because then if someone challenges your decision you can give them citations! And then you can carry out the decision without the risk of looking weird!
A worthy endeavour!
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-08-03T16:30:46.138Z · LW(p) · GW(p)
Are you being sarcastic here?
Replies from: wedrifid↑ comment by wedrifid · 2011-08-04T00:54:46.632Z · LW(p) · GW(p)
No. Information really is useful for influencing others independently of its use for actually making decisions. It is only the decision-making component that is useless after you have already made up your mind.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-08-04T10:39:19.458Z · LW(p) · GW(p)
Okay, thanks.
↑ comment by MixedNuts · 2011-08-01T10:54:47.380Z · LW(p) · GW(p)
Citing evidence that didn't influence you before you wrote your bottom line is lying.
Replies from: Kaj_Sotala, Bongo, wedrifid↑ comment by Kaj_Sotala · 2011-08-01T15:47:00.157Z · LW(p) · GW(p)
So if:
- Something causes me to believe in X
- I post in public that I believe in X
- I read up more on X and find even more reasons to believe in it
- Somebody challenges my public post and I respond, citing both the old reason and the new ones
Then I'm lying? I don't think that's quite right.
Replies from: MixedNuts↑ comment by MixedNuts · 2011-08-01T15:59:00.993Z · LW(p) · GW(p)
Nah; if your credence in X went up when you read the new reasons, and more importantly if it would have gone down if the opposite of these reasons were true, it's kosher.
If someone challenges your post and you think "Crap, my case doesn't look impressive enough" and selectively search for citations, you're lying.
A grey area is when you believe X because you heard it somewhere but you don't remember where, except that it sounded trustworthy. You can legitimately be pretty confident that X is true and that good sources exist, but you still have to learn a new fact before you can point to them. The reason this isn't an outright lie is that trust chains need occasional snapping. There's an odd and interesting effect - Alice distorts things just a tiny bit when she tells Bob, which basically doesn't affect anything, but Bob doesn't know exactly what the distortions were, so the distortions he adds when he tells Carol can be huge, though his beliefs are basically correct! (A big source is that uncertainty is hard to communicate, so wild guesses often turn into strong claims.)
Replies from: FAWS↑ comment by FAWS · 2011-08-01T16:25:50.562Z · LW(p) · GW(p)
If someone challenges your post and you think "Crap, my case doesn't look impressive enough" and selectively search for citations, you're lying.
"Selectively" is the keyword here. Searching for additional arguments for your position is legitimate if you would retract on discovering negative evidence IMO.
Replies from: MixedNuts↑ comment by Benquo · 2011-08-04T01:45:03.685Z · LW(p) · GW(p)
Nah, you can't choose to un-donate. Whereas you can always make up for lost time. So giving is a case where some mild indecision may be worthwhile.
Obviously the current expected value of your action should be the same as what you expect to conclude in the future. But gathering more info can still have positive expected value, as follows:
Let's say you have a charity budget, and two charities, A and B. Since your budget is a small fraction of the budget of each charity, assume your utility is linear in this decision, so you'll give all your money to the better charity. You think there's a 60% chance that charity A produces 1000 utilons from your donation and charity B produces 100, and a 40% chance that A only produces 10 and B still produces 100. The expected utility of giving to A is 60% × 1000 + 40% × 10 = 604. The expected utility of giving to B is 60% × 100 + 40% × 100 = 100, so you are planning to give to A.
But let's say that by doing some small amount of research (assuming it's costless for simplicity), you can expect to become, correctly, nearly certain (one situation has probability of ~1, the other has probability of ~0). Now if you become certain that A produces 1000 utilons (which you expect to happen 60% of the time), your choice is the same. But if you become certain that A produces only 10 utilons, you give to B instead. So your expected utility is now 60% × 1000 + 40% × 100 = 640, a net gain of 36 expected utilons.
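(The same calculation as a quick runnable sketch; the numbers come from the comment above, while the variable names are mine:)

```python
# Value-of-information sketch for the two-charity example above.
p_good = 0.6                   # chance that charity A is the high-impact one
u_a_good, u_a_bad = 1000, 10   # utilons from donating to A in each state
u_b = 100                      # utilons from donating to B in either state

# Without research: commit to A now, whatever the true state turns out to be.
ev_without_research = p_good * u_a_good + (1 - p_good) * u_a_bad  # 604.0

# With free research that reveals the true state: pick the better charity in each state.
ev_with_research = p_good * max(u_a_good, u_b) + (1 - p_good) * max(u_a_bad, u_b)  # 640.0

print(ev_without_research, ev_with_research, ev_with_research - ev_without_research)
# 604.0 640.0 36.0
```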
Replies from: wedrifid, wedrifid↑ comment by wedrifid · 2011-08-04T11:17:44.430Z · LW(p) · GW(p)
Nah, you can't choose to un-donate. Whereas you can always make up for lost time. So giving is a case where some mild indecision may be worthwhile.
You seem to have missed the point.
Replies from: Benquo↑ comment by Benquo · 2011-08-04T11:24:06.312Z · LW(p) · GW(p)
The point being what?
Replies from: wedrifid↑ comment by wedrifid · 2011-08-04T11:31:06.583Z · LW(p) · GW(p)
All information gained after making a decision is irrelevant for the purpose of making said decision. See also: The Bottom Line.
Replies from: Benquo↑ comment by Benquo · 2011-08-04T12:27:46.433Z · LW(p) · GW(p)
Not if you can remake the decision. I read "I intend [...]" to mean "I expect to make this decision, based on the evidence available now, but will gather more evidence first, which may change my mind."
But people don't do very well at thinking when there's already an expected outcome, so peter_hurford should either give up or work on becoming more curious.
comment by khafra · 2011-07-29T12:37:37.007Z · LW(p) · GW(p)
Having $1000 pre-filled makes me feel uncomfortable. I can understand the reasoning behind anchoring to a higher number, but I can't quite explain why it makes me feel uncomfortable about contributing at all. Perhaps a running-average pre-fill, like the Humble Indie Bundle 3 used, would be better.
Replies from: Rain, rhollerith_dot_com↑ comment by Rain · 2011-07-29T16:44:23.583Z · LW(p) · GW(p)
The method of donation will change what you see: the Causes.com form started with a pre-fill of $25.
The Humble Indie Bundle team actually did A/B testing with multiple donation forms: one included a set dollar amount, another a rolling average, and there was at least one other. Such testing may show that a running average would discourage large donors, thereby reducing overall donations.
Replies from: handoflixue↑ comment by handoflixue · 2011-07-29T19:57:45.834Z · LW(p) · GW(p)
Such testing may show
I'm not sure if this is bad word choice, but if you genuinely don't know the results then it seems disingenuous to focus on one of the three specific results without offering any further support for that stance. (If you do know the results then I would love to see them ^.^)
Replies from: Rain↑ comment by Rain · 2011-07-29T20:08:09.258Z · LW(p) · GW(p)
I don't know the results of their testing. It was briefly discussed on Hacker News.
↑ comment by RHollerith (rhollerith_dot_com) · 2011-07-29T13:36:09.821Z · LW(p) · GW(p)
A running-average pre-fill sounds much better to me.
comment by lukeprog · 2011-11-07T18:48:48.735Z · LW(p) · GW(p)
Not many people heard about the Singularity Summit in Salt Lake City. Here is part of Luke Nosek's talk that struck me:
I was a futurist all my life... but there was a strange detour [as a result of] my time with Paypal...
We all [the "Paypal mafia"] went off and started more companies [Yelp, YouTube, etc.]... and what you'd do in your 20s if you got that level of success was, "Well, I have some money. Now I need some more." This was the mentality...
In 2008 this changed for me, almost like a spiritual conversion. I met... William and Michael Andregg, who decided in their freshman year they wanted to cure aging... So they set out [with Halcyon Molecular] to build the perfect gene sequencing machine...
I was trying to help them incorporate the company and set up the right share structure, and they said, "We don't have a share structure... We're all gonna give it away... to a foundation to help cure aging and disease."
And I thought, "Well that's kind of quaint. I've gotta teach these entrepreneurs about how to set up a real business, how to make money. You've gotta divide up your shares, people are gonna fight over them... and that's the primary motivator for running a business, is to make money."
It hit me during that conversation... is that the purpose of Halcyon Molecular was never to make money... it was to solve the greatest problem of mankind: our slavery to our biological form...
That changed my way of thinking about what I was doing in the world. I wasn't there just to make money... I wanted to find more William and Michael Andreggs: more entrepreneurs who were building companies that were developing breakthrough technologies that would enable a positive Singularity for the world within our lifetimes.
comment by hairyfigment · 2011-08-27T03:38:13.948Z · LW(p) · GW(p)
Donated another $500.
comment by steven0461 · 2011-08-16T01:24:34.583Z · LW(p) · GW(p)
I just noticed this hasn't been posted to SL4; I could do it, but maybe it would be better coming from someone at SingInst?
Replies from: JGWeissman↑ comment by JGWeissman · 2011-08-17T16:53:03.140Z · LW(p) · GW(p)
I think it is a good signal of more broad-based support when this sort of thing is promoted by supporters. Go for it.
(And getting things done can be more important than making things "official".)
comment by nykos · 2011-08-06T01:59:36.868Z · LW(p) · GW(p)
If you really want to save lives, you'd better donate to people who do more than write papers. Aubrey de Grey's institute might be a better start.
The bottom line is, SingInst is just a huge money drain. It really doesn't do anything useful; all it has ever produced is a bunch of papers. It actually does something worse, namely subsidizing a slacker-genius like Yudkowsky, who really should find better uses for his mind than armchair philosophy about "friendly AI" when we don't even have the knowledge to build an AI with the intelligence of a 10-year-old. Mr. Yudkowsky can actually build not one but several intelligences greater than probably 95% of humans on the planet - all of them almost guaranteed to be friendly. He simply has to shave that ugly beard, stop being so nerdy, and actually meet smart women just like him. His IQ is probably above the already-high Ashkenazi average, so having, say, 10 children and directing each toward a career in a field that could potentially eliminate human aging and death would probably do more for humanity than endless philosophizing ever will.
Ditto for the rest of you who know you are smarter than the rest of humanity, but still allow mentally-challenged people to outbreed you, which results in decreasing the proportion of people on this planet whose brains can actually understand science and rationality.
Replies from: gwern, Mitchell_Porter↑ comment by Mitchell_Porter · 2011-08-06T02:23:38.752Z · LW(p) · GW(p)
The actual bottom line is that, as a potential confounding factor that might prevent a singularity from ever happening, dysgenic decline is even less of a threat than global warming. But thanks for playing!