The mind-killer
post by Paul Crowley (ciphergoth) · 2009-05-02T16:49:19.539Z · LW · GW · Legacy · 160 comments
Can we talk about changing the world? Or saving the world?
I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it. So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make. Given that existential risk is also one of the major themes of Overcoming Bias and of Eliezer's work, it's striking that we don't talk about it more here.
One reason, of course, was the bar until yesterday on talking about artificial general intelligence; another is the many who state in so many words that they are not concerned about their contribution to humanity. But I think a third is that many of the things we might do to address existential risk, or other issues of concern to all humanity, get us into politics, and we've all had too much of a certain kind of online argument about politics that descends into a stale rehashing of talking points and point-scoring.
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening. How can we use what we discuss here to be able to talk about politics without spiralling down the plughole?
I think it will help in several ways that we are largely a community of materialists and expected utility consequentialists. For a start, we are freed from the concept of "deserving" that dogs political arguments on inequality, on human rights, on criminal sentencing and so many other issues; while I can imagine a consequentialism that valued the "deserving" more than the "undeserving", I don't get the impression that's a popular position among materialists because of the Phineas Gage problem. We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.
For example, framed this way inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.
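The diminishing-marginal-utility argument above can be made concrete with a toy calculation. This is a minimal sketch, not anything from the post itself: it assumes logarithmic utility of wealth, a standard but by no means unique choice of concave wealth/utility curve, and the numbers are purely illustrative.

```python
import math

def total_utility(wealths):
    """Sum of log-utilities: one standard model of diminishing marginal utility."""
    return sum(math.log(w) for w in wealths)

# Same total wealth (200 units), divided equally vs. unequally.
equal = [100, 100]
unequal = [190, 10]

# For any concave utility curve, the equal division yields higher total utility.
assert total_utility(equal) > total_utility(unequal)

# The utility "cost" of the unequal distribution, in log-wealth units --
# the quantity the post says we could in principle measure.
cost = total_utility(equal) - total_utility(unequal)
```

The consequentialist defence of the market then amounts to the claim that unequal incentives raise total output by enough that `total_utility(unequal_but_larger_pie)` exceeds `total_utility(equal)`, more than paying for this cost.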
However, I'm more confident of the need to talk about this question than I am of my own answers. There's very little we can do about existential risk that doesn't have to do with changing the decisions made by public servants, businesses, and/or large numbers of people, and all of these activities get us straight into the world of politics, as well as the world of going out and changing minds. There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?
160 comments
Comments sorted by top scores.
comment by HalFinney · 2009-05-02T19:27:14.237Z · LW(p) · GW(p)
It's not obvious that the best way to reduce existential risk is to actually work on the problem. Imagine if every farmer put down his plow and came to the university to study artificial intelligence research. Everyone would starve. It may well be that someone's best contribution is to continue to write software to do billing for health insurance, because that helps keep society running, which causes increased wealth, which then funds and supports people who specialize in researching risks among other fields.
I suspect that actually, only a small percentage of people, even of people here, could usefully learn the political truths relevant to existential risk mitigation via the kind of discussion you are proposing. Very few people are in a position to cause political change. The marginal utility gain for the average person to learn the truth on a political matter is practically zero due to his lack of influence on the political process. The many arguments against voting apply to this question as well, of seeking political truth; and even more so, because it's harder to ascertain political truths than to vote.
Most interest in politics is IMO similar to interest in sports or movies. It's fun, and it offers an opportunity to show off a bit, gives something to talk and socialize about, helps people form communities and define their interests. But beyond these kinds of social goals, there is no true value.
Most of the belief that one is in a position where knowing political truths is important, is likely to be self-deception. We see ourselves as being potentially more important and influential than we are likely ever to become. This kind of bias has been widely documented in many fields.
To me, politics is not so much the mind-killer as the mind-seducer. It leads us to believe that our opinions matter, it makes us feel proud and important. But it's all a lie. Politics is a waste of time and should be viewed simply as a form of entertainment. Now entertainment can be good, we all need a break from serious work and politics may be as valid as any other form of recreation, but we here should recognize that and not inflate its importance.
Replies from: AnnaSalamon, steven0461, Kakun, mtraven, Vladimir_Nesov
↑ comment by AnnaSalamon · 2009-05-03T00:19:24.593Z · LW(p) · GW(p)
The set of people seriously working to reduce existential risks is very small (perhaps a few hundred, depending on who and how you count). This gives strong general reason to suppose that the marginal impact of an individual can be large, in cases where the individual aims to reduce existential risks directly and is strategic/sane/rational about how (and not in cases where the individual simply goes about their business as one of billions in the larger economy).
Many LW readers are capable of understanding that there are risks, thinking through the differential impact their donations would have on different kinds of risk mitigation, and donating money in a manner that would help. Fewer, but still many, are also capable of improving the quality of thought regarding existential risks in relevant communities (e.g., in the academic departments where they study or work, or on LW or other portions of the blogosphere). And while I agree with Hal's point that most politics is used as entertainment, there is reason to suppose that improving the quality of discussion of a very-high-impact, under-researched, tiny-numbers-of-people-currently-involved topic like existential risks can improve both (a) the well-directedness of resources like mine that are already being put toward existential risks, and (b) the amount of such resources, in dollars and in brainpower.
↑ comment by steven0461 · 2009-05-02T19:51:49.209Z · LW(p) · GW(p)
increased wealth, which then funds and supports people who specialize in researching risks
Would increased average wealth help risk-fighters more than risk-creators? It's not obvious to me either way. What does seem obvious is that from a utilitarian perspective society is hugely underinvesting in risk-fighting and everything else with permanent effects.
Replies from: MBlume, mattnewport, rwallace
↑ comment by MBlume · 2009-05-02T19:53:31.551Z · LW(p) · GW(p)
I believe Eliezer has made a strong case that Moore's Law, for example, mostly benefits the risk-producers
Replies from: MichaelHoward, MichaelHoward, MichaelHoward
↑ comment by mattnewport · 2009-05-02T21:23:02.560Z · LW(p) · GW(p)
What does seem obvious is that from a utilitarian perspective society is hugely underinvesting in risk-fighting and everything else with permanent effects.
That's not obvious to me, and even if it were I don't take a utilitarian perspective.
If you think there is underinvestment in risk fighting you have to come up with arguments to persuade people that don't rely on a utilitarian perspective since most people don't take that perspective when making decisions. Or you can try and find ways of increasing investment that don't rely on persuading large numbers of people.
Replies from: steven0461
↑ comment by steven0461 · 2009-05-02T23:29:11.730Z · LW(p) · GW(p)
That utilitarianism implies one should do things with permanent effects comes from the future being much bigger than the present, and the probability of affecting it being smaller but not nearly proportionally smaller.
I agree with your second paragraph.
Replies from: mattnewport
↑ comment by mattnewport · 2009-05-03T00:08:08.487Z · LW(p) · GW(p)
Even granting that, it's not obvious to me that society is underinvesting in risk fighting. Many of the suggestions for countering global warming for example imply reduced economic growth. It is not obvious to me that the risks of catastrophic global warming outweigh the expected losses from reduced growth from a utilitarian perspective. Any investment in risk fighting carries an opportunity cost in a foregone investment in some other area. The right choice from a utilitarian perspective depends on judgements of expected risk vs. the expected benefits of alternative courses of action. I think the best choices are far from obvious.
Replies from: steven0461
↑ comment by steven0461 · 2009-05-03T00:20:15.304Z · LW(p) · GW(p)
Wholly agree on global warming; the best reference I know of on extreme predictions is this. I'm thinking more of future technologies (the self-replicating and/or intelligent kind), but also of building up the general intellectual background and institutions to deal rationally with unknown unknowns.
↑ comment by rwallace · 2009-05-02T22:56:02.033Z · LW(p) · GW(p)
The assumption being made here is that actions taken with the intent of reducing existential risk will actually have the effect of reducing it rather than increasing it. This assumption seems sadly unlikely to be correct.
Replies from: steven0461
↑ comment by steven0461 · 2009-05-02T23:26:14.220Z · LW(p) · GW(p)
"Actions taken with the intent to prevent event X make event X less likely" is going to be my default belief unless there's some strong evidence to the contrary.
Replies from: AnnaSalamon, mattnewport, rwallace
↑ comment by AnnaSalamon · 2009-05-03T01:39:02.181Z · LW(p) · GW(p)
Or, more particularly: "Actions taken after carefully asking what the evidence implies about the most effective means of making X less likely, and then following out the means with best expected value, make event X less likely".
mattnewport's counterexamples are good, but they are examples of what happens when "intent to reduce X" is filtered through a political system that incentivizes the appearance that something will be done, that penalizes public acknowledgement of unpleasant truths, and that does not understand science. There is reason to suppose we can do better -- at least, there's reason to assign a high enough probability to "we may be able to do better" for it to be clearly worth the costs of investigating particular issue X's.
Replies from: mattnewport
↑ comment by mattnewport · 2009-05-03T01:50:32.240Z · LW(p) · GW(p)
There is reason to hope we can do better but a sobering lack of evidence that such hope is realistic. That's not a reason not to try but it seems we can agree that mere intent is far from sufficient.
Even supposing that it is possible to devise a course of action that we have good reason to believe will be effective, there is still a huge gulf to cross when it comes to putting that into action given current political realities.
Replies from: AnnaSalamon, AnnaSalamon
↑ comment by AnnaSalamon · 2009-05-03T02:21:14.480Z · LW(p) · GW(p)
Even supposing that it is possible to devise a course of action that we have good reason to believe will be effective, there is still a huge gulf to cross when it comes to putting that into action given current political realities.
This depends partly on what sort of "course of action" is devised, and how many people are needed to put it into action. Francis Bacon's successful spread of the scientific method, Louis Pasteur's germ theory, whoever it was who convinced doctors to wash their hands between childbirths, the invention of the printing press, and the invention of modern fertilizers sufficient to keep larger parts of the world fed... provide historical precedents for the idea that small groups of good thinkers can sometimes have predictably positive impacts on the world without extensively and directly engaging global politics/elections/etc.
↑ comment by AnnaSalamon · 2009-05-03T01:58:06.554Z · LW(p) · GW(p)
There is reason to hope we can do better but a sobering lack of evidence that such hope is realistic.
[I'd edited my previous comment just before mattnewport wrote this; I'd previously left my comment at "There is reason to suppose we can do better", then had decided that that was overstating the evidence and added the "--at least...". mattnewport probably wrote this in response to the previous version; my apologies.]
As to evaluating the evidence: does anyone know where we can find data as to whether relatively well-researched charities do tend to improve poverty or other problems to which they turn their attention?
Replies from: MichaelVassar
↑ comment by MichaelVassar · 2009-05-03T07:43:10.482Z · LW(p) · GW(p)
givewell.net
↑ comment by mattnewport · 2009-05-03T00:17:02.577Z · LW(p) · GW(p)
Alcohol prohibition, drug prohibition, the criminalization of prostitution, banking regulations designed to reduce bank failures due to excessive risk taking, bailing out automakers to prevent bankruptcy, policies designed to prevent terrorist attacks such as torturing prisoners... All are examples of actions taken with the intent to prevent X which have quite a lot of evidence to suggest that they did not make X less likely.
↑ comment by Kakun · 2009-05-03T21:46:30.976Z · LW(p) · GW(p)
Most interest in politics is IMO similar to interest in sports or movies. It's fun, and it offers an opportunity to show off a bit, gives something to talk and socialize about, helps people form communities and define their interests. But beyond these kinds of social goals, there is no true value.
I'm not totally sure what you mean by this. With that said, it does matter very much how the government distributes its resources. While the government is admittedly inefficient, that doesn't mean that it can't be improved. Since politics determines how those resources are distributed, wouldn't becoming involved in politics be a valid and important way to gain support for your favored causes (i.e. existential risk mitigation)? Declaring one method of gaining support to be automatically invalid, no matter the circumstances, won't help you.
↑ comment by Vladimir_Nesov · 2009-05-02T19:44:06.667Z · LW(p) · GW(p)
Are there currently enough soldiers? What is the best way to recruit them? Existential risk is a high-payoff and generally misunderstood issue. It looks like there is no strong community of professionals working on it at the moment. In any case, there are existing organizations, and their merits and professional opinion should be considered before anyone commits to anything.
comment by Scott Alexander (Yvain) · 2009-05-03T00:06:41.562Z · LW(p) · GW(p)
I agree with ciphergoth that we would probably have an easier time discussing political issues than some other communities, and I agree with HalFinney that it's probably not a very good use of our time anyway. Let's say that everyone on LessWrong agrees on a solution to some political problem. So what? We already have lots of good ideas no one will listen to. It doesn't take a long-time reader of Overcoming Bias to realize marijuana criminalization isn't working so well, but so far the efforts of groups with far more resources than ourselves have been mostly in vain.
If someone came up with a new idea for pulling ropes sideways, that might be useful. For example, Robin's idea of futarchy is interesting, although so large-scale that it would be very hard to implement. If someone came up with a suggestion that brilliant, but for a smaller problem, it might do some good. But I always interpreted our posting policies to permit that sort of thing anyway.
But I think the best thing we could possibly do would be to raise the sanity waterline - a rising tide lifts all boats. That means coming up with compact, attractive summaries of our key findings and spreading them as far as possible. More on this later, possibly.
Replies from: steven0461, JGWeissman
↑ comment by steven0461 · 2009-05-03T00:11:23.110Z · LW(p) · GW(p)
Politics discussion by rationalists is likely to have the most impact when it's about issues that are important, but that aren't widely recognized as such and therefore have relatively few people pulling on the rope. I don't see any point in discussing the Iraq war, say.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2009-05-03T00:22:58.813Z · LW(p) · GW(p)
Political action by rationalists is likely to have the most impact on such topics. But there are already some such topics we know about (global existential risk, for example, or teaching rationality in schools). What do we gain by discovering several more of these and then discussing them?
↑ comment by JGWeissman · 2009-05-03T18:00:04.563Z · LW(p) · GW(p)
I agree that it is not a good use of our time to discuss political issue on Less Wrong. In fact, I think it would be harmful, because it would drown out other discussion and attract people who are not prepared to discuss it rationally.
However, we should discuss politics in other forums, using what we have learned here. We should be able to avoid seeing arguments as soldiers. I would like to spread rationality techniques among those who regularly participate in politics. (Though I am not sure how. Leading by example has been too subtle in my experience, and direct instruction leads to emotional defensiveness. It might be interesting to have debates moderated by a Less Wrong member, where it could be seen as their proper role to point out biases.)
Replies from: orthonormal
↑ comment by orthonormal · 2009-05-04T23:30:01.671Z · LW(p) · GW(p)
It might be interesting to have debates moderated by a Less Wrong member, where it could be seen as their proper role to point out biases.
I immediately thought of the Confessors...
comment by nazgulnarsil · 2009-05-03T11:21:32.668Z · LW(p) · GW(p)
"We already have lots of good ideas no one will listen to."
This is my primary thought on all such sentiments. The best thing for people here to do would probably be to stop worrying about altruism and start trying to get rich. Once you're rich, your altruism will actually mean something.
Replies from: RobinHanson
↑ comment by RobinHanson · 2009-05-03T19:37:08.718Z · LW(p) · GW(p)
Most of you are rich by historical standards, and by the standards of the world. So think carefully about just how "rich" will be "enough" to "actually mean something."
Replies from: Daniel_Burfoot, nazgulnarsil
↑ comment by Daniel_Burfoot · 2009-05-04T13:58:15.814Z · LW(p) · GW(p)
I'm not sure your standard of wealth is the correct one. Most modern Americans aren't wealthy enough to hire full-time servants; by that standard of wealth there are probably more wealthy people in India, and were probably more wealthy Americans per capita in the 1920s.
I interpret NN's statement as follows: "the wealth distribution has a long tail, so that the majority of philanthropic impact is caused by outliers (Extremistan); it's more important to try to become an outlier yourself than to worry about whether to donate your yearly $50 to Greenpeace".
Replies from: Z_M_Davis
↑ comment by Z_M_Davis · 2009-05-04T16:20:09.459Z · LW(p) · GW(p)
Most modern Americans aren't wealthy enough to hire full-time servants; by that standard of wealth there are probably more wealthy people in India, and were probably more wealthy Americans per capita in the 1920s.
Don't you think modern household convenience machines are more useful than a servant? Think of electric lights, dishwashers, clotheswashers, personal computers, &c., &c.
↑ comment by nazgulnarsil · 2009-05-13T20:06:23.017Z · LW(p) · GW(p)
Money is a stand-in for other (harder-to-quantify) metrics of impact on the future. Resource distribution in general would be better if it were allocated rationally, would it not? Thus we should try to take as much control of resource distribution as we can. In contrast, you're speaking from the perspective of satisfying material wants, by which standard we all already live like kings of other ages.
comment by byrnema · 2009-05-02T18:05:51.923Z · LW(p) · GW(p)
what traps can we defuse in advance?
We care about saving the world and we care about the truth, so sometimes we start caring too much about the ideas that we think represent those things. How can we foster detachment? How can we encourage people to consider an idea even if they don't like it, and then encourage people to relinquish an idea after it's been considered and evenly rejected?
The following paradigm has worked for me:
It's natural to be afraid of considering an idea that we know is false. Thus it is useful to occasionally practice considering ideas that we don't like in order to find that nothing bad happens to our brains or ourselves when we consider them.
It is really important to be able to consider bad ideas, not just because the bad idea might be a good idea, but because it is only through empathy (identification) with an idea that you will be able to find the right counter-argument that will encourage the holder of the idea to relinquish it. (Otherwise, as I'm sure you've observed, the same argument just gets presented another way, another time.)
Replies from: Steve_Rayhawk
↑ comment by Steve_Rayhawk · 2009-05-04T09:18:42.843Z · LW(p) · GW(p)
it is only through empathy (identification) with an idea that you will be able to find the right counter-argument that will encourage the holder of the idea to relinquish it. (Otherwise, as I'm sure you've observed, the same argument just gets presented another way, another time.)
Related: Is That Your True Rejection?, Words as Mental Paintbrush Handles (arguments as paintbrush handles for emotional responses).
Eliezer's counter-argument in "The Pascal's Wager Fallacy Fallacy" is an example of this mistake. Arguments from the Pascal's Wager Fallacy aren't paintbrush handles for expected utility computations, they're paintbrush handles for the fear of being tricked in confusing situations and the fear of exhibiting markers of membership in a ridiculed or rejected group.
comment by byrnema · 2009-05-03T12:33:54.931Z · LW(p) · GW(p)
Survey: are you motivated to improve or save the world?
This survey aims to determine if there is significant consensus or disparity. It is in response to the datapoint presented here.
If you would like to qualify or explain your response, feel free to do so as a comment to the appropriate response.
Note that this is a general solution to the problem of conducting a quick off-the-cuff survey on LW without affecting karma, but you need to be able to view negative scoring comments.
If you want to leave me with positive karma, please keep the survey neutral and vote the parent (this) up instead.
↑ comment by Cameron_Taylor · 2009-05-04T03:02:18.633Z · LW(p) · GW(p)
Note that this is a general solution to the problem of conducting a quick off-the-cuff survey on LW without affecting karma, but you need to be able to view negative scoring comments.
I like your solution Byrnema!
Something you could consider is adding "(and downvote the downvote post to neutralise karma)" to the two alternatives. This somewhat alleviates the problem of the non-displayed negative-karma post.
Edit: Pardon me, I just realised that my replying obfuscates the survey. Since people will inevitably have comments on a topic that don't qualify as either 'yes' or 'no', an extra post by the surveyor into which replies can be made would be useful.
comment by steven0461 · 2009-05-02T18:50:01.518Z · LW(p) · GW(p)
Here's Wikipedia's list of Forbidden Words, which I think has some good examples of how language can be subtly loaded on controversial / emotionally charged issues. Diligently watching out for that sort of thing is probably one of the best things we could do to avoid political discussions degenerating.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-05-02T18:53:37.762Z · LW(p) · GW(p)
That doesn't cut it. An easy-to-use, fairly effective technique, but not a game-defining one. Try enforcing that on a random crowd.
Replies from: steven0461
↑ comment by steven0461 · 2009-05-02T19:07:08.540Z · LW(p) · GW(p)
We could consider making a list of similar guidelines that we wouldn't want to enforce generally, but that together could provide a sort of cognitive clean room to discuss super-touchy subjects in. "Never mention how someone's false beliefs could arise from flaws in their personality even when that's actually happening" seems like another important one. Probably ban sarcasm. Possibly even ban anecdotes and analogies.
Replies from: MBlume, AnnaSalamon, Vladimir_Nesov
↑ comment by MBlume · 2009-05-02T19:45:09.298Z · LW(p) · GW(p)
"Never mention how someone's false beliefs could arise from flaws in their personality even when that's actually happening"
If two people have a persistent disagreement of fact, eventually the inescapable conclusion is that they do not fully trust one another as rationalists. Exploring how this came to be the case is the first step to changing the situation.
I think ideally what we need is a space in which we can suggest flaws in a person's personality, and still be friends the next day. Is that possible?
Replies from: steven0461
↑ comment by steven0461 · 2009-05-02T23:52:01.020Z · LW(p) · GW(p)
Discussions among rationalists needn't involve differences of opinion; they can instead involve differences of personal impression. That said, there are real differences of opinion among rationalists. I'm not sure, however, that we need to resort to psychoanalysis to resolve them -- after all, argument screens off personality.
↑ comment by AnnaSalamon · 2009-05-03T11:09:52.972Z · LW(p) · GW(p)
We could consider making a list of similar guidelines that we wouldn't want to enforce generally, but that together could provide a sort of cognitive clean room to discuss super-touchy subjects in.
Great idea. I'd say the biggest useful guideline here is that on mind-killing subjects we should make a norm of only saying the pieces we actually know. That is, we should cite evidence for all conclusions, or, better still, cite the real causes of our beliefs, and we should confine our conclusions carefully to only what is almost tautologically implied by that evidence. We should be extra-precise. And we should not, really really not, bring in extraneous issues if there's any way to avoid them.
When people try to talk about AI risks, say, without background, they often come up with plausible this and plausible that, and the topics and misconceptions multiply faster than one can sort them out. Whereas interested interlocutors even without much rationality background who have taken the time to sort through the sub-issues one at a time, slowly, sorting through the causes of each intuition and the sum total of evidence on that point, in my experience generally have managed useful conversations.
↑ comment by Vladimir_Nesov · 2009-05-02T19:24:14.423Z · LW(p) · GW(p)
That's just generally raising the level of fallacy alert, maybe specifically around the politics-induced fallacies. It should be default behavior whenever the fallacious arguments start raining down, around any issue. A typical battle ground for x-rationality skills in action, not a special case.
Replies from: steven0461
↑ comment by steven0461 · 2009-05-02T19:37:05.763Z · LW(p) · GW(p)
There's a difference between just being hypersensitive to bad reasoning (usually a good idea), and being hypersensitive to anything that could directly or indirectly cause emotions to flare up (usually not worth the bother).
Replies from: Relsqui, Vladimir_Nesov
↑ comment by Relsqui · 2010-09-21T05:11:14.450Z · LW(p) · GW(p)
being hypersensitive to anything that could directly or indirectly cause emotions to flare up (usually not worth the bother)
Molybdenumblue said it really well elsewhere:
If you believe that other human beings are a useful source of insight, you would do well to make some effort not to offend.
Yes, hypersensitivity is by definition uncalled for, but when attempting to communicate with human beings and encourage their reply, it's clearly useful to choose words which are less likely to invoke negative emotions. It's possible to keep the juggling balls of precision, reason, and sensitivity all in the air at the same time; that it can be difficult is not sufficient reason not to try.
↑ comment by Vladimir_Nesov · 2009-05-02T19:48:14.017Z · LW(p) · GW(p)
Hence I mentioned escalation of your level of sensitivity, meaning to refer to any factors that (potentially) deteriorate constructive thinking. Being hypersensitive to bad reasoning isn't always a good idea, for example if you don't care to reeducate the interlocutor.
comment by swestrup · 2009-05-03T05:05:27.093Z · LW(p) · GW(p)
I think it will be very necessary to carefully frame what it would be that we might wish to accomplish as a group, and what not. I say this because I'm one of those who thinks that humanity has less than a 50% chance of surviving the next 100 years, but I have no interest in trying to avert this. I am very much in favour of humanity evolving into something a lot more rational than what it is now, and I don't really see how one can justify saying that such a race would still be 'humanity'. On the other hand, if the worry is the extinction of all rational thought, or the extinction of certain, carefully chosen, memes, I might very well wish to help out.
The main problem, as I see it, is in being clear on what we want to have happen (and what not) and what we can do to make the preferred outcomes more likely. The more I examine the entire issue, the harder it appears to define the distinction between the good and the bad outcomes.
Replies from: byrnema
↑ comment by byrnema · 2009-05-03T12:28:52.606Z · LW(p) · GW(p)
I wonder how many rationalists share this view. If a significant number do, it would be worthwhile to discuss this first, in hopes of mustering a broader consensus about what the group should do, or at least becoming aware of the reasons for the lack of agreement.
comment by cousin_it · 2009-05-02T20:43:52.014Z · LW(p) · GW(p)
If politicians start following expected utility consequentialism, special interest groups will be able to exploit the system by manufacturing in themselves "offense" (extreme emotional disutility) at unfavored measures, forcing your maximizer to give in to their demands. To avoid this, you need a procedure for distinguishing "warranted" offense from "unwarranted" offense: some baseline of personal rights ultimately derived from something other than self-assessed emotional utility.
If you see a way around this difficulty, let me know, because it seems insurmountable to me right now. Until we sort this out, I find it hard to talk about politics from a consequentialist standpoint, because most successful interest groups today are already heavily using the exploit I've described.
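The exploit cousin_it describes can be sketched as a toy model. This is a hypothetical illustration, not anything proposed in the thread: a naive aggregate-utility maximizer takes self-reported utilities at face value, so an interest group that inflates its reported disutility steers the outcome. All names and numbers here are invented.

```python
def choose(policies, reported_utilities):
    """Pick the policy that maximizes the sum of self-reported utilities."""
    return max(policies, key=lambda p: sum(group[p] for group in reported_utilities))

policies = ["pass_measure", "reject_measure"]

# Honest reports: the measure mildly benefits a large majority,
# mildly costs a small interest group. Net utility favors passing it.
honest = [
    {"pass_measure": 2, "reject_measure": 0},   # majority
    {"pass_measure": -1, "reject_measure": 0},  # interest group
]
assert choose(policies, honest) == "pass_measure"

# The interest group "manufactures offense", reporting extreme disutility.
# The same maximizer now gives in to its demands.
strategic = [
    {"pass_measure": 2, "reject_measure": 0},
    {"pass_measure": -100, "reject_measure": 0},  # inflated report
]
assert choose(policies, strategic) == "reject_measure"
```

The point of the sketch is that a maximizer over self-assessed emotional utility is not strategy-proof: without some way to audit or discount reported disutility (the "baseline of personal rights" cousin_it asks for), inflating one's report is a winning move.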
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-05-02T20:56:12.392Z · LW(p) · GW(p)
I don't see the object of attack in the room. An exploration of potential utility-maximization political frameworks and their practical pitfalls would possibly be interesting, although in practice I expect this sort of institution to turn into a kind of market, not so much politician-mediated.
Replies from: cousin_it↑ comment by cousin_it · 2009-05-02T21:03:34.273Z · LW(p) · GW(p)
I meant to attack this part of ciphergoth's post:
We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.
For example, framed this way inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.
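The in-principle measurement mentioned in that passage can be made concrete. Here is a minimal sketch, assuming a logarithmic wealth/utility curve; both the curve and the numbers are illustrative stand-ins, not anything specified in the post:

```python
import math

def total_utility(wealths, u=math.log):
    # u is an assumed wealth/utility curve; log is a standard
    # illustrative choice exhibiting diminishing marginal utility.
    return sum(u(w) for w in wealths)

equal = [50.0, 50.0]    # total wealth 100, split equally
unequal = [90.0, 10.0]  # same total wealth, split unequally

# Because marginal utility diminishes, the equal split yields more
# total utility; the difference is the utility cost of inequality.
cost = total_utility(equal) - total_utility(unequal)
print(round(cost, 3))  # about 1.022
```

The consequentialist defence then amounts to the claim that the extra output produced under unequal incentives raises total wealth by enough to outweigh a cost of this kind.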
I didn't intend to criticize any real or hypothetical political system. The same emotional exploit could easily defeat a community of rationalists independently evaluating political measures for their utility, as ciphergoth seems to propose.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-02T21:38:37.497Z · LW(p) · GW(p)
The same emotional exploit could easily defeat a community of rationalists independently evaluating political measures for their utility
Well, since you've easily recognized this exploit already at the hypothetical stage, this kind of vulnerability won't be a problem. Any consequentialist framework should be able to fight moral sabotage, for example by introducing laws that disincentivize it.
Replies from: cousin_it, Vladimir_Nesov↑ comment by cousin_it · 2009-05-02T21:49:42.531Z · LW(p) · GW(p)
Before disincentivizing, you face the problem of defining and recognizing moral sabotage. It doesn't sound trivial to me. Remember, groups don't admit to using the outrage tactic; they do it sincerely, sometimes over several generations of members. I repeat the question: how does a rationalist tell "warranted" emotional disutility from "unwarranted" in a fair way?
Replies from: steven0461, ciphergoth↑ comment by steven0461 · 2009-05-02T23:41:51.821Z · LW(p) · GW(p)
Incentive effects are hugely important, but a utilitarian decision process that causes predictable harm is not a true utilitarian decision process. Your question is a tough one, but it's answerable in principle.
↑ comment by Paul Crowley (ciphergoth) · 2009-05-02T23:51:05.877Z · LW(p) · GW(p)
I don't see the problem in principle with a utilitarian deciding that giving in to an instance of moral sabotage will greatly increase later incidence of moral sabotage, resulting in total disutility greater than the manufactured weeping and gnashing of teeth you face if you stand against it now.
Replies from: cousin_it↑ comment by cousin_it · 2009-05-03T00:35:05.167Z · LW(p) · GW(p)
So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits. Well... which should I pick, then?
Looks like we've run into another of those nasty recursive problems: I choose my utility function depending on what every other agent could do to exploit me, and everyone else does the same. The only natural solution might well turn out to be everyone caring about their own welfare and no one else's, to avoid "mugging by suffering". Let's model the problem mathematically and look for other solutions - I love this stuff.
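Taking up that invitation in miniature: a toy model of the exploit, assuming a naive planner that simply maximizes the sum of self-reported utilities (the group names and numbers are invented for illustration):

```python
def planner_choice(reports):
    # reports: {group: {measure: self-reported utility}}
    # A naive expected-utility planner sums the self-reports
    # and picks the measure with the highest total.
    measures = ("A", "B")
    totals = {m: sum(r[m] for r in reports.values()) for m in measures}
    return max(measures, key=totals.get)

honest = {
    "majority": {"A": 10, "B": 0},  # mild preference for A
    "lobby":    {"A": 0, "B": 3},   # mild preference for B
}
# The lobby "manufactures offense": it reports extreme disutility at A.
sabotaged = {
    "majority": {"A": 10, "B": 0},
    "lobby":    {"A": -100, "B": 3},
}

print(planner_choice(honest))     # A
print(planner_choice(sabotaged))  # B -- the exploit succeeds
```

Nothing inside the naive planner distinguishes warranted disutility from manufactured disutility, which is exactly the missing baseline identified above.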
Replies from: loqi↑ comment by loqi · 2009-05-03T02:16:36.513Z · LW(p) · GW(p)
So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits.
No, it needs a different method of maximizing expected utility. Avoiding moral sabotage doesn't reflect a preference, it's purely instrumental.
Replies from: cousin_it↑ comment by Vladimir_Nesov · 2009-05-03T09:49:41.221Z · LW(p) · GW(p)
A related idea: moral sabotage is what happens when one player in the Ultimatum game insists on taking more than a fair share, even if what a fair share is depends on his preferences.
comment by Daniel_Burfoot · 2009-05-04T13:39:59.356Z · LW(p) · GW(p)
Come on, Ciphergoth, the problem of saving humanity would be too easy if you could convince a large number of humans to go along with your proposals! You have a harder challenge: save humanity in spite of the apathy, and in many cases intransigent opposition, of the humans.
I have a hard time believing that anyone in power is serious about saving humanity. There are so many obvious and easy things that could be done, that would clearly be enormously helpful, that no one with power is doing or even suggesting. Politics is almost entirely a signalling game.
As several people have been starting to suggest, it's likely that the best strategy for rationalists involves two steps:
- accumulate massive wealth and power
- use it to save humanity
And probably the first step is far harder than the second.
comment by Nominull · 2009-05-02T19:47:45.867Z · LW(p) · GW(p)
I will admit to an estimate higher than 95% that humanity or its uploads will survive the next hundred years. Many of the "apocalyptic" scenarios people are concerned about seem unlikely to wipe out all of humanity; so long as we have a breeding population, we can recover.
Replies from: Nick_Tarleton, Mario, byrnema, mattnewport↑ comment by Nick_Tarleton · 2009-05-03T05:33:37.901Z · LW(p) · GW(p)
No significant risk of unFriendly AI (especially since you apparently consider uploading within 100 years plausible)? Nanotech war? Even engineered disease? I'm surprised.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-03T05:45:13.141Z · LW(p) · GW(p)
The comment appears to me to be saying there is no significant risk of wiping out all of humanity, not that there is no significant risk of any of the dangers you describe causing significant harm.
I think an unfriendly AI is somewhat likely for example but put a very low probability on an unfriendly AI completely wiping out humanity. The consequences could be quite unpleasant and worth working to avoid but I don't think it's an existential threat with any significant probability.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-03T11:58:38.523Z · LW(p) · GW(p)
That's a very strange perspective. Other threats are good in that they are stupid, so they won't find you if you colonize space or live on an isolated island, or have a lucky combination of genes, or figure out a way to actively outsmart them, etc. Stupid existential risks won't methodically exterminate every human, and so there is a chance for recovery. Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet. (Indifference works this way too: it's the application of power indifferent to humankind that is methodical, e.g. a Paperclip AI.)
Replies from: Cameron_Taylor, Nominull, loqi, mattnewport↑ comment by Cameron_Taylor · 2009-05-04T07:58:01.394Z · LW(p) · GW(p)
That's a very strange perspective.
It's not very strange. It's a perspective that tends to match most human intuitions. It is, however, a very wrong perspective.
↑ comment by Nominull · 2009-05-03T22:34:12.919Z · LW(p) · GW(p)
Consider: humanity is an intelligence, one not particularly friendly to, say, the fieldmouse. Fieldmice are not yet extinct.
Replies from: MBlume, Nick_Tarleton, Vladimir_Nesov, MichaelHoward↑ comment by Nick_Tarleton · 2009-05-04T04:31:59.690Z · LW(p) · GW(p)
Humans satisfice, and not very well at that compared to what an AGI could do. If we effectively optimized for... almost any goal not referring to fieldmice... fieldmice would be extinct.
↑ comment by Vladimir_Nesov · 2009-05-03T22:57:05.958Z · LW(p) · GW(p)
Humanity is weak.
Replies from: Nominull, byrnema↑ comment by byrnema · 2009-05-04T00:44:12.706Z · LW(p) · GW(p)
Humanity is beautiful. A significantly more intelligent AI will love us more perfectly (no: more truly) than we love a field mouse. (It is an intermediately stupid AI that I would worry about.)
Later edit: If you're interested in reading past group discussion on the topic of how superintelligence does not imply supermorality, search "surface analogies" and "supermorality".
Replies from: Vladimir_Nesov, SoullessAutomaton↑ comment by Vladimir_Nesov · 2009-05-04T01:02:52.178Z · LW(p) · GW(p)
Affective rhetoric. It seems like you are reasoning by surface analogies that don't apply, anthropomorphizing AIs without realizing it, thinking that you've successfully abstracted away all the human-specific (and irrelevant!) things. Unless you are condensing a deeper model, which you'd need to present in more detail to discuss, you just need to learn more about the subject before drawing any strong conclusions. Read up on Yudkowsky's posts. (For this comment in particular: see That Tiny Note of Discord and its dependencies, although that's far from an ideal first post on the subject.)
↑ comment by SoullessAutomaton · 2009-05-04T00:54:21.456Z · LW(p) · GW(p)
You assume that an AI will necessarily value beauty as you conceive it. This is unlikely.
Replies from: byrnema↑ comment by byrnema · 2009-05-04T01:49:29.497Z · LW(p) · GW(p)
Yes, I understand that this is not religion and all positions will have to be argued and defended in due time. I am merely declaring my position. I do find it really fascinating that, in the first stages of drafting this new map, we begin by drawing lines in the sand...
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-04T01:53:53.692Z · LW(p) · GW(p)
It's more that the counterargument against your position was covered, at great length, and then covered some more, on OB by Yudkowsky, the person that most of us are here because we respect.
If you're going to take a stand for something that most people here have already read very persuasive arguments against, I don't think it's unreasonable to expect more than just a position statement (and an emotionally-loaded one, at that).
Replies from: byrnema↑ comment by byrnema · 2009-05-04T06:22:15.967Z · LW(p) · GW(p)
I meant no disrespect. (Eliezer has 661 posts on OB.) I do appreciate your direction/correction. I didn't mean to take a stand against.
(Sigh.) I have no positions, no beliefs, prior to what I might learn from Eliezer.
So the idea is that a unique, complex thing may not necessarily have an appreciation for another unique complexity? Unless appreciating unique complexity has a mathematical basis.
Replies from: AnnaSalamon, AnnaSalamon↑ comment by AnnaSalamon · 2009-05-04T07:30:42.428Z · LW(p) · GW(p)
brynema, “disrespect” isn’t at all the right axis for understanding why your last couple of comments weren’t helpful. (I’m not attacking you here; LW is an unusual place, and understanding how to usefully contribute takes time. You’ve been doing well.) The trouble with your last two comments is mostly:
Comments on LW should aspire to rationality. As part of this aspiration, we basically shouldn’t have “positions” on issues we haven’t thought much about; the beliefs we share here should be evidence-based best-guesses about the future, not clothes to decorate ourselves with.
Many places encourage people to make up and share “beliefs”, because any person’s beliefs are as good as any other’s and it’s good to express oneself, or something like that. Those norms are not useful toward arriving at truth, at least not compared to what we usually manage on LW. Not even if people follow their made-up “beliefs” with evidence created to support their conclusions; nor even if evidence or intuitions play some role in the initial forming of beliefs.
This is particularly true in cases where the subjects are difficult technical problems that some in the community have specialized in and thought carefully about; declaring positions there is kind of like approaching a physicist, without knowledge of physics, and announcing your “position” on how atoms hold together. (Though less so, since AI is less well-grounded than physics.)
AI risks are a particularly difficult subject about which to have useful conversation, mostly because there is little data to help keep conversation from veering off into nonsense-land. So it makes sense, in discussing AI risks and other slippery topics, to have lower tolerance for folks making up positions.
Also, yes, these particular positions have been discussed and have proven un-workable in pretty exhaustive detail.
As to the object-level issue concerning possible minds, I wrote an answer in the welcome thread, on the theory that, if we want to talk about AI or other prerequisite-requiring topics on LW, we should probably get in the habit of taking “already discussed to death” questions to the welcome thread, where they won’t clutter mainline discussion. Please don’t be offended by this, though; I value your presence here, and you had no real way of knowing this had already been discussed.
Replies from: byrnema↑ comment by byrnema · 2009-05-04T17:04:29.058Z · LW(p) · GW(p)
I've spent some time working through my emotional responses and intellectual defenses to the posts above. I would like to make some observations:
(1) I'm disappointed that even as rationalists, while you were able to recognize that I had committed some transgression, you were not able to identify it precisely and power was used (authority, shaming) instead of the truth to broadly punish me.
(2) My mistake was not in asserting something false. This happens all the time here and people usually respond more rationally.
(3) My transgression was using the emotionally loaded word "love". (So SoullessAutomaton actually came close.) The word is taboo in this context for a good reason -- I will try to explain, though perhaps I will fail: while I believe in love, I should not put the belief in those terms, because invoking the word is dark-arts manipulation; the whole point of rationality is to find a better vocabulary for explaining truth.
(4) We can look at this example as a case study to evaluate which responses were rational and which weren't. SoullessAutomaton's and Anna Salamon's responses were well-intentioned but escalated the emotional cost of the argument for me (broadly, SoullessAutomaton accused me of being subversive/disrespectful, and AnnaSalamon made the character attack that I'm not rational). Both tempered the 'punishments' they meted out with useful suggestions. Vladimir Nesov's comment was I think quite rational: he asserted I probably needed to learn more about the subject and he provided some links. (The specific links are enormously helpful for navigating this huge maze.) One criticism would be that he was overly charitable in his assessment of "affective rhetoric". While my rhetoric was indeed affective by some measure external to LW, the point is, I know, that affective rhetoric for its own sake is not appropriate here. I suspect Vladimir_Nesov was just trying to signal respect for me as an individual before criticizing my position, generally a good practice.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-04T17:29:14.358Z · LW(p) · GW(p)
(2) My mistake was not in asserting something false.
It was. What you asserted, depending on interpretation, is either ill-formed or false. A counterexample to your claim is that a Paperclip AI won't, in any meaningful sense, love humanity.
(3) My transgression was using the emotionally loaded word "love".
The use of an emotionally loaded word is inappropriate unless the emotion is warranted. In this case, your attribution of emotion was false, and so the affective aura accompanying the statement was inappropriate. I hypothesized that emotional thinking was one of the sources of your belief in the truth of the statement you made, so calling your words "affective rhetoric" was meant to communicate this diagnosis (by analogy with "empty rhetoric"). I actually arrived at that phrase by editing an earlier "affective silliness", which directly communicated the fact that you had made a mistake, but I changed it to be less offensive.
Vladimir Nesov's comment was I think quite rational: he asserted I probably needed to learn more about the subject and he provided some links
The 'probably' was more of a weasel word, referring to the fact that I'm not sure whether you actually want to spend time learning all that material, rather than to any special uncertainty about whether the answer to your question is found there.
(1) I'm disappointed that even as rationalists, while you were able to recognize that I had committed some transgression, you were not able to identify it precisely and power was used (authority, shaming) instead of the truth to broadly punish me.
The problem is that the inferential distance is too great, so it's easier to refer the newcomer to the archive, where the answer to what was wrong can be learned systematically, than to try to explain the problems on her own terms.
Replies from: byrnema↑ comment by byrnema · 2009-05-04T17:43:43.660Z · LW(p) · GW(p)
I read "affective rhetoric" as "effective rhetoric". (Oops.) Yes, "affective rhetoric" is a much more appropriate comment than "effective rhetoric" would have been. Since it seems like a good place for a neophyte to begin, I will address your comment about the paperclip AI in the welcome thread where Anna Salamon replied.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-04T17:49:23.847Z · LW(p) · GW(p)
Anna Salamon replied on the Welcome thread, starting with:
This is in response to a comment of brynema’s elsewhere; if we want LW discussions to thrive even in cases where the discussions require non-trivial prerequisites, my guess is that we should get in the habit of taking “already discussed exhaustively” questions to the welcome thread.
↑ comment by AnnaSalamon · 2009-05-04T06:42:21.380Z · LW(p) · GW(p)
If we want to talk usefully about AI as a community, we should probably make a wiki page that summarizes or links to the main points. And then we should have a policy in certain threads: "don't comment here unless you've read the links off of wiki page such-and-such".
brynema's right that we want newcomers in LW, and that newcomers can't be expected to know all of what's been discussed. But it is also true that we'll never get discussions off the ground if we have to start all over again every time someone new enters.
↑ comment by MichaelHoward · 2009-05-03T23:00:04.667Z · LW(p) · GW(p)
Fieldmice (outside of Douglas Adams fiction) aren't any particular threat to us in the way we might be to the Unfriendly AI. They're not likely to program another us to fight us for resources.
If fieldmice were in danger of extinction we'd probably move to protect them, not that that would necessarily help them.
↑ comment by loqi · 2009-05-04T01:30:23.560Z · LW(p) · GW(p)
Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet.
Not on another planet, no. But I wonder how practical a constantly accelerating seed ship will turn out to be.
↑ comment by mattnewport · 2009-05-03T18:14:45.781Z · LW(p) · GW(p)
You are assuming that mere intelligence is sufficient to give an AI an overwhelming advantage in any conflict. While I concede that is possible in theory I consider it much less likely than seems to be the norm here. This is partly because I am also skeptical about the existential dangers of self replicating nanotech, bioengineered viruses and other such technologies that an AI might attempt to use in a conflict.
As long as there is any reasonable probability that an AI would lose a conflict with humans or suffer serious damage to its capacity to achieve its goals, its best course of action is unlikely to be to attempt to wipe out humanity. A paperclip maximizer for example would seem to better further its goals by heading to the asteroid belt where it could advance its goals without needing to devote large amounts of computational capacity to winning a conflict with other goal-directed agents.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-03T23:06:50.474Z · LW(p) · GW(p)
For people who've voted this down, I'd be interested in your answers to the following questions:
1) Can you envisage a scenario in which a greater than human intelligence AI with goals not completely compatible with human goals would ever choose a course of action other than wiping out humanity?
2) If you answered yes to 1), what probability do you assign to such an outcome, rather than an outcome involving the complete annihilation of humanity?
3) If you answered no to 1), what makes you certain that such a scenario is not possible?
↑ comment by Mario · 2009-05-03T18:48:25.732Z · LW(p) · GW(p)
I agree generally, but I think when we talk about wiping out humanity we should include the idea that if we were to lose a significant portion of our accumulated information it would be essentially the same as extinction. I don't see a difference between a stone age tech. group of humans surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.
Replies from: Nick_Tarleton, Nominull, mattnewport, Vladimir_Nesov↑ comment by Nick_Tarleton · 2009-05-04T04:28:31.046Z · LW(p) · GW(p)
I don't see a difference between a stone age tech. group of humans surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.
See In Praise of Boredom and Sympathetic Minds: random evolved intelligent species are not guaranteed to be anything we would consider valuable.
↑ comment by mattnewport · 2009-05-03T19:21:47.734Z · LW(p) · GW(p)
We have pretty solid evidence that a stone-age-tech group of humans can develop a technologically advanced society in a few tens of thousands of years. I imagine it would take considerably longer for squirrels to get there, and I would be much less confident they can do it at all. It may well be that human intelligence is an evolutionary accident that has only happened once in the universe.
Replies from: Mario↑ comment by Mario · 2009-05-03T19:57:25.820Z · LW(p) · GW(p)
The squirrel civilization would be a pretty impressive achievement, granted. The destruction of this particular species (humans) would seemingly be a tremendous loss universally, if intelligence is a rare thing. Nonetheless, I see it as only a certain vessel in which intelligence happened to arise. I see no particular reason why intelligence should be specific to it, or why we should prefer it over other containers should the opportunity present itself. We would share more in common with an intelligent squirrel civilization than a band of gorillas, even though we would share more genetically with the latter. If I were cryogenically frozen and thawed out a million years later by the world-dominating Squirrel Confederacy, I would certainly live with them rather than seek out my closest primate relatives.
EDIT: I want to expand on this slightly. Say our civilization were to be completely destroyed, and a group of humans that had no contact with us were to develop a new civilization of their own concurrent with a squirrel population doing the same on the other side of the world. If that squirrel civilization were to find some piece of our history, say the design schematics of an electric toothbrush, and adopt it as a part of their knowledge, I would say that for all intents and purposes, the squirrels are more "us" than the humans, and we would survive through the former, not the latter.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-03T21:26:23.444Z · LW(p) · GW(p)
I don't see any fundamental reason why intelligence should be restricted to humans. I think it's quite possible that intelligence arising in the universe is an extremely rare event though. If you value intelligence and think it might be an unlikely occurrence then the survival of some humans rather than no humans should surely be a much preferred outcome?
I disagree that we would have more in common with the electric toothbrush wielding squirrels. I've elaborated more on that in another comment.
Replies from: Mario↑ comment by Mario · 2009-05-03T21:36:22.746Z · LW(p) · GW(p)
Preferred, absolutely. I just think that the survival of our knowledge is more important than the survival of the species sans knowledge. If we are looking to save the world, I think an AI living on the moon pondering its existence should be a higher priority than a hunter-gatherer tribe stalking wildebeest. The former is our heritage, the latter just looks like us.
↑ comment by Vladimir_Nesov · 2009-05-03T19:14:20.536Z · LW(p) · GW(p)
Does this imply that you are OK with a Paperclip AI wiping out humanity, since it will be an intelligent life form much more developed than we are?
Replies from: Mario↑ comment by Mario · 2009-05-03T19:49:18.260Z · LW(p) · GW(p)
If I implied that, it was unintentional. All I mean is that I see no reason why we should feel a kinship toward humans as humans, as opposed to any species of people as people. If our civilization were to collapse entirely and had to be rebuilt from scratch, I don't see why the species that is doing the rebuilding is all that important -- they aren't "us" in any real sense. We can die even if humanity survives. By that same token, if the paperclip AI contains none of our accumulated knowledge, we go extinct along with the species. If the AI contains some of our knowledge and a good degree of sentience, I would argue that part of us survives despite the loss of this particular species.
Replies from: ciphergoth, mattnewport↑ comment by Paul Crowley (ciphergoth) · 2009-05-03T20:06:07.422Z · LW(p) · GW(p)
Bear in mind, the paperclip AI won't ever look up to the broader challenges of being a sentient being in the Universe; the only thing that will ever matter to it, until the end of time, is paperclips. I wouldn't feel in that instance that we had left behind a creature that represented our legacy, no matter how much it knows about the Beatles.
Replies from: Mario↑ comment by Mario · 2009-05-03T20:50:21.861Z · LW(p) · GW(p)
OK, I can see that. In that case, maybe a better metric would be the instrumental use of our accumulated knowledge, rather than its mere possession. Living in a library doesn't mean you can read, after all.
Replies from: ciphergoth, Vladimir_Nesov↑ comment by Paul Crowley (ciphergoth) · 2009-05-03T23:01:08.410Z · LW(p) · GW(p)
What I think you're driving at is that you want it to value the Beatles in some way. Having some sort of useful crossover between our values and its is the entire project of FAI.
Replies from: Mario↑ comment by Mario · 2009-05-03T23:17:45.523Z · LW(p) · GW(p)
I'm just trying to figure out under what circumstances we could consider a completely artificial entity a continuation of our existence. As you pointed out, merely containing our knowledge isn't enough. Human knowledge is a constantly growing edifice, where each generation adds to and builds upon the successes of the past. I wouldn't expect an AI to find value in everything we have produced, just as we don't. But if our species were wiped out, I would feel comfortable calling an AI which traveled the universe occasionally writing McCartney- or Lennon-inspired songs "us." That would be survival. (I could even deal with a Ringo Starr AI, in a pinch.)
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-05-03T23:29:20.795Z · LW(p) · GW(p)
I strongly suspect that that is the same thing as a Friendly AI, and therefore I still consider UFAI an existential risk.
↑ comment by Vladimir_Nesov · 2009-05-03T21:21:59.801Z · LW(p) · GW(p)
The Paperclip AI will optimally use its knowledge about the Beatles to make more paperclips.
↑ comment by mattnewport · 2009-05-03T21:19:53.829Z · LW(p) · GW(p)
How much of what it means to be human do you think is cultural conditioning versus innate biological tendency? I think the evidence points to a very large biologically determined element to humanity. I would expect to find more in common with a hunter gatherer in a previously undiscovered tribe, or even with a paleolithic tribesman, than with an alien intelligence or an evolved dolphin.
If you read ancient Greek literature, it is easy to empathize with most of the motivations and drives of the characters even though they lived in a very different world. You could argue that our culture's direct lineage from theirs is a factor but it seems that westerners can recognize as fellow humans the minds behind ancient Chinese or Indian texts with less shared cultural heritage with our own.
Replies from: Mario↑ comment by Mario · 2009-05-03T21:45:52.614Z · LW(p) · GW(p)
I don't consider our innate biological tendencies the core of our being. We are an intelligence superimposed on a particular biological creature. It may be difficult to separate the aspects of one from the other (and I don't pretend to be fully able to do so), but I think it's important that we learn which is which so that we can slowly deemphasize and discard the biological in favor of the solely rational.
I'm not interested in what it means to be human, I want to know what it means to be a person. Humanity is just an accident as far as I'm concerned. It might as well have been anything else.
Replies from: loqi↑ comment by loqi · 2009-05-04T01:34:59.067Z · LW(p) · GW(p)
I'm curious as to what sorts of goals you think a "solely rational" creature possesses. Do you have a particular point of disagreement with Eliezer's take on the biological heritage of our values?
Replies from: Mario↑ comment by Mario · 2009-05-04T02:30:54.470Z · LW(p) · GW(p)
Oh, I don't know that. What would remain of you if you could download your mind into a computer? Who would you be if you were no longer affected by the level of serotonin or adrenaline you are producing, or if pheromones didn't affect you? Once you subtract the biological from the human, I imagine what remains to be pure person. There should be no difference between that person and one who was created intentionally or one that evolved in a different species, beyond their personal experiences (controlling for the effects of their physiology).
I don't have any disagreement with Eliezer's description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever. I could be wrong, however. It may not be possible to be a person without certain biological-like reactions. I can certainly see how this would be the case for people in early learning stages of development, particularly if your goal is to mold that person into a friendly one. Even then, though, I think it would be beneficial to keep those parts to the bare minimum required to function.
Replies from: loqi↑ comment by loqi · 2009-05-04T03:45:18.780Z · LW(p) · GW(p)
What would remain of you if you could download your mind into a computer?
That depends on the resolution of the simulation. Wouldn't you agree?
Once you subtract the biological from the human, I imagine what remains to be pure person.
I think you're using the word "biological" to denote some kind of unnatural category.
I don't have any disagreement with Eliezer's description of how our biology molded our growth, but I see no reason why we should hold on to that biology forever.
The reasons you see for why any of us "should" do anything almost certainly have biologically engineered goals behind them in some way or another. What of self-preservation?
Replies from: Mario↑ comment by Mario · 2009-05-04T19:06:04.796Z · LW(p) · GW(p)
Not unnatural, obviously, but a contaminant to intelligence. Manure is a great fertilizer, but you wash it off before you use the vegetable.
Replies from: loqi↑ comment by loqi · 2009-05-05T21:20:25.724Z · LW(p) · GW(p)
I meant this kind of unnatural category. I don't quite know what you mean by "biological" in this context. A high-resolution neurological simulation might not require any physical carbon atoms, but the simulated mind would presumably still act according to all the same "biological" drives.
↑ comment by byrnema · 2009-05-03T00:18:38.487Z · LW(p) · GW(p)
I'm certain.
Pretend that someone says "I'll give you __ odds, which side do you want?", and figure out what the odds would have to be to make you indifferent to which side you bet on. Consider the question as though you were actually going to put money on it.
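This elicitation trick has a simple arithmetic core: your indifference point between the two sides of a bet pins down your probability. A minimal sketch (the function name is mine, not from the comment; odds are taken as for:against):

```python
def implied_probability(odds_for: float, odds_against: float) -> float:
    """If you are indifferent to taking either side of a bet at
    odds_for : odds_against, this is the probability you implicitly assign."""
    return odds_for / (odds_for + odds_against)

# Being indifferent at 3:1 odds means you think the event is 75% likely.
print(implied_probability(3, 1))  # 0.75
```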
↑ comment by mattnewport · 2009-05-02T21:27:52.884Z · LW(p) · GW(p)
I take much the same position.
comment by jimmy · 2009-05-02T18:32:39.542Z · LW(p) · GW(p)
My impression is that the material covered on OB/LW is more than sufficient to allow people who really understand it to talk politics without exploding. I don't think we need any politics-specific tricks for those who are likely to be helpful contributors.
This came up in the Santa Barbara LW meetup, and I felt like that group could have talked politics the right way. The implicit consensus seemed to be "Yeah, it'd probably work", though we didn't try.
Of course, a smaller group with stronger selection pressures is less likely to include a member or two who can't hold it together than a larger, online community (i.e. LW).
To summarize: I think there are plenty of people here who can handle it, but probably enough who can't to ruin it if there aren't any preventative measures. Hopefully a merciless downvoting policy will be enough, but failing that, I'm sure we could come up with a way to select a subset of people who are allowed to talk politics.
Replies from: Vladimir_Nesov, davidr↑ comment by Vladimir_Nesov · 2009-05-02T18:46:57.339Z · LW(p) · GW(p)
You also need to sufficiently care about the specific question to work on it, which is not a given. Less general, less popular.
↑ comment by davidr · 2009-05-03T19:08:36.133Z · LW(p) · GW(p)
I'm not sure it's just a matter of rationality (though it is partly that), but also of complexity; i.e., predicting or estimating utility for policy A vs. policy B can be impossible to model because of chaotic effects etc.
Just because most of the mistakes we see when people argue politics are rather obvious (from a rationalist point of view) doesn't mean they are the only ones. Otherwise social science and economics would be Sciences, with a capital S.
comment by Cameron_Taylor · 2009-05-04T02:48:00.104Z · LW(p) · GW(p)
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening.
And you say that like it is a bad thing! The possibility of creating just such a utopia sounds like a damn good motivating influence for concerted altruistic effort and existential risk mitigation to me!
Replies from: kpreid↑ comment by kpreid · 2009-05-04T11:38:19.821Z · LW(p) · GW(p)
I understood ciphergoth's description as “what we have been discussing being useful for nothing more than these tasks”, not a world where those tasks are all you need to deal with.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-05-04T13:16:24.887Z · LW(p) · GW(p)
As did I, kpreid, and I do appreciate ciphergoth's overall message. I don't, however, accept the implicit argument (the implication clear in the loaded language) that those basic activities are inadequate, or evidence that greater political influence is necessary.
I give you credit for noticing that the second, larger of my exclamations does not particularly refute ciphergoth. In fact, it was a tangent which served as filler and to lighten the contradiction somewhat. It also hints at one reason I consider those activities valuable: if something would exist in a utopia I would accept, then chances are that making it in this banal reality is a good thing in itself.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-05-04T13:53:47.603Z · LW(p) · GW(p)
Not sure what you're driving at. I value both getting up and getting laid, though I'm not sure I appreciate the preparation for Omega so much. If you agree that we could usefully spend more time talking about concerted altruistic effort and existential risk mitigation, not least in order to change the world so that we can concentrate more on fun, then I think you agree with the thrust of the paragraph you quote.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-05-04T15:19:45.957Z · LW(p) · GW(p)
I don't agree, but it is not really a disagreement worth breaking down to our respective implicit and explicit premises, arguments and conclusions and any potential conflict between the two positions.
comment by taw · 2009-05-03T12:16:51.117Z · LW(p) · GW(p)
I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it.
For the record I would put it at levels overwhelmingly higher than 95%. More like 99.999%.
Replies from: jimmy, homunq, orthonormal, Vladimir_Nesov↑ comment by jimmy · 2009-05-04T00:31:58.376Z · LW(p) · GW(p)
You can't get away with having such extreme probabilities when a bunch of smart and rational people disagree. There are reasons why the whole Aumann agreement thing doesn't work perfectly in real life, but this is an extreme failure.
If a bunch of people on LW think it's only 50% likely and you think there's only a 0.1% chance that they're right and you're wrong (which is already ridiculously low), it still brings your probability estimate down to around 99.95%. This is a 50-fold increase in the probability that the world is going to end over what you stated. Either you have some magic information that you haven't shared, or you're hugely overconfident.
http://lesswrong.com/lw/9x/metauncertainty/ http://lesswrong.com/lw/3j/rationality_cryonics_and_pascals_wager/69t#comments
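jimmy's arithmetic above can be checked with a two-line mixture model. This is only a sketch of the argument, not a real Aumann update, and the variable names are mine:

```python
# Mix your own survival estimate with the possibility that dissenting
# peers are right, using the numbers stated in the comment above.
p_self = 0.99999          # taw's stated survival probability
p_dissenters = 0.50       # the LW dissenters' survival probability
p_they_are_right = 0.001  # "ridiculously low" weight on the dissenters

p_mixed = (1 - p_they_are_right) * p_self + p_they_are_right * p_dissenters
extinction_ratio = (1 - p_mixed) / (1 - p_self)  # ~51: about a 50-fold rise
print(round(p_mixed, 5))  # 0.99949, i.e. roughly jimmy's 99.95%
```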
Replies from: taw, homunq↑ comment by taw · 2009-05-04T09:45:50.516Z · LW(p) · GW(p)
You cannot selectively apply Aumann agreement. If you want to count the tiny bunch of people who believe in AI foom, you must also take into account the 7 billion people, many of them really smart, who definitely don't.
I don't have this problem, as I don't really believe that using Aumann agreement is useful with real humans.
Or you could count my awareness of insider overconfidence as magic information:
http://www.overcomingbias.com/2007/07/beware-the-insi.html
Replies from: jimmy↑ comment by jimmy · 2009-05-05T07:08:09.917Z · LW(p) · GW(p)
This is Less Wrong we're talking about. Insider overconfidence isn't "magic information".
See my top level post for a full response.
↑ comment by homunq · 2009-05-14T16:47:24.473Z · LW(p) · GW(p)
Large groups of smart people are frequently wrong about the future, and overwhelmingly so about the non-immediate future. 0.1% may be low but it's not ridiculously so.
(Also "they're right and you're wrong" is redundant. This has nothing to do with any set of scenario probabilities being "right". And any debate of "p=.9" "no, p=.1" is essentially silly because it misunderstands both the meaning of probability as a function of knowledge and our ability to create models which give meaningfully-accurate probabilities.)
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-14T17:01:35.964Z · LW(p) · GW(p)
And any debate of "p=.9" "no, p=.1" is essentially silly because it misunderstands both the meaning of probability as a function of knowledge and our ability to create models which give meaningfully-accurate probabilities.
Subjective probability is (in particular) a tool for elicitation of model parameters from expert human gut-feelings, which you can then use to find further probabilities and align them with other gut-feelings and decisions, gaining precision from redundancy and removing inconsistencies. The subjective probabilities don't promise to immediately align with physical frequencies, even where the notion makes sense.
It is a well-studied and useful process, you'd need a substantially more constructive reference than "it's silly" (or you could just seek a reasonable interpretation).
Replies from: homunq↑ comment by homunq · 2009-05-14T17:58:36.723Z · LW(p) · GW(p)
As you explain it, it's not silly.
Do you have a link for a top-level post that puts this kind of caveat on probability assignments? Personally, I think that if most people here understood it that way, they'd use more qualified language when talking about subjective probability. I also think that developing and standardizing such qualified language would be a useful project.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-14T20:46:12.955Z · LW(p) · GW(p)
It is the sense in which the term "probability" is generally understood on OB/LW, with varying levels of comprehension by specific individuals. There are many posts on probability, both as an imprecise tool and an ideal (but subjective) construction. They should probably be organized in the Bayesian probability article on the wiki. In the meantime, you are welcome to look for references in the Overcoming Bias archives.
You may be interested in the following two posts, related to this discussion:
Probability is in the Mind
When (Not) To Use Probabilities
↑ comment by homunq · 2009-05-14T15:58:03.960Z · LW(p) · GW(p)
I myself would be disappointed if over half of LW put the probability of a single biological human (not an upload, not a reconstruction - an actual descendant with the appropriate number of ancestors alive today) being alive in 100 years under 95%. I would consider that a gross instance of all kinds of biases. I'm not going to argue about scenarios here, just point out that any scenario which tends inevitably to wipe out humanity within one lifetime is totally unimaginable. That doesn't mean implausible, but it does mean improbable.
Personally, I do not believe that any person, group of people, or human-built model to date can consistently predict the probability of defined classes of black-swan events ("something that's never happened before which causes X" where X is a defined consequence such as humanity's extinction) to within even an order of magnitude for p/(1-p). I doubt anybody can get even to within two orders of magnitude consistently. (I also doubt that this hypothesis of mine will be clearly decidable within the next 20 years, so I'm not particularly inclined to listen to philosophical arguments from people who'd like to discard it.)
What I'm saying is, we should stop trying to put numbers on this without big error bars. And I've yet to see anybody propose an intelligent way to deal with probabilities like 10^(-6 +/- 4); just meta-averaging it over the distribution of possible probabilities, to come up with something like 10^-3 seems to be discarding data and to lead to problems. However, that's the kind of probability I'd put on this lemma. ("Earth made uninhabitable by normal cosmic event and rescue plans fail" would probably put a floor somewhere above 10^-22 per year.)
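The "meta-averaging" described above can be made concrete under a simplifying assumption of mine (not homunq's): treat 10^(-6 +/- 4) as a uniform distribution over the exponent from -10 to -2 and average the resulting probabilities.

```python
# Average p = 10^(-6 +/- 4) over a uniform distribution of exponents.
# The high end of the range dominates the mean, inflating 10^-6 to ~10^-3.
exponents = list(range(-10, -1))  # -10, -9, ..., -2
p_avg = sum(10.0 ** e for e in exponents) / len(exponents)
print(f"{p_avg:.1e}")  # 1.2e-03
```

This illustrates the effect homunq mentions: averaging over meta-probabilities drags a small point estimate upward by roughly three orders of magnitude.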
"The chance we're all wrong about something totally unprecedented has got to be less than 0.1%" is total hubris. Yes, totally unprecedented things happen every day. But telling yourselves stories about AGI and foom does not make these stories likely.
This is not, by the way, an argument to ignore existential risk. Even at the 10^-6 (or, averaged over meta-probabilities, 10^-3) level which I estimated, it is clearly worth thinking about, given the consequences. But if you're all getting that carried away, then Less Wrong should just be renamed More Wrong.
Replies from: homunq, Vladimir_Nesov↑ comment by homunq · 2009-05-14T16:11:48.308Z · LW(p) · GW(p)
Oh, also, I'd accept that the risk of humanity being seriously hosed within 100 years, or extinct within 1000 years, is significant - say, 10^(-3 +/- 4) which meta-averages to something like 15%.
("Seriously hosed" means gigadeath events, total enslavement, or the like. Note that we're already moderately hosed and always have been, but that seriously hosed is still distinguishable.)
↑ comment by Vladimir_Nesov · 2009-05-14T16:09:53.493Z · LW(p) · GW(p)
I myself would be disappointed if over half of LW put the probability of a single biological human alive in 100 years under 95%.
This is an assertion of your confidence in extinction risk being below 5%.
Personally, I do not believe that any person, group of people, or human-built model to date can consistently predict the probability of defined classes of black-swan events
[...]
This is not, by the way, an argument to ignore existential risk. Even at the 10^-6 (or, averaged over meta-probabilities, 10^-3) level which I estimated, it is clearly worth thinking about, given the consequences.
Not understanding a phenomenon, being unable to estimate its probability, doesn't give you an ability to place its probability below a strict bound. Your assertion of confidence contradicts your assertion of confusion.
Replies from: homunq, homunq↑ comment by homunq · 2009-08-06T21:27:11.088Z · LW(p) · GW(p)
I have confidence that nobody here has secret information that makes human extinction much more likely - because almost no information which currently exists could have more than a marginal bearing on a result which, if likely, is a result of human (that is, intelligent) interaction. Therefore I have confidence that the difference in estimates is largely not due to information, but to models. I have confidence that inductive models - say, "how often does a random species survive any hundred year period, correcting for initial population" give answers over 95% which should be considered the default. Therefore, I have confidence that a community of people who generally give lower estimates is subject to some biases (such as narrative bias).
Doesn't mean LW's wrong and I'm right. But believing that human extinction within a century is likely clearly puts LW in a minority of humanity - even a minority of rational atheists. And the fact that there is substantial agreement within the LW community on this, when uncertainty is clearly so high that orders of magnitude of disagreement are possible, makes me suspect bias.
Also, I find it funny that people will argue passionately over estimates that differ in log(p/q) from -1 to +1 (~10% to ~90%), but couldn't care less about the difference from, say, -9 to -7 (.0001% vs .000001%) or from 7 to 9. This is in one sense the right attitude for people who think they can do something about it, but it ends up biasing numbers towards log(p/q)=0 [i.e. 50%], since you are more likely to get an argument from somebody whose estimate is on the other side of 50% from yours.
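The log(p/q) notation above converts to ordinary probabilities as follows (a sketch; the function name is mine):

```python
def from_log10_odds(l: float) -> float:
    # Returns the p satisfying log10(p / (1 - p)) == l.
    return 10.0 ** l / (1.0 + 10.0 ** l)

print(round(from_log10_odds(-1), 3))  # 0.091, i.e. ~10%
print(round(from_log10_odds(1), 3))   # 0.909, i.e. ~90%
```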
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-08-08T00:31:02.299Z · LW(p) · GW(p)
The fact that we believe something unusual is only weak evidence for the validity of that unusual belief; you are right about that. And given the hypothesis that we are wrong, which is dominant while all you have is the observation that we believe something unusual, you can conclude that we are wrong because of some systematic error of judgment that makes most people here claim the unusual belief.
To move past this point, you have to consider the specific arguments, and decide for yourself whether to accept them.
Most of the beliefs people can hold intuitively are about 50% in certainty. The beliefs far from this point aren't useful as primitive concepts classifying possible events on one side or the other, since almost everything falls on one side and the human mind can't keep track of their levels of certainty. New concepts get constructed that are more native to the human mind and express the high-certainty concepts only in combination, or that are supported by non-intuitive procedures for processing levels of certainty. But if the argument depends on intuition, you aren't always capable of moving towards certainty, so you remain in doubt. This is the case for unknown unknowns, in particular.
↑ comment by homunq · 2009-05-14T18:17:20.128Z · LW(p) · GW(p)
You clipped out "to within an order of magnitude". I stated that my best-guess probability for human extinction within a century was 10^(-6 +/- 4). This is a huge confusion - 9 orders of magnitude on the probability - yet still means that I have over 80% confidence that the probability is under 10^-2. There is no contradiction here.
(It also means that, despite believing that extinction is probably one-in-a-million, I should treat it as more like one-in-a-thousand, because averaging over the meta-probability distribution naturally weights the high end. It would be a pity if this effect, of uncertainty inflating small probabilities, resulted in social feedback. When you hear me say "we should treat it as a .1% risk", I am implicitly stating that all models I can credit give a significantly lower risk. If your best model's risk-estimate is .01%, I am actually telling you that I think your model overestimates the risk.)
Replies from: Vladimir_Nesov, steven0461↑ comment by Vladimir_Nesov · 2009-05-15T08:02:47.682Z · LW(p) · GW(p)
So, where did you get those numbers from? 10^-6? 10^-2? Why not, say, 1-10^-6 instead? Gut feeling again, and that's inevitable. You either name a number, or make decisions without the help of even this feeble model, choosing directly. From what people on this site know, they believe differently from you.
I have one of the lowest estimates: 30% for not killing off 90% of the population by 2100. Most of it comes from Unfriendly AI, with an estimate of 50% for AGI foom by 2070, or 70% by 2100 (expectation of relatively low-hanging fruit; it levels off as time goes on) if nothing goes wrong with the world, and 3/4 of that to Unfriendly AI, given my understanding of how hard it is to find the right answer among many efficient world-eating possibilities, and human irrationality, making it likely that the person to invent the first mind won't think about the consequences. That's already 55% total extinction risk; add some more for biological (at least, human-inhabiting) weapons, such as an engineered pandemic (not total extinction, but easily 90%), and new possible goodies the future has to offer. It'll only get worse until it gets better. On second thought, I should lower my confidence in these explicit models; they seem too much like planning. Make that 50%.
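Multiplying out the explicit model above (a sketch of the comment's stated numbers, not an endorsement of them):

```python
# Vladimir_Nesov's back-of-envelope UFAI term, as stated in the comment.
p_agi_by_2100 = 0.70   # "70% by 2100" for AGI foom
p_unfriendly = 0.75    # "3/4 of that to Unfriendly AI"
p_ufai_risk = p_agi_by_2100 * p_unfriendly
print(round(p_ufai_risk, 3))  # 0.525, close to the "55%" cited
```

The product is 52.5%, so the comment's "55%" figure appears to round the UFAI term up slightly before adding the other risks.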
↑ comment by steven0461 · 2009-05-14T18:30:06.960Z · LW(p) · GW(p)
When you speak of "the probability", what information do you mean that to take into account and what information do you mean that not to take into account? What things does a rational agent need to know for the agent's subjective probability to become equal to the probability? (Not a rhetorical question.)
Replies from: homunq, homunq↑ comment by homunq · 2009-05-14T19:21:00.699Z · LW(p) · GW(p)
"the probability" means something like the following: take a random selection of universe-histories starting with a state consistent with my/your observable past and proceeding 100 years forward, with no uncaused discontinuities in the laws of physics, to a compact portion of a wave function (that is "one quantum universe", modulo quantum computers which are turned on). What portion of those universes satisfy the given end state?
Yes, I'm doing what I can to duck the measure problem of universes, sorry. And of course this is underdefined and unobservable. Yet it contains the basic elements: both knowledge and uncertainty about the current state of the universe, and definite laws of physics, assumed to independently exist, which strongly constrain the possible outcomes from a given initial state.
On a more practical level, it seems to be the case that, given enough information and study of a class of situations, post-hoc polynomial-computable models which use non-determinism to model the effects of details which have been abstracted out, can provide predictions about some salient aspects of that situation under certain constraints. For instance, the statement "42% of technological societies of intelligent biological agents with access to fissile materials destroy themselves in a nuclear holocaust" could, subject to the definitions of terms that would be necessary to build a useful model, be a true or false statement.
Note that this allows for three completely different kinds of uncertainty: uncertainty about the appropriate model(s), uncertainty about the correct parameters for those model(s), and uncertainty inherent within a given model. In almost all questions involving predicting nonlinear interactions of intelligent agents, the first kind of uncertainty currently dominates. That is the kind of uncertainty I'm trying (and of course failing) to capture with the error bar in the exponent. Still, I think my failure, which at least acknowledges the overwhelming probability that I'm wrong (albeit in a limited sense) is better than a form of estimation that presents an estimate garnered from a clearly limited set of models as a final one.
In other words: I'm probably wrong. You're probably wrong too. Since giving an estimate under 95% requires certain specific extrapolations, while almost any induction points to estimates over 95%, I would expect most rational people to arrive at an estimate over 95%, and would suspect any community with the reverse situation to be subject to biases (of which selection bias is the most innocuous). This suspicion would not apply when dealing with individuals.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-14T21:30:44.332Z · LW(p) · GW(p)
See the posts "Priors as Mathematical Objects", "Probability is Subjectively Objective" linked from the Priors wiki article.
↑ comment by homunq · 2009-05-14T20:15:52.407Z · LW(p) · GW(p)
To get the right answer, you need to make an honest effort to construct a model which is an unbiased composite of evidence-based models. Metaphorical reasoning is permitted as weak evidence, but cannot be the only sort of evidence.
And you also need to be lucky. I mean, unless you have the resources to fully simulate universes, you can never know that you have the right answer. But the process above, iterated, will tend to improve your answer.
↑ comment by orthonormal · 2009-05-04T23:41:27.042Z · LW(p) · GW(p)
Without even going into different specific risks, you should beware the conjunction fallacy (or, more accurately, its flip side) when assigning such a high probability. A lack of details tends to depress estimates of an event that could occur as a result of many different causes, since if you aren't visualizing a full scenario it's tempting to say there's no way for it to occur.
You're effectively asserting that not only are all of the proposed risks to humanity's survival this minuscule in aggregate, but that you're also better than 99.9% confident that there won't be invented or discovered anything else that presents a plausible existential threat. How do you arrive at such confidence of that?
↑ comment by Vladimir_Nesov · 2009-05-03T12:19:33.902Z · LW(p) · GW(p)
Then, as a necessary condition (leaving other risks out of the discussion for the moment), you either don't believe in the feasibility of AGI, or you believe in an objective morality which any AGI will "discover". Which one is it?
Replies from: taw↑ comment by taw · 2009-05-03T17:37:48.124Z · LW(p) · GW(p)
I don't believe in the feasibility of any scenario like AGI foom.
First, I fail to see how anybody taking an outside view on AI research - which is a clear instance of a class of sciences with extraordinary claims and a very long history of failure to deliver in spite of unusually adequate funding - can think otherwise; to me it all seems like an extreme case of insider bias to assign non-negligible probabilities to scenarios like that. Virtually no sciences with these characteristics delivered what they promised (even if they delivered something useful and vaguely related).
Even if AGI happens, it is extraordinarily unlikely it will be any kind of foom, again based on the outside view argument that virtually no disruptive technologies were ever foom-like.
Both extraordinarily unlikely events would have to occur before we would be exposed to the risk of AGI-caused destruction of humanity, which even in that case is far from certain.
Replies from: loqi, Vladimir_Nesov↑ comment by loqi · 2009-05-04T01:14:43.274Z · LW(p) · GW(p)
AI research - which is a clear instance of a class of sciences with extraordinary claims
It seems like you're reversing stupidity here. What correlation does a failed prediction have with the future?
Replies from: taw↑ comment by taw · 2009-05-04T09:52:29.218Z · LW(p) · GW(p)
It's not reverse stupidity - it's "reference class forecasting", a more specific instance of our generic "outside view" concept. I gather data about AI research as an instance, look at other cases with similar characteristics (hyped, overpromised, and underdelivered over a very long time span), and estimate based on that. It has been shown to work better than the inside view of estimating based on the details of a particular case.
http://en.wikipedia.org/wiki/Reference_class_forecasting
Replies from: AnnaSalamon, loqi↑ comment by AnnaSalamon · 2009-05-04T10:09:48.668Z · LW(p) · GW(p)
I agree that reference class forecasting is reasonable here. I disagree that you can get anything like the 99.999% probability you claim from applying reference class forecasting to AI projects. Since rare events happen, well, rarely, it would take an exceedingly large data-set before an "outside view" or frequency-based analysis would imply that our actual expected rate should be placed as low as your stated 0.001%. (If I flip a coin with unknown weighting 20 times, and get no heads, I should conclude that heads are probably rare, but my notion of "rare" here should be on the order of 1 in 20, not of 1 in 100,000.)
With more precision: let's say that there's a "true probability", p, that any given project's "AI will be created by us" claim is correct. And let's model p as being identical for all projects and times. Then, if we assume a uniform prior over p, and if n AI projects that have been tried to date have failed to deliver, we should assign a probability of (n+1)/(n+2) to the chance that the next project from which AI is forecast will also fail to deliver. (You can work this out by an integral, or just plug into Laplace's rule of succession.)
If people have been forecasting AI since about 1950, and if the rate of forecasts or AI projects per decade has been more or less unchanged, the above reference class forecasting model leaves us with something like a 1/[number of decades since 1950 + 2] = 1/8 probability of some "our project will make AI" forecast being correct in the next decade.
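A sketch of the Laplace's-rule estimate described above (the function name and the per-decade trial count are my framing of the comment's numbers):

```python
def p_next_also_fails(n_failures: int) -> float:
    # Laplace's rule of succession with a uniform prior: after n failures
    # and no successes, the next trial fails with probability (n+1)/(n+2).
    return (n_failures + 1) / (n_failures + 2)

decades_of_forecasts = 6  # roughly 1950-2009, one "trial" per decade
p_ai_next_decade = 1 - p_next_also_fails(decades_of_forecasts)
print(p_ai_next_decade)  # 0.125, i.e. the 1/8 figure in the comment
```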
↑ comment by loqi · 2009-05-06T05:06:38.316Z · LW(p) · GW(p)
Oops. You're totally right.
That said, I still take issue with reference class forecasting as support for this statement:
I don't believe in the feasibility of any scenario like AGI foom.
Considering that the general question "is the foom scenario feasible?" doesn't have any concrete timelines attached to it, the speed and direction of AI research don't bear too heavily on it. All you can say about it based on reference class forecasting is that it's a long way away if it's both possible and requires much AI research progress.
Even if AGI happens, it is extraordinarily unlikely it will be any kind of foom, again based on the outside view argument that virtually no disruptive technologies were ever foom-like.
I'm not sure "disruptive technology" is the obvious category for AGI. The term basically dereferences to "engineered human-level intelligence", easily suggesting comparisons to various humans, hominids, primates, etc.
↑ comment by Vladimir_Nesov · 2009-05-04T01:21:40.834Z · LW(p) · GW(p)
A reasonable position, so long as you remain truly ignorant of what AI is specifically about.
Replies from: taw↑ comment by taw · 2009-05-04T09:57:14.619Z · LW(p) · GW(p)
I don't know if inside view forecasting can ever be more reliable than outside view forecasting. It seems that insiders, as a general and very robust rule, tend to be strongly overconfident, and see all kinds of reasons why their particular instance is different and will have a better outcome than the reference class.
http://www.overcomingbias.com/2007/07/beware-the-insi.html
http://en.wikipedia.org/wiki/Reference_class_forecasting
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-05-04T10:44:56.272Z · LW(p) · GW(p)
I don't know if inside view forecasting can ever be more reliable than outside view forecasting. It seems that insiders, as a general and very robust rule, tend to be strongly overconfident, and see all kinds of reasons why their particular instance is different and will have a better outcome than the reference class.
Try applying that to physics, engineering, biology, or any other technical field. In many cases, the outside view doesn't stand a chance.
comment by byrnema · 2009-05-03T03:26:42.771Z · LW(p) · GW(p)
One observation and a related suggestion:
(1) We've gone off-topic regarding the demands of this post. Ciphergoth asks what traps we can defuse in advance, before we start to talk about specific ideas to do with what one does in order to change the world. However, I'm neutral about not following instructions -- perhaps ciphergoth hasn't asked the right question after all, and we need to triangulate towards the right question.
(2) I've got no idea how to begin answering some of the other problems that are being posed. (E.g., how can we best help the world?) So I like Vladimir_Nesov's reminder that there is a post about the value of not proposing solutions until the problem has been discussed as thoroughly as possible. I think we might try it to see if it works in practice. So let's talk about the problem/problems. What is it / what are they?
comment by Vladimir_Nesov · 2009-05-03T23:29:56.705Z · LW(p) · GW(p)
Richard Posner on the economics of the flu epidemic:
We need an overall "catastrophe budget" that would match expenditures to the net expected benefits of particular measures targeted at particular catastrophic threats.
comment by pjeby · 2009-05-03T00:50:27.603Z · LW(p) · GW(p)
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening.
Sounds like good work if you can get it. ;-)
More seriously, though, if you can't handle the getting out of bed part, it seems like taking on much bigger tasks might be off the agenda. And if more people were getting laid in the evening, we might have less violent conflict in the world.
But I'm definitely with you on the superintelligence puzzle solving being a bit less important. ;-)
There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?
The first trap is assuming that having good ideas or being able to talk about them has anything to do with getting others to go along with them. For that, you need to be able to understand and expect to deal with irrational hidden agendas, not open rational discussion. If you can't deal with the former, the latter isn't going to help much.
comment by AlanCrowe · 2009-05-02T20:37:48.721Z · LW(p) · GW(p)
So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make.
This sentence smuggles in the assumption that we are in a position to reduce existential risk.
Two big risk are global warming and nuclear war.
The projections for large changes in climate depend on continuing growth in wealth and population in order to get the high levels of carbon dioxide emission needed to create the change. If it really goes horribly wrong, we are still looking at a self limiting problem with billions dying but billions living on in nuclear powered prosperity at higher latitudes. It is not an existential risk.
Nuclear war in the next hundred years is shaping up to be second-rank nations duking it out with 20-kiloton fission weapons, not 200-kiloton fusion weapons. That is enough to change building codes in ways that make current earthquake precautions seem cheap, but it is a long way short of an existential risk.
Existential risk might be large, but it comes from the unknown unknowns, not the known unknowns, and since we don't even know what we don't know, there is nothing useful we can do beyond maintaining a willingness to recognise a new danger if it makes its possibility known.
Replies from: Vladimir_Nesov, mattnewport
↑ comment by Vladimir_Nesov · 2009-05-02T20:50:36.381Z · LW(p) · GW(p)
Alan, since there are in fact known existential risks, you are jumping to conclusions here without even superficial research (or you are carefully hiding that fact by ignoring the conclusions you disagree with).
Replies from: Nick_Tarleton
Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.
[...]
I have often used this edict with groups I have led, particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately.
↑ comment by Nick_Tarleton · 2009-05-03T05:30:47.288Z · LW(p) · GW(p)
Alan, since there are in fact known existential risks, you are jumping to conclusions here without even superficial research (or you are carefully hiding that fact by ignoring the conclusions you disagree with).
Seconded. Also see:
Nick Bostrom's Existential Risks paper from 2002
(Agreed, though, that global warming isn't a direct existential risk, but it could spur geopolitical instability or dangerous technological development. Disagree that global thermonuclear war is very unlikely, especially considering accidents, but even that seems highly unlikely to be existential.)
Replies from: homunq
↑ comment by homunq · 2009-05-14T16:32:17.434Z · LW(p) · GW(p)
I think that the original poster was discounting low-probability non-anthropogenic risks (sun goes nova, war of the worlds) and counting as "unknown unknowns" any risk which is unimaginable, that is, any risk involving significant new developments that limit the capacity of human (metaphorical) reasoning to assess its specific probability or consequences at this time; this includes all fooms, gray goos, etc.
I would agree with the poster that a general attitude of readiness (that is, education, democracy, limits on overall social inequality, and precautionary attitudes to new technologies) is probably orders of magnitude more effective at dealing with such threats than any specific measures until a specific threat becomes clearer.
And I dispute the characterization that, if I'm correct about the poster's attitudes, they're "carefully hiding conclusions [they] disagree with"; a refusal to consider vague, handwaving categories of possibility like gray goo in the same class as much more specific possibilities like nuclear holocaust may not be your attitude, but that does not make it dishonest.
↑ comment by mattnewport · 2009-05-02T21:37:41.326Z · LW(p) · GW(p)
I agree with your characterization of the risks of global warming and nuclear war. I get the impression that people allow the reasonably high probability of a few degrees of warming or a few nuclear attacks to unduly influence their estimates of the probability of true existential risk from these sources.
In both cases I'm much more receptive to discussions of harm reduction than to scaremongering about 'the end of the world as we know it'. The twentieth century offers quite a few examples of events that caused tens of millions of deaths and yet did not represent existential risks. Moderate global warming or a few nuclear detonations in or over major cities would be highly disruptive, would have a high cost in human lives, and are certainly legitimate concerns, but they are not existential risks, and talking of them as such is unhelpful in my opinion.