Imperfect Levers
post by blogospheroid · 2010-11-17T19:12:41.564Z · LW · GW · Legacy · 36 comments
Related to: Lost Purposes, The Importance of Goodhart's Law, Homo Hypocritus, SIAI's scary idea, Value Deathism
Summary: Whenever human beings seek to achieve goals far beyond their individual ability, they use leverage of some kind or another. Creating organizations is a very powerful source of leverage. However, by their nature, organizations are imperfect levers, and the original purpose is often lost: the inertia of present forms and processes dominates beyond its useful period. The present system of the world has many such imperfect organizations in power, and any of them developing near-general intelligence without a significant redesign of its utility function could be a source of existential risk or values risk.
When human beings seek to achieve large, ambitious goals, it is natural to use some kind of leverage, some ability to multiply one's power. Financial leverage, taking on debt, is one of the most common forms, as it turns a small profit or spread into a large one. An even more powerful form of leverage is to bring people together and create organizations to achieve the purpose one has set out to achieve.
However, unlike the comparative cleanliness of financial leverage (which is not without problems of its own), organizational leverage is messy, especially if the organization is created to achieve goals that are subtle and complex. There is an entire body of management literature that tries to align the interests of principals and agents, but most agents do not escape Goodhart's law. As organizations grow, their incentive structures become more mechanized. Agents will do what they have been incentivized to do.
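To make this failure mode concrete, here is a minimal sketch of Goodhart's law (Python; the payoff functions and the split into "effort" and "gaming" are invented for illustration, not a model of any real organization). An agent rewarded on a proxy metric, if it searches hard enough, finds the gap between the proxy and what the principal actually wants:

```python
import random

random.seed(0)

def true_value(effort, gaming):
    """What the principal actually wants: real output, damaged by gaming."""
    return effort - 2.0 * gaming

def proxy_metric(effort, gaming):
    """What the agent is rewarded on: real output plus whatever gaming adds."""
    return effort + 3.0 * gaming

# Each candidate policy mixes genuine effort with metric-gaming.
policies = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(1000)]

# The agent picks the policy that maximizes its incentive (the proxy)...
agents_pick = max(policies, key=lambda p: proxy_metric(*p))
# ...while the principal would have picked the one maximizing true value.
principals_pick = max(policies, key=lambda p: true_value(*p))

print("agent's pick:     true value =", round(true_value(*agents_pick), 2))
print("principal's pick: true value =", round(true_value(*principals_pick), 2))
```

The harder the agent optimizes the proxy (the more candidate policies it searches over), the more reliably it lands on the gaming term and the worse the true value gets; that divergence under optimization pressure is Goodhart's law in miniature.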
An example
Recently, we saw an awesome demonstration of great futility, Quantitative Easing II. This was the purchase of a large quantity of US government bonds by the US Fed in the hope of creating more prosperity. The people feeling poor due to the bursting of the housing bubble are definitely not the recipients of this money, and they are the ones who eventually have to start producing and consuming again for this recession to end. Where will they get the money? The expected causal chain (econ experts can correct me if I'm wrong here) went like this:
Buying US bonds -> creating the expectation of inflation in the market -> banks wanting to lend out their reserves -> people getting credit from banks -> people spending again -> improved profits in firms -> those firms hiring -> more jobs, and so on.
The extremely long causal chain is not the main failure here. Nor is the fact that there are directly contradictory policies acting against step 3 (paying interest on reserves maintained at the Fed). My point is that even if this entire chain were to happen and the nominal end result, GDP growth, were to be achieved, the resulting prosperity would not be long lasting because it is not one that is based on a sustainable pattern of production and trade.
Maintaining equitable prosperity in a society facing competition from younger and poorer countries is a tough problem. Instead of tackling this problem head-on, the various arms of government continued on their inertial paths and chose to adapt the patterns of home and asset ownership to create an illusion of prosperity. After all, the counter (GDP) was still running.
The Pattern
Many smart people have voiced opinions against such blind following of metrics in government, but almost all organizations, once beyond the grip of their founders, fall into some such pattern. Compassionate mystic movements become rigid churches. Political parties, for example, pay more attention to lip service, pomp, show, and mind-killing than to actual issues. Companies seek to make money at the expense of creating actual value, forgetting that money is only a symbol of value. A lot of people have bemoaned the brainpower that is moving into finance. And something even more repugnant, there is an entire economy thriving around the war on drugs, with everyone in on the cut.
In the short run, all these organizations/formations/coalitions are winning. So voices raised against their behaviour, ones that do not actually threaten them, are being ignored.
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" - Upton Sinclair
There is a great deal of intelligence being applied in these areas today, by people far smarter than you or me. But the overall systems are still not as intelligent as a human. In the long run, they are probably undermining their own foundations.
The really scary part is that these corporations and governments, while sub-humanly intelligent right now, are probably going to be at the forefront of creating GAI. I expect that near-human intelligence will emerge in organizations, most probably as a well-knit human+computer team, perhaps with brain-computer interfaces. This team may or may not share all the values of the organization, but whatever incentives this intelligence is rewarded for, it will seek to achieve. And if these incentives are as narrowly phrased as
- maintain power over a certain set of people (as we might imagine the actual value system of a government to be)
- make money without getting into legal trouble (as we might imagine the value system of a corporation to be)
then there will be a continuation of today's sheer insane optimization, but on a monstrous scale. The altruists amongst us have been unable to curb the sub-humanly intelligent larvae; what will we do against the near-human or super-human butterfly? Competition between such entities would very quickly eliminate most compassionate values and a lot of what humanity holds dear. (My personal belief is that corporate AIs might be a little safer, as they would probably crash the money system but would not be lobbing nuclear weapons at rivals.)
In the end, I don't want to undermine the very idea of organizations, because they have brought us unprecedented prosperity. I could not be transmitting my opinions to you without the support of many such organizations.
So my take is that I don't find SIAI's scary idea a completely alien one. The present sources of optimization power, whether they be people in governments, LLCs in the present mixed-economy system, or political parties in the present plurality system, do not show any inclination towards understanding or moving towards true human morality. They do not "search their souls"; they respond to incentives. They act like a system with a utility function, a function indifferent to morality. Their AIs will inherit the characteristics of these imperfect levers, and there is no reason to expect, from increased intelligence alone, that an AI will move towards friendliness or morality.
EDIT: Edited to make the conclusion clear and to set right the Goodhart's law link. Apologies to Vaniver, Nornagest, atucker, b1shop, Will_Sawin, AdShea, Jack, mwaser and magfrump, who posted before the edit. Thanks to xamdam, who pointed out the wrong link.
36 comments
comment by b1shop · 2010-11-17T20:54:00.512Z · LW(p) · GW(p)
Regarding monetary policy, there are a lot of conduits through which expansionary policy increases GDP growth. Others include…
Increased Ms -> inflation -> depreciation -> increased net exports -> increased GDP
Increased MB -> more lending -> more financial activity -> increased GDP
Increased Ms -> inflation -> lower real interest rate -> cheaper borrowing -> more financial activity -> increased GDP
So there's more going on than just inflation expectations. Inflation expectations usually lag behind inflation, so I wouldn't count on that channel as much as others.
You make a very good point about IRER decreasing the amount of lending.
My favorite monetary economists are Scott Sumner at Bentley University and George Selgin at UGA. The former makes the point you made quite frequently. He writes an interesting blog at http://www.themoneyillusion.com.
He would argue that changing the expected path of NGDP growth is a disruptive action with real consequences.
He would definitely disagree with:
My point is that even if this entire chain were to happen and the nominal end result, GDP growth, were to be achieved, the resulting prosperity would not be long lasting because it is not one that is based on a sustainable pattern of production and trade.
If the expansionary monetary policy is designed to counteract an unexpected drop in NGDP growth, then it is a good thing. If the Fed is going to pay IRER and increase ER by over a trillion dollars in late 2008, then it had better be more expansionary or risk a destructive change in NGDP's path that will move us away from sustainable patterns of production and trade.
More mainstream economists would disagree with your quote for other reasons that I'm less familiar with.
All of this is just picking nits and has nothing to do with the core message of your post.
↑ comment by Will_Sawin · 2010-11-18T00:57:23.129Z · LW(p) · GW(p)
I would, in addition, add the textbook economics that this downturn is not caused by long-term problems of competitiveness, and that long-term problems of competitiveness solve themselves. If you can sell more because you're poor, that makes you less poor.
If there are problems, they are long-term, structural ones. The business cycle is a different matter.
In the short term, it leads to a changing exchange rate, which also helps the rich country compete. China is holding down the exchange rate, which leads to domestic inflation. America has domestic deflation, and increasing inflation will force China to change the exchange rate or suffer more inflation.
comment by Vaniver · 2010-11-17T20:43:25.200Z · LW(p) · GW(p)
I disagree strongly with the idea that organizations have sub-human intelligence. Organizations are significantly smarter than people- indeed, people can only afford specialized knowledge because of their organizational membership.
Organizations don't have values in the ways that humans do, and so value creep is a giant problem- but value creep and intelligence loss are very different things.
↑ comment by Nornagest · 2010-11-17T21:06:13.542Z · LW(p) · GW(p)
I think it might be productive to taboo "intelligence" here. It's pretty clear that any reasonably large organization has more raw computational power at its disposal than any individual -- but you need to make some nontrivial assumptions to say that organizations are better on average at allocating that power, or at making and acting on predictions.
There are any number of organization-specific failures of rationality -- groupthink, the Peter Principle, etc. It's not immediately clear to me under what circumstances these would outweigh the corresponding benefits for the classes of problem that are being discussed, although I suspect organizations would still outperform individuals most of the time (with some substantial caveats).
↑ comment by Vaniver · 2010-11-17T22:18:43.903Z · LW(p) · GW(p)
What class of problems are being discussed? The OP seemed pretty open-ended, and it seems to me that for any problem, a well-designed organization will outperform a well-chosen individual. I agree that organizations have failures- but so do individuals.
Indeed, it seems we're more likely to get a recursively self-improving AI (with or without the G) through organizational design than by approaching the problem directly.
↑ comment by Nornagest · 2010-11-17T22:39:07.511Z · LW(p) · GW(p)
It's not entirely clear, but I get the impression that the OP is mainly concerned with how efficiently organizational effort satisfies our long-term preferences. If I'm right, then specifying the goal precisely would amount to solving the "meaning of right" problem, which is probably why the post seems a little muddled.
As to organizational vs. individual rationality, I broadly AWYC -- but with the caveats that the optimal organizational design is not identical for all problems, and that I don't have anything approaching a proof.
↑ comment by atucker · 2010-11-18T01:51:35.508Z · LW(p) · GW(p)
I entirely agree with the part about organizations being smarter than people. In terms of actually being able to steer the future into more favorable regions, I'd say that organizations are smarter than the vast majority of humans.
To use a specific example, my robotics team is immensely better at building robots than I am (or anyone else on the team is) on my own. Even if it messes up really really badly it's still better at building a robot in the given constraints (budgetary constraints, 6 week time span) than I am.
I can see an argument being made that organizations don't very efficiently turn raw computational power into optimization though.
Ima split up intelligence into an optimizer (being able to more effectively reach a specified goal) and an inferencer (being able to process information to produce accurate models of reality).
There are weaknesses of the team as an inferencer. It seems to be (largely) unable to remember things beyond the extent that individuals involved with the team do, and all connecting and integrating of information is ultimately done by individuals.
Though the organization does facilitate specialization of knowledge, and conversations produce better ideas than an individual working alone would, I don't think it's a fundamental shift. It seems to be more a matter of augmenting a human inferencer, and combining those results for better optimization, rather than a fundamental change in cognitive capability.
To illustrate, breakthroughs in science are certainly helped by universities, but I doubt that you can make breakthroughs significantly faster by combining all of the world's universities into one well-organized super-university. There's a limit to how brilliant people can be.
That being said, I'm pretty sure that the rate of incremental change could be drastically improved by a combination like that.
My 2 cents.
↑ comment by Vaniver · 2010-11-18T04:47:18.715Z · LW(p) · GW(p)
Though the organization does facilitate specialization of knowledge, and conversations produce better ideas than an individual working alone would, I don't think it's a fundamental shift.
Maybe, if you discount the organization of "modern civilization." There's certainly a fundamental shift between self-reliant generalists and trading specialists. But is the difference between programmers in a small company and programmers working alone "fundamental"? Possibly not, though I'd probably call it that.
To illustrate, breakthroughs in science are certainly helped by universities, but I doubt that you can make breakthroughs significantly faster by combining all of the world's universities into one well-organized super-university. There's a limit to how brilliant people can be.
Actually, there's some question about this. Having one flagship university where you put all the best people seems significantly better than spreading them out across the country/world. All the work that gets done at conferences could be the kind of work that gets done at a department's weekly meeting. Part 4 of this essay suggests something similar.
Now, would it be best to have one super-university? Probably not- one of the main benefits of top universities is their selectivity. If you're at Harvard, you probably have a much higher opinion of your colleagues than if you're at a community college. It seems there are additional benefits to be gained from clustering, but there are decreasing returns to clustering (that become negative).
↑ comment by atucker · 2010-11-18T04:58:07.686Z · LW(p) · GW(p)
Maybe, if you discount the organization of "modern civilization." There's certainly a fundamental shift between self-reliant generalists and trading specialists. But is the difference between programmers in a small company and programmers working alone "fundamental"? Possibly not, though I'd probably call it that.
I avoided this example because I don't have a particularly good goalset for modern civilization to cohesively work towards, so discussing optimization is sort of difficult.
In terms of optimization to my standards, I agree that it's a huge shift. I can get things from across the world shipped to my door, preprocessed for my consumption, at a ridiculously low cost. But in terms of information-processing ability I feel like it's not that gigantic of a deal. It processes way more information, but it can't do much beyond the capabilities of its constituent people with individual pieces of data (like eyeballing a least-squares regression line or properly calculating a posterior probability without using an outside algorithm to do so). (Note: lots of fuzzy linguistic constructions in the previous two sentences. I notice some confusion.)
Now, would it be best to have one super-university? Probably not- one of the main benefits of top universities is their selectivity. If you're at Harvard, you probably have a much higher opinion of your colleagues than if you're at a community college. It seems there are additional benefits to be gained from clustering, but there are decreasing returns to clustering (that become negative).
I wasn't being clear, sorry. Concentrating your best people does help, but I don't think you can get the equivalent of the best people by just clustering together enough people, no matter how good your structure is.
↑ comment by Vaniver · 2010-11-18T05:16:40.616Z · LW(p) · GW(p)
Concentrating your best people does help, but you can't get the equivalent of the best people by just clustering together enough people.
Not sure about this either. It seems like a few good people can be as effective as a great person, and a few great people as effective as a fantastic person, especially when you're looking for things that are broader (design an airplane) rather than deeper (design general relativity). It's very possible we've hit saturation and so this isn't as noticeable; the few great people aren't competing against one fantastic person, but a few fantastic people.
↑ comment by CronoDAS · 2010-11-19T07:09:21.415Z · LW(p) · GW(p)
The OP seemed pretty open-ended, and it seems to me that for any problem, a well-designed organization will outperform a well-chosen individual.
Counterexample: Very few great works of literature were written by committee. The King James Bible may be the one example of one that was. (Note that I'm explicitly excluding "literature" meant to be performed by actors instead of read from a page.)
↑ comment by Vaniver · 2010-11-19T17:09:39.085Z · LW(p) · GW(p)
Counterexample: Very few great works of literature were written by committee.
Counter-counterexample: very few great works of literature were created by hermits. And, of those that were written by hermits, most of the time that's legendary, like with the Tao Te Ching.
It's true that in any organization, there's a level where individuals dominate. Organizations are built up of individuals, so that level must exist and it must be significant. But whenever you go up a step, you often find an organization that helps those individuals accomplish a task. Form follows function- and so for creative works, those organizations tend to be loose groups of mutually inspiring people. Good design happens in chunks.
The case for literature seems a bit worse than the case for painting, but my feeling is still that the best literature comes out of an organization of great and good people, not from fantastic people working alone. Particularly if you're only interested in the best literature you've heard of, not the best literature that didn't sell a thousand copies.
↑ comment by wedrifid · 2010-11-19T08:35:21.318Z · LW(p) · GW(p)
Very few great works of literature were written by committee. The King James Bible may be the one example of one that was.
I would dispute that (based on the opinions and information I held when I was religious). The King James Bible is overrated.
↑ comment by CronoDAS · 2010-11-19T08:48:22.601Z · LW(p) · GW(p)
Well, yeah, there's a lot of stuff in the Bible that, when considered as literature, is actually pretty bad by any standard, such as the extensive listing of the laws of ancient Israel. And there have been a lot of changes in the English language since the King James Bible was written, so it's hard to judge it as a translation by reading it today, but it's supposed to have had some really good poetry in it - for a translation, at least.
↑ comment by wedrifid · 2010-11-19T12:03:05.781Z · LW(p) · GW(p)
And there have been a lot of changes in the English language since the King James Bible was written
That's a good point. I'm also comparing it to other translations done by committees. Better committees with more education, superior technology and with more early manuscripts to work with. That doesn't leave much to be impressed with.
↑ comment by blogospheroid · 2010-11-18T03:36:08.830Z · LW(p) · GW(p)
Indeed, it seems we're more likely to get a recursively self-improving AI (with or without the G) through organizational design than by approaching the problem directly.
Precisely the point I was trying to make with "well-knit human+computer team".
↑ comment by blogospheroid · 2010-11-18T03:33:36.019Z · LW(p) · GW(p)
Take a theoretical human being who has most of the actionable knowledge that a corporation possesses at one moment. My point is that the decision she makes will be better, for the bottom line, than the ones most corporations make today. Today's business intelligence systems try to achieve this, but they are only proxies.
↑ comment by timtyler · 2010-11-18T23:43:28.547Z · LW(p) · GW(p)
Take a theoretical human being who has most of the actionable knowledge that a corporation possesses at one moment.
So, this "theoretical" human has Google's data centres between her ears? One has to wonder what the point of such "theoretical" considerations is - if they start out this absurd.
↑ comment by Vaniver · 2010-11-18T04:19:37.574Z · LW(p) · GW(p)
My issue is that, depending on what you mean by "actionable knowledge" and "bottom line," this is either impossible or what already happens. I'm only interested in realistic theories.
The actionable knowledge a corporation possesses is all the actionable knowledge possessed by all of its members. It may not aggregate that knowledge very well for decision-making purposes, but the limitations it faces are generally human limitations (mostly on time and memory). When workarounds are adopted, it's not "bring a smarter person on board" but organizational tricks to lessen the knowledge cost involved. Several retail outlets decide what products to stock by polling their employees (who are strongly representative of their customer base), and I believe a few even have it set up as a prediction market, which is an organizational trick that replaced having a clever person do research and tell them what their customer base wants.
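As a toy illustration of that aggregation trick (a minimal sketch with invented numbers, using a plain average as a stand-in for a real prediction market), pooling many noisy employee estimates gets closer to the truth than a typical individual does, without any individual getting smarter:

```python
import random

random.seed(1)

TRUE_DEMAND = 0.30  # fraction of customers who would actually buy the product

def employee_estimate(sample_size=20):
    """Each employee sees a small, noisy sample of customers."""
    buyers = sum(random.random() < TRUE_DEMAND for _ in range(sample_size))
    return buyers / sample_size

estimates = [employee_estimate() for _ in range(50)]

pooled = sum(estimates) / len(estimates)
avg_individual_error = sum(abs(e - TRUE_DEMAND) for e in estimates) / len(estimates)

print("pooled estimate error:   ", round(abs(pooled - TRUE_DEMAND), 3))
print("typical individual error:", round(avg_individual_error, 3))
```

The pooled error comes out consistently far smaller than the typical individual error, which is one sense in which the organization "knows" more than any of its members.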
The alternative is that you mean "if I gave the CEO's reports to someone cleverer, that cleverer person would make a better decision." Possibly, but anyone you have in mind would probably be able to get the CEO job normally.
By bottom line you can either mean corporate profit or total social benefit. If it's the first, corporations are at least as good as individuals at doing that (often better, because it's easier to insulate decision-makers from the emotional consequences of their actions), and if it's the second, we again run into the knowledge-cost problem. How can a single person hold in their head all the information necessary to determine the total social impact of an action?
comment by taw · 2010-12-05T21:30:38.033Z · LW(p) · GW(p)
Recently, we saw an awesome demonstration of great futility, Quantitative Easing II.
Do not use political examples unless absolutely necessary, and even then - just don't. You're automatically generating a hostile reaction in half of your audience. (And in this case you're just wrong, and clearly lack a basic understanding of monetary policy; but even if you were right, just don't use such examples.)
comment by Carinthium · 2010-11-17T23:14:22.865Z · LW(p) · GW(p)
1- Most modern corporations would have been founded with the explicit goal of making money.
2- Why aim for "equitable" prosperity?
3- Are you discussing prosperity in absolute or relative terms?
Not commenting on the AI issue, as I don't know enough to meaningfully contribute.
comment by mwaser · 2010-11-17T23:06:02.799Z · LW(p) · GW(p)
For-profit corporations, as a matter of law, have the goal of making money and their boards are subject to all sorts of legal consequences and other unpleasantnesses if they don't optimize that goal as a primary objective (unless some other goal is explicitly written into the corporate bylaws as being more important than making a profit -- and even then, there are profit requirements that must be fulfilled to avoid corporate dissolution or conversion to a non-profit -- and very few corporations have such provisions).
Translation
Corporations = powerful, intelligent entities with the primary goal of accumulating power (in the form of money).
↑ comment by timtyler · 2010-11-18T23:45:22.723Z · LW(p) · GW(p)
It's maximise profits under the constraint of obeying the law.
The latter part is where some moral constraints are explicitly encoded.
↑ comment by wnoise · 2010-11-19T06:30:42.329Z · LW(p) · GW(p)
It's maximise profits under the constraint of obeying the law.
No, it's maximize profits even after the costs (such as fines or lost business) of possibly getting caught violating the law. There are many cases where the fines for a given practice are less than the profit from said activity...
↑ comment by timtyler · 2010-11-19T20:48:03.401Z · LW(p) · GW(p)
Mark Waser (above) was talking about corporations being legally compelled to maximise profit - in some jurisdiction or other. They are not legally compelled to break the law.
↑ comment by wnoise · 2010-11-19T21:09:48.381Z · LW(p) · GW(p)
I know. The legality of an act is not binary. An act can be completely legal, completely illegal, a civil violation with varying fines, ignored or swept under the rug for some actors, and heavily penalized for other actors.
Nor are the incentives of acting in a system that has a legal system quite the same as the incentives due just to the legal system.
Corporations are legally compelled to maximize profit in the sense that this is grounds for a shareholders' suit. Even given this limited premise, operating officers are less likely to face such a suit if they do better by violating laws that they are unlikely to be caught at, or where the consequences of getting caught are minimal. The same holds, of course, for shareholder elections.
They're also compelled to maximize profit in that that is one way individuals within the corporation are likely to do better. Again, this holds even when violating laws that are unlikely to have large bad consequences for the individuals.
comment by JenniferRM · 2010-11-18T18:38:42.281Z · LW(p) · GW(p)
Many smart people have voiced opinions against such blind following of metrics in government, but almost all organizations, once beyond the grip of their founders, fall into some such pattern. Compassionate mystic movements become rigid churches. Political parties, for example, pay more attention to lip service, pomp, show, and mind-killing than to actual issues. Companies seek to make money at the expense of creating actual value, forgetting that money is only a symbol of value.
It would be useful to see a link to a peer-reviewed study supporting each of these summaries. I would be surprised (and hence informed!) if they all turned out to be clearly true. My understanding of the sociology of religion is actually that over time religious movements become more liberal and cosmopolitan, in opposition to their early doctrines and the needs of their original members (and of later members similar to the founders). In the sociology of religion this is basically accepted as a driver behind the formation of schismatic religious movements, and the theoretical arguments are generally about the causative factors behind the change in membership over time. Some argue that the increasing number of church members who want cosmopolitan doctrines is simply regression towards the mean (away from the atypical needs of early members), and others argue that the doctrines play a causal role in increasing the worldly success of church members. I suspect it is a little of both.
Also, I don't think companies ever have the purpose of simply creating value "in general" so it can't be a lost purpose. My understanding is that companies are created to solve the relatively well defined and more limited problem of doing something so unique and positive that they create scarce value whose combined scarcity and value are so dramatic that people are willing to fork money over for the products or services. There are lots of ways to create value, like providing free internet services or picking up garbage by the side of the road. No one does these sorts of things in private industry without some angle. The trick is for the company to do something that is clearly valuable and also that compels people to pay them for it based on their ability to deprive users of the benefits of their work unless users pay them. This sometimes "feels immoral" in small groups of people, but is cybernetically necessary in large anonymous groups where services have high marginal costs, which is one of many reasons that coordination is hard.
The political economy of democracies is even more complicated, with (1) difficult to measure psychological benefits being allocated to radicalized volunteer activists, (2) selfishly rational and highly strategic lobbying efforts coming from all sides, (3) substantial voter apathy, and (4) a gaping free rider problem right at the center because people who don't contribute to good government cannot be deprived of most of its benefits. I expect people who are experts in this domain to be doing a lot of complicated stuff that I don't currently understand, and so when you casually dismiss "pomp, show, and mind-killing" in the outer forms of politics, and offer no supporting citation, I'm tempted to think that you're expecting inferential distances to be small and making a rookie rationality error because all of politics isn't "doing what you naively think it should do".
The same paragraph continued...
A lot of people have bemoaned the brainpower that is moving into finance. And something even more repugnant, there is an entire economy thriving around the war on drugs, with everyone in on the cut.
I've heard people bemoaning that fact, so I agree with your front line evidence: yes people do bemoan this fact.
But this sort of begs the question about whether the bemoaners are actually right. I consider myself moderately smart, and I've thought about getting a formal degree in economics and going into finance both to learn more about adaptively self-regulating processes and because it seems like an area that needs to be "done right" if the rest of the world is to function well. Most of the reason I never followed through on these impulses is that I wouldn't plan on staying in finance after I'd learned what I came to learn and the business culture of finance doesn't seem particularly nurturing, especially for newbies. I can easily imagine people similar to me in some respects but different from me in other respects who go into finance for "basically good reasons" and thereby do enormous good.
Moreover, while it seems to be illiberal to punish people for putting whatever they want into their own bodies, the fact that so many people do engage in pharmaceutical wireheading seems kind of tragic in some respects. I could imagine reasonably good paternalistic arguments for making certain drugs illegal, and I would expect if the paternalistic argument carried the day, then it would be necessary to make the enforcement systems self sustaining if you actually wanted to decrease drug use in the long term. The only way it would be dramatically repugnant for the war on drugs to be self sustaining is if the war on drugs was itself totally repugnant so that its failure to sustain itself would be an obvious boon. Honest policy discussions should not be one-sided.
For what it's worth, I basically agree with your bottom line conclusion that there is a real potential for a swiftly arising Unfriendly AGI.
I think there are reasonable defenses of the so-called Scary Idea (maybe the "Scary Hypothesis" would be a better name?), and I think that it improves the odds of the world having a positive outcome if "algorithm literate" net activists occasionally descend on AGI researchers and demand that they give an accounting of themselves in light of the Scary Hypothesis. The trick is that the truth value of the Scary Hypothesis probably changes based on the socio-technological context and the details of each specific project. In light of this, a generic policy of publicly sourced demand for a positive theory of particular safety from each AGI project (maybe every year or two?) seems to me like a prudent and reasonably cheap political safety measure :-)
comment by magfrump · 2010-11-18T00:51:01.627Z · LW(p) · GW(p)
The underlying idea of Artificial Intelligences being likely to come from and work in corporate and financial situations, which is likely to lead to a very specific kind of mild friendliness/unfriendliness (i.e. working at goals which benefit some humans and are very unlikely to end the world, but which are not "good altruistic goals" that one would naively associate with a positive singularity) is something that I have thought about for some time and agree with wholeheartedly.
I can't speak to your views on, for example, quantitative easing. I don't understand the subject.
I do notice that there were some grammatical errors in the post ("more smarter") and the formatting is a little odd and not as polished as most posts I see.
I think that, had this post started out with an abstract in the discussion section, it would have received several upvotes and been worth promoting to the main page. As it is it feels in need of a rewrite or two.
comment by Jonathan_Graehl · 2010-11-19T01:20:25.260Z · LW(p) · GW(p)
everyone in on the cut
Confusing. Halfway between everyone "on the take" and "getting/taking a/their cut".
comment by xamdam · 2010-11-18T02:12:15.687Z · LW(p) · GW(p)
this link is messed up in the post
http://lesswrong.com/lw/1ws/the_importance_of_goodharts_law/
comment by Jack · 2010-11-17T23:37:00.942Z · LW(p) · GW(p)
So the fear here doesn't appear to be that corporate and government GAIs won't accurately model what they have incentive to do. Especially if we're talking about a human-computer team, we're not worried about a machine hacking the NYSE just to make the ticker for their stock go up. We're worried, I guess, that these teams will pursue what they have incentive to pursue really well. Having this worry seems to require a) that we distrust the free market generally and deny that the incentives in place are good for most people, and b) that the decisions behind the problems we trace to corporate decision-making were wise given the incentives those decision-makers had. Neither of these seems right to me.
When it comes to corporations, the problem seems to me to be about incorrectly programming the incentive structure, which isn't an institutional or structural issue; it's a similar problem to FAI, just with corporate/employee incentives instead of human values. In other words, the problem isn't people doing it, the problem is people doing it wrong.
Not sure if this applies to the general point or not...
comment by AdShea · 2010-11-17T22:35:22.544Z · LW(p) · GW(p)
Just because corporations don't have a nuclear arsenal at their disposal doesn't make them all that less dangerous if they get to the hyper-optimizing arena. Just look at the various megacorps in scifi. A sufficiently powerful megacorp can make just as much trouble by conventional means through standard demolition, police (and para-military) action, and through environmental oversights (look at the fiasco in the Gulf of Mexico).
↑ comment by Jack · 2010-11-17T23:17:01.559Z · LW(p) · GW(p)
Just look at the various megacorps in scifi.
Fictional evidence...
and through environmental oversights (look at the fiasco in the Gulf of Mexico).
Seems to me this is exactly the kind of thing a GAI run company would try to avoid.
Which is to say a corporation could easily build a nuclear weapon. They don't for the same reason they don't give all their profits to charity: incentives.
↑ comment by CronoDAS · 2010-11-19T07:22:53.610Z · LW(p) · GW(p)
There was once something in the real world that came close to being the equivalent of the sci-fi megacorp: the British East India Company, which ended up directly ruling a large portion of India.