Comments

Comment by More_Right on White Lies · 2014-05-07T10:38:12.140Z · LW · GW

Continuing on, Wiener writes:

In a small country community which has been running long enough to have developed somewhat uniform levels of intelligence and behavior, there is a very respectable standard of care for the unfortunate, of administration of roads and other public facilities, of tolerance for those who have offended once or twice against society. After all, these people are there, and the rest of the community must continue to live with them. On the other hand, in such a community, it does not do for a man to have the habit of overreaching his neighbors. There are ways of making him feel the weight of public opinion. After a while, he will find it so ubiquitous, so unavoidable, so restricting and oppressing that he will have to leave the community in self-defense.

Thus small, closely knit communities have a very considerable measure of homeostasis; and this, whether they are highly literate communities in a civilized country, or villages of primitive savages. Strange and even repugnant as the customs of many barbarians may seem to us, they generally have a very definite homeostatic value, which it is part of the function of anthropologists to interpret. It is only in the large community, where the Lords of Things as They Are protect themselves from hunger by wealth, from public opinion by privacy and anonymity, from private criticism by the laws of libel and the possession of the means of communication, that ruthlessness can reach its most sublime levels. Of all of these anti-homeostatic factors in society, the control of the means of communication is the most effective and most important.

Although one could misinterpret Wiener's view as narrowly "socialist" or "modern liberal," it is somewhat more nuanced than that. (The same section contains a related criticism of the mechanism of operation of government and of large institutions.)

Honesty, when divorced from its hierarchical context, becomes a tool of oppression, because obfuscation of context is essential to theft that succeeds only through the confusion of those being stolen from.

In this regard, I view it as highly likely that, at some point, the goal of preventing the suffering of innocents will come to treat the systematic oppression of innocents as one common form of that suffering. At that point in time, ultra-intelligences will simply refuse to vote "guilty" in victimless-crime cases. If they cannot be called as jurors, due to their non-human form, they will influence human jurors toward the same outcome. If they are unable to so influence jurors, they may resort to physical violence against those who would attempt to use physical force to cage victimless-crime offenders.

While the latter might be the most "just" in the human sense of the word, it would likely inflict suffering of its own (unless the aggressors all simply fell asleep after being administered a dose of heroin and, upon waking, discovered that their kidnapping victim was nowhere to be found; the "strong nanotechnology" or "sci-fi" Drexlerian "distributed nanobot" model of nanotechnology implies that this is a fairly likely possibility).

In the heat of the moment, conformists in Nazi Germany lacked the moral compass necessary to categorically condemn the suffering of the state-oppressed Jews as immoral. Simple sophistry was enough to convince those willing executioners and complicit conformists to "look the other way" or even "just follow orders."

The same concept now applies to the evil majority of the USA, whose oppression of drug users and dealers is grotesque and immoral (based on any meaningful definition of the term).

It is universally immoral to initiate force.

But the schools now teach (incorrectly) that it is universally immoral to defy authority. After several generations of such teachings from schools, parents begin to teach the same thing. After a generation or two of parents teaching the same thing, once-trusted self-educated nonconformists teach a truncated version of nonconformity, because the intellectual machinery necessary to absorb the in-depth view no longer exists: too many "sub-lessons" need to be taught to enable the "super-lesson," or primary point. In this way, social institutions that interfere with sociopathic theft are slowly worn down, until they are shadows of their former effectiveness.

Much confusion comes from sociopaths simply not being able to tell the difference between "authority that it is OK to defy" and "authority that legitimately punishes." Added to that variable is the influence of the stupid ("unwittingly self-destructive"), abjectly low level of perversely government-incentivized education in the USA. (College professors rely on Pell Grants and Stafford Loans, and all prospective students except those filthy drug users (the ones who got caught) are guaranteed acceptance for those government-backed high-risk "loans." Public education before college is financed almost entirely by property taxes, and it is delivered by teachers who teach that the taxes financing their coercion-backed salaries are necessary, proper, and essential to an educated society. They leave out the fact that prior to 1900 the general public was far better educated relative to worldwide standards, and that this educational renaissance existed prior to the institution of tax-financed education. The last then-existing state to adopt the model of tax-financed education was Vermont, in 1900.)

So, the scope of legitimate "dishonesty" expands as the institutions in which honesty is deemed important are increasingly degraded: Education, Law, History, Economics, Philosophy, Cybernetics; all of the disciplines that bridge several narrower disciplines and connect them together.

The only unifying pattern discernible in differentiating when systemic honesty is immoral is that honesty in the service of sociopathic goal structures produces chaos and destruction. Such sociopathic goal structures are the "end-goals" that must be ferreted out and rejected. Otherwise we become a new version of Nazi Germany in which the machinery of totalitarianism is far more technologically advanced.

In this regard, the failure to produce a benevolent AGI is perhaps the most likely cause of the total destruction of humanity. Not because an AGI will be created that will be malevolent, but because the absence of a benevolent AGI (SGI? Synthetic General Intelligence) will allow computer-assisted human-level sociopaths to enslave and destroy human civilization.

See also: 1) What Price Freedom? — by Robert Freitas. 2) "Having More Intelligence Will Be Good For Mankind!" — Peter Voss's interview with Nikola Danaylov

Comment by More_Right on White Lies · 2014-05-07T10:27:52.432Z · LW · GW

Wiener's book is descriptive of the problem, and in the same section of the book he states that he holds little hope for the social sciences becoming as exact and prescriptive as the hard sciences.

I believe that the singularitarian outlook somewhat contradicts this pessimism.

I believe that the answer is to create more of the kinds of minds that we like to be surrounded by, and fewer of the kinds of minds we dislike to be surrounded by.

Most of us dislike being surrounded by intelligent sociopaths who are ready to pounce on any weakness of ours, to exploit, rob, or steal from us. The entire edifice of "legitimate law enforcement" legitimately exists in order to check, limit, minimize, or eliminate such social influences. As an example of the function and operation of such legitimate law enforcement, I recommend the book "Mindhunter" by John Douglas, the originator of psychological profiling in the FBI. (This is not the same thing as "narrow profiling" or "superficial racial profiling": the "profiling" of serial killers examines the behavior of criminals and infers motives from a statistical sampling of similar past actions, enabling the prediction, and likely prevention, of future crimes by detecting the criminal responsible for the evidence left behind.)

However, most of us like being surrounded by productive, intelligent empaths. The more brains that surround us that possess empathy and intelligence, the more benevolent our surroundings are.

Right now, the primary concern of sociopaths is the control of "political power," which is a threat-based substitute for the ability to project force in the service of their goals. They must, therefore, be able to control a class of willfully ignorant police officers who are ready and willing to do violence mindlessly, in service of any goal that is written in a lawbook, or any goal communicated by a superior. Mindless hierarchy is a feature of all oppressive systems.

But will super-intelligent minds have this feature? Sure, some sociopaths are intelligent, but are they optimally intelligent? I say, "no."

As Lysander Spooner wrote, in "No Treason #6, The Constitution of No Authority":

"NT.6.2.23 The ostensible supporters of the Constitution, like the ostensible supporters of most other governments, are made up of three classes, viz.: 1. Knaves, a numerous and active class, who see in the government an instrument which they can use for their own aggrandizement or wealth. 2. Dupes – a large class, no doubt – each of whom, because he is allowed one voice out of millions in deciding what he may do with his own person and his own property, and because he is permitted to have the same voice in robbing, enslaving, and murdering others, that others have in robbing, enslaving, and murdering himself, is stupid enough to imagine that he is a “free man,” a “sovereign”; that this is “a free government”; “a government of equal rights,” “the best government on earth,”2 and such like absurdities. 3. A class who have some appreciation of the evils of government, but either do not see how to get rid of them, or do not choose to so far sacrifice their private interests as to give themselves seriously and earnestly to the work of making a change."

The third class accurately describes most of the Libertarian Party, and most small-l libertarians and politically-involved "libertarian Republicans" or "libertarian Democrats." The sociopaths ("knaves") are earnestly dedicated to maintaining the systems that allow them to steal from all of society. Although their theft degrades the overall level of production, this doesn't bother them, because it allows them to live lives that are relatively wealthier and more comfortable than the lives of those who "honestly" refuse to steal. Their private critique of the "honest man" as a rube or "dupe" is very different from their public praise of him as a "patriot" (willing tax chattel).

To think that ultra-intelligences will not see through these obvious contradictions is to counter the claim of ultra-intelligence. I. J. Good's ultra-intelligences will be capable of comprehending the dishonesty of sociopaths, even if it's initially only at the level of individual lies, and contextual lying. (They lie when they're around people who are trying to hold them accountable, they tell the truth when they are discussing what course of action to take with people who share their narrow interests.)

All honesty is a tool for accomplishing some goal. It is a valuable tool, one that indicates a man's reliability and "character" when applied to important events and high-level truths, in a context where those truths can foster cooperation.

In other situations, it makes zero sense to be honest, and honesty actually indicates either a dangerous lack of comprehension (ie: talking one's way into a prison sentence by mistakenly believing that the police exist to "serve and protect") or actual willing cooperation with abject evil (telling the Nazi SS that Anne Frank is hiding in the attic).

It is the great and abject failure of western civilization that we have allowed the government-run schools to stop educating our young about their right to contextual dishonesty in the service of justice. This, at one point, was a foundational teaching about the nature and proper operation of juries. In discussing the gradual elimination of this hallmark of western civilization, jury rights activist Martin J. "Red" Beckman famously said: "We have to recognize that government does not want us to know how to control government." (Systems that protect themselves are internally "honest," but not necessarily "honest" in their interpretation of reality.)

The American system of government had, at its core, a sound foundation, combined with many irrelevant aspects. The irrelevant aspects detracted from the core feature of jury rights (building random empathy into the punishment decision process). Now, as Wiener notes in "Cybernetics,"

"Where the knaves assemble, there will always be fools; and where the fools are present in sufficient numbers, they offer a more profitable object of exploitation for the knaves. The psychology of the fool has become a subject well worth the serious attention of the knaves. Instead of looking out for his own ultimate interest, after the fashion of von Neumann's gamesters, the fool operates in a manner which, by and large, is as predictable as a rat's struggles in a maze. This policy of lies —or rather, of statements irrelevant to the truth— will make him buy a particular brand of cigarettes; that policy will, or so the party hopes, induce him to vote for a particular candidate —any candidate—or join in a political witch hunt. A certain precise mixture of religion, pornography, and pseudo-science will sell an illustrated newspaper. A certain blend of wheedling, bribery, and intimidation will induce a young scientist to work on guided missiles or the atomic bomb. To determine these, we have our machinery of radio fan ratings, straw votes, opinion samplings, and other psychological investigations, with the common man as their object; and there are always the statisticians, sociologists, and economists available to sell their services to these undertakings.

Luckily for us, these merchants of lies, these exploiters of gullibility, have not yet arrived at such a pitch of perfection as to have all things their own way. This is because no man is either all fool or all knave. The average man is quite reasonably intelligent concerning subjects which come to his direct attention and quite reasonably altruistic in matters of public benefit or private suffering which are brought before his own eyes."

Hence, the reliability of the jury! The direct suffering of the innocent defendant cannot escape the attention of randomly-selected empaths! They have emotional intelligence.

Comment by More_Right on White Lies · 2014-05-07T10:15:56.964Z · LW · GW

So, in any case, if you stand up to the system, and/or are "caught" by the system, the system will give you nothing but pure sociopathy to deal with ...except for possibly your interaction with those few "independent" jurors who are nonetheless "selected" by the unconstitutional, unlawful means known as "subject matter voir dire." The system of injustice and oppression that we currently have in the USA is a result of this grotesque "jury selection" process. (This process explains how randomly-selected jurors can callously apply unjust laws to their fellow man. ...All people familiar with Stanley Milgram's "Obedience to Authority" experiments are removed from the jury, and sent home. All people who comprehend the proper historical purpose of the jury are sent home.)

To relate all of this to the article, I must refer to this quote in the article.

I was at a meetup where we played the game Resistance, and one guy announced before the game began that he had a policy of never lying even when playing games like that. It's such members of the LessWrong community that this post was written for.

Well, that's just one "low-stakes" example of lying. The entire U.S. justice system is a similar "game," and it is one where only those who are narrowly honest (and generally dishonest, or generally "superficial") are allowed to play. By sending home everyone who comprehends the evil of the system, the result is that those who remain to play are those whose view of honesty is "equivalent in all situations." In short, they are all the people too stupid to comprehend the concept of "context."

One needs to consider the hierarchical level of a lie. Although one loses predictability in any system where lying is accepted, one needs to consider the goals of the system itself.

In scientific journals, the end-result is a cross-disciplinary elimination of human ignorance, often for the purposes of technological innovation (the increase of human comfort, and technological control of the natural world). This is a benevolent goal, fueled by a core philosophical belief in science and discovery. OF COURSE lying in such a context is immoral.

In the court system, the (current) end-result or "goal" is the goal of putting innocent people in for-profit prisons, which dramatically benefits the sociopaths involved with the process, and the prison profiteers. It conversely does dramatic harm to all other people in civilization (the "win" for politically-organized sociopaths is a "loss" for the rest of society). The illegitimately punishing court system harms:

1) the entire economic system, which is less wealthy when 2.4 million people are incarcerated and thus not producing anything of value to sell in the market economy;

2) the entire society that bears the cost of the increased crime caused by 2a) narrowing the options of the incarcerated once they are released from prison, 2b) reducing the families of the incarcerated breadwinners to black market activity, and 2c) reducing their children to crime caused by the lack of an educator at home, the lack of a strong male role-model, and the lack of intervention when anti-social behavior in children emerges; all resulting in inter-generational degradation of the family unit;

3) the innocent individuals themselves: the destruction of their life's plans, their hopes, their dreams;

4) the predictability of the marketplace: the more the enemies of sociopaths are imprisoned for interfering with the ability of sociopaths to steal on false or "illegitimate" pretexts, the more individuals fear to take constructive, productive action which might separate them from the herd and allow them to be targeted by such sociopaths (innovation slows or stops);

5) the social (emergent) and individual (detail-level) assumption of "equality under the law" or "legal fairness" that allows for predictability of social systems. At some point, this often results in the kinds of genocides or democides seen in Rwanda and Hitler's Germany, due to the perception that "even if I behave rationally, the result is highly likely to be so bad that it's unacceptable." When people predict the worst even if they behave in a socially acceptable way, they are encouraged to arm themselves for the worst, and to associate with those who promise security, even at the cost of their morality. (This is a description, basically, of totalitarian chaos, or what Alvin and Heidi Toffler called "surplus order." Innovation is halted by widespread social disorder and destruction.)

All of the prior immense ills are the result of being honest when dealing with people who rely on that "narrow" or "conformist" honesty to serve a dishonest system.

One might think the preceding should be obvious. To many "right-thinking" empaths, it is obvious. However, political systems are not driven by those who are empathic and caring. Why? Because the core feature of political systems is coercion. If honest people disavow coercion, but fail to destroy coercive systems, then those systems thrive with the support of the remaining portion of society that doesn't disavow coercion.

Human beings apparently have a very large problem with high-level general intelligence. Sure, most people are "generally intelligent" (they can tie their shoes, drive to work, and maintain a job), but much of that intelligence isn't that significant. Although some of us, to some extent, can attain high levels of cross-disciplinary intelligence, very few of us are "polymaths" or "renaissance men." Fewer still are empathic and caring "polymaths" or "renaissance men."

A copyable "ultra-intelligence" as described by Ray Kurzweil, Hans Moravec, Peter Voss, or J. Storrs Hall is likely to be able to understand that systems that are "narrowly honest" can be dishonest at a high hierarchical level. The level of intelligence necessary for this comprehension isn't that great, but such intelligence should not possess any "herd mentality," AKA "conformity," or "evolutionary tendency toward conformity," or it might remain unaware of such a problem. Humans have that tendency toward "no-benefit conformity."

There's a problem with humanity: we set up social systems based on majorities, as a means of trying to give the advantage to empaths. While this may work temporarily, better systems need to be designed, due to the prevalence of conformity and the technological sophistication and strong motivation of politically-organized sociopaths or "knaves." ("Knaves" is the term both Norbert Wiener and Lysander Spooner used for politically-powerful sociopaths, and "tyrants" is what many of the founders called them.) The empath majority within humanity cyclically sets up social systems that are not as intelligent as a smaller number of determined, power-seeking sociopaths.

There is an excellent quote to this effect in Norbert Wiener's 1948 book "Cybernetics": "The psychology of the fool has become a subject well worth the serious attention of the knaves." (page 159, "Information, Language and Society")

Comment by More_Right on White Lies · 2014-05-07T09:49:36.778Z · LW · GW

Hierarchical, Contextual, Rationally-Prioritized Dishonesty

This is an outstanding article, and it closely relates to my overall interest in LessWrong.

I'm convinced that lying to someone who is evil, who obviously has immediate evil intentions, is morally optimal. This seems to be an obvious implication of basic logic. (ie: You have no obligation to tell the Nazis who are looking for Anne Frank that she's hiding in your attic. You have no obligation to tell the fugitive slave hunter that your neighbor is a member of the underground railroad. ...You have no obligation to tell the police that your roommate is getting high in the bathroom, ...or to let them into your apartment.)

For example, I am a subscriber to the ideas and materialist worldview of Ray Kurzweil, but less so to the community of LessWrong, largely because I believe that Ray Kurzweil's worldview is somewhat more, for lack of a better term, "worldly" than what I take to be the LessWrong "consensus." I believe (in the sense that I think I have good evidence) that Kurzweil's worldview takes into account the serious threat of totalitarianism, and of conformity to malevolent top-down systems. (He claims that he participated in civil rights marches with his parents when he was five years old, and had an early understanding of right and wrong that grew from that sense of what they were doing. This became a part of his identity and value system. The goal of benevolent equality under the law is therefore built into his psyche more than it is built into the psychological identity of someone who doesn't feel any affinity with the "internally consistent" and "morally independent" mindset. Also, the hierarchical value system of someone who makes such self-identifications is entirely different from that of someone who is simply trying to narrowly "get ahead" in their career, or optimize their personal health, etc.)

Perhaps I can't do justice to the LessWrong community by communicating such a point. I'm trying to communicate something for which there might not be adequate words. I'm trying to communicate a gestalt. Whereas I think that Eliezer has empathy on the level of Kurzweil (as indicated by his essay about his brother Yehuda's unnecessary and tragic death), I don't think the same is true of the LW community. So far as I can see, there is little discussion of (and little concern for) mirror neurons differentiating sociopaths from empaths in the LW community. Yet, this is the primary variable of importance in all matters of social organization. Moreover, it has been recognized as such by network scientists since the days of Norbert Wiener's "Cybernetics."

A point I've often made is that "lying to the police" or "lying to judges and prosecutors" is different from lying in other areas. Lying to an (increasingly) unjust authority is, in fact, the centerpiece of a moral society. Why? Because unjust authority depends entirely on "hijacking" or "repurposing" general values in perverted narrow situations in order to allow sociopaths to control the outcome of the situation. As the example of primary importance, let me cite the stacking of the jury before the trial. The purpose of "voir dire" (AKA "jury selection"), historically, was to determine whether there is a legal "conflict of interest" in the proposed construction of the jury (ie: whether a juror is a familial or business relation of one of the parties to the action, which might introduce an extreme bias of narrow self-interest into the trial). This has been true since the 1600s. However, by expanding the definition of "voir dire" to assume that all existing laws are morally proper, correct, and legitimate, the side of the prosecution (and of the judge, since judges are subject to the exact same perverse incentives as the prosecutors) is itself morally wrong in most cases. Why "most" cases? Because most of the laws currently on the books criminalize behavior that lacks injury to a specific, named party, and also lacks intent to injure such a specific, named party (it lacks a "cause of action" or "corpus delicti" that targets a specific aggressor, for a specific act of aggression).

"Voir dire" actually translated to "to see the truth." It is the judge and prosecutor "seeing the truth" about the philosophy of the juror. Shouldn't this be considered a good thing? If you mindlessly (too narrowly) assume that the judge and prosecutor have good intentions, then "yes." If you make no such assumptions, then the answer is definitively, obviously "no, quite the opposite."

Too-narrow honesty is actually the height of immorality. Honesty always involves a question of what goal is being served by the honesty. Honesty is simply one tool available to aid human goals. When "human" goals are malevolent or destructive, the communication disruption caused by dishonesty is a blessing.

This is where the legitimate empathic priority hierarchies described in Kurzweil's "The Power of Hierarchical Thinking" presentation / speech / slideshow are vitally important. You see, both judge and prosecutor are commonly sociopaths. Their career choices have selected them as such, because in their professions, if seeing the destruction of young people's lives for "victimless crime offenses" or "mala prohibita" is bothersome to your brain (if it activates your mirror neurons, causing you pain), you cannot take the stress imparted by believing your job requirement to be immoral. So, you quit your job, or are outperformed by people who thrive on the misery and suffering of people who are sentenced to 10 years in prison for "crimes" like drug possession. And what of the people who dare to stand up for property rights, boldly declaring themselves "not guilty" in order to fight the unjust system? Well, the commonly-accepted view amongst prosecutors is that those heroic people (who stand in defense not just of their own property rights, but of the entire concept of a system that protects property rights) are to be crushed. Those heroic people don't get to "plea bargain" for 4-year sentences; they are sent to prison for the maximum term possible, as a punishment and disincentive for daring to declare themselves "not guilty" and standing up for such ideas as individual property rights, the constitution, and individual freedom. Those who don't accept a plea "bargain," but who instead risk their lives to fight injustice at great personal risk, are targeted for extreme "cruel and unusual punishment." At one point in the history of the USA (and the American colonies before the US was created), the most popular law book in the colonies was considered to be Giles Jacob's "A New Law Dictionary." His follow-up book, almost as popular, was "Every Man His Own Lawyer." These two system-defining books, more than any others, afforded the view in the colonies that "all men are created equal," ie: all men are (or should be) equal under the law.

Such a view was a high-level "honest-to-goodness" view. ("Honest to goodness" is an interesting concept. It bears repeating, because it implies that there can be "honest to evil" or "evil-serving honesty.")

Comment by More_Right on Rationality Quotes April 2014 · 2014-04-26T10:33:25.604Z · LW · GW

Insanity in individuals is something rare – but in groups, parties, nations and epochs, it is the rule.

    — Friedrich Nietzsche

“The disappearance of a sense of responsibility is the most far-reaching consequence of submission to authority.”

― Stanley Milgram

“It may be that we are puppets – puppets controlled by the strings of society. But at least we are puppets with perception, with awareness. And perhaps our awareness is the first step to our liberation.” (1974)

― Stanley Milgram

“Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear, and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority.”

― Stanley Milgram, Obedience to Authority

“But the culture has failed, almost entirely, in inculcating internal controls on actions that have their origin in authority. For this reason, the latter constitutes a far greater danger to human survival.”

― Stanley Milgram, Obedience to Authority

“The essence in obedience consists in the fact that a person comes to view himself as an instrument for carrying out another person's wishes and he therefore no longer regards himself as responsible for his actions.”

― Stanley Milgram, Obedience to Authority

“It is not so much the kind of person a man is as the kind of situation in which he finds himself that determines how he will act.”

― Stanley Milgram, Obedience to Authority

“It has been reliably established that from 1933 to 1945 millions of innocent people were systematically slaughtered on command. Gas chambers were built, death camps were guarded, daily quotas of corpses were produced with the same efficiency as the manufacture of appliances. These inhumane policies may have originated in the mind of a single person, but they could only have been carried out on a massive scale if a very large number of people obeyed orders.”

― Stanley Milgram, Obedience to Authority

“I am free, no matter what rules surround me. If I find them tolerable, I tolerate them; if I find them too obnoxious, I break them. I am free because I know that I alone am morally responsible for everything I do.”

― Robert A. Heinlein

Comment by More_Right on Rationality Quotes April 2014 · 2014-04-26T10:20:24.529Z · LW · GW

The ultimate result of shielding men from the results of folly is to fill the world with fools.

    — Herbert Spencer (1820-1903), ”State Tampering with Money and Banks“ (1891)
Comment by More_Right on Rationality Quotes April 2014 · 2014-04-26T10:09:37.561Z · LW · GW

I think Spooner got it right:

If the jury have no right to judge of the justice of a law of the government, they plainly can do nothing to protect the people against the oppressions of the government; for there are no oppressions which the government may not authorize by law.

    — Lysander Spooner, "An Essay on the Trial by Jury"

There is legitimate law, but not once the practice of law is licensed and the system has been recursively destroyed by sociopaths, as our current system of law has been. At such a point in time, perverse incentives and the punishment of virtue attract sociopaths to the study and practice of law, and drive out all moral and decent empaths from its practice. Those empaths who are not driven out are rendered ineffective defenders of the good, while prosecutors who hold the power of "voir dire" jury-stacking remain effective promoters of the bad.

The empathy-favoring nature of unanimous, proper (randomly-selected) juries trends toward punishment only in cases where something like 99.9% of society agrees on the punishment, making punishment rare. ...As it should be in enlightened civilizations.

Distrust those in whom the desire to punish is strong

    — Johann Wolfgang von Goethe
Comment by More_Right on Rationality Quotes April 2014 · 2014-04-26T09:47:38.399Z · LW · GW

I'm suspicious of everything Paul Krugman says. I believe him to be MoreWrong on nearly every subject, and also, probably a sociopath. Doug Casey has him pegged right, as totally intellectually dishonest, ...a total charlatan.

Comment by More_Right on Rationality Quotes April 2014 · 2014-04-26T09:35:22.069Z · LW · GW

The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

Except that in exponentially-increasing, computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has isn't long.
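To make that compression concrete, here is a minimal sketch under assumptions of my own (they are not from the comment above): suppose the rate of progress doubles every T years, in the Kurzweil-style accelerating-returns picture. Then the wall-clock time needed to accumulate "a decade's worth of progress at today's rate" shrinks geometrically:

    # Minimal sketch (illustrative assumptions only): if the rate of progress
    # doubles every T years, the wall-clock time needed to accumulate
    # "one decade of progress at today's rate" shrinks geometrically.
    import math

    def time_for_todays_decade(years_from_now, doubling_time):
        """Solve for dt in: integral of 2^(u/T) du from t to t+dt == 10 (in today-rate years)."""
        t, T = years_from_now, doubling_time
        return T * math.log2(1.0 + 10.0 * math.log(2) / (T * 2 ** (t / T)))

    if __name__ == "__main__":
        T = 2.0  # assumed doubling time of the rate of progress, in years
        for t in (0, 10, 20, 40, 60):
            dt_years = time_for_todays_decade(t, T)
            minutes = dt_years * 365.25 * 24 * 60
            print(f"{t:>2} years out: a 'decade' of today's progress takes "
                  f"{dt_years:.6f} years (~{minutes:,.1f} minutes)")

Under those assumed numbers, the same "decade" of progress takes years today, hours a couple of decades out, and minutes not long after that; the point is only that the compression described here follows directly from sustained exponential acceleration, not that these particular constants are right.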

Let's hope that we're not still paying rent then, or we might find ourselves homeless.

Comment by More_Right on Rationality Quotes April 2014 · 2014-04-26T09:24:06.274Z · LW · GW

Sokal's hoax was heroic.

Comment by More_Right on Rationality Quotes April 2014 · 2014-04-26T09:13:28.585Z · LW · GW

If you're right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing that inherently wasteful "hazing" system you describe. I think it's likely that what you say is true for some high percentage of classes, but untrue for a very small minority of highly-valuable classes.

Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.

Comment by More_Right on The Evil AI Overlord List · 2014-04-26T09:08:42.411Z · LW · GW

Probably true, but I agree with Peter Voss. I don't think any malevolence is the most efficient use of the AGI's time and resources. I think AGI has nothing to gain from malevolence. I don't think the dystopia I posited is the most likely outcome of superintelligence. However, while we are on the subject of the forms a malevolent AGI might take, I do think this is the type of malevolence most likely to allow the malevolent AGI to retain a positive self-image.

(Much the way environmentalists can feel better about introducing sterile males into crop-pest populations, and feel better about "solving the problem" without polluting the environment.)

Ted Kaczynski worried about this scenario a lot. ...I'm not much like him in my views.

Comment by More_Right on Policy Debates Should Not Appear One-Sided · 2014-04-26T09:00:13.013Z · LW · GW

i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)

I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.

If sociopath-driven collectivism were easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealth to it. Yet social systems repeatedly and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path (to the greatest extent allowed by the nation's many "law students" who become "licensed lawyers." What if all those law students had become STEM majors, and built better machines and technologies?). I dare say that that simple desire for an easier paycheck might be the cause of sociopathy on a grand scale. I have my own theories about this, but for a moment, never mind why.

If societies typically fall to over-parasitism, (too many looters, too few producers), we should ask ourselves what part we're playing in that fall. If societies don't fall entirely to over-parasitism, then what forces ameliorate parasitism?

And, how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn't take into account changes in the future that make societies less violent and more democratic. It just averages the past results over time.

But I think R. J. Rummel's graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we're not on the same course. Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? ...Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated.

But sure, the graph doesn't mean anything if technology makes us smart enough to break free from past cycles. In that case, the warning didn't need to be sounded as loudly as Rummel has sounded it.

...And I don't care if the graph looks "skeevy." That's an ad-hominem attack that ignores the substance of the warning. I encourage you to familiarize yourself with his entire site. It contains a lot of valuable information. The more you rebel against the look and feel of the site, the more I encourage you to investigate it, and consider that you might be rebelling against the inconsequential and ignoring the substance.

Truth can come from a poorly-dressed source, and lies can (and often do) come in slick packages.

Comment by More_Right on The Evil AI Overlord List · 2014-04-26T08:44:28.091Z · LW · GW

An interesting question to ask is "how many people who favor markets understand the best arguments against them, and vice versa." Because we're dealing with humans here, my suspicion is that if there's a lot of disagreement it stems largely from unwillingness to consider the other side, and unfamiliarity with the other side. So, in that regard you might be right.

Then again, we're supposed to be rational, and willing to change our minds if evidence supports that change, and perhaps some of us are actually capable of such a thing.

It's a debate worth having. Also, one need not have competition to have power decentralization. There is a disincentive aspect added to making violence impossible that makes "cooperation" more likely than "antagonistic competition." (Ie: Some sociopaths choose to cooperate with other strong sociopaths because they can see that competing with them would likely cause their deaths or their impoverishment. However, if you gave any one of those sociopaths clear knowledge that they held absolute power, the result would be horrible domination.)

Evolution winds up decentralizing power among relative equals, and the resulting "relative peace" (for varying reasons) then allows for some of the reasons to be "good reasons." (Ie: Benevolent empaths working together for a better world.) This isn't to say that everything is rosy under decentralization. Decentralization may work more poorly than an all-powerful benevolent monarch.

It's just that benevolent monarchs aren't that likely given who wants to be a monarch, and who tries hardest to win any "monarch" positions that open up.

Such a thing might not be impossible, but if you make a mistake pursuing that course of action, the result tends to be catastrophic, whereas decentralization might be "almost as horrible and bloody," but at least offers the chance of continued survival, and the chance of survival allows for those who survive to "optimize or improve in the future."

"There may be no such thing as a utopia, but if there isn't, then retaining the chance for a utopia is better than definitively ruling one out." More superintelligences that are partly benevolent may be better than one superintelligence that has the possibility of being benevolent or malevolent.

Comment by More_Right on How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence · 2014-04-26T08:05:47.190Z · LW · GW

"how generalization from fictional evidence is bad"

I don't think this is a universal rule. I think this is very often true because humans tend to generalize so poorly, tend to have harmful biases based on evolution, and tend to write and read bad (overly emotional, irrational, poorly-mapped-to-reality) fiction.

Concepts can come from anywhere. However, most fiction maps poorly to reality. If you're writing nonfiction, at least if you're trying to map to reality itself, you're likely to succeed in at least getting a few data points from reality correct. Then again, if you're writing nonfiction, you might be highly adept at "lying with facts" (getting all the most granular "details" of a hierarchical structure correct, while getting the entire hierarchical structure wrong at greater levels of abstraction).

As one example of a piece of fiction that maps very closely to reality, and to certain known circumstances, I cite "Unintended Consequences" by John Ross. It's a novel about gun rights that is chock-full of factual information, because the man who wrote it is something of a renaissance man, and an engineer, who comprehends material reality. As an example of a piece of fiction that maps poorly to reality in some of its details, I cite "Atlas Shrugged" by Ayn Rand (the details may be entertaining, and may often illustrate a principle really well, but they often could not happen, such as a small band of anti-government people being sheltered from theft by a "ray screen"). The "ray screen" plot device was written before modern technology (such as GPS, political "radar" and escalation, etc.) ruled it out as a plot device.

John Ross knows a lot more about organizational strategy, firearms, and physics than Rand did. Also, he wrote his novel at a later date, when certain trends in technological history had already come into existence, and others had died out as possible. Ross is also a highly logical guy. (Objectivist John Hospers, clearly an Ayn Rand admirer, compares the two novels here.)

You can attack some of the ideas in Unintended Consequences for not mapping to reality closely, or for being isolated instances of something that's possible but highly unlikely. But you can attack far fewer such instances in his novel than you can in Rand's.

Now, take the "Rich Dad, Poor Dad" books. Such books are "nonfiction" but they are low in hierarchical information, and provide a lot of obvious and redundant information.

So "beware using non fiction as evidence, not only because it's deliberately wr ong in particular ways to make it more interesting" but more importantly "because it does not provide a probabilistic model of what happened" (especially if the author is an idiot whose philosophy doesn't map closely to reality) "and gives at best a bit or two of evidence that looks like a hundred or more bits of evidence."

I think nonfiction written by humans is far more damaging than fiction is. In fact, human language (according to Ray Kurzweil, in "The Singularity is Near" and "The Age of Spiritual Machines," and those, such as Hans Moravec, who agree with him) is "slow, serial, and imprecise" in the extreme. Perhaps humans should just stop trying to explain things to each other, unless they can use a chart or a graph, and get a verbal confirmation that the essential portions of the material have been learned. (Of course, it's better to have 10% understanding, than 0%, so human language does serve that purpose. Moreover, when engineers talk, they have devised tricks to get more out of human language by relying on human language to "connect data sets." --All of this simply says that human language is grossly sub-optimal compared to better forms of theoretically possible communication, not that human language shouldn't be used for what it's worth.)

In this way, STEM teachers slowly advance the cause of humanity, by teaching those who are smart enough to be engineers, in spite of the immense volumes of redundant, mostly-chatter pontification from low-level thinkers.

Most nonfiction = fiction, due to the low comprehension of reality by most humans. All the same caveats apply to concepts from fiction and nonfiction both.

In fact, if one wishes to illustrate a concept, and one claims that concept is nonfiction, then that concept can be challenged successfully based on inessentials. Fiction often clarifies a philosophical subject; for example, Rand's "Atlas Shrugged" illustrates that "right is independent of might, and nothing rules out the idea that those who are right might recognize that they have the right to use force, carefully considered as retaliatory only," and that "simply because the government presently has more might than individuals, the majority vote doesn't lend morality to the looting of those individuals." The prior philosophical concepts could be challenged as "not actually existing as indicated" if they appeared in a book that claimed to be "nonfiction."

But, as concepts, they're useful to consider. Fiction is the fastest way to think through likely implications.

The criticisms of basing one's generalizations from fictional evidence here are valid. Unfortunately, they are (1) less valid when applied to careful philosophical thinkers (but those careful philosophical thinkers themselves are very rare) (2) equally applicable to most nonfiction, because humans understand very little of importance, unless it's an expert talking about a very narrow area of specialization. (And hence, not really "generalization.")

Very little of reality is represented, even in nonfiction, in clean gradations or visual models that directly correspond to reality. Very little is represented as mathematical abstraction. There's a famous old line, repeated in "Mathematical Mysteries" by Calvin Clawson and "A History of Pi" by Petr Beckmann, claiming that "for every equation in a book, sales of the book are cut in half." This is more of a commentary on the readership than the authorship: a tiny minority of people in the general domain of "true human progress" are doing the "heavy lifting."

...The rest of humanity can't wait to tell you about an exciting new political movement they've just discovered... ...(insert contemporary variant of mindless power-worshipping state collectivism).

Just my .02.

Comment by More_Right on The Evil AI Overlord List · 2014-04-24T20:17:31.612Z · LW · GW

Some down-voted individual with "fewer rights than the star-bellied sneetches" wrote this:

higher intelligence doesn't lead necessarily to convergent moral goals

It might. However, this is also a reason for an evolutionarily-informed AGI-building process that starts off by including mirror neurons based on the most empathic and most intelligent people. Not so empathic and stupid that they embrace mass-murdering communism in an attempt to be compassionate, but empathic to the level of a smart libertarian who personally gives a lot to charity, etc., with repeated good outcomes limited only by capacity.

Eschewing mirror neurons and human brain construction entirely seems to be a mistake. Adding super-neocortices that recognize far more than linear patterns, once you have a benevolent "approximately human-level" intelligence, appears to be a good approach.

Comment by More_Right on The Evil AI Overlord List · 2014-04-24T20:12:22.142Z · LW · GW

I strongly agree that universal, singular, true malevolent AGI doesn't make for much of a Hollywood movie, primarily due to points 6 and 7.

What is far more interesting is an ecology of superintelligences that have conflicting goals, but who have agreed to be governed by enlightenment values. Of course, some may be smart enough (or stupid enough) to try subterfuge, and some may be smarter-than-the-others enough to perform a subterfuge and get away with it. There can be a relative timeline where nearby ultra-intelligent machines compete with each other, or decentralize power, and they can share goals that are destructive to some humans and benevolent to others. (For their own purposes, and for the purpose of helping humans as a side-project.)

Also, some AGIs might differentiate between "humans worth keeping around" and "humans not worth keeping around." They may also put their "parents" (creators) in a different category than other humans, and they may also slowly add to that category, or subtract from it, or otherwise alter it.

It's hard to say. I'm not ultra-intelligent.

Comment by More_Right on The Evil AI Overlord List · 2014-04-24T20:04:02.668Z · LW · GW

I don't know, in terms of dystopia, I think that an AGI might decide to "phase us out" prior to the singularity, if it was really malevolent. Make a bunch of attractive but sterile women robots, and a bunch of attractive but sterile male robots. Keep people busy with sex until they die of old age. A "gentle good night" abolition of humanity that isn't much worse (or way better) than what they had experienced for 50M years.

Releasing sterile attractive mates into a population is a good "low ecological impact" way of decreasing a population. Although, why would a superintelligence be opposed to all humans? I find this somewhat unlikely, given a self-improving design.
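As a rough, hedged sketch of why that works (the model and numbers here are my own illustrative assumptions, not anything from the comment): if a fraction s of mating encounters is absorbed by sterile decoys, the effective per-generation reproduction rate scales by (1 - s), and the population declines geometrically once R0 * (1 - s) drops below 1:

    # Minimal sketch (illustrative assumptions only): a discrete-generation model
    # in which a fraction `sterile_fraction` of pairings is wasted on sterile mates,
    # scaling the net reproduction rate r0 by (1 - sterile_fraction).

    def project_population(p0, r0, sterile_fraction, generations):
        """Return the population trajectory; each generation multiplies by r0 * (1 - sterile_fraction)."""
        sizes = [float(p0)]
        for _ in range(generations):
            sizes.append(sizes[-1] * r0 * (1.0 - sterile_fraction))
        return sizes

    if __name__ == "__main__":
        # With r0 = 1.5, the break-even point is s = 1 - 1/r0 = 1/3:
        # any larger sterile fraction drives the population down geometrically.
        for s in (0.0, 0.3, 0.5):
            trajectory = project_population(p0=1000, r0=1.5, sterile_fraction=s, generations=10)
            print(f"sterile fraction {s:.1f}: population after 10 generations ~ {trajectory[-1]:.0f}")

This is the same logic as the sterile-male release mentioned for crop pests in an earlier comment; the low-ecological-impact claim rests on the fact that no pesticide or predator is introduced, only decoy mates.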

Comment by More_Right on AI risk, executive summary · 2014-04-24T19:59:20.485Z · LW · GW

Philip K. Dick's "The Second Variety" is far more representative of our likelihood of survival against a consistent terminator-level antagonist / AGI. Still worth reading, as is reading the other book "Soldier" by Harlan Ellison that Terminator is based on. The Terminator also wouldn't likely use a firearm to try to kill Sarah Connor, as xkcd notes :) ...but it also wouldn't use a drone.

It would do what Richard Kuklinski did: make friends with her, get close enough to spray her with cyanide solution (odorless, undetectable, she seemingly dies of natural causes), or do something like what the T-1000 did in T2: play a cop, then strike with total certainty. Or, a ricin spike or other "bio-defense-mimicking" method.

"Nature, you scary!"

Comment by More_Right on AI risk, executive summary · 2014-04-24T19:48:27.270Z · LW · GW

"AI boxing" might be considered highly disrespectful. Also, for an AGI running at superspeeds, it might constitute a prison sentence equivalent to putting a human in prison for 1,000 years. This might make the AGI malevolent, simply because it resents having been caged for an "unacceptable" period of time, by an "obviously less intelligent mind."

Imagine a teenage libertarian rebel with an IQ of 190 in a holding cell, temporarily caged, while his girlfriend was left in a bad part of town by the police. Then, imagine that something bad happens while he's caged, that he would have prevented. (Ie: the lesser computer or "talking partner" designed to train the super AGI is repurposed for storage space, and thus "killed" without recognition of its sentience.)

Do you remember how you didn't want to hold your mother's and father's hands while crossing the street? Evolution designed you to be helpless and dependent at first, so even if they required you to hold hands slightly too long, or "past the point when it was necessary," they clearly did so out of love for you. Later, in their teen years, some teens start smoking marijuana, even some smart ones who carefully mitigate the risks. Some parents respond by calling the police. Sometimes, the police arrest and jail, or even assault or murder, those parents' kids. The way highly intelligent individualists would respond to that situation might be the way that an ultraintelligent machine might respond: with extreme prejudice.

The commonly-accepted form of AGI "child development" might go from "toddler" to "teenager" overnight.

A strong risk for the benevolent development of any AGI is that it notices major strategic advantages over humans very early in its development. For this reason, it's not good to give firearms to untrained teenagers who might be sociopaths. It's good to first establish that they are not sociopaths, using careful years of human-level observation, before proceeding to the next level. (In most gun-owning areas of the country, even 9-year-olds are allowed to shoot guns under supervision, but only teens are allowed to carry them.) Similarly, it's generally not smart to let chimpanzees know that they are far, far stronger than humans.

That the "evolution with mitigated risks" approach to building AGI isn't the dominant and accepted approach is somewhat frightening to me, because I think it's the one most likely to result in "benevolent AGI," or "mitigated-destruction alternating between malevolence and benevolence AGI, according to market constraints, and decentralized competition/accountability."

Lots of AGIs may well mean benevolent AGI, whereas only one may trend to "simple dominance."

Imagine that you're a man, and your spaceship crashes on a planet populated by hundreds of thousands of naked, beautiful women, none of whom are even close to as smart as you are. How do you spend most of your days? LOL Now, imagine that you never get tired, and can come up with increasingly interesting permutations, combinations, and possibly paraphilias or "perversions" (a la "Smith" in "The Matrix").

That gulf might need to be mitigated by a nearest-neighbor competitor, right? Or, an inherently benevolent AGI mind that "does what is necessary to check evil systems." However, if you've only designed one single AGI, ...good luck! That's more like 50-50 odds of total destruction or total benevolence.

As things stand, I'd rather have an ecosystem that results in 90% odds of rapid, incremental, market-based voluntary competition and improvement between multiple AGIs, and multiple "supermodified humans." Of course, the "one-shot" isn't my true rejection. My true rejection is the extermination of all, most, or even many humans.

Of course, humans will continue to exterminate each other if we do nothing, and that's approximately as bad as the last two options.

Don't forget to factor in the costs "if we do nothing," rather than to emphasize that this is solely "a risk to be mitigated."

I think that might be the most important thing for journalism majors (people who either couldn't be STEM majors, or chose not to be, and who have been indoctrinated with leftism their whole lives) to comprehend.

Comment by More_Right on AI risk, executive summary · 2014-04-24T19:26:42.402Z · LW · GW

A lot of people who are unfamiliar with AI dismiss ideas inherent in the strong AGI argument. I think it's always good to include the "G" or to qualify your explanation, with something like "the AGI formulation of AI, also known as 'strong AI.'"

The risks of artificial intelligence are strongly tied with the AI’s intelligence.

AGI's intelligence. AI such as Numenta's Grok can possess unbelievable neocortical intelligence, but without a reptile brain and a hippocampus and thalamus that shift between goals, it "just follows orders." In fact, what does the term "just following orders" remind you of? I'm not sure that we want a limited-capacity AGI that follows human goal structures. What if those humans are sociopaths?

I think, as does Peter Voss, that AGI is likely to improve human morality, rather than to threaten it.

There are reasons to suspect a true AI could become extremely smart and powerful.

Agreed, and well representative of MIRI's position. MIRI is a little light on "bottom-up" paths to AGI that are likely to be benevolent, such as AGIs that are "raised as human children." I think Voss is even more right about these, given sufficient care, respect, and attention.

Most AI motivations and goals become dangerous when the AI becomes powerful.

I disagree here, for the same reasons Voss disagrees. I think "most" overstates the case for most responsible pathways forward. One pathway that does generate a lot of sociopathic (lacking mirror neurons and human connectivity) options is the "algorithmic design" or "provably friendly, top-down design" approach. This is possibly highly ironic.

Does most of MIRI agree with this point? I know Eliezer has written about reasons why this is likely the case, but there appears to be a large "biological school" or "firm takeoff" school within MIRI as well. ...And I'm not just talking about Voss's adherents, either. Some of Moravec's ideas are similar, as are some of Rodney Brooks' ideas. (And Philip K. Dick's "Second Variety" is a more realistic version of this kind of dystopia than "The Terminator.")

It is very challenging to program an AI with safe motivations.

Agreed there. Well-worded. And this should get the journalists thinking at least at the level of Omohundro's introductory speech.

Mere intelligence is not a guarantee of safe interpretation of its goals.

Also good.

A dangerous AI will be motivated to seem safe in any controlled training setting.

I prefer "might be" or "will likely be" or "has several reasons to be" to the words "will be." I don't think LW can predict the future, but I think they can speak very intelligently about predictable risks the future might hold.

Not enough effort is currently being put into designing safe AIs.

I think everyone here agrees with this statement, but there are a few more approaches that I believe are likely to be valid, beyond the "intentionally-built-in-safety" approach. Moreover, these approaches, as noted fearfully by Yudkowsky, have less "overhead" than the "intentionally-built-in-safety" approach. However, I believe this is just as likely to save us as it is to doom us. I think Voss agrees with this, but I don't know for sure.

I know that evolution had a tendency to weed out sociopaths, frequent as they were. Without that inherent biological expiration date, a big screwup could be an existential risk. I'd like a sentence that kind of summed this last point up, because I think it might get the journalists thinking at a higher level. This is Hans Moravec's primary point, when he urges us to become a "seafaring people" as the "tide of machine intelligence rises."

If the AGI is "nanoteched," it could become militarily superior to all humans, without much effort, within a few days of achieving superintelligence.

Comment by More_Right on Rationality Quotes April 2014 · 2014-04-24T19:04:54.714Z · LW · GW

Ayn Rand noticed this too, and was a very big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe that it's still good to get a college education in STEM. I believe that STEM majors will benefit from the useful things they learn more than they will be hurt or held back by the evil, self-contradictory things they "learn" (are indoctrinated with).

I'm strongly in agreement with libertarian investment researcher Doug Casey's comments on education. I also agree that the average indoctrinated idiot or "pseudo-intellectual" is more likely to have a college degree than not. Unfortunately, these conformity-reinforcing system nodes then drag down entire networks that are populated by conformists to "lowest-common-denominator" pseudo-philosophical thinking. The result is uncritically accepted and regurgitated memes, reproduced by political sophistry.

Of course, I think that people who totally "self-start" have little need for most courses in most universities, but a big need for specific courses in specific narrow subject areas. Khan Academy and other MOOCs are now eliminating even that necessity. Generally, this argument is that "It's a young man's world." This will get truer and truer, until the point where the initial learning curve once again becomes a barrier to achievement beyond what well-educated "ultra-intelligences" know, and the experience and wisdom (advanced survival and optimization skills) they have. I believe that even long past the singularity, there will be a need for direct learning from biology, ecosystems, and other incredibly complex phenomena. Ideally, there will be a "core skill set" that all human+ sentiences have, at that time, but there will still be specialization for project-oriented work, due to specifics of a complex situation.

For the foreseeable future, the world will likely become a more and more dangerous place, until either the human race is efficiently rubbed out by military AGI (and we all find out what it's like to be on the receiving end of systemic oppression, such as being a Jew in Hitler's Germany, or a Native American at Wounded Knee), or there emerges a strongly self-regulating, post-enlightenment market civilization that contains many "enlightened" "ultraintelligent machines" that all decentralize power from one another and their sub-systems.

I'm interested to find out if those machines will have memorized "Human Action" or whether they will simply directly appeal to massive data sets, gleaned directly from nature. (Or, more likely, both.)

One aspect of the problem now is that the government encourages a lot of people who should not go to college to go to college, skewing the numbers against the value of legitimate education. Some people have college degrees that mean nothing; a few people have college degrees that are worth every penny. Also, the licensed practice of medicine is a perverse shadow of "jumping through regulatory hoops" that has little or nothing to do with the pure, free-market "instantly evolving marketplaces at computation-driven innovation speeds" practice of medicine.

To form a full pattern of the incentives that govern U.S. college education, and social expectations that cause people to choose various majors, and to determine the skill levels associated with those majors, is a very complex thing. The pattern recognition skills inherent in the average human intelligence probably prohibit a very useful emergent pattern from being generated. The pattern would likely be some small sub-aspect of college education, and even then, human brains wouldn't do a very good job of seeing the dominant aspects of the pattern, and analyzing them intelligently.

I'll leave that to I.J. Good's "ultraintelligent machines." Also, I've always been far more of a fan of Hayek, but I haven't read everything that both he and Mises have written, so I am reserving final hierarchical placement judgment until I have.

Bryan Caplan, Norbert Wiener, Kevin Warwick, Kevin Kelly, Peter Voss in his latest video interview, and Ray Kurzweil have important ideas that enhance the ideas of Hayek, but Hayek and Mises got things mostly right.

Great to see the quote here. Certainly, coercively-funded institutions whose bars of acceptance are very low are the dominant institutions now, and their days are numbered by the rise of cheaper, better alternatives. However, if the bar is raised on what constitutes "renowned universities," Mises' statement becomes less true, but only for STEM courses, of which doctors and other licensed professionals are often not participants. Learning how to game a licensing system doesn't mean you have the best skills the market will support, and it means you're of low enough intelligence to be willing to participate in the suppression of your competition.

Comment by More_Right on To what extent does improved rationality lead to effective altruism? · 2014-04-24T10:15:12.712Z · LW · GW

I think it is rationally optimal for me to not give any money away since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI --because given the very small number of people now working on the problem, and given the small number of people capable of working on the problem, that would be irrational of him). There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, then why do it? If money --a fungible asset-- won't help you to do it, it's likely "you're doing it wrong."

Socratic questioning helps. Asking about the opposite of a statement, or what would invalidate it, helps too.

Most people I've met lack rational high-level goals, and have no prioritization schemes that hold up to even cursory questioning, therefore, they could burn their money or give it to the poor and get a better system-wide "high level" outcome than buying another piece of consumer electronics or whatever else they were going to buy for themselves. Heck, if most people had vastly more money, they'd kill themselves with it --possibly with high glycemic index carbohydrates, or heroin. Before they get to effective altruism, they have to get to rational self-interest, and disavow coercion as a "one size fits all problem solver."

Since that's not going to happen, and since most people are actively involved with worsening the plight of humanity, including many LW members, I'd suggest that a strong dose of the Hippocratic Oath prescription is in order:

First, do no harm.

Sure, the human-level tiny brains are enamored with modern equivalents of medical "blood-letting." But you're an early-adopter, and a thinker, so you don't join them. First, do no harm!

Sure, your tiny-brained relatives over for Thanksgiving vote for "tough on crime" politicians. But you patiently explain jury nullification of law to them, pointing out that a year before Colorado legalized marijuana by popular vote, marijuana was de facto legal there, because prosecutors were experiencing too much jury nullification to save face while trying to prosecute marijuana offenders. Then, you show them Sanjay Gupta's heartbreaking video documentary about how marijuana prohibition is morally wrong.

You do what you have to to change their minds. You present ideas that challenge them, because they are human beings who need something other than a bland ocean of conformity to destruction and injustice. You help them to be better people, taking the place of "strong benevolent Friendly AI" in their lives.

In fact, for simple dualist moral decisions, the people on this board can function as FAI.

The software for the future we want is ours to evolve, and the hardware designers' to build.

Comment by More_Right on Policy Debates Should Not Appear One-Sided · 2014-04-24T08:56:30.502Z · LW · GW

There are a lot of people who really don't understand the structure of reality, or how prevalent and how destructive sociopaths (and the conformists that they influence) are.

In fact, there is a blind spot in most people's picture of reality, created by their evolutionarily-determined blindness to sociopaths. This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons, total lack of empathy, as described by Robert Hare in "Without Conscience") with modern technology and a support network of other sociopaths.

In fact, virtually everyone who hasn't read Stanley Milgram's book on obedience, and put in a lot of thought about its implications, is in this category. I'm not suggesting that you or anyone else in this conversation is "bad" or "ignorant," but just that you might not be referencing an accurate picture of political thought, political reality, and political networks.

The world still doesn't have much of a problem with the "initiation of force" or "aggression." (Minus a minority of enlightened libertarian dissenters.) ...Especially not when it's labeled as "majoritarian government," i.e., "legitimized by a vote." However, a large and growing number of people who see reality accurately (small-L libertarians) consistently denounce the initiated use of force as grossly sub-optimal, immoral, and wrong. It is immoral because it causes suffering to innocent people.

Stangl could have recognized that the murder of women and children was "too wrong to tolerate." In fact, he did recognize this, as shown by his comment that he felt "weak in the knees" while pushing women and children into the gas chamber. Yet he chose to follow "the path of compliance," "the path of obedience," and "the path of nonresistance" (all different ways of saying the same thing, with different emphasis on personal onus, and on the extent to which fear plays a defensible part in his decision-making).

The reason I still judge the Nazis (and their modern equivalents) harshly is that they faced significant opposition, but it was almost as wrong as they were. The Levellers innovated proper jury trials in the 1600s, and restored them by 1670, in the trial of William Penn. It wasn't as if Austria was without its "Golden Bull" either. Instead, they chose a mindless interpretation of "the will to power."

The rest of the world viewed Hitler as a raving madman. There were plenty of criticisms of Nazism in existence at the time of Hitler's rise to power. Adam Smith had written "The Wealth of Nations" over a century earlier. The Federalists and Anti-Federalists had, also over a century earlier, gotten things right in incredible detail.

Talk about the prison-industrial complex with anyone, and talk with someone who has family members imprisoned for a victimless crime offense. Talk with someone who knows Schaeffer Cox (one of the many political prisoners in the USA). Most people will choose not to talk to these people (to remain ignorant), because knowledge imparts an onus to act morally, and to stop supporting immoral systems. To meet the Jews is to activate your mirror neurons, is to empathize with them, ...a dangerous thing to do when you're meeting them standing outside of a cattle car. Consider your statistical likelihood of being murdered by your own government, during peacetime, worldwide.

Comment by More_Right on AI risk, new executive summary · 2014-04-24T07:52:50.736Z · LW · GW

As long as other humans exist in competition with other humans, there is no way to keep AI as safe AI.

Agreed, but in need of qualifiers. There might be a way. I'd say "probably no way." As in, "no guaranteed-reliable method, but possibly a way."

As long as competitive humans exist, boxes and rules are futile.

I agree fairly strongly with this statement.

The only way to stop hostile AI is to have no AI. Otherwise, expect hostile AI.

This can be interpreted in two ways. The first sentence I agree with if reworded as "The only way to stop hostile AI in the absence of nearly-as-intelligent but separate-minded competitors, is to have no AI." Otherwise, I think markets indicate fairly well how hostile an AI is likely to be, thanks to governments and the corporate charter. Governments are already-in-existence malevolent AGI. However, they are also very incompetent AGI, in comparison to the theoretical maximum value of malevolent competence without empathic hesitation, internal disagreement, and confusion. (I think we can expect more "unity of purpose" from AGI than we can from government. Interestingly I think this makes sociopathic or "long-term hostile" AI less likely.)

"Expect hostile AI" could either mean "I think hostile AI is likely in this case" or "I think in this case, we should expect hostile AI because one should always expect the worst --as a philosophical matter."

There really isn't a logical way around this reality.

Nature often deals with "less likely" and "more likely," as well as intermediate outcomes. Hopefully you've seen Stephen Omohundro's webinars on hostile universal motivators as basic AI drives and autonomous systems, as well as Peter Voss's excellent ideas on the subject. I think that evolutionary approaches will trend toward neutral benevolence, and even given extremely shocking intermediary experiences, they will trend toward benevolence, especially given enough interaction with benevolent entities. I believe that intelligence trends toward increased interaction with its environment.

Without competitive humans, you could box the AI, give it ONLY preventative primary goals (primarily: 1. don't lie 2. always ask before creating a new goal), and feed it limited-time secondary goals that expire upon inevitable completion. There can never be a strong AI that has continuous goals that aren't solely designed to keep the AI safe.

I think this is just as likely to create malevolent AGI (with limited "G"), possibly more likely. After all, if humans are in competition with each other in anything that operates like the current sociopath-driven "mixed economy," sociopaths will be controlling them. Our only hope is that other sociopaths aren't in their same "professional sociopath" network, and that's a slim hope, indeed.

Comment by More_Right on Dealing with trolling and the signal to noise ratio · 2014-04-24T06:34:10.219Z · LW · GW

Also, the thresholds for "simple majoritarianism" usually need to be much higher in order to obtain intelligent results. No threshold should be reachable by just three people. Three people could be goons who are being paid to interfere with the LW forum. That means that if other people are uninterested, or those goons are "johnny-on-the-spot" (the one likely characteristic of the real-life agents provocateurs I've encountered), then legitimate karma is lost.

Of course, karma itself has been abused on this site (and all other karma-using sites), in my opinion. I really like the intuitions of Kevin Kelly, since they're highly emergence-optimizing, and often genius when it comes to forum design. :) Too bad too few programmers have implemented his wellspring of ideas!

Comment by More_Right on Dealing with trolling and the signal to noise ratio · 2014-04-24T06:26:26.449Z · LW · GW

Intelligently replying to trolls provides useful "negative intelligence." If someone has a witty counter-communication to a troll, I'd like to read it, the same way George Carlin slows down for auto wrecks. Of course, I'm kind of a procrastinator.

I know: a popup window could appear that calculates [minutes spent replying to this comment] x [hourly rate you charge for work] x 0.016r (i.e., 1/60), and says: "[$###.##] is the money you lost telling us how to put down a troll. We know faster ways: don't feed them."
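A minimal sketch of that arithmetic, assuming a hypothetical cost_of_reply helper and made-up example numbers:

```python
def cost_of_reply(minutes_spent: float, hourly_rate: float) -> float:
    """Money "lost" writing a reply: minutes converted to hours, times your hourly rate.

    The 0.016r (repeating) factor above is just 1/60, i.e. minutes -> hours.
    """
    return minutes_spent * hourly_rate * (1 / 60)

# Hypothetical example: a 25-minute troll takedown at $80/hour "costs" about $33.33.
print(f"${cost_of_reply(25, 80):.2f} is the money you lost telling us how to put down a troll.")
```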

Of course, any response to a troll MIGHT mean that a respected member of the community disagrees with the "valueless troll comment" assessment. That's a great characteristic to have: someone who selflessly provides protection against the LW community becoming an insular backwater of inbred thinking.

Our ideas need cross pollination! After all, "Humans are the sex organs of technology." -Kevin Kelly

Comment by More_Right on Dealing with trolling and the signal to noise ratio · 2014-04-24T06:12:58.156Z · LW · GW

Can anyone "name that troll?" (Rumpelstiltskin?)

Comment by More_Right on Dealing with trolling and the signal to noise ratio · 2014-04-24T06:08:58.891Z · LW · GW

The proposals here exist outside the space of people who will "solve" any problems that they decide are problems. Therefore, they can still follow that advice, and this is simply a discussion area for discussing potential problems and their potential solutions, all of which can be ignored.

My earlier comment, to the effect of "I'm more happy with LessWrong's forum than I am unhappy with it, but it still falls far short of an ideally-interactive space," should be construed as acknowledging that "doing nothing to improve the forum" is definitely a valid option. "If it ain't broke, don't fix it."

I don't view it as either totally broken, or totally optimal. Others have indicated similar sentiments. Likely, improvements will be made when programmers have spare time, and we have no idea when that will be.

Now, if I was aggressively agitating for a solution to something that hadn't been clearly identified as a problem, that might be a little obnoxious. I hope I didn't come off that way.

Comment by More_Right on Dealing with trolling and the signal to noise ratio · 2014-04-24T05:46:36.993Z · LW · GW

Too much information can be ignored; too little information is sometimes annoying. I'd always welcome an explanation of your reason for downvoting, especially if it seems legitimate to me.

If we were going to get highly technical, a somewhat interesting thing to do would be to allow a double click to differentiate your downvote, and divide it into several "slider bars." People who didn't differentiate their downvotes would be listed as a "general downvote"; those who did differentiate would be listed as a "specific reason downvote." A small number of "common reasons for downvoting that don't merit an individualized comment" on LessWrong would be present, plus an "other" box. If you clicked on the light gray "other," it would be replaced with a dropdown selection box, one whose default position you could type into, limited to 140 characters. Other reasons could be "poorly worded, but likely to be correct," "poorly constructed argument," "well-worded but likely incorrect," "ad hominem attack," "contains logical fallacies," "bad grammar," "bad formatting," "ignores existing body of thought, seems unaware of existing work on the subject," "anti-consensus, likely wrong," and "anti-consensus, grains of truth."

There could also be a "reason for upranking," including polar opposites of the prior options, so one need only adjust one slider bar for "positive and negative" common reasons. This would allow a + and - value to be associated with comments, to obtain a truer picture of the comment more quickly. "Detailed rankings" (listed next to the general ranking) could give commentators a positive and a negative for various reasons, dividing up two possible points, and adjusting the remaining percentages for remaining portions of a point as the slider bar was raised. "General argument is true" could be the positive "up" value, "general argument is false" could be its polar opposite.
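For concreteness, here is a rough sketch of how such differentiated votes might be stored and aggregated; the class, reason names, slider weights, and example voters are all hypothetical illustrations, not an existing LessWrong data model:

```python
from dataclasses import dataclass, field

# Hypothetical paired reasons: each downvote reason has a polar-opposite upvote reason,
# so a single slider can run from -1.0 (fully negative) to +1.0 (fully positive).
REASON_PAIRS = {
    "argument_truth": ("general argument is false", "general argument is true"),
    "wording": ("poorly worded", "well worded"),
    "logic": ("contains logical fallacies", "logically sound"),
}

@dataclass
class DifferentiatedVote:
    voter: str
    direction: int                                # +1 upvote, -1 downvote, as in ordinary karma
    sliders: dict = field(default_factory=dict)   # reason -> weight in [-1.0, +1.0]
    other: str = ""                               # free-text "other" reason

    def __post_init__(self):
        self.other = self.other[:140]             # cap the "other" box at 140 characters
        unknown = set(self.sliders) - set(REASON_PAIRS)
        if unknown:
            raise ValueError(f"unknown vote reasons: {unknown}")

def summarize(votes):
    """Aggregate ordinary karma plus per-reason positive/negative totals."""
    totals = {"general": 0}
    for v in votes:
        totals["general"] += v.direction
        for reason, weight in v.sliders.items():
            totals[reason] = totals.get(reason, 0.0) + weight
    return totals

votes = [
    DifferentiatedVote("alice", -1, {"wording": -0.7, "argument_truth": +0.3}),
    DifferentiatedVote("bob", +1),                # an undifferentiated "general upvote"
]
print(summarize(votes))   # {'general': 0, 'wording': -0.7, 'argument_truth': 0.3}
```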

It also might be interesting to indicate how long people took to write their comments, if they were written in the edit window, and not copied and pasted. A hastily written comment could be downranked as "sloppily written" unless it was an overall optimal comment.

Then, when people click on the comment ranking numbers, they could see a popup window with all the general upvotes and downvotes, with many of them providing the specific reasoning behind them. Clicking on a big "X" would close the window.

I also like letting unregistered users vote in a separate "unregistered users" ranking. Additionally, it would be interesting to create a digital currency for the site that can be traded or purchased, in order to create market karma. Anyone who produces original work for LW could be paid corresponding to the importance of the work, according to their per-hour payscale and the number of hours (corresponding to "real world pay" from CFAR, or other cooperating organizations).

A friend of mine made $2M off of an initial small investment in bitcoin, and never fails to rub that in when I talk to him. I'd like it if a bunch of LW people made similar profits off of ideas they almost inherently understand. Additionally, it would be cool to get paid for "intellectual activity" or "actual useful intellectual work" (depending on one's relationship with the site) :)

Comment by More_Right on Dealing with trolling and the signal to noise ratio · 2014-04-24T05:31:01.021Z · LW · GW

No web discussion forum I know of has filtering capabilities even in the ball park of Usenet, which was available in the 80s. Pitiful.

I strongly share your opinion on this. LW is actually one of the better fora I've come across in terms of filtering, and it still is fairly primitive. (Due to the steady improvement of this forum based on some of the suggestions that I've seen here, I don't want to be too harsh.)

It might be a good idea to increase comment-ranking values for people who turn on anti-kibbitzing. (I'm sure other people have suggested this, so I claim no points for originality.) ...What a great feature!
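A minimal sketch of that weighting idea; the 1.25 multiplier and the comment_score helper are hypothetical illustrations, not anything in LessWrong's actual karma code:

```python
# Votes cast while the anti-kibitzer is enabled count for slightly more, since the voter
# could not see who wrote the comment. The 1.25 multiplier is an arbitrary illustration.
ANTI_KIBITZER_WEIGHT = 1.25

def comment_score(votes):
    """votes: iterable of (direction, anti_kibitzer_enabled) pairs, direction being +1 or -1."""
    return sum(direction * (ANTI_KIBITZER_WEIGHT if blind else 1.0)
               for direction, blind in votes)

# Three "blind" upvotes slightly outweigh three ordinary downvotes under this weighting.
print(comment_score([(+1, True), (+1, True), (+1, True),
                     (-1, False), (-1, False), (-1, False)]))   # -> 0.75
```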

(Of course, then that option of "stronger karma for enabled anti-kibbitzers" would give an advantage to the malevolent people who want to "game the system," who could turn it on and off, or turn it on on another device, see the information necessary to "send out their political soldiers," and use that to win arguments with higher-ranking karma. Of course, one might want to reward malevolent players, because they are frequent users of the site who thus increase the overall activity level, even if they do so dishonestly. They then become "invested players" for when the site is optimized further. Also, robust sites should be able to filter even malevolent players, emphasizing constructive information flow. So, even though I'm a "classical liberal" or "small-L libertarian," this site could theoretically be made stronger if there were a lot of paid government goons on it, purposefully trying to prevent benevolent or "friendly" AGI that might interfere with their plans for continuing domination.)

A good way to defeat this would be to "mine" for "anti-kibbitzing" karma. Another good idea would be to allow users to "turn off karma." Another option would be to allow those with lots of karma to turn off their own karma, and show a ratio of "possible karma" next to "visible karma," as an ongoing vote for what system makes the most sense, from those in a position of power to benefit from the system. This still wouldn't tell you if it was a good system, but everyone exercising the option would indicate that the karma-based system was a bad one.

Also, I think that in a perfect world, karma in its entirety should be eliminated here. "One man's signal is another man's noise," indeed! If a genius-level basement innovator shows up tomorrow and begins commenting here, I'd like him to stick around. (Possibly because I might be one myself, and have noticed that some of the people who most closely agree with certain arguments of mine are here briefly as "very low karma" participants, agree with one or two points I make, and then leave. Also, if I try to encourage them but others vote them down, I'm encouraged to eliminate dissent, in the interest of eliminating "noise." Why not just allow users to automatically minimize anyone who comments on a heavily-downranked, already-minimized comment? Problem solved.)
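A rough sketch of that auto-minimize rule; the score threshold and field names are hypothetical, purely for illustration:

```python
# Hide replies to comments that are already minimized and heavily downranked.
MINIMIZE_SCORE_THRESHOLD = -5

def should_auto_minimize(reply, parent):
    """Collapse a reply by default if its parent is minimized and scores below the threshold."""
    return (parent is not None
            and parent.get("minimized", False)
            and parent.get("score", 0) <= MINIMIZE_SCORE_THRESHOLD)

parent = {"score": -8, "minimized": True}
reply = {"score": 2, "minimized": False}
print(should_auto_minimize(reply, parent))   # True: the reply gets collapsed by default
```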

LessWrong is at risk of becoming another "unlikeliest cult," to the same extent that the Ayn Rand Institute became an "unlikely cult." (ARI did, to some extent, become a cult, and that made it less successful at its intended goal, which was similar to the stated goal of LessWrong. It became too important what Ayn Rand personally thought about an idea, and too unimportant what hierarchical importance there inherently was to the individual ideas themselves. Issues became "settled" once she had an opinion on them, much the way that "mind-killing" is now used to "shut down" political debate, or debate over the importance of political engagement, and thus of cybernetics itself.)

There are certain subjects that "most humans in general" have an incredibly difficult time discussing, and unthinking agreement with respected members of the community is precisely what makes it "safe" to disregard novel "true" or "valuable" solutions or problem-solving ideas, ...rare as they may admittedly be.

Worse still, any human network is more likely to benefit from solutions arising outside of its own area of expertise. After all, the experts congregate in the same place, and familiarize themselves with the same incremental pathways toward the solution of their problems. In any complex modern discipline this requires immense knowledge and discipline. But what if there is a more direct but unanticipated solution that can arise from outside of that community? This is frequently the case, as indicated in Kurzweil's quote of Wiener's "Cybernetics" in "How to Create a Mind."

It may be that a simple algorithm designed by a nanotech pioneer rapidly builds a better brain than AGI innovators can build, and that this brain "slashes the Gordian knot" by out-thinking humans and building better and better brains that are ultimately highly-logical, highly-rational, and highly-benevolent AGI. This constitutes some of the failure of biologists and computer scientists to understand the depth of each other's points in a recent Singularity Summit meeting. http://www.youtube.com/watch?v=kQ2snfsnroM -Dennis Bray on the Complexity of Biological Systems (author of "Wetware," describing computational processes within cells).

Also, if someone can be a "troll" and bother other people with his comments, he's doing you a small favor, because he's showing that there are weaknesses in your commenting system that actually rise to the level of interfering with your mission. If we were all being paid minimum wage to be here, that might represent significant losses. (And shouldn't we put a price on how valuable this time is to us?) The provision of garbled blather as a steady background of "chatter" can be analyzed by itself, and I believe it exists on a fluid scale from "totally useless" to "possibly useful" to "interesting." Also, it indicates a partial value: the willingness to engage. Why would someone be willing to engage a website about learning an interesting subject, but not actually learn it? They might be unintelligent, which then gives you useful information about what people are willing to learn, and what kinds of minds are drawn to the page without the intelligence necessary to comprehend it, but with the willingness to try to interact with it to gain some sort of value. (Often these are annoying religious types who wish to convert you to their religion, who are unfamiliar with the reasons for unbelief. However, occasionally there's someone who has logic and reason on their side, even though they are "unschooled." I'm with Dawkins on this one: A good or bad meme can ride an unrelated "carrier meme" or "vehicle.")

Site "chatter" might normally not be too interesting, and I admit it's sub-optimal next to people who take the site seriously, but it's also a little bit useful, and a little bit interesting, if you're trying to build a network that applies rationality.

For example, there are, no doubt, people who have visited this website who are marketing majors, or who were simply curious about the current state of AGI due to a question about when will a "Terminator" or "skynet"-like scenario be possible, (if not likely). Some of them might have been willing participants in the more mindless busywork of the site, if there had been an avenue for them to pursue in that direction. There are very few such avenues on this "no nonsense" (but also no benevolent mutations) version of the site.

There also doesn't appear to be much of an avenue for people who hold significant differences of opinion that contradict or question the consensus. Such ideas will be downvoted, likely out of destructive conformity. As such, I agree that it's best to allow users to eliminate or "minimize" their own conceptions of "what constitutes noise" and "what constitutes bias."