Comments

Comment by jacoblyles on Our Phyg Is Not Exclusive Enough · 2012-10-15T17:02:11.048Z · LW · GW

It's true that lots of Utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.

The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they succeed before Eliezer invents FAI theory and convinces them that he's not a crackpot. The fate not just of human civilization, but of all galactic civilization, is at stake.

So, if any of them looks likely to succeed, such as by scheduling a press conference to announce a breakthrough, then it's straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean.

I'm pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren't so nerdy and pacifistic to begin with.

And yes, there are occasional cautionary principles on LW. But they are contradicted and overwhelmed by "shut up and calculate", which says to trust your arithmetic utilitarian calculus and not your ugh fields.

Comment by jacoblyles on Our Phyg Is Not Exclusive Enough · 2012-10-14T20:07:30.831Z · LW · GW

Oh sure, there are plenty of other religions as dangerous as the SIAI. It's just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.

However, there are ideologies that don't contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They'll say things like "don't trust your reasoning if it leads you to do awful things" (again, compare that to "shut up and calculate"). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.

One of the big black marks on SIAI/LW is the seldom-discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.

Comment by jacoblyles on Our Phyg Is Not Exclusive Enough · 2012-10-14T19:03:27.208Z · LW · GW

Never mind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see the arguments for why SIAI is the optimal charity). That people conclude that acts most people would consider immoral are justified by this reasoning, well, I don't know where they got that from. Certainly not these pages.

Ordinarily, I would count on people's unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here ("shut up and calculate!").

LW scares me. It's straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.

Comment by jacoblyles on Cult impressions of Less Wrong/Singularity Institute · 2012-08-18T19:56:19.473Z · LW · GW

We should try to pick up "moreright.com" from whoever owns it. It's domain-parked at the moment.

Comment by jacoblyles on Muehlhauser-Wang Dialogue · 2012-08-18T05:32:54.764Z · LW · GW

The principles espoused by the majority on this site can be used to justify some very, very bad actions.

1) The probability of someone inventing AI is high

2) The probability of someone inventing unfriendly AI if they are not associated with SIAI is high

3) The utility of inventing unfriendly AI is negative MAXINT

4) "Shut up and calculate" - trust the math and not your gut if your utility calculations tell you to do something that feels awful.

It's not hard to figure out that Less Wrong's moral code supports some very unsavory actions.
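
To make the arithmetic explicit, here is a minimal sketch in Python of the kind of naive expected-utility comparison that premises 1-4 invite. Every number is made up for illustration; none of this is an actual SI/LW estimate.

```python
# A minimal sketch of the naive expected-utility calculus described above.
# All probabilities and payoffs are invented for illustration only.

NEG_MAXINT = -2**31  # stand-in for the "negative MAXINT" utility of uFAI

def expected_utility(p_ok: float, u_ok: float, p_ufai: float, u_ufai: float) -> float:
    """Plain expected utility: probability-weighted sum of outcomes."""
    return p_ok * u_ok + p_ufai * u_ufai

# "Do nothing": assume a small chance an outside team builds uFAI.
do_nothing = expected_utility(0.99, 0.0, 0.01, NEG_MAXINT)

# "Intervene drastically": assume it halves that chance but carries a large
# ordinary moral cost (the part a sane person would balk at).
intervene = expected_utility(0.995, -1_000_000.0, 0.005, NEG_MAXINT)

print(do_nothing)              # about -21.5 million
print(intervene)               # about -11.7 million
print(intervene > do_nothing)  # True: the calculus endorses the intervention
```

Once one outcome is assigned an astronomically negative utility, it swamps every ordinary moral cost in the sum, and "shut up and calculate" tells you to trust that result.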

Comment by jacoblyles on Who Wants To Start An Important Startup? · 2012-08-18T01:00:22.023Z · LW · GW

Fortunately, the United States has a strong evangelical Christian lobby that fights for and protects home schooling freedom.

Comment by jacoblyles on Who Wants To Start An Important Startup? · 2012-08-17T18:42:58.865Z · LW · GW

...And you just blew your cover. :)

Nobody of any importance reads Less Wrong :)

Comment by jacoblyles on What is moral foundation theory good for? · 2012-08-17T18:40:37.415Z · LW · GW

I'm pretty sure they are sourced from census data. I check the footnotes on websites like that.

Comment by jacoblyles on Who Wants To Start An Important Startup? · 2012-08-17T00:14:58.901Z · LW · GW

Tagline: Coursera for high school

Mission: The economist Eric Hanushek has shown that if the USA could replace the worst 7% of K-12 teachers with merely average teachers, it would have the best education system in the world. What if we instead replaced the bottom 90% of teachers in every country with great instruction?

The Company: Online learning startups like Coursera and Udacity are in the process of showing how technology can scale great teaching to large numbers of university students (I've written about the mechanics of this elsewhere). Let's bring a similar model to high school.

The Company starts in the United States and ties into existing home school regulations with a self-driven web learning program that requires minimal parental involvement and results in a high school diploma. It cloaks itself as merely a tool to aid homeschool parents, similar to existing mail-order tutoring materials, hiding its radical mission to end high school as we know it.

The result is high-quality education for every student. In addition to the high quality, it gives the student schedule flexibility to pursue other interests outside of high school. Many exceptional young people I know dodge the traditional schools early in life. This product gives everyone that opportunity.

By lowering the cost of homeschooling, this product will enlarge the home school market and threaten traditional educrats while producing more exceptional minds.

With direct access to millions of students, the website will be able to monetize through one-on-one tutoring markets, college prep services, and other means.

Course material can be bootstrapped by constructing a curriculum out of free videos provided through sources like the Khan Academy. The value-add of the Company will be to tailor the curriculum to the home-school requirements of the particular state of the student.

My background: I cofounded a company that's had reasonable success. I'm not much of a Less Wrong fan - I find the community to be an intellectual monoculture, dogmatic, and full of blind spots to flaws in the philosophy it preaches. BUT this is an idea that needs to happen, as it will provide much value to the world. Contact me at firstname lastname gmail if you have lots of money or can hack. Or hell, steal the idea and do it yourself. Just make it happen.

Comment by jacoblyles on What is moral foundation theory good for? · 2012-08-15T22:54:07.234Z · LW · GW

Out-of-wedlock birth rates have exploded with sexual freedom:

http://www.familyfacts.org/charts/205/four-in-10-children-are-born-to-unwed-mothers

Marriage is way down:

http://www.familyfacts.org/charts/105/the-annual-marriage-rate-has-declined-significantly-in-the-past-generation

Comment by jacoblyles on Muehlhauser-Wang Dialogue · 2012-07-31T18:47:12.858Z · LW · GW

If an AGI research group were close to success but did not respect friendly AI principles, should the government shut them down?

Comment by jacoblyles on The Moral Void · 2012-07-18T23:47:40.223Z · LW · GW

I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.

Comment by jacoblyles on Welcome to Less Wrong! (July 2012) · 2012-07-18T23:43:10.562Z · LW · GW

Welcome!

The least attractive thing about the rationalist life-style is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations.

The problem I find is that all the ways to spontaneously generate joy involve some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo-jumbo as opposed to western mumbo-jumbo. Stoicism might be the most rational joy machine I can find.

Let me know if you ever un-convert.

Comment by jacoblyles on Reply to Holden on 'Tool AI' · 2012-07-18T23:29:11.358Z · LW · GW

It's interesting that we view those who do make the tough decisions as virtuous - i.e. the commander in a war movie (I'm thinking of Bill Adama). We recognize that it is a hard but valuable thing to do!

Comment by jacoblyles on Welcome to Heaven · 2012-07-18T22:03:00.851Z · LW · GW

This reminds me of a thought I had recently - whether or not God exists, God is coming - as long as humans continue to make technological progress. Although we may regret it (for one, brief instant) when he gets here. Of course, our God will be bound by the laws of the universe, unlike the Theist God.

The Christian God is an interesting God. He's something of a utilitarian. He values joy and created humans in a joyful state. But he values freedom over joy. He wanted humans to be like himself, living in joy but having free will. Joy is beautiful to him, but it is meaningless if his creations don't have the ability to choose not-joy. When his creations did choose not-joy, he was sad but he knew it was a possibility. So he gave them help to make it easier to get back to joy.

I know that LW is sensitive to extended religious reference. Please forgive me for skipping the step of translating interesting moral insights from theology into non-religious speak.

I do hope that the beings we make which are orders of magnitude more powerful than us have some sort of complex value system, and not anything as simple as naive algebraic utilitarianism. If they value freedom first, then joy, then they will not enslave us to the joy machines - unless we choose it.

(Side note: this post is tagged with "shut-up-and-multiply". That phrase trips the warning signs for me of a fake utility function, as it always seems to be followed by some naive algebraic utilitarian assertion that makes ethics sound like a solved problem).

edit: Whoa, my expression of my emotional distaste for "shut up and multiply" seems to be attracting down-votes. I'll take it out.

Comment by jacoblyles on Reply to Holden on 'Tool AI' · 2012-07-18T21:44:00.156Z · LW · GW

A common problem that faces humans is that they often have to choose between two different things that they value (such as freedom vs. equality), without an obvious way to make a numerical comparison between the two. How many freeons equal one egaliton? It's certainly inconvenient, but the complexity of value is a fundamentally human feature.

It seems to me that it will be very hard to come up with utility functions for FAI that capture all the things that humans find valuable in life. The topologies of the two systems don't match up.

Is this a design failure? I'm not so sure. I'm not sold on the desirability of having an easily computable value function.

Comment by jacoblyles on Purchase Fuzzies and Utilons Separately · 2012-07-18T21:21:21.183Z · LW · GW

This is a great framework - very clear! Thanks!

Comment by jacoblyles on Reply to Holden on 'Tool AI' · 2012-07-18T21:18:39.404Z · LW · GW

Sorry, "meaning of life" is sloppy phrasing. "What is the meaning of life?" is popular shorthand for "what is worth doing? what is worth pursuing?". It is asking about what is ultimately valuable, and how it relates to how I choose to live.

It's interesting that we imagine AIs to be immune from this. It is a common human obsession (though maybe only among unhappy humans?). So an AI isn't distracted by contradictory values like a human is, and it never has to make hard choices? No choices at all, really, just the output of the argmax expected utility function?

Comment by jacoblyles on Purchase Fuzzies and Utilons Separately · 2012-07-18T20:59:02.836Z · LW · GW

I follow the virtue-ethics approach: I do actions that make me more like the person I want to be. The acquisition of any virtue requires practice, and holding open the door for old ladies is practice for being altruistic. If I weren't altruistic, I wouldn't be making myself into the person I want to be.

It's a very different framework from util maximization, but I find it's much more satisfying and useful.

Comment by jacoblyles on Reply to Holden on 'Tool AI' · 2012-07-18T20:37:52.011Z · LW · GW

Let me see if I understand what you're saying.

For humans, the value of some outcome is a point in multidimensional value space, whose axes include things like pleasure, love, freedom, anti-suffering, and etc. There is no easy way to compare points at different coordinates. Human values are complex.

For a being with a utility function, it has a way to take any outcome and put a scalar value on it, such that different outcomes can be compared.

We don't have anything like that. We can adjust how much we value any one dimension in value space, even discover new dimensions! But we aren't utility maximizers.
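
To make the contrast concrete, here is a rough sketch in Python; the types and weights are hypothetical, not any proposed design. A utility maximizer collapses every outcome to a single comparable scalar, while human-style value behaves more like a vector with only a partial order.

```python
# A rough sketch of the contrast described above. All names and numbers are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    pleasure: float
    freedom: float
    suffering: float

def scalar_utility(o: Outcome) -> float:
    # A utility maximizer needs a fixed exchange rate between dimensions...
    return 1.0 * o.pleasure + 0.5 * o.freedom - 2.0 * o.suffering

def prefer(a: Outcome, b: Outcome) -> Optional[Outcome]:
    # ...whereas humans often find two outcomes simply incomparable.
    if a.pleasure >= b.pleasure and a.freedom >= b.freedom and a.suffering <= b.suffering:
        return a
    if b.pleasure >= a.pleasure and b.freedom >= a.freedom and b.suffering <= a.suffering:
        return b
    return None  # neither dominates: no obvious way to choose

wirehead = Outcome(pleasure=100.0, freedom=0.0, suffering=0.0)
ordinary = Outcome(pleasure=10.0, freedom=10.0, suffering=1.0)

print(scalar_utility(wirehead) > scalar_utility(ordinary))  # the maximizer always has an answer
print(prefer(wirehead, ordinary))                           # the partial order often doesn't (None)
```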

Which raises the question: if we want to create AI that respects human values, then why would we make utility-maximizer AI in the first place?

I'm still not sold on the idea that an intelligent being would slavishly follow its utility function. For AI, there are no questions about the meaning of life then? Just keep on U maximizing?

Comment by jacoblyles on Torture vs. Dust Specks · 2012-07-18T19:06:17.771Z · LW · GW

First, I don't buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I'm just not biting those bullets. I don't think 400 years of criticism of Utilitarianism can be solved by biting all the bullets. And in Eliezer's recent writings, it appears he is beginning to understand this. Which is great. It is reducing the odds he becomes a moral monster.

Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me on that from the Complex Values post and posts about the evils of paperclip maximizers. So great evils are qualitatively different to me from small evils, even small evils done to a great number of people!

I get what you're trying to do here. You're trying to demonstrate that ordinary people are innumerate, and you all are getting a utility spike from imagining you're more rational than them by choosing the "right" (naive hyper-rational utilitarian-algebraist) answer. But I don't think it's that simple when we're talking about morality. If it were, the philosophical project that's lasted 2500 years would finally be over!

Comment by jacoblyles on Torture vs. Dust Specks · 2012-07-18T09:08:28.068Z · LW · GW

I was very surprised to find that a supporter of the Complexity of Value hypothesis and the author who warns against simple utility functions advocates torture using simple pseudo-scientific utility calculus.

My utility function has constraints that prevent me from doing awful things to people, unless it would prevent equally awful things done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely-shared preferences are presumably valid.

In fact, the mental discomfort of the people who heard about the torture would swamp the disutility from the dust specks. Which brings us to an interesting question: is morality carried by events, or by information about events? If nobody else knew of my choice, would that make it better?

For a utilitarian, the answer is clearly that the information about morally significant events is what matters. I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.

Also, I'm interested to hear how many torturers would change their mind if we kill the guy instead of just torturing him. How far does your "utility is all that matters" philosophy go?
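
For what it's worth, the bare aggregation being objected to here is just this. The magnitudes are invented, and a merely astronomical population stands in for 3^^^3, which doesn't fit in a float:

```python
# A sketch of the "shut up and multiply" aggregation criticized above.
# All magnitudes are invented for illustration.

SPECK_DISUTILITY = 1e-9      # assumed harm of one barely-noticed dust speck
TORTURE_DISUTILITY = 1e12    # assumed harm of 50 years of torture
POPULATION = 10 ** 30        # far smaller than 3^^^3, still enough to tip the sum

total_speck_harm = SPECK_DISUTILITY * POPULATION  # 1e21

# Straight summation across persons says choose the torture. The objection in
# this comment is to the summation step itself, not to the arithmetic.
print(total_speck_harm > TORTURE_DISUTILITY)  # True
```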

Comment by jacoblyles on A Parable On Obsolete Ideologies · 2012-07-18T08:04:55.849Z · LW · GW

Certain self-consistent metaphysics and epistemologies lead you to belief in God. And a lot of human emotions do too. If you eliminated all the religions in the world, you would soon have new religions with 1) smart people accepting some form of philosophy that leads them to theism 2) lots of less smart people forming into mutually supporting congregations. Hopefully you get all the "religion of love" stuff from Christianity (historically a rarity) and the congregations produce public goods and charity.

Comment by jacoblyles on Reply to Holden on 'Tool AI' · 2012-07-18T07:38:36.528Z · LW · GW

What makes us think that AI would stick with the utility function they're given? I change my utility function all the time, sometimes on purpose.

Comment by jacoblyles on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics · 2012-02-05T09:23:54.926Z · LW · GW

"Long-term monogamy should not be done on the pretense that attraction and arousal for one's partner won't fade. It will."

This is precisely the point of monogamy. Polyamory/sleeping around is a young man's game. Long-term monogamy is meant to maintain strong social units throughout life, long after the thrill is gone.

Comment by jacoblyles on Why Our Kind Can't Cooperate · 2009-03-20T19:14:52.261Z · LW · GW

My point isn't exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Second, Eliezer makes statements that sometimes seem to support the "truth = moral good = prudent" assumption, and sometimes not.

He's provided me with links to some of his past writing, I've talked enough, it is time to read and reflect (after I finish a paper for finals).

Comment by jacoblyles on Why Our Kind Can't Cooperate · 2009-03-20T19:11:19.784Z · LW · GW

Thanks for the links, your corpus of writing can be hard to keep up with. I don't mean this as a criticism, I just mean to say that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention.

Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.

Comment by jacoblyles on Why Our Kind Can't Cooperate · 2009-03-20T18:54:56.736Z · LW · GW

My writing in these comments has not been perfectly clear, but Nebu you have nailed one point that I was trying to make: "there is no guarantee that morally good actions are beneficial".

Christian morality is interesting here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don't claim that what is good will be pleasant, as the rationalists do. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Christianity being an old religion that has had time to work out the philosophical kinks.

Of course, they make up for it by offering infinite bliss in the next life, which is cheating. But Christians do have a more honest view of this world in some ways.

Maybe we conflate true, good, and prudent because our "religion" is a hard sell otherwise. If we admitted that true and morally right things may be harmful, our pitch would become "Believe the truth, do what is good, and you may become miserable. There is no guarantee that our philosophy will help you in this life, and there is no next life". That's a hard sell. So we rationalists cheat by not examining this possibility.

There is some truth to the Christian criticism that Atheists are closed-minded and biased, too.

Comment by jacoblyles on Why Our Kind Can't Cooperate · 2009-03-20T18:24:11.295Z · LW · GW

"Does this sound like what you mean by a "beneficial irrationality"?"

No. That's not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.

In the post above Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point.

There is some evidence that we have brain structures specialized for religious experience. One would think that these structures could only have evolved if they offered some reproductive benefit to animals becoming self-aware in the land of tooth and claw.

In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn't recommend it to a 13th-century peasant.

"I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right."

This is not true a priori. That is my point. My challenge to you, Eliezer, and the other denizens of this site is simply: "prove it".

And I offer this challenge especially to Eliezer. Eliezer, I am calling you out. Justify your optimism in the prudence of truth.

Disprove the parable of Eve and the fruit of the tree of knowledge.

Comment by jacoblyles on Why Our Kind Can't Cooperate · 2009-03-20T17:53:49.471Z · LW · GW

"Except that we are free to adopt any version of rationality that wins. "

In that case, believing in truth is often non-rational.

Many people on this site have bemoaned the confusing dual meanings of "rational" (the economic utility maximizing definition and the epistemological believing in truth definition). Allow me to add my name to that list.

I believe I consistently used the "believing in truth" definition of rational in the parent post.

Comment by jacoblyles on Why Our Kind Can't Cooperate · 2009-03-20T09:54:09.543Z · LW · GW

There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.

You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please Homo sapiens rationalists, I don't see any compelling rational reason to believe this to be the case.

Irrational belief systems often thrive because they overcome the prisoner's dilemmas that individual rational action creates on a group level. Rational people cannot mimic this. The prisoner's dilemma and the tragedy of the commons are not new ideas. Telling people to act in the group interest because God said so is effective. It is easy to see how informing people of the costs of action, because truth is noble and people ought not to be lied to, can be counter-effective.

Perhaps we should stop striving for the maximum rational society, and start pursuing the maximum rational society which is stable in the long term. That is, maybe we ought to set our goal to minimizing irrationality, recognizing that we will never eliminate it.

If we cannot purposely introduce a small bit of beneficial irrationality into our group, then fine: memetic evolution will weed us out and there is nothing we can do about it. People will march by the millions to the will of saints and emperors while rational causes wither on the vine. Not much will change.

Robin made an excellent post along similar lines, which captures half of what I want to say:

http://lesswrong.com/lw/j/the_costs_of_rationality/

I'll be writing up the rest of my thoughts soon.

Sorry, I can't find the motivation to jump on the non-critical bandwagon today. I had the idea about a week ago that there is no guarantee that truth= justice = prudence, and that is going to be the hobby-horse I ride until I get a good statement of my position out, or read one by someone else.

Comment by jacoblyles on How to Not Lose an Argument · 2009-03-20T08:24:11.864Z · LW · GW

Also, by following their arguments, trying to clarify them, and understanding the pieces. Your sincere and genuine attempt to understand them in the best possible light will make them open to your point of view.

The smart Christians are some of the most logical people I've ever met. Their worldview fits together like a kind of geometry. They know that you get a completely different form of it if you substitute one axiom for another (the existence of God for the non-existence of God), much like Euclid's world dissolves without the parallel postulate.

Once we got to that point in our conversation, I realized that we agreed on everything about the world except that postulate, which they were also aware of. I realized that they were neither stupid nor evil, as I had assumed before (a remarkably common, and uncivil, view for atheists to have of believers). I still disagree with them. However, I was fine with leaving the conversation with both of our positions unchanged, but understanding each other better.

Comment by jacoblyles on How to Not Lose an Argument · 2009-03-20T08:14:58.396Z · LW · GW

I am curious about the large emphasis that rationalists place on religious belief. Religion is an old institution, ingrained in culture and valuable for aesthetic and social reasons. To convince a believer to leave his religion, you must not only convince him, but convince him so thoroughly that he accepts a substantial drop in personal utility to come to your side (to be more exact, he must judge the utility gained from believing the truth to outweigh the material, social, and psychic benefits he gets from religion).

There are myriad issues more important and relevant to rationalists' attention, where human irrationality has an effect on the world. In addition, these are issues where it is normally easier to change people's beliefs.

People have been believing in God for 500,000 years. People have been believing unsupported things about Global Warming for 30. I would rather teach people how to be skeptical and cautious about modern policy debates than have Yet Another God Conversation.

I was scarred by religion growing up. I understand the impulse to despise it and oppose it. But there came a time in my life when I realized that it was going to be around for as long as humanity, though its fortunes may wax and wane. It's time to move on.