Stupid Questions Thread - January 2014

post by RomeoStevens · 2014-01-13T02:31:57.366Z · LW · GW · Legacy · 298 comments

Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!

Comments sorted by top scores.

comment by JoshuaFox · 2014-01-13T07:42:58.469Z · LW(p) · GW(p)

If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?

Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it's not good enough to say that we're really rational, scientific, altruistic, utilitarian, etc., in contrast to those people -- they thought the same.)

So, how might we find that all these ideas are massively wrong?

Replies from: Lalartu, JoshuaFox, adbge, ChristianKl, RomeoStevens, Squark, Calvin, bokov, None
comment by Lalartu · 2014-01-13T14:19:55.418Z · LW(p) · GW(p)

Well, why do you think socialism is so horribly wrong? During the 20th century socialists more or less won and got what they wanted. Things like social security, governmental control over business and redistribution of wealth in general are all socialist. All of this may be bad from some point of view, but that is in no way the mainstream opinion.

Then, those guys whom you mention in your article called themselves communists and Marxists. At most, they considered socialism an intermediate stage on the way to communism. And communism went bad because it was founded on wrong assumptions about how both the economy and human psychology work. So, which MIRI/LessWrong assumptions could be wrong and cause a lot of harm? Well, here are some examples.

1) Building FAI is possible, and there is a reliable way to tell if it is truly FAI before launching it. Result if wrong: paperclips.

2) Building FAI is much more difficult than AI. Launching a random AI is civilization-level suicide. Result if this idea becomes widespread: we don't launch any AI before civilization runs out of resources or collapses for some other reason.

3) Consciousness is a sort of optional feature; intelligence can work just as well without it. We can reliably say whether a given intelligence is a person. In other words, the real world works the same way as in Peter Watts' "Blindsight". Results if wrong: many, among them the classic sci-fi AI rebellion.

4) Signing up for cryonics is generally a good idea. Result if widespread: these costs significantly contribute to worldwide economic collapse.

Replies from: Chrysophylax
comment by Chrysophylax · 2014-01-13T14:59:56.883Z · LW(p) · GW(p)

4) Signing up for cryonics is generally a good idea. Result if widespread: these costs significantly contribute to worldwide economic collapse.

Under the assumption that cryonics patients will never be unfrozen, cryonics has two effects. Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.

The second effect is in increasing the rate of circulation of the currency; freezing corpses that will never be revived is pretty close to burying money, as Keynes suggested. Widespread, sustained cryonic freezing would certainly have stimulatory, and thus inflationary, effects; I would anticipate a slightly higher inflation rate and an ambiguous effect on economic growth. The effects would be very small, however, as cryonics is relatively cheap and would presumably grow cheaper. The average US household wastes far more money and real resources by not recycling or closing curtains and by allowing food to spoil.

Replies from: gwern
comment by gwern · 2014-01-13T18:42:16.135Z · LW(p) · GW(p)

Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.

How does this connect with the funding process of cryonics? When someone signs up and buys life insurance, they are forgoing consumption of the premiums during their lifetime and in effect investing them in the wider economy via the insurance company's investments in bonds etc.; when they die and the insurance is cashed in for cryonics, some of it gets used on the process itself, but a lot goes into the trust fund, where again it is invested in the wider economy. The trust fund uses the return for expenses like liquid nitrogen, but it's supposed to be using only part of the return (so the endowment builds up and there's protection against disasters), and in any case, society's gain from the extra investment should exceed the fund's return (since why would anyone offer the fund investments on which they would take a loss and overpay the fund?). And this gain ought to compound over the long run.

So it seems to me that the main effect of cryonics on the economy is to increase long-term growth.
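
A toy illustration of the compounding point (all figures below are assumptions made for the sake of the sketch, not actual cryonics or insurance numbers):

```python
# Toy sketch of the compounding argument above.
# All figures are illustrative assumptions, not real cryonics or insurance data.

def future_value(annual_payment: float, rate: float, years: int) -> float:
    """Value of a stream of annual payments, each invested at a fixed real return."""
    total = 0.0
    for year in range(years):
        # A payment made in `year` compounds for the remaining (years - year) years.
        total += annual_payment * (1 + rate) ** (years - year)
    return total

premium = 500.0      # assumed annual life-insurance premium, in dollars
real_return = 0.03   # assumed real rate of return on invested premiums
horizon = 40         # assumed years between signing up and dying

invested = future_value(premium, real_return, horizon)
consumed = premium * horizon  # same money spent on consumption: no compounding

print(f"Premiums invested for {horizon} years: ${invested:,.0f}")
print(f"Same premiums consumed instead:        ${consumed:,.0f}")
```

Under these assumed numbers the invested stream ends up roughly twice the face value of the premiums, which is the sense in which the trust-fund structure would add to long-term growth rather than subtract from it.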

Replies from: lmm
comment by lmm · 2014-01-13T22:49:03.450Z · LW(p) · GW(p)

Money circulates more when used for short-term consumption than for long-term investment, no? So I'd expect a shift from the former to the latter to slow economic growth.

Replies from: gwern
comment by gwern · 2014-01-13T22:53:56.513Z · LW(p) · GW(p)

I don't follow. How can consumption increase economic growth when it comes at the cost of investment? Investment is what creates economic output.

Replies from: lmm, Chrysophylax
comment by lmm · 2014-01-15T12:48:30.126Z · LW(p) · GW(p)

Economic activity, i.e. positive-sum trades, is what generates economic output (that and direct labour). Investment and consumption demand can both lead to economic activity. AIUI the available evidence is that in the current economy a marginal dollar will produce a greater increase in economic activity when spent on consumption than on investment.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-01-16T06:21:42.836Z · LW(p) · GW(p)

I think you are failing to make a crucial distinction: positive-sum trades do not generate economic activity, they are economic activity. Investment generates future opportunities for such trades.

comment by Chrysophylax · 2014-01-14T11:28:13.783Z · LW(p) · GW(p)

There is such a thing as overinvestment. There is also such a thing as underconsumption, which is what we have right now.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-01-16T06:23:52.472Z · LW(p) · GW(p)

Can you define either one without reference to value judgements? If not, I suggest you make explicit the value judgement involved in saying that we currently have underconsumption.

Replies from: Chrysophylax
comment by Chrysophylax · 2014-01-16T21:33:44.791Z · LW(p) · GW(p)

Yes, because those are standard terms in economics. Overinvestment occurs when investment is poorly allocated due to overly cheap credit and is a key concept of the Austrian school. Underconsumption is the key concept of Keynesian economics and of the economic views of every non-idiot since Keynes; even Friedman openly declared that "we are all Keynesians now". Keynesian thought, which centres on the possibility of prolonged deficient demand (like what caused the recession), wasn't wrong, it was incomplete; the reason fine-tuning by demand management doesn't work simply wasn't known until we had the concept of the vertical long-run Phillips curve. Both of these ideas are currently being taught to first-year undergraduates.

comment by JoshuaFox · 2014-01-14T07:47:45.833Z · LW(p) · GW(p)

I think the whole MIRI/LessWrong memeplex is not massively confused.

But conditional on it turning out to be very very wrong, here is my answer:

A. MIRI

  1. The future does indeed take radical new directions, but these directions are nothing remotely like the hard-takeoff de-novo-AI intelligence explosion which MIRI now treats as the max-prob scenario. Any sci-fi fan can imagine lots of weird futures, and maybe some other one will actually emerge.

  2. MIRI's AI work turns out to trigger a massive negative outcome -- either the UFAI explosion they are trying to avoid, or something else almost as bad. This may result from fundamental mistakes in understanding, or because of some minor bug.

  3. It turns out that the UFAI explosion really is the risk, but that MIRI's AI work is just the wrong direction; e.g., it turns out that building a community of AIs in rough power balance, or experimenting by trial and error with nascent AGIs, is the right solution.

B. CfAR

  1. It turns out that the whole CfAR methodology is far inferior in instrumental outcomes to, say, Mormonism. Of course, CfAR would say that if another approach is instrumentally better, they would adopt it. But if they only find this out years down the road, this could be a massive failure scenario.

  2. It turns out that epistemologically non-rational techniques are instrumentally valuable. Cf. Mormonism. Again, CfAR knows this, but in this failure scenario, they fail to reconcile the differences between the two types of rationality they are trying for.

Again, I think that the above scenarios are not likely, but they're my best guess at what "massively wrong" would look like.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-01-15T05:44:51.761Z · LW(p) · GW(p)

MIRI failure modes that all seem likely to me:

  • They talk about AGI a bunch and end up triggering an AGI arms race.

  • AI doesn't explode the way they talk about, causing them to lose credibility on the importance of AI safety as well. (Relatively slow-moving) disaster ensues.

  • The future is just way harder to predict than everyone thought it would be... we're cavemen trying to envision the information age and all of our guesses are way off the mark in ways we couldn't have possibly foreseen.

  • Uploads come first.

comment by adbge · 2014-01-13T23:32:40.157Z · LW(p) · GW(p)

If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?

A few that come to mind:

  • Some religious framework being basically correct. Humans having souls, an afterlife, etc.
  • Antinatalism as the correct moral framework.
  • Romantic ideas of the ancestral environment are correct and what feels like progress is actually things getting worse.
  • The danger of existential risk peaked with the cold war and further technological advances will only hasten the decline.
comment by ChristianKl · 2014-01-13T12:10:17.150Z · LW(p) · GW(p)

It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.

Otherwise the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics we do have people with mainstream views but we also have people who think that democracy is wrong. Having such a diversity of ideas makes it difficult for all of LessWrong to be wrong.

Some people paint a picture of LessWrong as a crowd of people who believe that everyone should do cryonics. In reality most of the participants aren't signed up for cryonics.

Take a figure like Nassim Taleb. He's frequently quoted on LessWrong so he's not really outside the LessWrong memeplex. But he's also a Christian.

There are a lot of memes floating around in the LessWrong memeplex that are present at a basic level but that most people don't take to their full conclusion.

So, how might we find that all these ideas are massively wrong?

It's a topic that's very difficult to talk about. Basically you try out different ideas and look at the effects of those ideas in the real world. Mainly because of QS data I delved into the system of Somato-Psychoeducation. The data I measured was improvement in a health variable. It was enough to get over the initial barrier to go inside the system. But now I can think inside the system, and there is a lot going on which I can't put into good metrics.

There is, however, no way to explain the framework in an article. Most people who read the introductory book don't get the point before they have spent years experiencing the system from the inside.

It's the very nature of things that are really outside the memeplex that they are not easily expressible by ideas inside the memeplex in a way that won't be misunderstood.

Replies from: Ishaan
comment by Ishaan · 2014-01-13T21:36:06.126Z · LW(p) · GW(p)

It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.

That's not the LW memeplex being wrong, that's just an LW meme which is slightly more pessimistic than the more customary "the vast majority of AGIs are unfriendly but we might be able to make this work" view. I don't think any high-profile LWers who believed this would be absolutely shocked at finding out that it was too optimistic.

MIRI-LW being plausibly wrong about AI friendliness is more like, "Actually, all the fears about unfriendly AI were completely overblown. Self-improving AIs don't actually "FOOM" dramatically ... they simply get smarter at the same exponential rate that the rest of the humans+tech system has been getting smarter all this time. There isn't much practical danger of them rapidly outracing the rest of the system and seizing power and turning us all into paperclips, or anything like that."

If that sort of thing were true, it would imply that a lot of prominent rationalists have been wasting time (or at least, doing things which end up being useful for reasons entirely different from the reasons that they were supposed to be useful for).

Replies from: ChristianKl
comment by ChristianKl · 2014-01-13T22:05:43.924Z · LW(p) · GW(p)

If it's impossible to build FAI, that might mean that one should in general discourage technological development to prevent AGI from being built.

It might mean building moral frameworks that allow for effective prevention of technological development. I do think that differs significantly from the current LW memeplex.

Replies from: Ishaan
comment by Ishaan · 2014-01-13T22:50:01.086Z · LW(p) · GW(p)

What I mean is... the difference between "FAI is possible but difficult" and "FAI is impossible and all AI are uFAI" is like the difference between "a narrow subset of people go to heaven instead of hell" and "every human goes to hell". Those two beliefs are mostly identical.

Whereas "FOOM doesn't happen and there is no reason to worry about AI so much" is analogous to "belief in afterlife is unfounded in the first place". That''s a massively different idea.

In one case, you're committing a little heresy within a belief system. In the other, the entire theoretical paradigm was flawed to begin with. If it turns out that "all AI are UFAI" is true, then LessWrong/MIRI would still be a lot more correct about things than most other people interested in futurology / transhumanism, because they got the basic theoretical paradigm right. (Just like, if it turned out hell existed but not heaven, religionists of many stripes would still have reason to be fairly smug about the accuracy of their predictions, even if none of the actions they advocated made a difference.)

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-01-16T06:26:32.206Z · LW(p) · GW(p)

like the difference between "a narrow subset of people go to heaven instead of hell" and "every human goes to hell". Those two beliefs are mostly identical

Mostly identical as far as theology is concerned, but very different in terms of the optimal action. In the first case, you want (from a selfish-utilitarian standpoint) to ensure that you're in the narrow subset. In the second, you want to overthrow the system.

comment by RomeoStevens · 2014-01-13T09:13:39.540Z · LW(p) · GW(p)

We should be wary of ideologies that involve one massive failure point....crap.

Replies from: Curiouskid
comment by Curiouskid · 2014-01-13T20:46:59.544Z · LW(p) · GW(p)

Could you elaborate/give-some-examples?

What are some ideologies that do/don't have (one massive failure point)/(Lots of small failure points)?

Replies from: RomeoStevens
comment by RomeoStevens · 2014-01-13T22:44:22.140Z · LW(p) · GW(p)

The one I was thinking of was capitalism vs communism. I have had many communists tell me that communism only works if we make the whole world do it. A single point of failure.

Replies from: Luke_A_Somers, Nornagest
comment by Luke_A_Somers · 2014-01-15T14:23:18.449Z · LW(p) · GW(p)

I wouldn't call that a single point of failure, I'd call that a refusal to test it and an admission of extreme fragility.

comment by Nornagest · 2014-01-13T23:37:13.279Z · LW(p) · GW(p)

That's kind of surprising to me. A lot of systems have proportional tipping points, where a change is unstable up to a certain proportion of the sample but suddenly turns stable after that point. Herd immunity, traffic congestion, that sort of thing. If the assumptions of communism hold, that seems like a natural way of looking at it.

A structurally unstable social system just seems so obviously bad to me that I can't imagine it being modeled as such by its proponents. I suppose Marx didn't have access to dynamical systems theory, though.

Replies from: Lalartu
comment by Lalartu · 2014-01-14T19:07:52.257Z · LW(p) · GW(p)

This is what some modern communists say, and it is just an excuse (and in fact wrong; it would not work even in that case). Early communists actually believed the opposite: the example of one communist nation would be enough to convert the whole world.

Replies from: Nornagest
comment by Nornagest · 2014-01-14T20:42:16.744Z · LW(p) · GW(p)

It's been a while since I read Marx and Engels, but I'm not sure they would have been speaking in terms of conversion by example. IIRC, they thought of communism as a more-or-less inevitable development from capitalism, and that it would develop somewhat orthogonally to nation-state boundaries but establish itself first in those nations that were most industrialized (and therefore had progressed the furthest in Marx's future-historical timeline). At the time they were writing, that would probably have meant Britain.

The idea of socialism in one country was a development of the Russian Revolution, and is something of a departure from Marxism as originally formulated.

comment by Squark · 2014-01-13T18:43:38.739Z · LW(p) · GW(p)

Define "massively wrong". My personal opinions (stated w/o motivation for brevity):

  • Building AGI from scratch is likely to be unfeasible (although we don't know nearly enough to discard the risk altogether)
  • Mind uploading is feasible (and morally desirable) but will trigger intelligence growth of marginal speed rather than a "foom"
  • "Correct" morality is low Kolmogorov complexity and conforms with radical forms of transhumanism

Infeasibility of "classical" AGI and feasibility of mind uploading should be scientifically provable.

So: My position is very different from MIRI's. Nevertheless I think LessWrong is very interesting and useful (in particular I'm all for promoting rationality) and MIRI is doing very interesting and useful research. Does it count as "massively wrong"?

comment by Calvin · 2014-01-13T08:45:27.453Z · LW(p) · GW(p)

We might find out by trying to apply them to the real world and seeing that they don't work.

Well, it is less common now, but I think the slow retreat of the community from the position that instrumental rationality is an applied science of winning at life is one of the cases where the beliefs had to be corrected to better match the evidence.

Replies from: lmm
comment by lmm · 2014-01-13T12:49:19.026Z · LW(p) · GW(p)

Is it? I mean, I'd happily say that the LW crowd as a whole does not seem particularly good at winning at life, but that is and should be our goal.

Replies from: Calvin
comment by Calvin · 2014-01-13T13:05:47.470Z · LW(p) · GW(p)

Speaking broadly, the desire to lead a happy / successful / interesting life (however winning is defined) is a laudable goal shared by the vast majority of humans. The problem was that some people took the idea further and decided that winning is a good measure of whether someone is a good rationalist or not, as debunked by Luke here. There are better examples, but I can't find them now.

Also, my two cents are that while a rational agent may have some advantage over an irrational one in a perfect universe, the real world is so fuzzy and full of noisy information that even if superior reasoning and decision-making skill really improves your life, the improvements are likely to be not as impressive as advertised by hopeful proponents of systematized winning.

Replies from: lmm
comment by lmm · 2014-01-13T19:15:17.742Z · LW(p) · GW(p)

I think that post is wrong as a description of the LW crowd's goals. That post talks as if one's akrasia were a fixed fact that had nothing to do with rationality, but in fact a lot of the site is about reducing or avoiding it. Likewise intelligence; that post seems to assume that your intelligence is fixed and independent of your rationality, but in reality this site is very interested in methods of increasing intelligence. I don't think anyone on this site is just interested in making consistent choices.

comment by bokov · 2014-01-13T17:56:29.245Z · LW(p) · GW(p)

It would look like a failure to adequately discount for inferential chain length.

comment by [deleted] · 2014-01-14T16:00:32.424Z · LW(p) · GW(p)

By their degree of similarity to ancient religious mythological and sympathetic magic forms with the nouns swapped out.

comment by Risto_Saarelma · 2014-01-13T09:44:33.147Z · LW(p) · GW(p)

Should I not be using my real name?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-13T11:31:33.348Z · LW(p) · GW(p)

Do you want to have a career at a conservative institution such as a bank, or a career in politics? If so, it's probably a bad idea to create too much attack surface by using your real name.

Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.

If you meet people in real life, they might already know you from your online commentary, and you don't have to start by introducing yourself.

It's really a question of whether you think strangers are more likely to hurt or help you.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-13T14:59:18.426Z · LW(p) · GW(p)

Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.

I think the best long-term strategy would be to invent a different name and use that other name consistently, even in real life. With everyone except the government. Of course your family and some close friends would know your real name, but you would tell them that you prefer to be called by the other name, especially in public.

So, you have one identity, you make it famous and everyone knows you. Only when you want to be anonymous do you use your real name. And the advantage is that you have papers for it, so your employer will likely not notice. You just have to be careful never to use your real name together with your fake name.

Unless your first name is unusual, you can probably re-use your first name, which is what most people will call you anyway; so if you meet people who know your true name and people who know your fake name at the same time, the fact that you use two names will not be exposed.

Replies from: Curiouskid, ChristianKl
comment by Curiouskid · 2014-01-13T20:49:13.896Z · LW(p) · GW(p)

This seems to be what Gwern has done.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-14T11:07:42.876Z · LW(p) · GW(p)

Exactly! He is such a good example that it is easy not to even notice that he is a good example.

There is no "Gwern has an identity he is trying to hide" thought running in my mind when I think about him (unlike with Yvain). It's just "Gwern is Gwern", nothing more. Instead of a link pointing to the darkness, there is simply no link there. It's not like I am trying to respect his privacy; I feel free to do anything I want and yet his privacy remains safe. (I mean, maybe if someone tried hard... but there is nothing reminding people that they could.) It's like an invisible fortress.

But if instead he called himself Arthur Gwernach (abbreviated to Gwern), that would be even better.

comment by ChristianKl · 2014-01-13T16:52:01.325Z · LW(p) · GW(p)

I think the best long-term strategy would be to invent a different name and use that other name consistently, even in real life. With everyone except the government.

What threat do you want to protect against? If you fear the NSA, they probably have no trouble linking your real name to your alias.

They know where the person with your real name lives and they know what web addresses get browsed from that location.

You just have to be careful never to use your real name together with your fake name.

I could not do that. I study at university under my real name, and my identity as a university student is linked to my public identity. The link is strong enough that a journalist who didn't reach me via a social network simply called my university to get in touch with me.

On LessWrong I write under my first name plus the first two letters of my last name. That means that anyone who recognizes my identity from somewhere else can recognize me, but someone Googling for me can't find me easily.

I have no trouble standing up to people I meet in real life for what I write on LessWrong, but having a discussion about it with one of my aunts wouldn't be fun, so I don't make it too easy. I also wouldn't want the writing to be quoted out of context in other places. I would survive it, but given the low level of filtering on what I write on LW it would be annoying.

As far as self-censoring goes, I feel safe saying "one of my aunts" given that I have several of them; anybody reading couldn't deduce whom I mean. Whenever I write something about someone I know, I think twice about whether someone could identify the person, and if so I won't write it publicly under this identity. Asking for relationship advice and fleshing out a specific problem would be a no-go for me, because it might make public details that the other person didn't want public. Everything I say in that regard is supposed to be general enough that no harm will come from it to other people I know personally.

Replies from: Viliam_Bur, Lumifer, Ishaan
comment by Viliam_Bur · 2014-01-13T17:29:32.944Z · LW(p) · GW(p)

What threat do you want to protect against?

A conservative employer, less skilled than NSA.

For example, I want to write blog posts against religion or against some political party, and yet not be at a disadvantage when applying for a job in a company where the boss supports them. Also to avoid conflicts with colleagues.

I study at university under my real name, and my identity as a university student is linked to my public identity.

Good point. In such case I would put the university in the same category as an employer. Generally, all institutions that have power over me at some point of my life.

Replies from: Error
comment by Error · 2014-01-15T16:25:21.148Z · LW(p) · GW(p)

Generally, all institutions that have power over me at some point of my life.

This. The face one presents to one's peers is justifiably different from the face one presents to amoral, potentially dangerous organizations. Probably the first thing that, say, a job interviewer will do with a potential candidate is Google their name. Unless the interviewer is exceptionally open minded, it is critical to your livelihood that they not find the Harry Potter erotica you wrote when you were fifteen.

I have both a handle and a legal name. The handle is as much "me" as the legal one (more so, in some ways). I don't hesitate to give out my real name to people I know online, but I won't give my handle out to any organizational representative. I fear the bureaucracy more than random Internet kooks. It's not about evading the NSA; it's about keeping personal and professional life safely separated.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-16T09:36:44.357Z · LW(p) · GW(p)

It's like locking my doors: a skilled thief would get inside anyway, but it's good to protect myself against the 99% of unskilled thieves (people who could become thieves when given a tempting opportunity). Similarly, it would be good to be protected against random people who merely type my name into Google, look at the first three pages of results, open the first five linked articles, and that's it.

It's already rather late for me, but this is probably advice I will give my children.

Technically, I could start using a new identity for controversial things today, and use my real name only for professional topics. But I am already almost 40. Even if after 10 years most of the stuff written under my real name dropped out of Google's top search results, it probably wouldn't make a big difference. And it seems to me that these days link rot is slower than it used to be. Also, I wouldn't know what to do with my old unprofessional blog articles: deleting them would be painful; moving them to the new identity would expose me; keeping them defeats the purpose. -- I wish I could send this message back in time to my teenage self. Who would probably choose a completely stupid nickname.

comment by Lumifer · 2014-01-14T20:15:56.637Z · LW(p) · GW(p)

What threat do you want to protect against?

How about a publicly accessible collection of everything you did or said online that is unerasable and lasts forever?

"I hope you know that this will go down on your permanent record"

Replies from: ChristianKl
comment by ChristianKl · 2014-01-14T22:15:26.755Z · LW(p) · GW(p)

You don't name a threat.

If you think that the work you produce online is crap and people you care about will dislike you for it, then having a permanent record of it is bad. If you think that the work you produce online is good, then having a permanent record of it is good.

You might say that some people might not hire me when they read that I expressed some controversial view years ago in an online forum. I would say that I don't want to increase the power of those organisations by working for them anyway.

I rather want to get hired by someone who likes me and values my public record.

There is a bit of stoicism involved, but I don't think it's useful to live while hiding yourself. I would rather fear having lived a life where I leave no meaningful record than living a life that leaves one.

Replies from: gattsuru, Lumifer
comment by gattsuru · 2014-01-15T00:02:40.284Z · LW(p) · GW(p)

You might say that some people might not hire me when they read that I expressed some controversial view years ago in an online forum. I would say that I don't want to increase the power of those organisations by working for them anyway.

How far from the norm are you? You may quickly find your feelings change drastically as your positions become more opposed. I don't want to work for organizations that would not hire me due to controversial views, but depending on the view and on my employment prospects, my choices may be heavily constrained. I'd rather know which organizations I can choose to avoid than be forced out of organizations. ((There are also time costs involved with doing it this way: if a company says in the news that it hates X, and I like X, I can simply not send them a resume. But if I send them a resume and then discover during an interview that they don't want to hire me due to my positions on X, it's a lot of lost energy.))

Conversely, writing under my own name would incentivize avoiding topics that are, or are likely to become, controversial.

comment by Lumifer · 2014-01-14T22:36:29.180Z · LW(p) · GW(p)

You don't name a threat.

No, I name a capability to misrepresent and hurt you.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-14T23:09:10.515Z · LW(p) · GW(p)

If I do most of my public activity under identity Bob but the government knows me as Dave, someone can still misrepresent me as I'm acting as Bob by misquoting things written under the Bob identity in the past.

If I want to prevent permanent records I would have to switch identities every so often which is hard to do without losing something if you have anything attached to those identities that you don't want to lose.

comment by Ishaan · 2014-01-13T23:05:22.993Z · LW(p) · GW(p)

What threat do you want to protect against?

It depends on how vocal and how controversial you are being with your internet persona. There is always the chance that you'll acquire the ire of an angry mob... and if so, you've effectively doxxed yourself for them.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-13T23:57:06.258Z · LW(p) · GW(p)

Being open with your name doesn't automatically mean that your phone number and address are also public.

For most people I don't think the risk is significant compared to other risks such as getting hit by a car. I would expect it to be one of those risks that's easy to visualize but that has a rather low probability.

Replies from: gattsuru
comment by gattsuru · 2014-01-14T20:08:11.329Z · LW(p) · GW(p)

Being open with your name does mean that your phone number and address are likely to be public. Saarelma is a little more protected than average, since Finland's equivalent of WhitePages is not freely available world-wide, but those in the United States with an unusual name can be found for free.

That's separate from the "ire of an angry mob" risk, which seems more likely to occur primarily for people who have a large enough profile that they'd have to have outed themselves anyway, though.

comment by solipsist · 2014-01-13T03:33:09.801Z · LW(p) · GW(p)

Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?

ETA: Potentially less contentious rephrase: why isn't making a life as important as saving a life?

Replies from: Pablo_Stafforini, pragmatist, lmm, Dias, Ishaan, Arran_Stirton, Kaj_Sotala, Douglas_Knight, blacktrance, hairyfigment, Manfred, DanielLC, hyporational, Lumifer
comment by Pablo (Pablo_Stafforini) · 2014-01-13T04:51:07.433Z · LW(p) · GW(p)

Whether this is so or not depends on whether you are assuming hedonistic or preference utilitarianism. For a hedonistic utilitarian, contraception is, in a sense, tantamount to murder, except that as a matter of fact murder causes much more suffering than contraception does: to the person who dies, to his or her loved ones, and to society at large (by increasing fear). By contrast, preference utilitarians can also appeal to the preferences of the individual who is killed: whereas murder causes the frustration of an existing preference, contraception doesn't, since nonexisting entities can't have preferences.

The question also turns on issues about population ethics. The previous paragraph assumes the "total view": that people who do not exist but could or will exist matter morally, and just as much. But some people reject this view. For these people, even hedonistic utilitarians can condemn murder more harshly than contraception, wholly apart from the indirect effects of murder on individuals and society. The pleasure not experienced by the person who fails to be conceived doesn't count, or counts less than the pleasure that the victim of murder is deprived of, since the latter exists but the former doesn't.

For further discussion, see Peter Singer's Practical Ethics, chap. 4 ("What's wrong with killing?").

Replies from: torekp, blacktrance
comment by torekp · 2014-01-20T01:34:23.439Z · LW(p) · GW(p)

Pablo makes great points about the suffering of loved ones, etc. But, modulo those points, I'd say making a life is as important as saving a life. (I'm only going to address the potentially contentious "rephrase" here, and not the original problem; I find the making life / saving life case more interesting.) And I'm not a utilitarian.

When you have a child, even if you follow the best available practices, there is a non-trivial chance that the child will have a worse-than-nothing existence. They could be born with some terminal, painful, and incurable illness. What justifies taking that risk? Suggested answer: the high probability that a child will be born to a good life. Note that in many cases, the child who would have an awful life is a different child (coming from a different egg and/or sperm - a genetically defective one) than the one who would have a good life.

comment by blacktrance · 2014-01-13T05:12:24.868Z · LW(p) · GW(p)

For a hedonistic utilitarian, contraception is, in a sense, tantamount to murder

Only if the hedonistic utilitarian is also a total utilitarian, rather than an average utilitarian, right?

Edit: Read your second paragraph, now I feel silly.

comment by pragmatist · 2014-01-13T03:50:04.928Z · LW(p) · GW(p)

Making a person and unmaking a person seem like utilitarian inverses

Doesn't seem that way at all to me. A person who already exists has friends, family, social commitments, etc. Killing that person would usually affect all of these things negatively, often to a pretty huge extent. Using contraception maybe creates some amount of disutility in certain cases (for staunch Catholics, for instance), but not nearly to the degree that killing someone does. If you're only focusing on the utility for the person made or unmade, then maybe (although see blacktrance's comment on that), but as a utilitarian you have no license for doing that.

Replies from: solipsist
comment by solipsist · 2014-01-13T04:17:12.447Z · LW(p) · GW(p)

A hermit, long forgotten by the rest of the world, lives a middling life all alone on a desert island. Eve kills the hermit secretly and painlessly, sells his organs, and uses the money to change the mind of a couple who had decided against having additional children. The couple's child leads a life far longer and happier than the forgotten hermit's ever would have been.

Eve has increased QALYs, average happiness, and total happiness. Has Eve done a good thing? If not, why not?

Replies from: pragmatist, None, cata, Ishaan, Lumifer, army1987
comment by pragmatist · 2014-01-13T05:27:55.045Z · LW(p) · GW(p)

Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.

If you're asking me, I'd say no, but I'm not a utilitarian, partly because utilitarianism answers "yes" to questions similar to this one.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-01-16T18:46:33.123Z · LW(p) · GW(p)

Only if you use a stupid utility function.

Replies from: pragmatist
comment by pragmatist · 2014-01-17T04:43:58.103Z · LW(p) · GW(p)

Utilitarianism doesn't use any particular utility function. It merely advocates acting based on an aggregation of pre-existing utility functions. So whether or not someone's utility function is stupid is not something utilitarianism can control. If people in general have stupid utility functions, then preference utilitarianism will advocate stupid things.

In any case, the problem I was hinting at in the grandparent is known in the literature (following Rawls) as "utilitarianism doesn't respect the separateness of persons." For utilitarianism, what fundamentally matters is utility (however that is measured), and people are essentially just vessels for utility. If it's possible to substantially increase the amount of utility in many of those vessels while substantially decreasing it in just one vessel, then utilitarianism will recommend doing that. After all, the individual vessels themselves don't matter, just the amount of utility sloshing about (or, if you're an average utilitarian, the number of vessels matters, but the vessels don't matter beyond that). An extreme consequence of this kind of thinking is the whole "utility monster" problem, but it arises in slightly less fanciful contexts as well (kill the hermit, push the fat man in front of the trolley).

I fundamentally reject this mode of thinking. Morality should be concerned with how individuals, considered as individuals, are treated. This doesn't mean that trade-offs between peoples' rights/well-being/whatever are always ruled out, but they shouldn't be as easy as they are under utilitarianism. There are concerns about things like rights, fairness and equity that matter morally, and that utilitarianism can't capture, at least not without relying on convoluted (and often implausibly convenient) justifications about how behaving in ways we intuitively endorse will somehow end up maximizing utility in the long run.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-01-17T13:48:20.331Z · LW(p) · GW(p)

Yes, I should have rephrased that as 'Only because hedonic utilitarianism is stupid' --- how's that?

comment by [deleted] · 2014-01-13T15:08:21.853Z · LW(p) · GW(p)

If there are a large number of "yes" replies, the hermit lifestyle becomes very unappealing.

comment by cata · 2014-01-13T04:31:17.088Z · LW(p) · GW(p)

Sure, Eve did a good thing.

Replies from: solipsist, Calvin
comment by solipsist · 2014-01-13T16:26:05.473Z · LW(p) · GW(p)

Does that mean we should spend more of our altruistic energies on encouraging happy productive people to have more happy productive children?

Replies from: cata
comment by cata · 2014-01-13T20:40:23.029Z · LW(p) · GW(p)

Maybe. I think the realistic problem with this strategy is that if you take an existing human and help him in some obvious way, then it's easy to see and measure the good you're doing. It sounds pretty hard to figure out how effectively or reliably you can encourage people to have happy productive children. In your thought experiment, you kill the hermit with 100% certainty, but creating a longer, happier life that didn't detract from others' was a complicated conjunction of things that worked out well.

comment by Calvin · 2014-01-13T05:01:29.672Z · LW(p) · GW(p)

I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.

Replies from: solipsist, RowanE
comment by solipsist · 2014-01-13T16:32:21.491Z · LW(p) · GW(p)

I didn't mean for the hermit to be sad, just less happy than the child.

Replies from: Calvin
comment by Calvin · 2014-01-13T16:40:00.617Z · LW(p) · GW(p)

Ah, must have misread your representation, but English is not my first language, so sorry about that.

I guess if I were a particularly well-organized, ruthlessly effective utilitarian, as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if, and only if, the couple gives birth to 3, not 2, happy children.

comment by RowanE · 2014-01-13T11:13:36.424Z · LW(p) · GW(p)

It's specified that he was killed painlessly.

Replies from: Calvin
comment by Calvin · 2014-01-13T11:24:04.794Z · LW(p) · GW(p)

It is true, I wasn't specific enough, but I wanted to emphasize the opinion part, and the suffering part was meant to emphasize his life condition.

He was, presumably, killed without his consent, and that is why the whole affair seems so morally icky from a non-utilitarian perspective.

If your utility function does not penalize doing bad things as long as the net result is positive, you are likely to end up in a world full of utility monsters.

Replies from: Chrysophylax
comment by Chrysophylax · 2014-01-13T15:04:03.001Z · LW(p) · GW(p)

We live in a world full of utility monsters. We call them humans.

Replies from: Calvin
comment by Calvin · 2014-01-13T16:25:37.687Z · LW(p) · GW(p)

I am assuming that all the old sad hermits of this world are being systematically chopped up for spare parts granted to deserving and happy young people, while well-meaning utilitarians hide this sad truth from us, so that I don't become upset about those atrocities currently being committed in my name?

We are not even close to being utility monsters, and personally I know very few people whom I would consider actual utilitarians.

Replies from: Chrysophylax
comment by Chrysophylax · 2014-01-13T20:47:48.313Z · LW(p) · GW(p)

No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn't exist otherwise, but the same cannot be said for battery animals.

Replies from: Gunnar_Zarncke, Calvin
comment by Gunnar_Zarncke · 2014-01-15T12:12:15.511Z · LW(p) · GW(p)

But driving this reasoning to its logical conclusion you get a lot of strange results.

The premise is that humans are different from animals in that they know that they inflict suffering and are thus able to change it, and according to some ethics have to.

Actually this would be kind of a disadvantage of knowledge. There was a not-so-recent game-theoretic post about situations where, if you know more, you have to choose probabilistically to win on average, whereas those who don't know will always choose to defect and thus reap a higher benefit than you, except if they are too many.

So either

  • You would need to construct a world without animals, since animals suffer from each other, and humans know that and can modify the world to get rid of this.

  • Humans could alter themselves not to know that they inflict harm (or to consider harm unimportant, or to restrict empathy to humans...) and thus avoid the problem.

The key point, I think, is that a concept that rests on some aspect of being human is selected and taken to its 'logical conclusion' out of context, without regard for the fact that this concept is an evolved feature itself.

As there is no intrinsic moral fabric of the universe, we effectively force our evolved values on our environment and make it conform to them.

In this respect, excessive empathy (which is an aggregate driver behind ethics) is not much different from excessive greed, which also affects our environment; only we have already learned that the latter might be a bad idea.

The conclusion is that you also have to balance extreme empathy with reality.

ADDED: Just found this relevant link: http://lesswrong.com/lw/69w/utility_maximization_and_complex_values/

Replies from: Chrysophylax
comment by Chrysophylax · 2014-01-15T17:02:10.855Z · LW(p) · GW(p)

Robert Nozick:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.

My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don't identify with); they act as though the utility of a non-sapient animal is vastly smaller than the utility of a human and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-15T17:15:43.229Z · LW(p) · GW(p)

That's the way it looks. And this is probably part of being human.

I'd like to rephrase your answer as follows to drive home that ethics is most driven by empathy:

Humans mostly act as though they are utility monsters with respect to entities they have empathy with; they act as though the utility of entities they have no empathy toward is vastly smaller than the utility of those they relate to and so caring for them is always the best option.

comment by Calvin · 2014-01-13T21:22:46.456Z · LW(p) · GW(p)

In this case, I concur that your argument may be true if you include animals in your utility calculations.

While I do have reservations against causing suffering in humans, I don't explicitly include animals in my utility calculations, and while I don't support causing suffering for the sake of suffering, I don't have any ethical qualms about products made with animal fur, animal testing, or factory farming; so with regard to pigs, cows and chickens, I am a utility monster.

comment by Ishaan · 2014-01-13T22:07:30.791Z · LW(p) · GW(p)

This fails to fit the spirit of the problem, because it takes the preferences of currently living beings (the childless couple) into account.

A scenario that would capture the spirit of the problem is:

"Eve kills a moderately happy hermit who moderately prefers being alive, uses the money to create a child who is predisposed to be extremely happy as a hermit. She leaves the child on the island to live life as an extremely happy hermit who extremely prefers being alive." (The "hermit" portion of the problem is unnecessary now - you can replace hermit with "family" or "society" if you want.)

Compare with...

"Eve must choose between creating a moderately happy hermit who moderately prefers being alive OR an extremely happy hermit who extremely prefers being alive." (Again, hermit / family / society are interchangeable)

and

"Eve must choose between kliling a moderately happy hermit who moderately prefers being alive OR killing an extremely happy hermit who extremely prefers being alive."

comment by Lumifer · 2014-01-13T04:39:22.941Z · LW(p) · GW(p)

This looks very similar to the trolley problem, specifically the your-organs-are-needed version.

Replies from: army1987, adbge
comment by A1987dM (army1987) · 2014-01-13T17:05:57.743Z · LW(p) · GW(p)

The grounds to avoid discouraging people from walking into hospitals are way stronger than the grounds to avoid discouraging people from being hermits.

Replies from: Lumifer
comment by Lumifer · 2014-01-13T17:16:56.489Z · LW(p) · GW(p)

So you think that the only problem with the Transplant scenario is that it discourages people from using hospitals..?

Replies from: army1987, Eugine_Nier
comment by A1987dM (army1987) · 2014-01-13T17:50:21.286Z · LW(p) · GW(p)

Not the only one, but the deal-breaking one.

Replies from: Lumifer
comment by Lumifer · 2014-01-13T17:51:40.555Z · LW(p) · GW(p)

See this

comment by Eugine_Nier · 2014-01-16T05:32:16.036Z · LW(p) · GW(p)

Well, that's the standard rationalization utilitarians use to get out of that dilemma.

comment by adbge · 2014-01-13T04:42:57.792Z · LW(p) · GW(p)

I thought the same thing and went to dig up the original. Here it is:

One common illustration is called Transplant. Imagine that each of five patients in a hospital will die without an organ transplant. The patient in Room 1 needs a heart, the patient in Room 2 needs a liver, the patient in Room 3 needs a kidney, and so on. The person in Room 6 is in the hospital for routine tests. Luckily (for them, not for him!), his tissue is compatible with the other five patients, and a specialist is available to transplant his organs into the other five. This operation would save their lives, while killing the “donor”. There is no other way to save any of the other five patients (Foot 1966, Thomson 1976; compare related cases in Carritt 1947 and McCloskey 1965).

This is from the consequentialism page on the SEP, and it goes on to discuss modifications of utilitarianism that avoid biting the bullet (scalpel?) here.

Replies from: solipsist
comment by solipsist · 2014-01-13T05:06:37.993Z · LW(p) · GW(p)

This situation seems different to me for two reasons:

Off-topic way: Killing the "donor" is bad for similar reasons as 2-boxing the Newcomb problem is bad. If doctors killed random patients then patients wouldn't go to hospitals and medicine would collapse. IMO the supposedly utilitarian answer to the transplant problem is not really utilitarian.

On-topic way: The surgeons transplant organs to save lives, not to make babies. Saving lives and making lives seem very different to me, but I'm not sure why (or if) they differ from a utilitarian perspective.

Replies from: Viliam_Bur, Lumifer
comment by Viliam_Bur · 2014-01-13T08:25:00.196Z · LW(p) · GW(p)

Analogically, "killing a less happy person and conceiving a more happy one" may be wrong in a long term, by changing a society into one where people feel unsafe.

comment by Lumifer · 2014-01-13T17:41:05.662Z · LW(p) · GW(p)

If doctors killed random patients then patients wouldn't go to hospitals and medicine would collapse.

You're fixating on the unimportant parts.

Let me change the scenario slightly to fix your collapse-of-medicine problem: Once in a while the government consults its random number generator and selects, as needed, one or more people to be cut up for organs. The government is careful to keep the benefits (in lives or QALYs or whatever) higher than the costs. Any problems here?

Replies from: Moss_Piglet, Nornagest, army1987, solipsist
comment by Moss_Piglet · 2014-01-16T18:35:59.914Z · LW(p) · GW(p)

Any problems here?

That people are stupefyingly irrational about risks, especially in regards to medicine.

As an example: my paternal grandmother died of a treatable cancer less than a year before I was born, out of a fear of doctors which she had picked up from post-war propaganda about the T4 euthanasia program. Now this is a woman who was otherwise as healthy as they come, living in America decades after the fact, refusing to go in for treatment because she was worried some oncologist was going to declare a full-blooded German immigrant genetically impure and kill her to improve the Aryan race.

Now granted that's a rather extreme case, and she wasn't exactly stable on a good day from what I hear, but the point is that whatever bits of crazy we have get amplified completely out of proportion when medicine comes into it. People already get scared out of seeking treatment over rumors of mythical death panels or autism-causing vaccine programs, so you can only imagine how nutty they would get over even a small risk of actual government-sanctioned murder in hospitals.

(Not to mention that there are quite a lot of people with a perfectly legitimate reason to believe those RNGs might "just happen" to come up in their cases if they went in for treatment; it's not like American bureaucrats have never abused their power to target political enemies before.)

comment by Nornagest · 2014-01-16T18:16:32.682Z · LW(p) · GW(p)

The traditional objection to this sort of thing is that it creates perverse incentives: the government, or whichever body is managing our bystander/trolley tracks interface, benefits in the short term (smoother operations, can claim more people saved) if it interprets its numbers to maximize the number of warm bodies it has to work with, and the people in the parts pool benefit from the opposite. At minimum we'd expect that to introduce a certain amount of friction. In the worst case we could imagine it leading to a self-reinforcing establishment that firmly believes it's being duly careful even when independent data says otherwise: consider how the American War on Drugs has played out.

Replies from: Lumifer
comment by Lumifer · 2014-01-16T18:22:57.510Z · LW(p) · GW(p)

The traditional objection to this sort of thing is that it creates perverse incentives

That's a very weak objection given that the real world is full of perverse incentives and still manages to function, more or less, sorta-kinda...

comment by A1987dM (army1987) · 2014-01-16T18:01:30.436Z · LW(p) · GW(p)

Only if the Q in QALY takes into account the fact that people will be constantly worried they might be picked by the RNG.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-17T09:05:09.618Z · LW(p) · GW(p)

And of course, I wouldn't trust a government made of mere humans with such a determination, because power corrupts humans. A friendly artificial intelligence on the other hand...

comment by solipsist · 2014-01-13T18:44:11.234Z · LW(p) · GW(p)

Edited away an explanation so as not to take the last word

Any problems here?

Short answer, no.

I'd like to keep this thread focused on making a life vs. saving a life, not on arguments about utilitarianism in general. I realize there is much more to be said on this subject, but I propose we end discussion here.

comment by A1987dM (army1987) · 2014-01-13T17:02:59.255Z · LW(p) · GW(p)

Eve has increased QALYs, average happiness, and total happiness. Has Eve done a good thing? If not, why not?

Yes, but I wouldn't do that myself because of ethical injunctions.

comment by lmm · 2014-01-13T12:51:30.486Z · LW(p) · GW(p)

Cheap answer, but remember that it might be the true one: because utilitarianism doesn't accurately describe morality, and the right way to live is not by utilitarianism.

comment by Dias · 2014-01-13T04:38:08.984Z · LW(p) · GW(p)

Upvoted. Keep in mind that the answer might be "making a person is as good as killing a person is bad."

Here's a simple argument for why we can't be indifferent to creating people. Suppose we have three worlds:

  • Jon is alive and has 10 utils
  • Jon was never conceived
  • Jon is alive and has 20 utils

Assume we prefer Jon having 20 utils to Jon having 10 utils. Assume also we're indifferent between Jon having 10 utils and Jon never being conceived. Hence by transitivity we must prefer that Jon exist and have 20 utils to Jon's non-existence. So we should try to create Jon, if we think he'll have over 10 utils.
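
A toy sketch of that chain of premises in Python, with made-up numbers (treating the indifference premise as pinning "never conceived" at the same 10 utils on a single scale):

```python
# Toy encoding of the argument. All numbers are illustrative.
jon_at_10       = 10
jon_at_20       = 20
never_conceived = 10   # premise: we're indifferent between this and Jon at 10 utils

def prefer(a, b):
    """World a is preferred to world b under a simple one-dimensional util comparison."""
    return a > b

assert prefer(jon_at_20, jon_at_10)            # premise: more utils is better for Jon
assert not prefer(jon_at_10, never_conceived)  # premise: indifference, neither preferred
assert not prefer(never_conceived, jon_at_10)
assert prefer(jon_at_20, never_conceived)      # conclusion, forced by transitivity
print("Creating Jon is preferred whenever his expected utility exceeds 10.")
```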

Replies from: Gurkenglas
comment by Gurkenglas · 2014-01-13T09:38:42.374Z · LW(p) · GW(p)

Note that this kind of utilon calculation also equates your scenarios with ones where, magically, a whole bunch of people came into existence and then ceased to exist a few minutes ago, with lots of horrible torture, followed by amnesia, in between.

comment by Ishaan · 2014-01-13T21:54:53.967Z · LW(p) · GW(p)

Why isn't making a person as good as killing a person is bad

Possibly because...

I don't think contraception is tantamount to murder.

You have judged. It's possible that this is all there is to it... not killing people who do not want to die might just be a terminal value for humans, while creating people who would want to be created might not be a terminal value.

(Might. If you think it's an instrumental value in service of some other terminal goal, you should look for that goal.)

comment by Arran_Stirton · 2014-01-14T21:26:32.455Z · LW(p) · GW(p)

As far as I can tell, killing/not-killing a person isn't the same as not-making/making a person. I think this becomes more apparent if you consider the universe as timeless.

This is the thought experiment that comes to mind. It's worth noting that all that follows depends heavily on how one calculates things.

Comparing the universes where we choose to make Jon to the one where we choose not to:

  • Universe A: Jon made; Jon lives a fulfilling life with global net utility of 2u.
  • Universe A': Jon not-made; Jon doesn't exist in this universe so the amount of utility he has is undefined.

Comparing the universes where we choose to kill an already made Jon to the one where we choose not to:

  • Universe B: Jon not killed; Jon lives a fulfilling life with global net utility of 2u.
  • Universe B': Jon killed; Jon's life is cut short, his life has a global net utility of u.

The marginal utility for Jon in Universe B vs B' is easy to calculate: (2u - u) gives a total marginal utility (i.e. gain in utility) of u from choosing not to kill Jon over killing him.

However, the marginal utility for Jon in Universe A vs A' is undefined (in the same sense that 1/0 is undefined). As Jon doesn't exist in universe A', it is impossible to assign a value to Utility_Jon_A'; as a result, our marginal (Utility_Jon_A - Utility_Jon_A') is equal to (2u - [an undefined value]). As such, the marginal utility lost or gained by choosing between universes A and A' is undefined.

It follows from this that the marginal utility between any universe and A' is undefined. In other words our rules for deciding which universe is better for Jon break down in this case.
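
Here's a minimal sketch of that bookkeeping in Python (the 2u/u values come from the thought experiment above; using None to stand in for "undefined" is just my own illustration, not any standard formalism):

```python
# Jon's utility in each universe; None marks "Jon does not exist here", which
# is treated as undefined rather than zero.
U = 1.0  # one unit of utility, value arbitrary

utility = {
    "A":  2 * U,   # Jon made, fulfilling life
    "A'": None,    # Jon never made: his utility is undefined
    "B":  2 * U,   # Jon not killed
    "B'": 1 * U,   # Jon killed partway through his life
}

def marginal(u1, u2):
    """Difference in Jon's utility between two universes, or None if undefined."""
    if u1 is None or u2 is None:
        return None
    return u1 - u2

print(marginal(utility["B"], utility["B'"]))  # 1.0  -> not killing Jon gains u
print(marginal(utility["A"], utility["A'"]))  # None -> the comparison is undefined
```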

I myself (probably) don't have a preference for creating universes where I exist over ones where I don't. However I'm sure that I don't want this current existence of me to terminate.

So personally I choose to maximise the utility of people who already exist over creating more people.

Eliezer explains here why bringing people into existence isn't all that great even if someone existing over not existing has a defined(and positive) marginal utility.

comment by Kaj_Sotala · 2014-01-15T06:28:27.897Z · LW(p) · GW(p)

I created a new article about this.

comment by Douglas_Knight · 2014-01-13T16:44:51.014Z · LW(p) · GW(p)

Here are two related differences between a child and an adult. (1) It is very expensive to turn a child into an adult. (2) An adult is highly specific and not replaceable, while a fetus has a lot of subjective uncertainty and is fairly easily duplicated within that uncertainty. Uploading is relevant to both of these points.

comment by blacktrance · 2014-01-13T03:33:57.354Z · LW(p) · GW(p)

Because killing a person deprives them of positive experiences that they otherwise would have had, and they prefer to have them. But a nonexistent being doesn't have preferences.

Replies from: gwern
comment by gwern · 2014-01-13T04:23:26.163Z · LW(p) · GW(p)

Once you've killed them and they've become nonexistent, then they don't have preferences either.

Replies from: pragmatist, blacktrance, Leonhart, Pablo_Stafforini
comment by pragmatist · 2014-01-13T05:38:06.229Z · LW(p) · GW(p)

Presumably what should matter (assuming preference utilitarianism) when we evaluate an act are the preferences that exist at (or just before) the time of commission of the act. If that's right, then the non-existence of those preferences after the act is performed is irrelevant.

The Spanish Inquisition isn't exculpated because its victims' preferences no longer exist. They existed at the time they were being tortured, and that's what should matter.

Replies from: lmm
comment by lmm · 2014-01-13T22:54:44.911Z · LW(p) · GW(p)

So it's fine to do as much environmental damage as we like, as long as we're confident the effects won't be felt until after everyone currently alive is dead?

Replies from: Nornagest
comment by Nornagest · 2014-01-13T22:58:16.183Z · LW(p) · GW(p)

I'd presume that many people's preferences include terms for the expected well-being of their descendants.

Replies from: lmm
comment by lmm · 2014-01-15T12:52:36.806Z · LW(p) · GW(p)

That's a get-out-of-utilitarianism-free card. Many people's preferences include terms for acting in accordance with their own nonutilitarian moral systems.

Replies from: Nornagest
comment by Nornagest · 2014-01-15T21:26:34.267Z · LW(p) · GW(p)

Preference utilitarianism isn't a tool for deciding what you should prefer, it's a tool for deciding how you should act. It's entirely consistent to prefer options which involve you acting according to whim or some nonutilitarian system (example: going to the pub), yet for it to dictate -- after taking into account the preferences of others -- that you should in fact do something else (example: taking care of your sick grandmother).

There may be some confusion here, though. I normally think of preferences in this context as being evaluated over future states of the world, i.e. consequences, not over possible actions; it sounds like you're thinking more in terms of the latter.

Replies from: lmm
comment by lmm · 2014-01-16T01:30:05.969Z · LW(p) · GW(p)

Yeah, I sometimes have trouble thinking like a utilitarian.

If we're just looking at future states of the world, then consider four possible futures: your (isolated hermit) granddaughter exists and has a happy life, your granddaughter exists and has a miserable life, your granddaughter does not exist because she died, your granddaughter does not exist because she was never born.

It seems to me that if utilitarianism is to mean anything then the utility of the last two options should be the same - if we're allowed to assign utility values to the history of whether she was born and died, even though both possible paths result in the same world-state, then it would be equally valid to assign different utilities to different actions that people took even if they turned out the same, and e.g. virtue ethics would qualify as a particular kind of utilitarianism.

If we accept that the utility of the last two options is the same, then we have an awkward dilemma. Either this utility value is higher than option 2 - meaning that if someone's life is sufficiently miserable, it's better to kill them than to allow them to continue living. Or it's lower, meaning that it's always better to give birth to someone than not. Worse, if your first granddaughter was going to be miserable and your second would be happy, it's a morally good action if you can do something that kills your first granddaughter but gives rise to the birth of your second. It's weirdly discontinuous to say that your first granddaughter's preferences become valid only once she's born - does that mean that killing her after she's born is a bad thing, but setting up, before she's born, some Rube Goldberg contraption that will kill her after she's born is a good thing?

Replies from: pragmatist
comment by pragmatist · 2014-01-16T06:50:20.823Z · LW(p) · GW(p)

It seems to me that if utilitarianism is to mean anything then the utility of the last two options should be the same - if we're allowed to assign utility values to the history of whether she was born and died, even though both possible paths result in the same world-state, then it would be equally valid to assign different utilities to different actions that people took even if they turned out the same, and e.g. virtue ethics would qualify as a particular kind of utilitarianism.

Whatever action I take right now, eventually the macroscopic state of the universe is going to look the same (heat death of the universe). Does this mean the utilitarian is committed to saying that all actions available to me are morally equivalent? I don't think so. Even though the (macroscopic) end state is the same, the way the universe gets there will differ, depending on my actions, and that matters from the perspective of preference utilitarianism.

Replies from: lmm
comment by lmm · 2014-01-17T00:03:36.943Z · LW(p) · GW(p)

What, then, would you say is the distinction between a utilitarian and a virtue ethicist? Are they potentially just different formulations of the same idea? Are there any moral systems that definitely don't qualify as preference utilitarianism, if we allow this kind of distinction in a utility function?

Replies from: pragmatist
comment by pragmatist · 2014-01-17T04:19:40.556Z · LW(p) · GW(p)

Do you maybe mean the difference between utilitarianism and deontological theories? Virtue ethics is quite obviously different, because it says the business of moral theory is to evaluate character traits rather than acts.

Deontology differs from utilitarianism (and consequentialism more generally) because acts are judged independently of their consequences. An act can be immoral even if it unambiguously leads to a better state of affairs for everyone (a state of affairs where everyone's preferences are better satisfied and everyone is happier, say), or even if it has absolutely no impact on anyone's life at any time. Consequentialism doesn't allow this, even if it allows distinctions between different macroscopic histories that lead to the same macroscopic outcome.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-22T05:41:53.363Z · LW(p) · GW(p)

Deontology differs from utilitarianism (and consequentialism more generally) because acts are judged independently of their consequences.

No, deontologists are simply allowed to consider factors other than consequences.

comment by blacktrance · 2014-01-13T05:10:56.867Z · LW(p) · GW(p)

That's true, but they have preferences before you kill them. In the case of contraception, there is no being to have ever had preferences.

comment by Leonhart · 2014-01-13T22:25:11.053Z · LW(p) · GW(p)

They never do "become nonexistent". You just happen to have found one of their edges.

comment by Pablo (Pablo_Stafforini) · 2014-01-13T05:02:19.809Z · LW(p) · GW(p)

Yes, but there may be a moral difference between frustrating a preference that once existed, and causing a preference not to be formed at all. See my reply to the original question.

comment by hairyfigment · 2014-01-13T07:34:24.864Z · LW(p) · GW(p)

Even within pleasure- or QALY-utilitarianism, which seems technically wrong, you can avoid this by recognizing that those possible people probably exist regardless in some timeline or other. I think. We don't understand this very well. But it looks like you want lots of people to follow the rule of making their timelines good places to live (for those who've already entered the timeline). Which does appear to save utilitarianism's use as a rule of thumb.

comment by Manfred · 2014-01-13T04:25:40.594Z · LW(p) · GW(p)

From a classical utilitarian perspective, yeah, it's pretty much a wash, at least relative to non-fatal crimes that cause similar suffering.

However, around here, "utilitarian" is usually meant as "consistent consequentialism." In that frame we can appeal to motives like "I don't want to live in a society with lots of murder, so it's extra bad."

comment by DanielLC · 2014-01-14T07:32:34.478Z · LW(p) · GW(p)

It takes a lot of resources to raise someone. If you're talking about getting an abortion, it's not a big difference, but if someone has already invested enough resources to raise a child, and then you kill them, that's a lot of waste.

comment by hyporational · 2014-01-13T06:40:42.601Z · LW(p) · GW(p)

Isn't negative utility usually more motivating to people anyway? This seems like a special case of that, if we don't count the important complications of killing a person that pragmatist pointed out.

comment by Lumifer · 2014-01-13T04:03:59.906Z · LW(p) · GW(p)

Making a person and unmaking a person seem like utilitarian inverses

No, because time is directional.

comment by Halfwitz · 2014-01-14T01:38:50.380Z · LW(p) · GW(p)

How much does a genius cost? MIRI seems intent on hiring a team of geniuses. I’m curious about what the payroll would look like. One of the conditions of Thiel’s donations was that no one employed by MIRI can make more than one hundred thousand dollars a year. Is this high enough? One of the reasons I ask is that I just read a story about how Google pays an extremely talented programmer over 3 million dollars per year - doesn't MIRI also need extremely talented programmers? Do they expect the most talented to be more likely to accept a lower salary for a good cause?

Replies from: ChristianKl, ChrisHallquist, Dan_Weinand, DanArmak, D_Alex, Chrysophylax
comment by ChristianKl · 2014-01-14T11:47:01.971Z · LW(p) · GW(p)

Do they expect the most talented to be more likely to accept a lower salary for a good cause?

Yes. Anyone with the necessary mindset of thinking that AI is the most important issue in the world will accept a lower salary than what's possible elsewhere in the market.

I don't know whether MIRI has an interest in hiring people who don't have that moral framework.

comment by ChrisHallquist · 2014-01-15T06:51:31.070Z · LW(p) · GW(p)

Highly variable with skills, experience, and how badly they want the job. I bet there are some brilliant adjunct professors out there effectively making minimum wage because they really wanted to be professors. OTOH, I bet that Google programmer isn't just being paid for talent, but for specific skills and experience.

comment by Dan_Weinand · 2014-01-14T06:34:57.560Z · LW(p) · GW(p)

Two notes: First, the term "genius" is difficult to define. Someone may be a "genius" at understanding the sociology of sub-Saharan African tribes, but this skill will obviously command a much lower market value compared to someone who is a "genius" as a chief executive officer of a large company. A more precise definition of genius will narrow the range of costs per year.

Second, and related to the first, MIRI is (to the extent of my knowledge) currently focusing on mathematics and formal logic research rather than programming. This makes recruiting a team of "geniuses" much cheaper. While skilled mathematicians can attract quite strong salaries, highly skilled programmers can demand significantly more. It seems the most common competing job for MIRI's researchers would be that of a mathematics professor (which has a median salary of ~$88,000). Based on this, MIRI could likely hire high quality mathematicians while offering them relatively competitive salaries.

comment by DanArmak · 2014-01-18T15:04:59.492Z · LW(p) · GW(p)

Many such geniuses (top intellectual performers in fields where they can out-perform the median by several orders of magnitude) choose their work not just on the basis of payment, but on what they work on, where, how, and with whom (preferring the company of other top performers).

If MIRI were to compete with Google at hiring programmers, I expect money would be important but not overwhelmingly so. Google lets you work with many other top people in your field, develop and use cool new tech, and have big resources for your projects, and it provides many non-monetary workplace benefits. MIRI lets you contribute to existential risk reduction, work with rationalists, etc.

comment by D_Alex · 2014-02-03T05:32:17.121Z · LW(p) · GW(p)

From some WSJ article:

The setting of Einstein's initial salary at Princeton illustrates his humility and attitude toward wealth. According to "Albert Einstein: Creator & Rebel" by Banesh Hoffmann, (1972), the 1932 negotiations went as follows: "[Abraham] Flexner invited [Einstein] to name his own salary. A few days later Einstein wrote to suggest what, in view of his needs and . . . fame, he thought was a reasonable figure. Flexner was dismayed. . . . He could not possibly recruit outstanding American scholars at such a salary. . . . To Flexner, though perhaps not to Einstein, it was unthinkable [that other scholars' salaries would exceed Einstein's.] This being explained, Einstein reluctantly consented to a much higher figure, and he left the detailed negotiations to his wife."

The reasonable figure that Einstein suggested was the modest sum of $3,000 [about $46,800 in today's dollars]. Flexner upped it to $10,000 and offered Einstein an annual pension of $7,500, which he refused as "too generous," so it was reduced to $6,000. When the Institute hired a mathematician at an annual salary of $15,000, with an annual pension of $8,000, Einstein's compensation was increased to those amounts.

comment by Chrysophylax · 2014-01-14T12:03:18.046Z · LW(p) · GW(p)

Eliezer once tried to auction off a day of his time, but I can't find it on eBay by Googling.

On an unrelated note, the top Google result for "eliezer yudkowsky " (note the space) is "eliezer yudkowsky okcupid". "eliezer yudkowsky harry potter" is ninth, while HPMOR, LessWrong, CFAR and MIRI don't make the top ten.

Replies from: kalium, drethelin, shminux
comment by kalium · 2014-01-15T02:13:35.882Z · LW(p) · GW(p)

I suspect more of the price comes from his reputation than his intelligence.

comment by drethelin · 2014-01-14T22:14:23.543Z · LW(p) · GW(p)

I believe Eliezer started the bidding at something like $4,000.

Replies from: DanArmak
comment by DanArmak · 2014-01-18T14:58:53.215Z · LW(p) · GW(p)

But where did it end?

Replies from: drethelin
comment by drethelin · 2014-01-19T21:28:53.797Z · LW(p) · GW(p)

There were no bids

comment by Shmi (shminux) · 2014-01-14T22:54:15.862Z · LW(p) · GW(p)

Actually, his fb profile comes up first in instant search:

Replies from: TheWakalix
comment by TheWakalix · 2018-10-24T01:13:00.067Z · LW(p) · GW(p)

That's not search, that's history.

comment by Kaj_Sotala · 2014-01-15T16:37:01.144Z · LW(p) · GW(p)

Suppose someone has a preference to have sex each evening, and is in a relationship with someone with a similar level of sexual desire. So each evening they get into bed, undress, make love, get dressed again, get out of bed. Repeat the next evening.

How is this different from having exploitable circular preferences? After all, the people involved clearly have cycles in their preferences - first they prefer getting undressed to not having sex, after which they prefer getting dressed to having (more) sex. And they're "clearly" being the victims of a Dutch Book, too - they keep repeating this set of trades every evening, and losing lots of time because of that.

To me this seems to suggest that having circular preferences isn't necessarily the bad thing that it's often made out to be - after all, the people in question probably wouldn't say that they're being exploited. But maybe I'm missing something.

Replies from: Alejandro1
comment by Alejandro1 · 2014-01-15T17:21:36.845Z · LW(p) · GW(p)

The circular preferences that go against the axioms of utility theory, and which are Dutch book exploitable, are not of the kind "I prefer A to B at time t1 and B to A at time t2", like the ones of your example. They are more like "I prefer A to B and B to C and C to A, all at the same time".

The couple, if they had to pay a third party a cent to get undressed and then a cent to get dressed, would probably do it and consider it worth it---they end up two cents short but having had an enjoyable experience. Nothing irrational about that. To someone with the other "bad" kind of circular preferences, we can offer a sequence of trades (first A for B and a cent, then C for A and a cent, then B for C and a cent) after which they end up three cents short but otherwise exactly as they started (they didn't actually obtain enjoyable experiences, they made all the trades before anything happened). It is difficult to consider this rational.
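
To make the "bad" kind concrete, here's a minimal money-pump sketch in Python (the goods, the starting holding and the one-cent fee are all made up): an agent that simultaneously prefers A to B, B to C and C to A accepts every trade in the cycle and ends up holding exactly what it started with, three cents poorer.

```python
# Money-pump sketch for simultaneously circular preferences A > B, B > C, C > A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means "x is preferred to y"

def accepts_trade(get, give):
    # The agent hands over `give` plus one cent whenever it prefers `get` to `give`.
    return (get, give) in prefers

holding, cents_paid = "B", 0
for offered in ["A", "C", "B"]:   # offer A for B, then C for A, then B for C
    if accepts_trade(offered, holding):
        holding = offered
        cents_paid += 1

print(holding, cents_paid)  # -> B 3: same good as at the start, three cents poorer
```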

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-15T18:50:10.351Z · LW(p) · GW(p)

Okay. But that still makes it sound like there would almost never be actual real-life cases where you could clearly say that the person exhibited circular preferences? At least I can't think of any real-life scenario that would be an example of the way you define "bad" circular preferences.

Replies from: ChristianKl, asr, Douglas_Knight, Alejandro1
comment by ChristianKl · 2014-01-16T14:59:17.601Z · LW(p) · GW(p)

I think there are plenty of cases where people prefer sitting in front of their computer today over going to the fitness studio today, while preferring going to the fitness studio tomorrow over sitting in front of their computer tomorrow.

Changing the near frame to a far frame changes preferences. I know that's not an exact example of a Dutch Book, but it illustrates the principle that framing matters.

I don't think it's hard to get people into a laboratory and offer them different food choices to produce a case where a person prefers A to B, C to A, and B to C.

I think it's difficult to find real-life cases that fit the general model because we don't use the idea as a phenomenal primitive, and therefore we don't perceive those situations as ordinary situations where people act normally, but rather as exceptions where people are being weird.

comment by asr · 2014-01-29T01:57:12.482Z · LW(p) · GW(p)

I feel like it happens to me in practice routinely. I see options A, B, C and D and I keep oscillating between them. I am not indifferent; I perceive pairwise differences but can't find a global optimum. This can happen in commonplace situations, e.g., when choosing between brands of pasta sauce or somesuch. And I'll spend several minutes agonizing before finally picking one.

I had the impression this happened to a lot of people.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-01-29T07:24:33.048Z · LW(p) · GW(p)

That looks like noisy comparisons being made on near indistinguishable things. (Life tip: if they're too difficult to distinguish, it hardly matters which you choose. Just pick one already.)

comment by Douglas_Knight · 2014-01-29T01:05:45.271Z · LW(p) · GW(p)

I can't find it anymore, but years ago I found on LW a recording of an interview with someone who had exhibited circular preferences in an experiment.

comment by Alejandro1 · 2014-01-15T22:50:42.944Z · LW(p) · GW(p)

The Allais paradox is close to being one such example, though I don't know if it can be called "real-life". There may be marketing schemes that exploit the same biases.

A philosophical case where I feel my naive preferences are circular is torture vs. dust specks. As I said here:

I prefer N years of torture for X people to N years minus 1 second of torture for 1000X people, and any time of torture for X people over the same time of very slightly less painful torture for 1000X people, and yet I prefer a very slight momentary pain for any number of people, however large, to 50 years of torture for one person.

If I ever reverse the latter preference, it will be because I will have been convinced by theoretical/abstract considerations that non transitive preferences are bad (and because I trust the other preferences in the cycle more), but I don't think I will ever introspect it as a direct preference by itself.

comment by [deleted] · 2014-01-13T16:04:56.007Z · LW(p) · GW(p)

On the Neil Degrasse Tyson Q&A on reddit, someone asked: "Since time slows relative to the speed of light, does this mean that photons are essentially not moving through time at all?"

Tyson responded "yes. Precisely. Which means ----- are you seated?Photons have no ticking time at all, which means, as far as they are concerned, they are absorbed the instant they are emitted, even if the distance traveled is across the universe itself."

Is this true? I find it confusing. Does this mean that a photon emitted at location A at t0 is absorbed at location B at t0, such that it's at two places at once? In what sense does the photon 'travel' then? Or is the thought that the distance traveled, as well as the time, goes to zero?

Replies from: Anatoly_Vorobey, pragmatist, lmm, gjm, Plasmon, DanielLC, Alejandro1, Luke_A_Somers
comment by Anatoly_Vorobey · 2014-01-13T21:27:17.719Z · LW(p) · GW(p)

There are no photons. There, you see? Problem solved.

(no, the author of the article is not a crank; he's a Nobel physicist, and everything he says about the laws of physics is mainstream)

Replies from: satt, None
comment by satt · 2014-01-14T22:04:22.191Z · LW(p) · GW(p)

There are no photons. There, you see? Problem solved.

Problem evaded. Banning a word fails to resolve the underlying physical question. Substitute "wavepackets of light" for "photons"; what then?

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2014-01-14T22:22:12.322Z · LW(p) · GW(p)

I know, I was joking. And it was a good opportunity to link to this (genuinely interesting) paper.

... well, mostly joking. There's a kernel of truth there. "There are no photons" says more than just banning a word. "Wavepackets of light" don't exist either. There's just the electromagnetic field, its intensity changes with time, and the change propagates in space. Looking at it like this may help understand the other responses to the question (which are all correct).

When you think of a photon as a particle flying in space, it's hard to shake off the feeling that you somehow ought to be able to attach yourself to it and come along for the ride, or to imagine how the particle itself "feels" about its existence, how its inner time passes. And then the answer that for a photon, time doesn't pass at all, feels weird and counter-intuitive. If you tell yourself there's no particle, just a bunch of numbers everywhere in space (expressing the EM field) and a slight change in those numbers travels down the line, it may be easier to process. A change is not an object to strap yourself to. It doesn't have "inner time".

Replies from: satt
comment by satt · 2014-01-15T00:59:15.429Z · LW(p) · GW(p)

I feel I should let this go, and yet...

"Wavepackets of light" don't exist either.

But we can make them! On demand, even.

There's just the electromagnetic field, its intensity changes with time, and the change propagates in space.

By this argument, ocean waves don't exist either. There's only the sea, its height changes with time, and the change propagates in space.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-01-15T15:21:00.121Z · LW(p) · GW(p)

By this argument, ocean waves don't exist either. There's only the sea, its height changes with time, and the change propagates in space.

You say that as a reductio ad absurdum, but it is good for some purposes. Anatoly didn't claim that one should deny photons for all purposes, but only for the purpose of unasking the original question.

Replies from: satt
comment by satt · 2014-01-16T23:07:02.217Z · LW(p) · GW(p)

Anatoly didn't claim that one should deny photons for all purposes, but only for the purpose of unasking the original question.

In this case, unasking the original question is basically an evasion, though, isn't it?

Denying photons may enable you to unask hen's literal question, or the unnamed Reddit poster's literal question, but it doesn't address the underlying physical question they're driving at: "if observer P travels a distance x at constant speed v in observer Q's rest frame, does the elapsed time in P's rest frame during that journey vanish in the limit where v tends to c?"

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-01-16T23:31:20.969Z · LW(p) · GW(p)

I reject the claim that your rephrasing is the "real" question being asked. By rephrasing the question, you are rejecting it just as much as Anatoly. I think it is more accurate to say that you evade the question, while he is up front about rejecting it.

In fact, I think your answer is better and probably it is generally better to rephrase problematic questions to answerable questions before explaining that they are problematic, but the latter is part of a complete answer and I think Anatoly is correct in how he addresses it.

Replies from: satt
comment by satt · 2014-01-17T00:23:16.146Z · LW(p) · GW(p)

I reject the claim that your rephrasing is the "real" question being asked.

That multiple different people automatically treated hen's question as if it were my rephrasing backs me up on this one, I reckon.

By rephrasing the question, you are rejecting it just as much as Anatoly. I think it is more accurate to say that you evade the question, while he is up front about rejecting it.

Rephrasing a question can be the first step to confronting it head-on rather than rejecting it. If a tourist, looking for the nearest train station, wandered up to me and asked, "where station is the?", and I rearranged their question to the parseable "where is the station?" and answered that, I wouldn't say I rejected or evaded their query.

comment by [deleted] · 2014-01-14T02:52:20.367Z · LW(p) · GW(p)

Oh. Great!

comment by pragmatist · 2014-01-14T12:17:47.550Z · LW(p) · GW(p)

Other people have explained this pretty well already, but here's a non-rigorous heuristic that might help. What follows is not technically precise, but I think it captures an important and helpful intuition.

In relativity, space and time are replaced by a single four-dimensional space-time. Instead of thinking of things moving through space and moving through time separately, think of them as moving through space-time. And it turns out that every single (non-accelerated) object travels through space-time at the exact same rate, call it c.

Now, when you construct a frame of reference, you're essentially separating out space and time artificially. Consequently, you're also separating an object's motion through space-time into motion through space and motion through time. Since every object moves through space-time at the same rate, when we separate out spatial and temporal motion, the faster the object travels through space the slower it will be traveling through time. The total speed, adding up speed through space and speed through time, has to equal the constant c.

So an object at rest in a particular frame of reference has all its motion along the temporal axis, and no motion at all along the spatial axes. It's traveling through time at speed c and it isn't traveling through space at all. If this object starts moving, then some of the temporal motion is converted to spatial motion. Its speed through space increases, and its speed through time decreases correspondingly, so that the motion through space-time as a whole remains constant at c. This is the source of time dilation in relativity (as seen in the twin paradox) - moving objects move through time more slowly than stationary objects, or to put it another way, time flows slower for moving objects.

Of course, the limit of this is when the object's entire motion through space-time is directed along the spatial axes, and none of it is directed along the temporal axes. In this case, the object will move through space at c, which turns out to be the speed of light, and it won't move through time at all. Time will stand still for the object. This is what's going on with photons.

From this point of view, there's nothing all that weird about a photon's motion. From the space-time perspective, which after all is the fundamental perspective in relativity, it is moving pretty much exactly like any other object. It's only our weird habit of treating space and time as extremely different that makes the entirely spatial motion of a photon seem so bizarre.
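
If it helps to see the bookkeeping, here's a small numerical check of the precise version of that heuristic, in units where c = 1 (with the caveat that the exact invariant combines the two "speeds" with a minus sign rather than a straight sum): the four-velocity components (gamma*c, gamma*v) always have Minkowski length exactly c, whatever the ordinary speed v is.

```python
import math

c = 1.0  # work in units where c = 1

for v in [0.0, 0.5, 0.9, 0.999]:
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    speed_through_time = gamma * c    # c * dt/dtau
    speed_through_space = gamma * v   # dx/dtau
    norm = math.sqrt(speed_through_time**2 - speed_through_space**2)
    print(f"v = {v}: Minkowski norm of four-velocity = {norm:.6f}")  # always 1.0, i.e. c
```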

Replies from: None
comment by [deleted] · 2014-01-15T03:03:26.218Z · LW(p) · GW(p)

That is helpful, and interesting, though I think I remain a bit confused about the idea of 'moving through time' and especially 'moving through time quickly/slowly'. Does this imply some sort of meta-time, in which we can measure the speed at which one travels through time?

And I think I still have my original question: if a photon travels through space at c, and therefore doesn't travel through time at all, is the photon at its starting and its final position at the same moment? If so, in what sense did it travel through space at all?

Replies from: Alejandro1, pragmatist
comment by Alejandro1 · 2014-01-15T03:24:52.306Z · LW(p) · GW(p)

[Is] the photon at its starting and its final position at the same moment?

At the same moment with respect to whom? That is the question one must always ask in relativity.

The answer is: no, emission and arrival do not occur at the same moment with respect to any actual reference frame. However, as we consider an abstract sequence of reference frames that move faster and faster approaching speed c in the same direction as the photon, we find that the time between the emission and the reception is shorter and shorter.

comment by pragmatist · 2014-01-15T04:05:31.597Z · LW(p) · GW(p)

Does this imply some sort of meta-time, in which we can measure the speed at which one travels through time?

No it doesn't. Remember, in relativity, time is relative to a frame of reference. So when I talk about a moving object traveling slowly through time, I'm not relativizing its time to some meta-time, I'm relativizing time as measured by that object (say by a clock carried by the object) to time as measured by me (someone who is stationary in the relevant frame of reference). So an object moving slowly through time (relative to my frame of reference) is simply an object whose clock ticks appear to me to be more widely spaced than my clock ticks. In the limit, if a photon could carry a clock, there would appear to me to be an infinite amount of time between its ticks.

I will admit that I was using a bit of expository license when I talked about all objects "moving through space-time" at the constant rate c. While one can make sense of moving through space and moving through time, moving through space-time doesn't exactly make sense. You can replace it with this slightly less attractive paraphrase, if you like: "If you add up a non-accelerating object's velocity through space and its (appropriately defined) rate of motion through time, for any inertial frame of reference, you will get a constant."

And I think I still have my original question: if a photon travels through space at c, and therefore doesn't travel through time at all, is the photon at its starting and its final position at the same moment? If so, in what sense did it travel through space at all?

Again, it's important to realize there are many different "time" parameters in relativity, one for each differently moving object. Also, whether two events are simultaneous is relative to a frame of reference.

Relative to my time parameter (the parameter for the frame in which I am at rest), the photon is moving through space, and it takes some amount of (my) time to get from point A to point B. Relative to its own time parameter, though, the photon is at point A and point B (and every other point on its path) simultaneously. Since I'll never travel as fast as a photon, it's kind of pointless for me to use its frame of reference. I should use a frame adapted to my state of motion, according to which the photon does indeed travel in non-zero time from place to place.

Again, this is all pretty non-technical and not entirely precise, but I think it's good enough to get an intuitive sense of what's going on. If you're interested in developing a more technical understanding without having to trudge through a mathy textbook, I recommend John Norton's Einstein for Everyone, especially chapters 10-12. One significant simplification I have been employing is talking about a photon's frame of reference. There is actually no such thing. One can't construct an ordinary frame of reference adapted to a photon's motion (partly because there is no meaningful distinction between space and time for a photon).

comment by lmm · 2014-01-13T23:05:52.883Z · LW(p) · GW(p)

Does this mean that a photon emitted at location A at t0 is absorbed at location B at t0, such that it's at two places at once?

In the photon's own subjective experience? Yes. (Not that that's possible, so this statement might not make sense). But as another commenter said, certainly the limit of this statement is true: as your speed moving from point A to point B approaches the speed of light, the subjective time you experience between the time when you're at A and the time when you're at B approaches 0. And the distance does indeed shrink, due to the Lorentz length contraction.

In what sense does the photon 'travel' then?

It travels in the sense that an external observer observes it in different places at different times. For a subjective observer on the photon... I don't know. No time passes, and the universe shrinks to a flat plane. Maybe the takeaway here is just that observers can't reach the speed of light.

comment by gjm · 2014-01-13T16:31:59.748Z · LW(p) · GW(p)

Not quite either of those.

The first thing to say is that "at t0" means different things to different observers. Observers moving in different ways experience time differently and, e.g., count different sets of spacetime points as simultaneous.

There is a relativistic notion of "interval" which generalizes the conventional notions of distance and time-interval between two points of spacetime. It's actually more convenient to work with the square of the interval. Let's call this I.

If you pick two points that are spatially separated but "simultaneous" according to some observer, then I>0 and sqrt(I) is the shortest possible distance between those points for an observer who sees them as simultaneous. The separation between the points is said to be "spacelike". Nothing that happens at one of these points can influence what happens at the other; they're "too far away in space and too close in time" for anything to get between them.

If you pick two points that are "in the same place but at different times" for some observer, then I<0 and sqrt(-I) is the minimum time that such an observer can experience between visiting them. The separation between the points is said to be "timelike". An influence can propagate, slower than the speed of light, from one to the other. They're "too far away in time and too close in space" for any observer to see them as simultaneous.

And, finally, exactly on the edge between these you have the case where I=0. That means that light can travel from one of the spacetime points to the other. In this case, an observer travelling slower than light can get from one to the other, but can do so arbitrarily quickly (from their point of view) by travelling very fast; and while no observer can see the two points as simultaneous, you can get arbitrarily close to that by (again) travelling very fast.

Light, of course, only ever travels at the speed of light (you might have heard something different about light travelling through a medium such as glass, but ignore that), which means that it travels along paths where I=0 everywhere. To an (impossible) observer sitting on a photon, no time ever passes; every spacetime point the photon passes through is simultaneous.

So: does the distance as well as the time go to 0? Not quite. Neither distance nor time makes sense on its own in a relativistic universe. The thing that does make sense is kinda-sorta a bit like "distance minus time" (and more like sqrt(distance-squared minus time-squared)), and that is 0 for any two points in spacetime that are visited by the same photon.

(Pedantic notes: 1. There are two possible sign conventions for the square of the interval. You can say that I>0 for spacelike separations, or say that I>0 for timelike separations. I arbitrarily chose the first of these. 2. There may be multiple paths that light can take between two spacetime points. They need not actually have the same "length" (i.e., interval). Strictly, "interval" is defined only locally; then, for a particular path, you can integrate it up to get the overall interval. 3. In the case of light propagating through a medium other than vacuum, what actually happens involves electrons as well as photons and it isn't just a matter of a photon going from A to B. Whenever a photon goes from A to B it does it, by whatever path it does, at the speed of light.)
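
For concreteness, here's a tiny sketch in Python (units where c = 1; the event coordinates are made up) that computes I with the first sign convention and classifies the separation:

```python
# Squared interval between two events (t, x) in units where c = 1,
# using the convention I > 0 for spacelike separations.
def interval_squared(event_a, event_b, c=1.0):
    (t1, x1), (t2, x2) = event_a, event_b
    return (x2 - x1)**2 - c**2 * (t2 - t1)**2

def classify(i_squared):
    if i_squared > 0:
        return "spacelike: no influence can pass between them"
    if i_squared < 0:
        return "timelike: a slower-than-light influence can pass between them"
    return "lightlike (I = 0): exactly the separation a photon traverses"

events = {
    "simultaneous, 3 apart in space": ((0.0, 0.0), (0.0, 3.0)),
    "same place, 2 apart in time":    ((0.0, 0.0), (2.0, 0.0)),
    "photon path, 5 across":          ((0.0, 0.0), (5.0, 5.0)),
}
for name, (a, b) in events.items():
    print(name, "->", classify(interval_squared(a, b)))
```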

Replies from: None
comment by [deleted] · 2014-01-14T03:07:20.077Z · LW(p) · GW(p)

Thanks, that was very helpful, especially the explanation of timelike and spacelike relations.

comment by Plasmon · 2014-01-13T16:16:40.404Z · LW(p) · GW(p)

The Lorentz factor diverges when the speed approaches c. Because of length contraction and time dilation, both the distance and the time will appear to be 0, from the "point of view of the photon".

(the photon is "in 2 places at once" only from the point of view of the photon, and it doesn't think these places are different, after all they are in the same place! This among other things is why the notion of an observer traveling at c, rather than close to c, is problematic)

comment by DanielLC · 2014-01-13T23:01:29.257Z · LW(p) · GW(p)

You can't build a clock with a photon.

You can't build a clock with an electron either. You can build one with a muon though, since it will decay after some interval. It's not very accurate, but it's something.

In general, you cannot build a clock moving at light speed. You could build a clock with two photons. Measure the time by how close they are together. But if you look at the center of mass of this clock, it moves slower than light. If it didn't, the photons would have to move parallel to each other, but then they can't be moving away from each other, so you can't measure time.

Replies from: None
comment by [deleted] · 2014-01-14T02:53:42.409Z · LW(p) · GW(p)

I'm not sure what the significance of building a clock is...but then, I'm not sure I understand what clocks are. Anyway, isn't 'you can't build a clock on a photon' just what Tyson meant by 'Photons have no ticking time at all'?

Replies from: DanielLC
comment by DanielLC · 2014-01-14T03:44:09.030Z · LW(p) · GW(p)

Anyway, isn't 'you can't build a clock on a photon' just what Tyson meant by 'Photons have no ticking time at all'?

Yes. I meant that he meant that.

comment by Alejandro1 · 2014-01-13T16:35:42.652Z · LW(p) · GW(p)

Assume there are observers at A and B, sitting at rest relative to each other. The distance between them as seen by them is X. Their watches are synchronized. Alice, sitting at A, emits a particle when her watch says t0; Bob, sitting at B, receives it when his watch says t1. Define T = t1-t0. The speed of the particle is V = X/T.

If the particle is massive, then V is always smaller than c (the speed of light). We can imagine attaching a clock to the particle and starting it when it is emitted. When Bob receives it, the clock's time would read a time t smaller than T, given by the equation:

t = T (1 - V^2/c^2)^(1/2) (this is the Lorentz factor equation mentioned by Plasmon).

As the speed V of the particle gets closer and closer to c, you can see that the time t that has passed "for the particle" gets closer and closer to 0. One cannot attach a clock to a photon, so the statement that "photons are not moving through time" is somewhat metaphoric and its real meaning is the limiting statement I just mentioned. The photon is not "at two places at once" from the point of view of any physical observer, be it Alice and Bob (for whom the travel took a time T = X/c) or any other moving with a speed smaller than c (for whom the time taken may be different but is never 0).
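
Plugging made-up numbers into that equation (Alice and Bob one light-second apart, working in units where c = 1) makes the limit explicit:

```python
import math

c, X = 1.0, 1.0  # units where c = 1; Alice and Bob are one light-second apart

for V in [0.5, 0.9, 0.99, 0.9999]:
    T = X / V                             # time elapsed on Alice's and Bob's watches
    t = T * math.sqrt(1.0 - V**2 / c**2)  # time elapsed on the particle's own clock
    print(f"V = {V}: T = {T:.4f}, t = {t:.4f}")
# t -> 0 as V -> c, which is the limiting sense in which "no time passes for a photon".
```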

Replies from: None
comment by [deleted] · 2014-01-14T02:51:58.730Z · LW(p) · GW(p)

Thanks, it sounds like Tyson just said something very misleading. I looked up the Lorentz factor equation on Wiki, and I got this:

gamma = 1/[(1 - V^2/c^2)^(1/2)]

Is that right? If that's right, then the Lorentz transformation (I'm just guessing here) for a photon would return an undefined result. Was Tyson just conflating that result with a result of 'zero'?

Replies from: Alejandro1
comment by Alejandro1 · 2014-01-14T03:18:19.938Z · LW(p) · GW(p)

Your equation for the gamma factor is correct. You are also correct in saying that the Lorentz transformation becomes undefined. The significance of this is that it makes no sense to talk about the "frame of reference of a photon". Lorentz transformation equations allow us to switch from some set of time and space coordinates to another one moving at speed V < c relative to the first one. They make no sense for V = c or V > c.

I think that what Tyson meant by his somewhat imprecise answer was what I said in my comment above: if you take the equation t*gamma = T (that relates the time t that passes between two events for an object that moves at speed V from one to the other, with the time T that passes between the events in the rest frame) and take the limit V approaching c for finite T, you get t = 0. If you want to keep the meaning of the equation in this limit, you then have to say that "no time passes for a photon". The issue is that the equation is just a consequence of the Lorentz transformations, which are inapplicable for V = c, and as a consequence the words "no time passes for a photon" do not have any clear, operational meaning attached to them.

Replies from: None
comment by [deleted] · 2014-01-14T03:44:05.816Z · LW(p) · GW(p)

I think I understand. Thanks very much for taking the time.

comment by Luke_A_Somers · 2014-01-15T14:30:54.902Z · LW(p) · GW(p)

Getting this property for electromagnetic waves was one of the main things that led Einstein to develop Special Relativity: he looked at waves and thought, "If we do a Galilean transform so that light is standing still, the resulting field is an invalid electrostatic field."

comment by newerspeak · 2014-01-15T02:45:54.173Z · LW(p) · GW(p)

What are your best arguments against the reality/validity/usefulness of IQ?

Improbable or unorthodox claims are welcome; appeals that would limit testing or research even if IQ's validity is established are not.

Replies from: pragmatist, ChristianKl, Eugine_Nier, Calvin
comment by pragmatist · 2014-01-15T04:41:25.327Z · LW(p) · GW(p)

These are not my arguments, since I haven't thought about the issue enough. However, the anthropologist Scott Atran, in response to the latest Edge annual question, "What Scientific Idea is Ready for Retirement?", answered "IQ". Here's his response:

There is no reason to believe, and much reason not to believe, that the measure of a so-called "Intelligence Quotient" in any way reflects some basic cognitive capacity, or "natural kind" of the human mind. The domain-general measure of IQ is not motivated by any recent discovery of cognitive or developmental psychology. It thoroughly confounds domain-specific abilities—distinct mental capacities for, say, geometrical and spatial reasoning about shapes and positions, mechanical reasoning about mass and motion, taxonomic reasoning about biological kinds, social reasoning about other people's beliefs and desires, and so on—which are the only sorts of cognitive abilities for which an evolutionary account seems plausible in terms of natural selection for task-specific competencies.

Nowhere in the animal or plant kingdoms does there ever appear to have been natural selection for a task-general adaptation. An overall measure of intelligence or mental competence is akin to an overall measure for "the body," taking no special account of the various and specific bodily organs and functions, such as hearts, lungs, stomach, circulation, respiration, digestion and so on. A doctor or biologist presented with a single measure for "Body Quotient" (BQ) wouldn't be able to make much of it.

IQ is a general measure of socially acceptable categorization and reasoning skills. IQ tests were designed in behaviorism's heyday, when there was little interest in cognitive structure. The scoring system was tooled to generate a normal distribution of scores with a mean of 100 and a standard deviation of 15.

In other societies, a normal distribution of some general measure of social intelligence might look very different, in that some "normal" members of our society could well produce a score that is a standard deviation from "normal" members of another society on that other society's test. For example, in forced-choice tasks East Asian students (China, Korea, Japan) tend to favor field-dependent perception over object-salient perception, thematic reasoning over taxonomic reasoning, and exemplar-based categorization over rule-based categorization.

American students generally prefer the opposite. On tests that measure these various categorization and reasoning skills, East Asians average higher on their preferences and Americans average higher on theirs. There is nothing particularly revealing about these different distributions other than that they reflect some underlying socio-cultural differences.

There is a long history of acrimonious debate over which, if any, aspects of IQ are heritable. The most compelling studies concern twins raised apart and adoptions. Twin studies rarely have large sample populations. Moreover, they often involve twins separated at birth because a parent dies or cannot afford to support both, and one is given over to be raised by relatives, friends or neighbors. This disallows ruling out the effects of social environment and upbringing in producing convergence among the twins. The chief problem with adoption studies is that the mere fact of adoption reliably increases IQ, regardless of any correlation between the IQs of the children and those of their biological parents. Nobody has the slightest causal account of how or why genes, singly or in combination, might affect IQ. I don't think it's because the problem is too hard, but because IQ is a specious rather than natural kind.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-01-15T15:32:18.954Z · LW(p) · GW(p)

Which of reality, validity, and usefulness is this an argument against? All three? None?

Added: I don't know what it would mean for IQ to be "real." Maybe this is an argument that IQ is not real. Maybe it is an argument that IQ is not ontologically fundamental. But it seems to me little different than arguing that total body weight, BMI, or digit length ratio are not "real"; or even that arguing that temperature is not "real," either temperature of the body or temperature of an ideal gas. The BQ sentence seems to assert that this kind of unreality implies that IQ is not useful, but I'd hardly call that an argument.

Replies from: pragmatist
comment by pragmatist · 2014-01-17T04:26:15.555Z · LW(p) · GW(p)

I tend to interpret "Is X real?" more or less as "Is X a part of the best predictive theory of the relevant domain?" This doesn't require an object/property to be ontologically fundamental, since our best (all things considered) theories of macroscopic domains include reference to macroscopic (non-fundamental) properties.

According to this standard, Atran is arguing that IQ is not real, I think. Temperature would be real (as far as we know), but maybe BMI wouldn't? I don't know enough about the relevant science to make that judgment.

Anyway, given my preferred pragmatist way of thinking about ontology, there isn't much difference between the reality, validity and usefulness of a concept.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-01-17T05:13:15.987Z · LW(p) · GW(p)

I tend to interpret "Is X real?" more or less as "Is X a part of the best predictive theory of the relevant domain?"

It seems excessive to me to define real as a superlative. Isn't it enough to be part of some good predictive theory? Shalizi explicitly takes this position, but it seems insane to me. He says very clearly that he rejects IQ because he thinks that there is a better model. It's not that he complains that people are failing to adopt a better model, but that they are failing to develop a better model. To the extent that Atran means anything, he appears to mean the same thing.

I think the difference between usefulness and validity is that usefulness is a cost-benefit analysis, considering the cost of using the model in a useful domain.

Replies from: pragmatist
comment by pragmatist · 2014-01-17T05:42:21.062Z · LW(p) · GW(p)

Lorentz ether theory is a good predictive theory, but I don't want to say that ether is real. In general, if there's a better theory currently available that doesn't include property X, I'd say we're justified in rejecting the reality of X.

I do agree that if there's no better theory currently available, it's a bit weird to say "I reject the reality of X because I'm sure we're going to come up with a better theory at some point." Working with what you have now is good epistemic practice in general. But it is possible that your best current theory is so bad at making predictions that you have no reason to place any substantive degree of confidence in its ontology. In that case, I think it's probably a good idea to withhold ontological commitment until a better theory comes along.

Again, I don't know enough about IQ research to judge which, if any, of these scenarios holds in that field.

comment by ChristianKl · 2014-01-16T14:48:36.556Z · LW(p) · GW(p)

What do arguments against "reality" mean?

Arguing against misconceptions of what people believe about IQ? In general I consider arguments against reality pointless. Asking whether IQ is real is the wrong question. It makes much more sense to ask what IQ scores mean.

comment by Eugine_Nier · 2014-01-15T05:21:39.031Z · LW(p) · GW(p)

IQ, or intelligence as commonly understood, is a poor proxy for rationality. In many cases it simply makes people better at rationalizing beliefs they acquired irrationally.

comment by Calvin · 2014-01-15T03:56:50.051Z · LW(p) · GW(p)

IQ can be used to give scientific justification to our internalized biases.

I don't want to limit your rights because you are X. I want to limit your rights because I belong to Y, and as Y does better than X on IQ tests, it is only prudent that we know better what is good for you. I am also not interested in listening to counter-arguments coming from people whose IQ is below 99.

Also, in extreme cases, it can be used to push further policies such as eugenics (the bad kind that everyone has in mind when they hear the word "eugenics"):

Ah... I forgot to say that X shouldn't have the right to have children. No offense meant, but we want to avoid dims out-breeding brights. Also, keep your stupid daughter away from my son, as I really don't want my own children to pollute the genetic purity of our kind.

Replies from: Moss_Piglet
comment by Moss_Piglet · 2014-01-15T04:12:43.925Z · LW(p) · GW(p)

From the OP:

What are your best arguments against the reality/validity/usefulness of IQ?

-

appeals that would limit testing or research even if IQ's validity is established are not [welcome].

Emphasis mine.

We all know the standard "that's racist" argument already, newerspeak is clearly asking for a factual reason why measures of general intelligence are not real / invalid / not useful. Not to mention that the post did not make any claims about, or even mention, heredity of intelligence or race / gender differences in intelligence.

Replies from: Calvin
comment by Calvin · 2014-01-15T05:06:02.133Z · LW(p) · GW(p)

Let's make a distinction between "I have a prejudice against you" and "I know something about you".

Assuming I know that IQ is a valid and true objective measure, I can use it to judge your cognitive skills, and your opinion about the result does not matter to anyone, any more than your own opinion about your BMI does.

Assuming that I am not sure whether IQ is valid, I would rather refrain from reaching any conclusions or acting as if it actually mattered (because I am afraid of the consequences), thus making it useless for me in my practical day-to-day life.

Replies from: Moss_Piglet
comment by Moss_Piglet · 2014-01-15T17:32:32.610Z · LW(p) · GW(p)

So if we assume a measure is invalid, it is useless to us (as an accurate measure anyway; you already pointed out a possible rhetorical use)?

If you'll forgive my saying it, that seems like more of a tautology about measurements in general than an argument about this specific case. If you have evidence that general intelligence as-measured-by-IQ is invalid, or even evidence that people unfamiliar with the field like Dr Atran or Gould take issue with 'reifying' it, that would be closer to what the original question was looking for.

I realize this comes off as a bit rude, but this particular non sequitur keeps coming up and is becoming a bit of a sore spot.

comment by Torello · 2014-01-13T23:28:29.716Z · LW(p) · GW(p)

Doesn't cryonics (and the subsequent rebooting of a person) seem obviously too difficult? People can't keep cars running indefinitely; wouldn't keeping a particular consciousness running be much harder?

I hinted at this in another discussion and got downvoted, but it seems obvious to me that the brain is the most complex machine around, so wouldn't it be tough to fix? Or does it all hinge on the "foom" idea where every problem is essentially trivial?

Replies from: Calvin, ChristianKl, RomeoStevens, Luke_A_Somers
comment by Calvin · 2014-01-13T23:33:15.165Z · LW(p) · GW(p)

Most of the explanations found on cryonics sites do indeed seem to base their arguments on the hopeful claim that, given nanotechnology and the science of the future, every problem connected to (as you say) rebooting would become essentially trivial.

comment by ChristianKl · 2014-01-14T11:49:01.929Z · LW(p) · GW(p)

There are vintage cars that seem to have no problem with running "indefinitely", provided you fix parts here and there.

Replies from: Torello
comment by Torello · 2014-01-15T02:00:58.443Z · LW(p) · GW(p)

This is sort of my point--wouldn't it be hard to keep a consciousness continually running (to avoid the death we feared in the first place) by fixing or replacing parts?

Replies from: gattsuru
comment by gattsuru · 2014-01-15T15:54:54.217Z · LW(p) · GW(p)

Continuity of consciousness very quickly becomes a hard concept to define: not only do you interrupt consciousness for several hours on a nightly basis, you can actually go into reduced-awareness modes on a regular basis even when 'awake'.

Moreover, it might not be necessary to interrupt continuity of consciousness in order to "replace parts" in the brain. Hemispherectomies demonstrate that large portions of the brain can be removed at once without causing death, for example.

comment by RomeoStevens · 2014-01-14T01:13:55.435Z · LW(p) · GW(p)

Error checking on solid-state silicon is much easier than error checking neurons.

Replies from: VAuroch
comment by VAuroch · 2014-01-16T06:45:22.586Z · LW(p) · GW(p)

We know a lot more about solid state silicon than neurons. When we understand neurons as well as we currently do solid state silicon, I see no reason why error checking on them should be harder than error checking on silicon is now.

comment by Luke_A_Somers · 2014-01-15T14:26:15.724Z · LW(p) · GW(p)

Too difficult for whom? Us, now? Obviously. Later? Well, how much progress are you willing to allow for 'too difficult' to become 'just doable'?

comment by hyporational · 2014-01-13T06:45:29.105Z · LW(p) · GW(p)

What motivates rationalists to have children? How much rational decision making is involved?

ETA: removed the unnecessary emotional anchor.

ETA2: I'm not asking this out of Spockness, I think I have a pretty good map of normal human drives. I'm asking because I want to know if people have actually looked into the benefits, costs and risks involved, and done explicit reasoning on the subject.

Replies from: gjm, blacktrance, Lumifer, Ishaan, Lumifer, Calvin
comment by gjm · 2014-01-13T12:17:56.614Z · LW(p) · GW(p)

I wouldn't dream of speaking for rationalists generally, but in order to provide a data point I'll answer for myself. I have one child; my wife and I were ~35 years old when we decided to have one. I am by any reasonable definition a rationalist; my wife is intelligent and quite rational but not in any very strong sense a rationalist. Introspection is unreliable but is all I have. I think my motivations were something like the following.

  1. Having children as a terminal value, presumably programmed in by Azathoth and the culture I'm immersed in. This shows up subjectively as a few different things: liking the idea of a dependent small person to love, wanting one's family line to continue, etc.

  2. Having children as a terminal value for other people I care about (notably spouse and parents).

  3. I think I think it's best for the fertility rate to be close to the replacement rate (i.e., about 2 in a prosperous modern society with low infant mortality), and I think I've got pretty good genes; overall fertility rate in the country I'm in is a little below replacement and while it's fairly densely populated I don't think it's pathologically so, so for me to have at least one child and probably two is probably beneficial for society overall.

  4. I expected any child I might have to have a net-positive-utility life (for themselves, not only for society at large) and indeed probably an above-average-utility life.

  5. I expected having a child to be a net positive thing for marital harmony and happiness (I wouldn't expect that for every couple and am not making any grand general claim here).

I don't recall thinking much about the benefits of children in providing care when I'm old and decrepit, though I suppose there probably is some such benefit.

So far (~7.5 years in), we love our daughter to bits and so do others in our family (so #1,#2,#5 seem to be working as planned), she seems mostly very happy (so #4 seems OK so far), it's obviously early days but my prediction is still that she'll likely have a happy life overall (so #4 looks promising for the future) and I don't know what evidence I could reasonably expect for or against #3.

Replies from: Aharon, DaFranker
comment by Aharon · 2014-01-13T21:38:48.397Z · LW(p) · GW(p)

I first wanted to comment on 5, because I had previously read that having children reduces happiness. Interestingly, when searching for a link (because I couldn't remember where I had read it), I found this source (http://www.demogr.mpg.de/papers/working/wp-2012-013.pdf) that corroborates your specific expectation: children lead to higher happiness for older, better educated parents.

Replies from: Douglas_Knight, gjm
comment by Douglas_Knight · 2014-01-14T13:53:44.737Z · LW(p) · GW(p)

Having children is an example where two methodologies in happiness research dramatically diverge. One method is asking people in the moment how happy they are; the other is asking how happy they generally feel about their lives. The first method finds that people really hate child care, and is probably what you remembered.

Replies from: adbge
comment by adbge · 2014-01-14T18:54:49.047Z · LW(p) · GW(p)

I think the paper you're thinking of is Kahneman et al's A survey method for characterizing daily life experience: The day reconstruction method.

Notably,

In Table 1, taking care of one's children ranks just above the least enjoyable activities of working, housework, and commuting.

On the other hand, having children also harms marital satisfaction. See, for example, here.

comment by gjm · 2014-01-13T22:15:04.727Z · LW(p) · GW(p)

How excellent! It's nice to be statistically typical :-).

comment by DaFranker · 2014-01-13T13:13:18.141Z · LW(p) · GW(p)

(This might seem obviously stupid to someone who's thought about the issue more in-depth, but if so there's no better place for it than the Stupid Questions Thread, is there?):

and I don't know what evidence I could reasonably expect for or against #3.

I think some tangential evidence could be gleaned, as long as it's understood as a very noisy signal, from what other humans in your society consider as signals of social involvement and productivity. Namely, how well your daughter is doing at school, how engaged she gets with her peers, her results in tests, etc. These things are known, or at least thought, to be correlated with social 'success' and 'benefit'.

Basically, if your daughter is raising the averages or other scores that comprise the yardsticks of teachers and other institutions, then this is information correlated with what others consider being beneficial to society later in life. (the exact details of the correlation, including its direction, depend on the specific environment she lives in)

Replies from: gjm
comment by gjm · 2014-01-13T15:19:45.672Z · LW(p) · GW(p)

That would be evidence (albeit, as you say, not very strong evidence) that my daughter's contribution to net utility is above average. That doesn't seem enough to guarantee it's positive.

Replies from: DaFranker
comment by DaFranker · 2014-01-13T20:59:58.957Z · LW(p) · GW(p)

Good catch. Didn't notice that one sneaking in there. That kind of invalidates most of my reasoning, so I'll retract it willingly unless someone has an insight that saves the idea.

comment by blacktrance · 2014-01-13T16:09:29.661Z · LW(p) · GW(p)

Disclaimer: I don't have kids, won't have them anytime soon (i.e. not in the next 5 years), and until relatively recently didn't want them at all.

The best comparison I can make is that raising a child is like making a painting. It's work, but it's rewarding if done well. You create a human being, and hopefully impart them with good values and set them on a path to a happy life, and it's a very personal experience.

Personally, I don't have any drive to have kids, not one that's comparable to hunger or sexual attraction.

Replies from: hyporational
comment by hyporational · 2014-01-13T18:32:49.544Z · LW(p) · GW(p)

I'd like that personal painting experience if it went well and I have experienced glimpses of it with some kids not of my own.

Unfortunately it's not clear to me at all how much of the project's success could be of my own doing, and I've seen enough examples of things going horribly wrong despite seemingly optimal conditions. I wonder what kinds of studies could be done on the effects of parenting skills on the results of upbringing, and on parental satisfaction with those results, that aren't hugely biased.

ETA: my five-year-old step brother just barged into my room (holiday at my folks'). "You always get new knowledge in this room," he said, and I was compelled to pour that little vessel full again.

comment by Lumifer · 2014-01-13T08:13:32.319Z · LW(p) · GW(p)

What motivates rationalists to have children?

The same things that motivate other people. Being rational doesn't necessarily change your values.

Clearly, some people think having children is worthwhile and others don't, so that's individual. There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it through natural selection.

The amount of decision-making also obviously varies -- from multi-year deliberations to "Dear, I'm pregnant!" :-)

Replies from: CronoDAS, hyporational
comment by CronoDAS · 2014-01-13T15:45:17.279Z · LW(p) · GW(p)

There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it through natural selection.

Really? The reproductive urge in humans seems to be more centered on a desire for sex rather than on a desire for children. And, in most animals, this is sufficient; sex leads directly to reproduction without the brain having to take an active role after the exchange of genetic material takes place.

Humans, oddly enough, seem to have evolved adaptations for ensuring that people have unplanned pregnancies in spite of their big brains. Human females don't have an obvious estrus cycle, their fertile periods are often unpredictable, and each individual act of copulation has a relatively low chance of causing a pregnancy. As a result, humans are often willing to have sex when they don't want children and end up having them anyway.

Replies from: Lumifer, Randy_M, hyporational
comment by Lumifer · 2014-01-13T16:16:33.790Z · LW(p) · GW(p)

The reproductive urge in humans seems to be more centered on a desire for sex rather than on a desire for children.

These are not mutually exclusive alternatives.

And, in most animals, this is sufficient; sex leads directly to reproduction without the brain having to take an active role after the exchange of genetic material takes place.

Not in those animals where babies require a long period of care and protection.

Replies from: CronoDAS
comment by CronoDAS · 2014-01-14T08:25:40.476Z · LW(p) · GW(p)

Not in those animals where babies require a long period of care and protection.

Yes, you're right. I didn't think to put the "take care of your children once they're out of the uterus" programming into the same category.

comment by Randy_M · 2014-01-14T16:19:13.565Z · LW(p) · GW(p)

There is certainly an inner drive, more pronounced in women, because species without such a drive don't make it through natural selection.

A developmentally complex species needs a drive to care for offspring. A simple species just needs a drive to reproduce.

ETA: What Lumifer said

comment by hyporational · 2014-01-13T18:34:56.049Z · LW(p) · GW(p)

Women talk to me about baby fever all the time. Lucky me, eh.

comment by hyporational · 2014-01-13T08:20:29.972Z · LW(p) · GW(p)

Being rational doesn't necessarily change your values.

True, but it might make you weigh them very differently if you understand how biased your expectations are. I'm interested in whether people make rational predictions about, for example, how happy having children will make them.

I already have a pretty good idea about how people in general make these decisions, hence the specific question.

comment by Ishaan · 2014-01-13T23:20:24.024Z · LW(p) · GW(p)

rationalists

I think you mean "humans"?

With respect to adoption vs. biological children, having your own child allows you more control over the circumstances and also means the child will probably share some facets of your / your mate's personality, in ways that are often surprising and pleasurable.

With respect to raising children in general, it's intrinsically rewarding, like a mix of writing a book and being in love. Also, if you're assuming the environment won't radically change, having youth probably makes aging easier.

(I don't have children, but have watched them being raised. Unsure of my own plans.)

Replies from: hyporational
comment by hyporational · 2014-01-14T04:24:27.186Z · LW(p) · GW(p)

I think you mean "humans"?

Nope, not planning to go Spock. I also edited the original question now for clarification.

having your own child allows you more control over the circumstances

I'd like to see some evidence of how much control I can have. You're describing just the best-case scenario, but having a child can also be incredibly exhausting if things go wrong.

Replies from: Ishaan
comment by Ishaan · 2014-01-14T17:51:38.693Z · LW(p) · GW(p)

Oh, okay. Sorry to misunderstand. (Also, I meant "control" as compared to the control one has when adopting.)

In that case, I have insufficient research for a meaningful answer. I guess one ought to start here or there or there, to get a rough idea?

comment by Lumifer · 2014-01-13T18:44:18.990Z · LW(p) · GW(p)

One more point that I haven't seen brought up -- listen to Queen:

Can anybody find me somebody to love?
Each morning I get up I die a little
Can barely stand on my feet
Take a look in the mirror and cry
Lord what you're doing to me
I have spent all my years in believing you
But I just can't get no relief,
Lord!
Somebody, somebody
Can anybody find me somebody to love?

Replies from: CronoDAS, hyporational
comment by CronoDAS · 2014-01-14T08:32:04.408Z · LW(p) · GW(p)

Personally, I'd recommend a dog or cat to this person.

comment by hyporational · 2014-01-13T18:54:11.534Z · LW(p) · GW(p)

Children as match makers when you're too old to stand on your feet? ;)

Replies from: Lumifer
comment by Lumifer · 2014-01-13T19:08:31.616Z · LW(p) · GW(p)

That's an interesting interpretation :-) it was also fun to watch it evolve :-D

Replies from: hyporational
comment by hyporational · 2014-01-13T19:35:54.019Z · LW(p) · GW(p)

I was calibrating political correctness.

Replies from: Lumifer
comment by Lumifer · 2014-01-13T19:39:24.420Z · LW(p) · GW(p)

Entirely within your own mind? I don't think you got any external feedback to base calibration on :-)

Replies from: hyporational
comment by hyporational · 2014-01-13T19:41:46.609Z · LW(p) · GW(p)

I got plenty of feedback from the intensive simulations I ran.

comment by Calvin · 2014-01-13T06:55:51.631Z · LW(p) · GW(p)

I don't consider myself an explicit rationalist, but the desire to have children stems from the desire to have someone to take care of me when I am older.

Do you see your own conception and further life as a cause of a "huge heap of disutility" that can't be surpassed by the good stuff?

Replies from: DaFranker, hyporational
comment by DaFranker · 2014-01-13T13:21:57.477Z · LW(p) · GW(p)

I've always been curious to see the response of someone with this view to the question:

What if you knew, as much as any things about the events of the world are known, that there will be circumstances in X years that make it impossible for any child you conceive to possibly take care of you when you are older?

In such a hypothetical, is the executive drive to have children still present, still being enforced by the programming of Azathoth, merely disconnected from the original trigger that made you specifically have this drive? Or does the desire go away? Or something else, maybe something I haven't thought of (I hope it is!)?

Replies from: Calvin
comment by Calvin · 2014-01-13T15:59:49.454Z · LW(p) · GW(p)

Am I going to have a chance to actually interact with them, see them grow, etc?

I mean, assuming a hypothetical case where, as soon as a child is born, nefarious agents of the Population Police snatch him, never to be seen or heard from again, then I don't really see the point of having children.

If, on the other hand, I have a chance to actually act as a parent to him, then I guess it is worth it after all, even if the child disappears as soon as he reaches adulthood and joins the Secret Society of Ineffective Altruism, never to be heard from again. I get no benefit of care, but I am happy that I introduced a new human into the world (uh... I mean, I actually helped to do so, as it is a two-person exercise, so to speak). It is not the ideal case, but I still consider the effort well spent.

In an ideal world, I still have a relationship with my child even as he/she reaches adulthood, so that I can feel safer knowing that there is someone who (hopefully) considers all the generosity I have granted to him and holds me dear.

P.S. Why the programming of Azathoth? In my mind it makes it sound as if the desire to have children were something intrinsically bad.

Replies from: DaFranker
comment by DaFranker · 2014-01-13T21:03:53.949Z · LW(p) · GW(p)

Thanks for the response! This puts several misunderstandings I had to rest.

P.S. Why the programming of Azathoth? In my mind it makes it sound as if the desire to have children were something intrinsically bad.

Programming of Azathoth because Azathoth doesn't give a shit about what you wish your own values were. Therefore what you want has no impact whatsoever on what your body and brain are programmed to do, such as making some humans want to have children even when every single aspect of it is negative (e.g. painful sex, painful pregnancy, painful birthing, hell to raise children, hellish economic conditions, an absolutely horrible life for the child, etc. etc., such as we've seen some examples of in slave populations historically).

Replies from: Calvin
comment by Calvin · 2014-01-13T21:39:04.697Z · LW(p) · GW(p)

I suspect our world views might differ a bit, as I don't wish that my values were any different than they are. Why should I?

If Azathoth decided to instill the value that having children is somehow desirable deep into my mind, then I am very happy that as a first-world parent I have all the resources I need to turn it into a pleasant endeavor with a very high expected value (a happy new human who hopefully likes me and hopefully shares my values, though I don't have much confidence in the second bet).

comment by hyporational · 2014-01-13T07:04:32.556Z · LW(p) · GW(p)

Not to me obviously. Not necessarily to my parents either, but I think they might have been quite lucky in addition to being good parents.

Doesn't money take care of you when you're old too? As a side note, if I were old, dying and in a poor enough condition that I couldn't look after myself, I'd rather sign off than make other people take care of me, because I can't imagine that being an enjoyable experience.

Replies from: Calvin
comment by Calvin · 2014-01-13T07:12:18.875Z · LW(p) · GW(p)

Still, if it is possible to have happy children (and I assume happy humans are good stuff), where does the heap of disutility come into play?

EDIT: It is hard to form a meaningful relationship with money, and I would reckon that teaching it to uphold values similar to yours isn't an easy task either. As for taking care, I don't mean palliative care so much as simply the relationship you have with your child.

Replies from: hyporational, hyporational
comment by hyporational · 2014-01-13T07:27:10.595Z · LW(p) · GW(p)

You can have relationships with other people, and I think it's easier to influence what they're like.

I'll list some forms of disutility later, but I think for now it's better not to bias the answers to the original question further. I removed the "heap of disutility" part; it was unnecessarily exaggerated anyway.

comment by hyporational · 2014-01-13T07:21:54.912Z · LW(p) · GW(p)

You can have a relationship with your friends, but don't expect them to take care of you when you're old.

comment by seez · 2014-02-05T08:04:10.761Z · LW(p) · GW(p)

Why hasn't anyone ever come back from the future and stopped us all from suffering, making it so we never experience horrible things? Does that mean we never learn time travel, or at least time travel plus a way to make the original tough experiences be un-experienced?

Replies from: metatroll, seez
comment by metatroll · 2014-02-05T09:51:06.173Z · LW(p) · GW(p)

Whenever they invent time travel, they discover that the ability to change the past becomes the biggest cause of suffering, so in the end they always un-invent it.

comment by seez · 2014-02-05T09:17:32.903Z · LW(p) · GW(p)

And, similarly, should I be depressed that there currently exists NO alien species with the inclination+ability to eliminate horrific suffering in all sentient life-forms?

comment by diegocaleiro · 2014-01-13T18:16:45.866Z · LW(p) · GW(p)

When non-utilitarian rationalists consider big life changes, it seems to me that they don't do it based on how happy the change will make them. Why?

Utilitarians could say they are trying to maximize the world's something.

But non-utilitarians, like I used to be, and like most here still are, are just... doing it like everyone else does it! "Oh, that seems like a cool change, I'll do it! yay!" Then two weeks later that particular thing has none of the coolness effect it had before, but they are stuck with the decision for years....... (in the case of decisions like job, partner, quitting smoking, big travels, big decisions, not ice-cream-flavour stuff)

So, why don't rationalists use data driven happiness research, and reasoning in the happiness spectrum, to decide their stuff?

Replies from: Dahlen, cata, pragmatist, ChristianKl
comment by Dahlen · 2014-01-14T00:13:17.395Z · LW(p) · GW(p)

When non-utilitarian rationalists consider big life changes, it seems to me that they don't do it based on how happy the change will make them. Why?

I don't know the extent to which this applies to other people, but for me (a non-utilitarian) it does, so here's my data point which may or may not give you some insight into how other non-utilitarians judge these things.

I can't really say I value my own happiness much. Contentment / peace of mind (=/= happiness!) and meaningfulness are more like what I aim for; happiness is too fleeting, too momentary to seek out all the time. I'm also naturally gloomy, and overt displays of cheerfulness just don't hold much appeal for me, in an aesthetic sense. (They get me thinking of those fake ad people and their fake smiles. Nobody can look that happy all the time without getting paid for it!) There simply are more important things in life than my own happiness; that one can be sacrificed, if need be, for the sake of a higher value. I suppose it's just like those utilitarians you're talking about who are "trying to maximize the world's something" rather than their own pleasure, only we don't think of it in a quantitative way.

But non-utilitarians, like I used to be, and like most here still are, are just... doing it like everyone else does it! "Oh, that seems like a cool change, I'll do it! yay!" Then two weeks later that particular thing has none of the coolness effect it had before, but they are stuck with the decision for years....... (in the case of decisions like job, partner, quitting smoking, big travels, big decisions, not ice-cream-flavour stuff)

Well... that's a rather unflattering way of putting it. You don't have to compute utilities in order for your decision-making process to look a wee little more elaborate than that.

comment by cata · 2014-01-13T21:01:08.382Z · LW(p) · GW(p)

I know a lot of LW-ish people in the Bay Area and I see them explicitly thinking carefully about a lot of big life changes (e.g. moving, relationships, jobs, what habits to have) in just the way you recommended. I don't know if it has something to do with utilitarianism or not.

I'm personally more inclined to think in that way than I was a few years ago, and I think it's mostly because of the social effects of hanging out with & looking up to a bunch of other people who do so.

comment by pragmatist · 2014-01-15T04:36:27.777Z · LW(p) · GW(p)

When non-utilitarian rationalists consider big life changes, it seems to me that they don't do it based on how happy the change will make them. Why?

"Non-utilitarian" doesn't equate to "ethical egoist". I'm not a utilitarian, but I still think my big life decisions are subject to ethical constraints beyond what will make me happy. It's just that the constraint isn't always (or even usually) the maximization of some aggregate utility function.

comment by ChristianKl · 2014-01-13T22:11:26.799Z · LW(p) · GW(p)

I don't think the predictive power of models built from data-driven happiness research is very high. I wouldn't ignore the research completely, but there's nothing rational about using a model just because it's data-based if nobody has shown that the model is useful for prediction in the relevant domain.

comment by Ghatanathoah · 2014-01-23T18:58:21.624Z · LW(p) · GW(p)

What amount of disutility does creating a new person generate in Negative Preference Utilitarian ethics?

I need to elaborate in order to explain exactly what question I am asking: I've been studying various forms of ethics, and when I was studying Negative Preference Utilitarianism (or anti-natalism, as I believe it's often also called) I came across what seems like a huge, titanic flaw that seems to destroy the entire system.

The flaw is this: The goal of negative preference utilitarianism is to prevent the existence of unsatisfied preferences. This means that negative preference utilitarians are opposed to having children, as doing so will create more unsatisfied preferences. And they are opposed to people dying under normal circumstances, because someone's death will prevent them from satisfying their existing preferences.

So what happens when you create someone who is going to die, and who has an unbounded utility function? The amount of preferences they have is essentially infinite. Does that mean that if such a person is created it is impossible to do any more harm, since an infinite amount of unsatisfied preferences has just been created? Does that mean that we should be willing to torture everyone on Earth for a thousand years if doing so would prevent the creation of such a person?

The problem doesn't go away if you assume humans have bounded utility functions. Suppose we have a bounded utility function, so that living an infinite number of years, or a googolplex number of years, is equivalent to living a mere hundred billion years for us. That still means that creating someone who will live a normal 70-year lifespan is a titanic harm, a harm that everyone alive on Earth today should be willing to die to prevent, as it would create 99,999,999,930 years' worth of unsatisfied preferences!
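For concreteness, here is that back-of-the-envelope arithmetic as a tiny sketch. The hundred-billion-year bound and the 70-year lifespan are the numbers from the paragraph above; the scoring rule itself is just one assumed way a negative preference utilitarian might count the harm of a birth, not anyone's actual formal theory.

```python
# Toy arithmetic only: the bound and lifespan come from the example above;
# everything else is an assumption made up for illustration.

PREFERENCE_BOUND_YEARS = 100_000_000_000  # bounded utility: preferences cap out here

def unsatisfied_preference_years(lifespan_years):
    """Preference-years left unsatisfied at death under the bounded model."""
    return max(PREFERENCE_BOUND_YEARS - lifespan_years, 0)

print(unsatisfied_preference_years(70))  # -> 99999999930, the figure quoted above
```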

My question is, how do negative preference utilitarians deal with this? The ones I've encountered online make an effort to avoid having children, but they don't devote every waking minute of their lives to it. And I don't think akrasia is the cause, because I've heard some of them admit that it would be acceptable to have a child if doing so reduced the preference frustration/suffering of a very large amount of existing people.

So with that introduction out of the way, my questions, on a basic level are:

  1. How much suffering/preference frustration would an antinatalist be willing to inflict on existing people in order to prevent a birth? How much suffering/preference frustration would a birth have to stop in order for it to be justified? For simplicity's sake, let's assume the child who is born has a normal middle class life in a 1st world country with no exceptional bodily or mental health problems.

  2. How exactly did they go about calculating the answer to question 1?

There has to be some answer to this question; there wouldn't be whole communities of anti-natalists online if their ideology could be defeated with a simple logic problem.

Replies from: Kaj_Sotala, RomeoStevens
comment by Kaj_Sotala · 2014-01-24T18:33:24.389Z · LW(p) · GW(p)

(To the extent that I'm negative utilitarian, I'm a hedonistic negative utilitarian, so I can't speak for the preference NUs, but...)

So what happens when you create someone who is going to die, and has an unbounded utility function?

Note that every utilitarian system breaks once you introduce even the possibility of infinities. E.g. a hedonistic total utilitarian will similarly run into the problem that, if you assume that a child has the potential to live for an infinite amount of time, then the child can be expected to experience both an infinite amount of pleasure and an infinite amount of suffering. Infinity minus infinity is undefined, so hedonistic total utilitarianism would be incapable of assigning a value to the act of having a child. Now saving lives is in this sense equivalent to having a child, so the value of every action that has even a remote chance of saving someone's life becomes undefined as well...

A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

The ones I've encountered online make an effort to avoid having children, but they don't devote every waking minute of their lives to it.

I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself. One possible answer is that loudly advocating "you shouldn't have children, it's literally infinitely bad" is a horrible PR strategy that will just get your movement discredited, and e.g. talking about NU in the abstract and letting people piece together the full implications themselves may be more effective.

Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account.

How much suffering/preference frustration would an antinatalist be willing to inflict on existing people in order to prevent a birth? How much suffering/preference frustration would a birth have to stop in order for it to be justified? For simplicity's sake, let's assume the child who is born has a normal middle class life in a 1st world country with no exceptional bodily or mental health problems.

Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict on people in order to prevent an hour of intense torture, and how exactly did you go about calculating the answer to that question", you're probably not going to get a very satisfying answer, either.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2014-01-24T20:08:18.931Z · LW(p) · GW(p)

A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

Yes, and that is my precise point. Even if we assume a bounded utility function for human preferences, I think it's reasonable to assume that the bound is pretty huge. Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to this point, but would still die with a lot of unfulfilled desires. I find this massively counter-intuitive and want to know how the antinatalist community addresses this.

I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself.

If the disutility they assign to having children is big enough, they should still spend every waking hour doing something about it. What if some maniac kidnaps them and forces them to have a child? The odds of that happening are incredibly small, but they certainly aren't zero. If they really assign such a giant negative to having a child, they should try to guard even against tiny possibilities like that.

Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account

Yes, but from a preference utilitarian standpoint it doesn't need to actually be possible to live forever. It just has to be something that you want.

Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict to people in order to prevent an hour of intense torture, and how exactly did you go about calculating the answer to that question", you're probably not going to get a very satisfying answer, either.

Well, of course I'm not expecting an exact answer. But a ballpark would be nice. Something like "no more than x, no less than y." I think, for instance, that a total utilitarian could at least say something like "no less than a thousand rush hour frustrations, no more than a million."

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-26T07:32:59.770Z · LW(p) · GW(p)

Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to this point, but still die with a lot of unfulfilled desires.

Is that really how preference utilitarianism works? I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment. Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones.

Yes, but from a preference utilitarian standpoint it doesn't need to actually be possible to live forever. It just has to be something that you want.

I'm confused. How is wanting to live forever in a situation where you don't think that living forever is possible, different from any other unsatisfiable preference?

If the disutility they assign to having children is big enough they should still spend every waking hour doing something about it. What if some maniac kidnaps them and forces them to have a child? The odds of that happening are incredibly small, but they certainly aren't zero. If they really assign such a giant negative to having a child they should try to guard even against tiny possibilities like that.

That doesn't sound right. The disutility is huge, yes, but the probability is so low that focusing your efforts on practically anything with a non-negligible chance of preventing further births would be expected to prevent many times more disutility. Like supporting projects aimed at promoting family planning and contraception in developing countries, pro-choice policies and attitudes in your own country, rape prevention efforts to the extent that you think rape causes unwanted pregnancies that are nonetheless carried to term, anti-natalism in general (if you think you can do it in a way that avoids the PR disaster for NU in general), even general economic growth if you believe that the connection between richer countries and smaller families is a causal and linear one. Worrying about vanishingly low-probability scenarios, when that worry takes up cognitive cycles and thus reduces your chances of doing things that could have an even bigger impact, does not maximize expected utility.

I think, for instance, that a total utilitarian could at least say something like "no less than a thousand rush hour frustrations, no more than a million."

I don't know. At least I personally find it very difficult to compare experiences of such differing magnitudes. Someone could come up with a number, but that feels like trying to play baseball with verbal probabilities - the number that they name might not have anything to do with what they'd actually choose in that situation.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2014-01-27T22:00:56.720Z · LW(p) · GW(p)

I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment

I don't think that would be the case. The main intuitive advantage negative preference utilitarianism has over negative hedonic utilitarianism is that it considers death to be a bad thing, because it results in unsatisfied preferences. If it only counted immediate consciously held goals it might consider death a good thing, since it would prevent an agent from developing additional unsatisfied preferences in the future.

However, you are probably onto something by suggesting some method of limiting which unsatisfied preferences count as negative. "What a person is thinking about at any given moment" has the problems I pointed out earlier, but another formulation could well work better.

Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones.

I believe Total Preference Utilitarianism typically avoids this by regarding the creation of most types of unsatisfied preferences as neutral rather than negative. While there are some preferences whose dissatisfaction typically counts as negative, such as the preference not to be tortured, most preference creations are neutral. I believe that under TPU, if a person spends the majority of their life not preferring to be dead, then their life is considered positive no matter how many unsatisfied preferences they have.

At least I personally find it very difficult to compare experiences of such differing magnitudes.

I feel like I could try to get some sort of ballpark by figuring out how much I'm willing to pay to avoid each thing. For instance, if I had an agonizing migraine I knew would last all evening, and had a choice between paying for an instant cure pill or a device that would magically let me avoid traffic for the next two months, I'd probably put up with the migraine.

I'd be hesitant to generalize across the whole population, however, because I've noticed that I don't seem to mind pain as much as other people, but find boredom far more frustrating than average.

comment by RomeoStevens · 2014-01-23T21:50:20.910Z · LW(p) · GW(p)

Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering. Death is bad and causes negative experiences. I want to solve death before we have more kids, but I recognize this isn't realistic. It's worth pointing out that negative utilitarianism is incoherent. Prioritarianism makes slightly more sense.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2014-01-24T04:31:43.636Z · LW(p) · GW(p)

Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering.

If I understand you correctly, the problem with doing this with negative utilitarianism is that it suggests we should painlessly kill everyone ASAP. The advantage of negative preference utilitarianism is that it avoids this because people have a preference to keep on living that killing would thwart.

It's worth pointing out that negative utilitarianism is incoherent.

Why? For the reason I pointed out, or for a different one? I'm not a negative utilitarian personally, but I think a few aspects of it have promise and would like to see them sorted out.

comment by VAuroch · 2014-01-16T09:26:19.886Z · LW(p) · GW(p)

What does changing a core belief feel like? If I have a crisis of faith, how will I know?

I would particularly like to hear from people who have experienced this but never deconverted. Not only have I never been religious, no one in my immediate family is, none of the extended family I am close with is, and while I have friends who believe in religion I don't think I have any who believe their faith. So I have no real point of comparison.

Replies from: ESRogs, ChristianKl
comment by ESRogs · 2014-01-16T22:36:29.988Z · LW(p) · GW(p)

If I have a crisis of faith, how will I know?

A sense of panic and dread, and a feeling of being lost were some highlights for me. I think it would be hard to not know, though perhaps others experience these things differently.

comment by ChristianKl · 2014-01-16T15:52:58.070Z · LW(p) · GW(p)

I think there are many ways beliefs get changed.

Take a belief such as: "The world is a hostile place and therefore I have to hide myself behind a shield of anonymity when I post online."

Ten years ago I would have feared that somebody might associate my online writing with my real identity; at that time I thought I needed the shield. Today I don't (my nickname is my first name + the first 2 letters of my last name).

What did that process feel like? At the beginning I felt fear and now I don't, but it was a gradual process over time.

For most practical concerns I think we use religion way too often as a reference concept. Children are usually taught that it's bad to talk to strangers. In our world it's a useful skill to talk to strangers in a friendly and inviting way.

Most people hit walls very quickly if they try to start saying hello with a smile to every stranger they pass on the street. They come up with excuses that doing so is weird and that people will hate them if they find out that they engage in such weird behavior.

If you want to experience a crisis of faith, those social beliefs are where I would focus. They are more interesting because they actually have empirical results that you can see, and you can't just pretend that you have changed your belief.

comment by CronoDAS · 2014-01-14T08:30:29.789Z · LW(p) · GW(p)

I have tremendous trouble with hangnails. My cuticles start peeling a little bit, usually near the center of the base of my nail, and then either I remove the peeled piece (by pulling or clipping) or it starts getting bigger and I have to cut it off anyway. That leaves a small hole in my cuticle, the edges of which start to wear away and peel more, which makes me cut away more. This goes on until my fingertips are a big mess, often involving bleeding and bandages. What should I do with my damaged cuticles, and how do I stop this cycle from starting in the first place?

Replies from: dougclow, ChristianKl, Chrysophylax
comment by dougclow · 2014-01-14T13:52:43.986Z · LW(p) · GW(p)

To repair hangnails: Nail cream or nail oil. I had no idea these products existed, but they do, and they are designed specifically to deal with this problem, and do a very good job IME. Regular application for a few days fixes my problems.

To prevent it: Keep your hands protected outside (gloves). Minimise exposure of your hands to things that will strip water or oil from them (e.g. detergent, soap, solvents, nail varnish, nail varnish remover), and when you can't avoid those, use moisturiser afterwards to replace the lost oil.

(Explanation: Splitting/peeling nails is usually due to insufficient oil or, more rarely, insufficient moisture. I've heard some people take a paleo line that we didn't need gloves and moisturiser and nail oil in the ancestral environment. Maybe, but we didn't wash our hands with detergent multiple times a day then either.)

Replies from: CronoDAS
comment by CronoDAS · 2014-01-16T06:59:17.583Z · LW(p) · GW(p)

It's not the nail itself, it's the skin around the nail...

Replies from: dougclow
comment by dougclow · 2014-01-17T10:00:10.289Z · LW(p) · GW(p)

Yes - that's the part I too have trouble with, and that these products and practices help. They also help the nail itself, but fewer people tend to have that problem.

In my explanation I should've said "Splitting/peeling nails, and troubles with the skin around them, are usually due to insufficient oil ...", sorry.

There's no reason why you should trust a random Internet person like me with health advice. But think cost/expected benefit. If your hangnails are anything like as painful and distracting as mine were, trying out a tube of nail cream, moisturiser, and a pair of gloves for a week is a small cost compared to even an outside chance that it'll help. (Unless the use of such products causes big problems for your self image.)

Replies from: CronoDAS, ciphergoth
comment by CronoDAS · 2014-01-19T04:16:23.701Z · LW(p) · GW(p)

I'll see if I can find any nail cream at my local supermarket, then. How often should I apply it?

There's no reason why you should trust a random Internet person like me with health advice.

I've seen similar advice on various web pages after I did a Google search on the problem, too. Which means that it's many random Internet people, which is slightly more trustworthy. ;)

Replies from: dougclow
comment by dougclow · 2014-02-15T14:48:20.639Z · LW(p) · GW(p)

:)

I got mine in a large pharmacist, in case you're still looking.

How often should I apply it?

I'd be guided by the instructions on the product and your common sense.

For me, a single application is usually enough these days - so long as I've been able to leave it on for ages and not have to wash my hands. The first time I used it, when my fingernails had got very bad, it took about three or four applications over a week. Then ordinary hand moisturiser and wearing gloves outside is enough for maintenance. Then I get careless and forget and my fingernails start getting bad again and the cycle repeats! But I'm getting better at noticing, so the cycles are getting shallower, and I've not actually had to use the nail cream at all so far this winter. (Although it hasn't been a very cold one where I am.)

(Almost a month late, sorry.)

comment by Paul Crowley (ciphergoth) · 2014-01-17T18:25:18.271Z · LW(p) · GW(p)

I would take a recommendation from Doug as strong evidence that something is a good idea, FWIW.

comment by ChristianKl · 2014-01-14T16:27:40.729Z · LW(p) · GW(p)

Calcium deficiency could be a possible issue.

comment by Chrysophylax · 2014-01-14T11:56:53.318Z · LW(p) · GW(p)

Nail polish base coat over the cuticle might work. Personally I just try not to pick at them. I imagine you can buy base coat at the nearest pharmaceuticals store, but asking a beautician for advice is probably a good idea; presumably there is some way that people who paint their nails prevent hangnails from spoiling the effect.

Replies from: dougclow
comment by dougclow · 2014-01-14T13:54:43.152Z · LW(p) · GW(p)

I'd be cautious about using nail polish and similar products. The solvents in them are likely to strip more oil from the nail and nail bed, which will make the problem worse, not better. +1 for asking a beautician for advice, but if you just pick a random one rather than one you personally trust, the risk is that they will give you a profit-maximising answer rather than a cheap-but-effective one.

comment by pianoforte611 · 2014-01-17T14:52:19.823Z · LW(p) · GW(p)

Computers work by performing a sequence of computations, one at a time: parallelization can cut down the time for repetitive tasks such as linear algebra, but hits diminishing returns very quickly. This is very different from the way the brain works: the brain is highly parallel. Is there any reason to think that our current techniques for making algorithms are powerful enough to produce "intelligence", whatever that means?
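To put rough numbers on those diminishing returns, here's a minimal sketch using Amdahl's law. The law itself isn't mentioned in the question above, but it is the standard way of formalizing that point, and the 95% parallel fraction is just an assumed example figure:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Maximum speedup when only a fraction of a task can be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Even for a task that is 95% parallelizable, the speedup saturates quickly:
for n in (1, 4, 16, 256, 65536):
    print(n, round(amdahl_speedup(0.95, n), 2))
# 1 -> 1.0, 4 -> 3.48, 16 -> 9.14, 256 -> 18.62, 65536 -> 19.99 (the limit is 20x)
```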

Replies from: DanArmak
comment by DanArmak · 2014-01-18T15:15:50.521Z · LW(p) · GW(p)

All biological organisms, considered as signalling or information-processing networks, are massively parallel: huge numbers of similar cells with slightly different states signalling one another. It's not surprising that the biologically evolved brain works the same way. A Turing-machine-like sequential computer powerful/fast enough for general intelligence would be far less likely to evolve.

So the fact that human intelligence is slow and parallel isn't evidence for thinking you can't implement intelligence as a fast serial algorithm. It's only evidence that the design is likely to be different from that of human brains.

It's likely true that we don't have the algorithmic (or other mathematical) techniques yet to make general intelligence. But that doesn't seem to me to be evidence that such algorithms would be qualitatively different from what we do have. We could just as easily be a few specific algorithmic inventions away from a general intelligence implementation.

Finally, as far as sheer scale goes, we're on track to achieve rough computational parity with a human brain in a single multi-processor cluster within IIRC something like a decade.

Replies from: pianoforte611
comment by pianoforte611 · 2014-01-18T15:35:08.654Z · LW(p) · GW(p)

I'm not trying to play burden-of-proof tennis here, but surely the fact that the only "intelligence" we know of is implemented in a massively parallel way should give you pause as to assuming that it can be done serially. Unless of course the kind of AI that humans create is nothing like the human mind, in which case my question is irrelevant.

But that doesn't seem to me to be evidence that such algorithms would be qualitatively different from what we do have.

But we already know that the existing algorithms (in the brain) are qualitatively different from computer programs. I'm not an expert, so apologies for any mistakes, but the brain is not massively parallel in the way that computers are. A parallel piece of software can funnel a repetitive task into different processors (like the same algorithm for each value of a vector). But parallelism is a built-in feature of how the brain works; neurons and clusters of neurons perform computations semi-independently of each other, yet are still coordinated together in a dynamic way. The question is whether algorithms performing similar functions could be implemented serially. Why do you think that they can be?

Regarding computational parity: sure I never said that would be the issue.

Replies from: Locaha, DanArmak
comment by Locaha · 2014-01-18T16:20:25.246Z · LW(p) · GW(p)

There is no such thing as qualitatively different algorithms. Anything that a parallel computer can do, a fast enough serial computer also can do.

comment by DanArmak · 2014-01-18T16:29:18.034Z · LW(p) · GW(p)

the fact that the only "intelligence" that we know of is implemented in a massively parallel way should give you pause as to assuming that it can be done serially.

An optimization process (evolution) tried and succeeded at producing massively-parallel biological intelligence.

No optimization process has yet tried and failed to produce serial-processing based intelligence. Humans have been trying for very little time, and our serial computers may be barely fast enough, or may only become fast enough some years from now.

The fact that parallel intelligence could be created is not evidence that other kinds of intelligence can't be created. Talking about "the only intelligence we know of" ignores the fact that no process ever tried to create a serial intelligence, and so of course none was created.

Unless of course the kind of AI that humans create is nothing like the human mind

That's quite possible.

The question is whether algorithms performing similar functions could be implemented serially. Why do you think that they can be?

All algorithms can be implemented on our Turing-complete computers. The question is what algorithms we can successfully design.

Replies from: pianoforte611
comment by pianoforte611 · 2014-01-18T16:32:35.948Z · LW(p) · GW(p)

Why do you think that intelligence can be implemented serially?

Replies from: DanArmak
comment by DanArmak · 2014-01-18T20:08:54.816Z · LW(p) · GW(p)

What exactly do you mean by 'serially'? Any parallel algorithm can be implemented on a serial computer. And we do have parallel computer architectures (multicore/multicpu/cluster) that we can use for speedups, but that's purely an optimization issue.
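A minimal sketch of that point, assuming nothing about real brains: a synchronous "parallel" update of a few neuron-like units, computed one unit at a time on a serial machine. The network, weights and sigmoid rule below are made up purely for illustration.

```python
import math

def step(states, weights):
    """One synchronous update: conceptually every unit fires 'in parallel',
    but we just compute the new states one at a time in a serial loop."""
    new_states = []
    for i in range(len(states)):
        total_input = sum(w * s for w, s in zip(weights[i], states))
        new_states.append(1.0 / (1.0 + math.exp(-total_input)))  # sigmoid activation
    return new_states  # identical result to a genuinely parallel update

# Tiny 3-unit network with made-up numbers:
weights = [[0.0, 1.0, -1.0],
           [0.5, 0.0, 0.5],
           [-1.0, 1.0, 0.0]]
states = [0.2, 0.8, 0.5]
print(step(states, weights))
```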

comment by SuspiciousTitForTat · 2014-01-15T02:30:26.151Z · LW(p) · GW(p)

Society, through selection in the survival-of-the-fittest sense, pushes people to be of service: to be interesting, useful, effective, and even altruistic.

I suspect, and would like to know your opinion on this, that for those social and traditional reasons we are biased against a life of personal hedonic exploration, even if, for some particular kinds of minds, that means literally reading internet comics, downloading movies and multiplayer games for free, exercising near your home, having a minimal number of friends and relationships, masturbating frequently, and eating unhealthily for as long as the cash lasts.

So, two questions: do you think we are biased against these things, and do you think doing this is a problem?

Replies from: DanielLC, somervta
comment by DanielLC · 2014-01-16T01:04:59.368Z · LW(p) · GW(p)

What do you mean by biased? Is there a difference between being biased towards something and desiring to do it?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-16T09:47:29.799Z · LW(p) · GW(p)

For example, a bias could be if your prediction of how much you will enjoy X is systematically smaller than how much you actually do enjoy X when you are doing it.
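One operational way to read this (a hypothetical sketch; the data and the specific measure are my own invention, not Viliam_Bur's): collect predicted and experienced enjoyment ratings for the same activities and check whether the average error sits systematically on one side of zero.

```python
# Hypothetical ratings on a 0-10 scale for the same activities.
predicted   = [3, 4, 2, 5, 3]   # "how much will I enjoy this?"
experienced = [6, 7, 5, 6, 5]   # "how much did I actually enjoy it?"

errors = [e - p for p, e in zip(predicted, experienced)]
mean_error = sum(errors) / len(errors)
print(mean_error)  # a mean consistently above zero suggests systematic under-prediction
```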

Replies from: DanielLC
comment by DanielLC · 2014-01-17T01:51:17.297Z · LW(p) · GW(p)

So what you're asking is if people are good at maximizing their own happiness?

We are not. Our happiness is set up to make sure we maximize inclusive genetic fitness. Rather than fixing a bias, evolution can simply account for it. For example, the joy of sex does not compare with the discomfort of pregnancy, but due to time discounting, it's enough to make women want to have sex.
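A hedged sketch of the time-discounting point; the payoffs and the per-month discount factor below are made up purely for illustration, but they show how a small immediate reward can outweigh a much larger cost that only arrives months later.

```python
# Exponential discounting: the present value of a payoff t periods away
# is payoff * delta**t, with 0 < delta < 1.
delta = 0.7            # per-month discount factor (illustrative only)
immediate_reward = 10  # the joy now
delayed_cost = -50     # the discomfort, roughly 9 months away

present_value = immediate_reward + delayed_cost * delta ** 9
print(present_value)   # about 10 - 2 = +8: the delayed cost barely registers
```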

As for what would maximize happiness, I'm not an expert. You'd need to ask a psychologist. I'm given to understand that doing things that at first appear to make you happy will tend to be absorbed by hedonic adaptation, returning you to your setpoint, and have little lasting effect. The obvious conclusion from that is that no matter what you do, your happiness will be the same, but I'm pretty sure that's not right either. People can change how generally happy they are.

I am in favor of happiness, so all else being equal, I'd prefer it if people were more successful at making themselves happy.

comment by somervta · 2014-01-16T04:44:23.911Z · LW(p) · GW(p)

what do you mean by 'personal hedonic exploration'? The things you list don't sound very exploratory...

comment by therufs · 2014-03-17T15:34:23.360Z · LW(p) · GW(p)

What's the most useful thing for a non-admin to do with/about wiki spam?

comment by [deleted] · 2014-01-29T13:31:30.705Z · LW(p) · GW(p)

When will the experience machine be developed?

comment by blacktrance · 2014-01-15T20:39:23.652Z · LW(p) · GW(p)

Average utilitarianism seems more plausible than total utilitarianism, as it avoids the repugnant conclusion. But what do average utilitarians have to say about animal welfare? Suppose a chicken's maximum capacity for pleasure/preference satisfaction is lower than a human's. Does this mean that creating maximally happy chickens could be less moral than creating non-maximally happy humans?

Replies from: DanielLC
comment by DanielLC · 2014-01-16T00:53:51.168Z · LW(p) · GW(p)

My intuition is that chickens are less sentient, and that is sort of like thinking slower. Perhaps a year of a chicken's life is equivalent to a day of a human's. A day of a chicken's life adds less to the numerator than a day of a human's, but it also adds less to the denominator.
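A minimal sketch of the numerator/denominator point, with invented numbers: if a chicken-day counts for only a small fraction of a human-day of experience, it gets weighted down both in the total utility (the numerator) and in the total experience being averaged over (the denominator).

```python
# Each entry: (utility per day, sentience weight, number of days) -- all numbers invented.
populations = [
    (1.0, 1.0, 365),        # one human-year
    (0.8, 1.0 / 365, 365),  # one chicken-year, "worth" roughly a human-day of experience
]

numerator = sum(u * w * d for u, w, d in populations)  # weighted utility
denominator = sum(w * d for _, w, d in populations)    # weighted experience
print(numerator / denominator)  # ~0.9995: the chicken barely moves the average either way
```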

Replies from: Dan_Weinand
comment by Dan_Weinand · 2014-01-16T07:13:53.542Z · LW(p) · GW(p)

Maybe I'm way off base here, but it seems like average utilitarianism leads to a disturbing possibility itself: that 1 super happy person is considered a superior outcome to 1000000000000 pretty darn happy people. Please explain how, if at all, I'm misinterpreting average utilitarianism.
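For what it's worth, the arithmetic behind that worry is straightforward (numbers invented): average utilitarianism ranks worlds by mean utility, total utilitarianism by the sum.

```python
# With everyone in a world equally happy, the mean is just the per-person utility,
# while the total scales with population size.
def average(utility, count):
    return utility

def total(utility, count):
    return utility * count

print(average(10, 1), average(9, 10**12))  # 10 vs 9    -> average utilitarianism prefers the lone person
print(total(10, 1), total(9, 10**12))      # 10 vs 9e12 -> total utilitarianism prefers the trillion
```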

Replies from: DanielLC
comment by DanielLC · 2014-01-17T01:53:35.192Z · LW(p) · GW(p)

I think you just have different intuitions than average utilitarians. I have talked to someone who saw no reason why having a higher population is good in and of itself.

I am somewhat swayed by an anthropic argument. If you live in the first universe, you'll be super happy. If you live in the second, you'll be pretty darn happy. Thus, the first universe is better.

Replies from: DanArmak
comment by DanArmak · 2014-01-18T12:32:54.043Z · LW(p) · GW(p)

On the other hand, you often need to consider that you're less likely to live in one universe than in another. For instance, if you could make 10% of the population vastly happier by killing the other 90%, you need to factor in the 10% chance of survival.
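A small worked example of that weighting, with invented utilities (and counting not surviving as zero, purely for illustration): suppose killing 90% of the population raises the survivors' happiness from 5 to 8.

```python
survival_prob = 0.1
u_before, u_after = 5.0, 8.0   # illustrative per-person utilities

average_over_whoever_exists = u_after       # 8.0: looks like an improvement
expected_for_you = survival_prob * u_after  # 0.8: once the 90% chance of not surviving is priced in
print(average_over_whoever_exists, expected_for_you, "vs the status quo of", u_before)
```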

Replies from: DanielLC
comment by DanielLC · 2014-01-19T03:23:36.044Z · LW(p) · GW(p)

I don't buy into that theory of identity. The way the universe works, observer-moments are arranged in lines. There's no reason this is necessary in principle. It could be a web where minds split and merge, or a bunch of Boltzmann brains that appear and vanish after a nanosecond. You are just a random one of the observer-moments. And you have to be one that actually exists, so there's a 100% chance of survival.

If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.

Replies from: DanArmak
comment by DanArmak · 2014-01-19T11:04:17.186Z · LW(p) · GW(p)

You are just a random one of the observer-moments.

I don't think the word "you" is doing any work in that sentence.

Personal identity may not exist as an ontological feature on the low level of physical reality, but it does exist at the high level of our experience, and I think it's meaningful to talk about identities (lines of observer-moments) which may die (the line ends).

If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.

I'm not sure I understand what you mean (I don't endorse average utilitarianism in any case). Do you mean that I might want to maximize the average of the utilities of my possible time-lines (due to imperfect knowledge), weighted by the probability of those time-lines? Isn't that just maximizing expected utility?

Replies from: DanielLC
comment by DanielLC · 2014-01-19T22:10:23.630Z · LW(p) · GW(p)

Personal identity may not exist as an ontological feature on the low level of physical reality, but it does exist in the high-level of our experience and I think it's meaningful to talk about identities (lines of observer-moments) which may die (the line ends).

I don't think that's relevant in this context. You are a random observer. You live.

I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters. But if you just think that you're not going to have as much total happiness because you don't live as long, then either you're fundamentally mistaken or the argument I just gave is.

I'm not sure I understand what you mean

If "you" are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn't help.

weighted by the probability of those time-lines?

If you're saying that it's more likely to be a person who has a longer life, then I guess our "different" views on identity probably are just semantics, and you end up with the form of average utilitarianism I was originally suggesting.

Replies from: DanArmak
comment by DanArmak · 2014-01-20T09:15:57.217Z · LW(p) · GW(p)

You are a random observer.

That's very different from saying "you are a random observer-moment" as you did before.

I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters.

I consider it intrinsically important to have a personal future. If I am now a specific observer - I've already observed my present - then I can drastically narrow down my anticipated future observations. I don't expect to be any future observer existing in the universe (or even near me) with equal probability; I expect to be one of the possible future observers who have me in their observer-line past. This seems necessary to accept induction and to reason at all.

If "you" are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn't help.

But in the actual universe, when making decisions that influence the future of the universe, I do not treat myself as a random person; I know which person I am. I know about the Rawlsian veil, but I don't think we should have decision theories that don't allow us to optimize the utility of observers similar to myself (or belonging to some other class) rather than of all observers in the universe. We should be allowed to say that even if the universe is full of paperclippers who outnumber us, we can just decide to ignore their utilities and still have a consistent utilitarian system.

(Also, it would be very hard to define a commensurable 'utility function' for all 'observers', rather than just for all humans and similar intelligences. And your measure function across observers - does a lizard have as many observer-moments as a human? - may capture this intuition anyway.)

I'm not sure this is in disagreement with you. So I still feel confused about something, but it may just be a misunderstanding of your particular phrasing or something.

If you're saying that it's more likely to be a person who has a longer life,

I didn't intend that. I think I should taboo the verb "to be" in "to be a person", and instead talk about decision theories which produce optimal behavior - and then in some situations you may reason like that.

Replies from: DanielLC
comment by DanielLC · 2014-01-20T19:58:45.960Z · LW(p) · GW(p)

That's very different from saying "you are a random observer-moment" as you did before.

I meant observer-moment. That's what I think of when I think of the word "observer", so it's easy for me to make that mistake.

If I am now a specific observer - I've already observed my present - then I can drastically narrow down my anticipated future observations.

If present!you anticipates something, it makes life easy for future!you. It's useful. I don't see how it applies to anthropics, though. Yous aren't in a different reference class than other people. Even if they were, it can't just be future!yous that are one reference class. That would mean that whether or not two yous are in the same reference class depends on the point of reference. First!you would say they all have the same reference class. Last!you would say he's his own reference class.

I do not treat myself as a random person; I know which person I am.

I think you do if you use UDT or TDT.

Replies from: DanArmak
comment by DanArmak · 2014-01-22T10:16:20.842Z · LW(p) · GW(p)

I think you do if you use UDT or TDT.

I'm not an expert, but I got the impression that UDT/TDT only tells you to treat yourself as a random person from the class of persons implementing the same decision procedure as yourself. That's far more narrow than the set of all observers.

And it may be the correct reference class to use here. Not future!yous but all timeline!yous - except that when taking a timeful view, you can only influence future!yous in practice.

comment by [deleted] · 2014-01-15T20:33:50.923Z · LW(p) · GW(p)

Don't raw utilitarians mind being killed by somebody who thinks they suffer too much?

Replies from: DanArmak, DanielLC
comment by DanArmak · 2014-01-18T15:07:06.882Z · LW(p) · GW(p)

Of course they mind, since they disagree and think that someone is wrong! If they don't disagree, either they've killed themselves already or it becomes an assisted suicide scenario.

Replies from: None
comment by [deleted] · 2014-01-18T18:32:28.559Z · LW(p) · GW(p)

Yeah, right, thanks.

comment by DanielLC · 2014-01-16T00:55:18.897Z · LW(p) · GW(p)

Why would we care what someone else thinks?

As for the case where I myself thought I suffered too much and wouldn't do much to help anyone else out: just because I'd have an explicit preference to die doesn't mean that I don't have instincts that resist it.

Replies from: None
comment by [deleted] · 2014-01-17T17:17:44.083Z · LW(p) · GW(p)

Because, as far as I understand, if a utilitarian thinks that killing you will have positive expected utility, he ought to kill you.

doesn't mean that I don't have instincts that resist it

So, if you were completely rational, you would kill yourself without hesitation in this situation, right?

Just in case, I didn't downvote you.

comment by Pablo (Pablo_Stafforini) · 2014-01-15T07:00:02.598Z · LW(p) · GW(p)

_Stupid question: Wouldn't a calorie restriction diet allow Eliezer to lose weight?_

Not a single person who's done calorie restriction consistently for a long period of time is overweight. Hence, it seems that the problem of losing weight is straightforward: just eat fewer calories than you would normally.

I posted a version of this argument on Eliezer's Facebook wall and the response, which several people 'liked', was that there is a selection effect involved. But I don't understand this response, since "calorie restriction" is defined as restricting calories below what a person would eat on an ad lib diet (as distinct from a diet that involves having a weight that falls below what the person would weigh normally).

ETA: There's now a lucid post on Eliezer's Facebook wall that answers my question very well.

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2014-01-15T10:49:43.220Z · LW(p) · GW(p)

Let's assume the following extremely simplified equation is true:

CALORIES_IN = WORK + FAT

Usually the conclusion is "less calories = less fat". But it could also be "less calories = less work". Not just in the sense that you consciously decide to work less, but also in the sense that your body can make you unable to work. Which means: you are extremely tired, unable to focus, and in the worst case you fall into a coma.

The problem with calorie restriction is that it doesn't come with a switch for "please don't make me tired or put me into a coma, just reduce my fat". -- Finding the switch is the whole problem.

If your metabolic switch is broken, calorie restriction can simply send you into zombie mode, and your weight remains the same.
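A toy simulation of that idea; everything here (the numbers, the "switch" flag) is invented purely to illustrate the simplified equation above, not a claim about real metabolism.

```python
# CALORIES_IN = WORK + FAT, with a crude "metabolic switch": the body may
# respond to fewer calories by cutting energy spent on work rather than
# by burning stored fat.
def one_day(calories_in, baseline_work, switch_found):
    if switch_found:
        work = baseline_work                    # keep working, cover the gap from fat
    else:
        work = min(baseline_work, calories_in)  # downregulate: tiredness, brain fog
    fat_change = calories_in - work             # negative means fat burned
    return work, fat_change

print(one_day(1500, 2500, switch_found=True))   # (2500, -1000): weight goes down
print(one_day(1500, 2500, switch_found=False))  # (1500, 0): zombie mode, weight stays the same
```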

Replies from: niceguyanon, Pablo_Stafforini
comment by niceguyanon · 2014-01-17T20:43:02.862Z · LW(p) · GW(p)

The problem with calorie restriction is that it doesn't come with a switch for "please don't make me tired or put me into a coma, just reduce my fat". -- Finding the switch is the whole problem.

Great explanation. I didn't consider this before, will update accordingly.

comment by Pablo (Pablo_Stafforini) · 2014-01-16T05:04:00.805Z · LW(p) · GW(p)

Thanks. When you say that

it also could be "less calories = less work"

do you mean that this is a plausible hypothesis, supported by evidence, or just a speculation which would, if true, explain why calorie restriction might not work for Eliezer? If the former, could you point me to the relevant evidence?

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2014-01-16T09:44:10.384Z · LW(p) · GW(p)

I think I remember Eliezer complaining somewhere that calorie restriction (or something like that) makes him tired or just unable to do his work, but I don't remember the source. I'm pretty sure that he tried it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-16T15:29:55.520Z · LW(p) · GW(p)

Yes, he did. He found that as little as missing a meal knocked him out.

Currently, he's found a ketogenic diet which is working for him at least in the short run.

comment by ChristianKl · 2014-01-16T14:39:54.702Z · LW(p) · GW(p)

Do a three-day fast and see if you can still be as physically active at the end as you are at the moment. That's a very easy experiment.

If you don't believe that changes in eating have any effect on how your body works, there's not even a price involved.

comment by ChristianKl · 2014-01-15T14:45:12.487Z · LW(p) · GW(p)

Not a single person who's done calorie restriction consistently for a long period of time is overweight.

All the people for whom the diet produces problems quit it and don't engage in it consistently. If your brain function goes down because your body downregulates your metabolism to deal with having fewer calories, and you want to keep your brain functioning at a high level, you will stop engaging in the diet consistently.

There's also the issue that you are dealing with a hunger process that evolved over tens of millions of years and trying to beat it with a cerebral decision-making process that evolved over hundreds of thousands of years.

It's just like telling someone with tachycardia of 100 heart beats per minute to switch to 80 beats per minute. It's just a switch: just go and change your heart rate. If I sit down on the toilet, my body has no problem dropping 20 beats per minute in a few seconds.

I, however, have no way to fire off the same process by a cognitive decision. Food intake seems like it should be easier to manage than pulse via cognitive decisions, because you can make voluntary decisions over short time frames. But over long time frames that doesn't seem to be the case.

comment by CaractacusRex · 2014-01-15T02:44:02.628Z · LW(p) · GW(p)

I’m curious, but despite a lot of time poking around Wikipedia, I don’t have the means to discriminate between the possibilities. Please help me understand. Is there reason to believe that an infinite quantity of the conditions required for life is/was/will be available in any universe or combination of universes?

Replies from: VAuroch
comment by VAuroch · 2014-01-16T06:57:04.758Z · LW(p) · GW(p)

I am not an expert, but as far as I know this is essentially a lemma/corollary to the anthropic question ("Why do we, intelligent humans, exist?"), which is obviously a Hard Problem.

All answers to the big question I'm aware of, other than religious ones, seem to answer Yes to the corollary. This is largely on grounds of what amounts to a proof by contradiction: if there weren't an infinite quantity of the conditions available for life, the probability that we exist would be infinitesimal, and it would strain credibility that we exist.

comment by [deleted] · 2014-01-15T02:37:32.940Z · LW(p) · GW(p)

If the rate of learning of an AGI is t then is it correct to assume that the rate of learning of a FAI would be t+x where x > 0, considering that it would have the necessary additional constraints?

If this is the case, then a non-Friendly AI would eventually (possibly quite quickly) become smarter than any FAI built. Are there upper limits on intelligence, or would there be diminishing returns as intelligence grows?

Replies from: somervta
comment by somervta · 2014-01-16T04:43:04.549Z · LW(p) · GW(p)

No, there's no particular reason to think an FAI would be better at learning than an UFAI analogue, at least not as far as I can see.

However, one of the problems that needs to be solved for FAI (stable self-modification) could certainly make an FAI's rate of self-improvement faster than that of a comparable AI which has not solved that problem. There are other questions that need to be answered there (does the AI realize that modifications will go wrong and therefore not self-modify? If it's smart enough to notice the problem, won't its first step be to solve it?), and I may be off base here.

I'm not sure it's that useful to talk about an FAI vs. an analogous UFAI, though. If an FAI is built, there will be many significant differences between the resulting intelligence and the one that would have been built if the FAI was not, simply due to the different designers. In terms of functioning, the different design choices, even those not relevant to FAI (if that's even meaningful - FAI may well need to be so fully integrated that all the aspects are made with it in mind), may be radically different depending on the designer and are likely to have most of the effect you're talking about.

In other words, we don't know shit about what the first AGI might look like, and we certainly don't know enough to do detailed separate counterfactuals

Replies from: JacekLach
comment by JacekLach · 2014-01-16T15:40:01.729Z · LW(p) · GW(p)

No, there's no particular reason to think an FAI would be better at learning than an UFAI analogue, at least not as far as I can see.

I believe you have this backwards - the OP is asking whether a FAI would be worse at learning than an UFAI, because of additional constraints on its improvement. If so:

then a non Friendly AI would eventually (possibly quite quickly) become smarter than any FAI built.

Of course one of the first actions of a FAI would be to prevent any UFAI from being built at all.

Replies from: somervta
comment by somervta · 2014-01-17T02:14:55.173Z · LW(p) · GW(p)

I assumed otherwise because of:

If the rate of learning of an AGI is t then is it correct to assume that the rate of learning of a FAI would be t+x where x > 0,

Which says the FAI is learning faster. But that would make more sense of the last paragraph.

I may have a habit of assuming that the more precise formulation of a statement is the intended/correct interpretation, which, while great in academia and with applied math, may not be optimal here.

Replies from: handoflixue
comment by handoflixue · 2014-01-19T08:42:01.545Z · LW(p) · GW(p)

Read "rate of learning" as "time it takes to learn 1 bit of information"

So UFAI can learn 1 bit in time T, but a FAI takes T+X

Or, at least, that's how I read it, because the second paragraph makes it pretty clear that the author is discussing UFAI outpacing FAI. You could also just read it as a typo in the equation, but "accidentally miswrote the entire second paragraph" seems significantly less likely. Especially since "Won't FAI learn faster and outpace UFAI" seems like a pretty low probability question to begin with...

Erm... hi, welcome to the debug stack for how I reached that conclusion. Hope it helps ^.^