Yet more "stupid" questions

post by NancyLebovitz · 2013-08-28T15:58:00.476Z · LW · GW · Legacy · 342 comments

This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous thread is at close to 500 comments.

Comments sorted by top scores.

comment by tgb · 2013-08-30T12:46:59.361Z · LW(p) · GW(p)

I occasionally have dreams in which I am playing an RTS videogame like Starcraft. In these, I am a disembodied entity seeing the world only as it might be displayed in such a game. During those dreams, this feels natural and unsurprising and I don't give the matter a second thought. In fact, I've been having these dreams for a while now and only just recently noticed the odd fact that it's not me sitting at a computer playing the game; the game is the only thing in the world at all.

Do other people have dreams in which they are not human-shaped, or otherwise experience the dream from a perspective that is very different from real life?

Replies from: knb, None, CronoDAS, Ishaan, FiftyTwo, RowanE
comment by knb · 2013-08-31T10:16:21.478Z · LW(p) · GW(p)

I used to have Age of Empires dreams. I've even had Tetris dreams.

Replies from: tgb
comment by tgb · 2013-08-31T21:17:16.552Z · LW(p) · GW(p)

Tetris dreams are well-known phenomena, but the reports of them I've read are unclear as to the nature of the dreams themselves. Do you just see falling blocks? Or is it as if you are in a Tetris universe with nothing else? Can anyone comment or elaborate on the sensation?

Replies from: None, knb
comment by [deleted] · 2013-09-02T11:02:11.218Z · LW(p) · GW(p)

I had numerous Tetris dreams during my peak of playing and for many months afterwards. My own experience was mostly going about my business in ordinary cityscapes, office spaces, rooms in my house, but with Tetris pieces constantly falling into gaps between objects. Rotate/drop was under my control but not always dependably so, sometimes creating an experience of panic as there was often some unknown but disastrous consequence of failure.

During this period the incidence of such dreams increased with more Tetris-playing, but they also occurred more often when I was stressed at work, in which case the Tetris shapes were also somehow related to the complex statistical / simulation programming I was doing in my day job.

I gave up Tetris cold-turkey when I began to see imaginary shapes falling between real objects during waking hours. Other games since then had similar but far smaller effects on my dream states.

comment by knb · 2013-09-01T04:50:17.045Z · LW(p) · GW(p)

I'm trying to recall; I haven't played Tetris in a few years. IIRC, it was like playing Tetris on my computer, but without anything in my peripheral vision.

comment by [deleted] · 2013-09-05T06:24:05.474Z · LW(p) · GW(p)

I get something similar, in that I frequently lose my perspective as a humanoid actor in my dreams. It appears that in my own dreams I am more or less incapable of simulating another living being without subjectively experiencing that being's thoughts and emotions at the same time. Perhaps for that reason my dreams are usually of empty wilderness or stars flying around. However, a few times per month I wake up very confused at just being one person, because while asleep I experienced the thoughts of multiple individuals simultaneously, including whatever emotions they felt and their relative lack of information about each other's perspectives. The maximum number of people I've been at once was seven, where three beings were fighting another two beings to save the other three from being tortured.

comment by CronoDAS · 2013-08-31T00:24:58.315Z · LW(p) · GW(p)

I've seen top-down perspectives in dreams, such as those in 2D RPGs. I feel like I'm playing a video game, but I don't have an awareness of a controller or anything; the characters just do what I tell them, and the "screen" is my entire visual field. (The actual experience of playing a video game tends to be similar: I almost never think about the controller or my hand; I just make stuff happen.) I also tend not to have much of a kinesthetic sense in dreams I remember, either.

Another weird thing: Everything I try to type in dreams is invariably misspelled. Once, in a dream, I was trying to Google something, but the text I was "typing" in the search bar kept changing pretty much at random. Only the letters that I'm "looking at" during any given moment stay what they are.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T18:01:48.771Z · LW(p) · GW(p)

Once, in a dream, I was trying to Google something, but the text I was "typing" in the search bar kept changing pretty much at random.

Happens to me too, except instead of googling it's usually me trying to write something down, e.g. someone's phone number, and failing to make the text legible, or realising I wrote some nonsense instead of what I tried to write.

Actually, this is one of the techniques for lucid dreaming -- how to realize that you are in a dream. You need a test that will reliably give different results in reality and in dreams. Different things work for different people, but reading and writing is among frequent examples. Other examples: counting, or trying to levitate. (With levitation it is the other way round: it works only in dreams.)

Strange. I just now realized I have probably never used a computer in my dreams, although I spend most of my days at a computer. How is that possible? An ad-hoc explanation is that precisely because my life is so much connected with computers, I don't perceive the computer as a "computer", but merely as an extension of myself, as another input/output channel. Most of my dreams are about being with people or walking in nature; and I actually do very little of that.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-01T17:44:35.203Z · LW(p) · GW(p)

Failing to achieve any kind of goals is a very common topic of dreams.

comment by Ishaan · 2013-11-26T07:27:49.373Z · LW(p) · GW(p)

Yes to both.

It's very common for me to be human but different (child, woman, or very different looking man). Most common non-human forms are non-human-ape, wolf, or bird. Sometimes it's an imaginary monster of some sort. But dreaming in non-human forms is generally fairly rare.

The most common non-embodied perspectives are eye-level view, a television-style view, or looking down from a distance. In these cases, I'll either self-identify with one of the bodies, or simply be an observer. This frequently switches mid-storyline.

comment by FiftyTwo · 2013-08-30T23:55:23.723Z · LW(p) · GW(p)

I've had similar dreams.

In general I don't think I'm aware of my self/body in dreams. Occasionally I'm different people but don't notice.

comment by RowanE · 2013-08-30T16:03:06.005Z · LW(p) · GW(p)

I've had some dreams like that - a few were specifically of the game Supreme Commander, and I also occasionally am in third person in dreams, as if watching from the screen of a third-person game. I don't think it's really "very different from real life"; it's close to the experience of being immersed in a videogame. It's just that rather than overlooking details you're not paying attention to, those details simply don't exist, because it's a dream.

comment by CoffeeStain · 2013-08-30T04:01:15.826Z · LW(p) · GW(p)

Is LSD like a thing?

Most of my views on drugs and substances were formed, unfortunately, by history and by invalid perceptions of their users and of those who appear to support their legality most visibly. I was surprised to find the truth about acid at least a little further to the side of "safe and useful" than my longtime estimation. This opens up the possibility of an attempt at recreational and introspectively therapeutic use, if only as an experiment.

My greatest concern would be that I would find the results of a trip irreducibly spiritual, or some other nonsense. That I would end up sacrificing a lot of epistemic rationality for some of the instrumental variety, or perhaps a loss of both in favor of living off of some big, new, and imaginary life changing experience.

In short, I'm comfortable with recent life changes and recent introspection, and I wonder whether I should expect a trip to reinforce and categorize those positive experiences, or else replace them with something farcical.

Also I should ask about any other health dangers, or even other non-obvious benefits.

Replies from: NancyLebovitz, AndyWood, gattsuru, hyporational, FiftyTwo
comment by NancyLebovitz · 2013-08-30T14:19:11.459Z · LW(p) · GW(p)

One data point here. I've taken a few low-to-lowish dose trips. I'm still the same skeptic/pragmatist I was.

When I'd see the walls billowing and more detail generating out of visual details, I didn't think "The universe is alive!" I thought "my visual system is alive".

I did have an experience which-- to the extent I could put it into words-- was that my sense of reality was something being generated. However, it didn't go very deep-- it didn't have aftereffects that I can see. I'm not convinced it was false, and it might be worth exploring to see what's going on with my sense of reality.

comment by AndyWood · 2013-08-30T07:07:15.446Z · LW(p) · GW(p)

I won't be able to do it justice in words, but I like to try.

If you value your current makeup as a "rationalist" - LSD will not necessarily help with that. Whatever your current worldview, it is not "the truth", it is constructed, and it will not be the same after you come down.

You can't expect a trip to do anything in particular, except maybe blow your mind. A trip is like finding out you were adopted. It's discovering a secret hidden in plain sight. It's waking up to realize you've never been awake before - you were only dreaming you were awake. It's finding out that everything familiar, everything you took for granted, was something else all along, and you had no idea.

No matter how much you've invested in the identity of "rationalist", no matter how much science you've read... Even if you know how many stars there are in the visible universe, and how many atoms. Even if you've cultivated a sense for numbers like that, real reality is so much bigger than whatever your perception of it is. I don't know how acid works, but it seems to open you in a way that lets more of everything in. More light. More information. Reality is not what you think it is. Reality is reality. Acid may not be able to show you reality, but it can viscerally drive home that difference. It can show you that you've been living in your mind all your life, and mistaking it for reality.

It will also change your sense of self. You may find that your self-concept is like a mirage. You may experience ego-loss, which is like becoming nobody and nothing in particular, only immediate sensory awareness and thought, unconnected to what you think of as you, the person.

I don't know about health dangers. I never experienced any. Tripping does permanently change the way you view the world. It's a special case of seeing something you can't un-see. Whether it's a "benefit" ... depends a lot on what you want.

Replies from: alternativenickname, RowanE
comment by alternativenickname · 2013-09-25T13:00:54.688Z · LW(p) · GW(p)

(Created an alternative username for replying to this because I don't want to associate my LSD use with my real name.)

I'd just like to add a contrary datapoint - I had one pretty intense trip that you might describe as "fucking weird", which was certainly mind-blowing in a sense. My sense of time transformed: it stopped being linear and started feeling like a labyrinth that I could walk in. I alternately perceived the other people in the room as being real separate people or as parts of my own subconscious, and at one point it felt like my unity of consciousness shattered into a thousand different strands of thought, which I could perceive as complex geometric visualizations...

But afterwards, it didn't particularly feel like I'd learned anything. It was a weird and cool experience, but that was it. You say that one's worldview won't be the same after coming down, but I don't feel like the trip changed anything. At most it might've given me some mildly interesting hypotheses about the way the brain might work.

I'm guessing that the main reason for this might be that I already thought of my reality as being essentially constructed by my brain. Tripping did confirm that a bit, but then I never had serious doubts about it in the first place.

comment by RowanE · 2013-08-30T16:45:29.332Z · LW(p) · GW(p)

I don't think describing the experience itself is very helpful for answering the question. The comment seems as close to an answer of "yes, it's likely you would find the results of a trip irreducibly spiritual or some other nonsense" as someone would actually give, but because of the vagueness that seems to be intrinsic to descriptions of the experience of a trip, I'm not even sure whether you're espousing such things or not.

Replies from: AndyWood
comment by AndyWood · 2013-08-31T04:21:08.807Z · LW(p) · GW(p)

In my experience, it is possible to bring parts of the experience back and subject it to analytical and critical thinking, but it is very challenging. The trip does tend to defy comprehension by the normal mode of consciousness, which is why descriptions have the quality you call "vagueness". In short, distilling more than "irreducibly spiritual nonsense" from the trip takes work, not unlike the work of organizing thoughts into a term paper. It can be done, and the more analytical your habits of thought to begin with, the more success I think you could expect.

comment by gattsuru · 2013-08-30T16:58:44.783Z · LW(p) · GW(p)

I don't imbibe (nor, for that matter, take pretty much anything stronger than caffeine), so I can't offer any information about the experience of its effects on rationality.

From the literature, it has a relatively high ratio of lethal dose to activity threshold (even assuming the lowest supported toxic doses), but that usually doesn't include behavioral toxicity. Supervision is strongly recommended. There's some evidence that psychoactive drugs (even weakly psychoactive drugs like marijuana) can aggravate preexisting conditions or even trigger latent conditions like depression, schizophrenia, and schizoid personality disorder.

comment by hyporational · 2013-09-01T13:05:12.925Z · LW(p) · GW(p)

Another data point here. I've done LSD a couple of times, and didn't find the experience "spiritual" at all.

The experience was mostly visual: illusion of movement in static objects when eyes open, and intense visualization when eyes closed. It's hard to describe these images, but it felt like my visual cortex was overstimulated and randomly generated geometric patterns intertwined with visual memories and newly generated constructs and sceneries. This all happened while travelling through a fractal-like pattern, so I felt the word "trip" was quite fitting. The trip didn't seem to affect my thinking much during or after.

I can see why a susceptible (irrational) mind could find this chemical alteration of consciousness a godly revelation, but I can't imagine taking the stuff for anything other than entertainment purposes. A couple of friends of mine had similar experiences.

LSD is known to cause persistent psychosis, apparently in people who already have latent or diagnosed mental health problems. This is what they teach in my med school, but the epidemiology of the phenomenon was left vague.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-09-01T13:43:06.583Z · LW(p) · GW(p)

I find that LSD does have emotional effects-- for me, it's a stimulant and it tends to cheer me up.

Replies from: hyporational
comment by hyporational · 2013-09-01T14:07:43.046Z · LW(p) · GW(p)

Now that I think about it, I felt quite elated too. Could have been just the novel experience though, hard to say. Some other emotions perhaps intensified too, but I wasn't interested in exploring that avenue.

comment by FiftyTwo · 2013-08-30T23:56:53.317Z · LW(p) · GW(p)

Datapoint: another hallucinogen, ketamine, has been shown to effectively treat depression. Not sure if the mechanisms of LSD are similar.

Replies from: kalium
comment by kalium · 2013-09-01T04:28:25.563Z · LW(p) · GW(p)

The visual system is very complicated, and many different classes of drugs can cause hallucinations in different ways without the overall experience being similar.

Ketamine and LSD do not have similar mechanisms in the brain, nor (from what I've read) are their effects qualitatively similar. LSD is a psychedelic acting as an agonist at 5-HT_2A receptors (among other things, but that's what it shares with other classic psychedelics). Ketamine is a dissociative anesthetic acting as an antagonist at NMDA receptors. LSD is, however, effective against migraines at sub-hallucinogenic doses.

comment by Scott Garrabrant · 2013-08-29T16:50:04.126Z · LW(p) · GW(p)

How does stage hypnotism "work?"

Replies from: sixes_and_sevens, knb, Omid
comment by sixes_and_sevens · 2013-08-30T11:31:01.324Z · LW(p) · GW(p)

Based on the descriptions of thoughtful, educated people who practise hypnosis, it seems useful to think of it as a "suite" of psychological effects such as suggestion, group conformity, hype, etc., rather than a single coherent phenomenon.

comment by knb · 2013-08-30T10:59:15.038Z · LW(p) · GW(p)

Not sure exactly what you want to know here, but here are a few basic points:

  1. Hypnotized people are not unconscious, rather they are fully awake and focused.

  2. Brain scans don't show any signs of abnormal brain activity during hypnosis.

  3. Some psychologists argue hypnotized people are just fulfilling the socially expected role for a hypnotized person.

Replies from: CronoDAS
comment by CronoDAS · 2013-08-31T00:13:17.859Z · LW(p) · GW(p)

Brain scans don't show any signs of abnormal brain activity during hypnosis.

That depends on what you consider "abnormal". The states appear to be the same kind of states that occur in "normal" functioning, but they appear out of the context that they normally appear in. For example, according to one study a person exposed to a painful stimulus and one acting out a hypnotic suggestion to feel pain show similar patterns of brain activation, but a person told to "imagine" feeling pain shows a different one.

In general, brain scans do tend to show a difference between hypnotized subjects and subjects asked to pretend to be hypnotized.

My interpretation of these results is that hypnosis consists of the conscious mind telling the perceptual systems to shut up and do what they're told.

comment by Omid · 2013-08-30T03:44:18.013Z · LW(p) · GW(p)

Do you know how normal hypnotism works?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2013-08-30T04:09:52.846Z · LW(p) · GW(p)

No

Replies from: Omid
comment by Omid · 2013-08-30T04:57:48.338Z · LW(p) · GW(p)

The subject basically pretends that everything the hypnotist says is true. Have you ever played a video game and got so wrapped up in the virtual world that you just stopped noticing the real world? That's called immersion, and it's achieved by keeping your attention focused on the game. When your attention drifts away from the game, you start noticing that it's 2 am or that you've been playing for four hours, and you remember that you are not in the video game, you're just playing a video game. But as long as your attention remains on the game, you feel like you are actually living in the video game's world. Gamers love the feeling of immersion, so developers put a lot of work into figuring out how to keep gamers' attention, which maintains the immersion.

Hypnosis works on the same principle. The hypnotist uses the patient's full attention to create an imaginary world that feels real to the patient. The difference between video games and hypnosis is that hypnosis patients actively give their attention to the hypnotist, while gamers passively expect the game to take their attention. When a hypnotic induction starts, the subject is asked to imagine something in great detail, effectively putting the onus on the subject to make sure their attention doesn't drift. But when a video game starts, the gamer just watches the screen and expects the game to be interesting enough to keep her attention.

Hypnotism is more immersive than video games because the subject is helping the hypnotist keep her attention. This allows the hypnotist to create a virtual reality that is more convincing than video games. But it's still just a game of pretend.

Replies from: NancyLebovitz, Viliam_Bur
comment by NancyLebovitz · 2013-08-31T09:37:40.295Z · LW(p) · GW(p)

From Derren Brown's Tricks of the Mind:

I always used to finish with the invisibility suggestion, but as I would generally follow the performances with an informal chat about it all, I would always ask the subjects what they had actually experienced.

Out of the, say, ten or so subjects who were given the suggestion, the responses might break down in the following way. Two had obviously been able to see me and had been openly separated from the rest of the group. Two or three would swear that the puppet and chair were moving all on their own and they could not see me, even though they may have guessed I was somehow remotely responsible for the chaos that ensued. The remaining five or six would generally say they were aware I was there moving the objects, but that something in them would keep trying to blank me out, and they could only act as if I were invisible.

comment by Viliam_Bur · 2013-08-31T18:17:39.186Z · LW(p) · GW(p)

A professional hypnotist once told me that it is very difficult to hypnotize "mathematicians" (by which he meant math, physics, and computer science students), because (this was his interpretation) they are too well connected with reality and will not accept nonsense. But he also said that given enough time and trying different hypnotists, probably everyone can be hypnotized.

This happened at a hypnosis training camp, where this guy had an interesting idea: to teach hypnosis more efficiently, he would hypnotize all the participants and give them hypnotic commands to remember the lessons better. And then he would teach the theory and let us do the exercises, as usual. Also, he said that in order to learn hypnosis it is better to be hypnotized first, because then you know what it feels like to be hypnotized, and that knowledge is very useful when hypnotizing others (you have better intuition about what can and cannot work). -- This strategy seemed to work for many participants, most of whom were psychology students. Only two people in the group couldn't be hypnotized: me and one girl, both students of computer science. The only time in my life when I regretted not being more susceptible to hypnosis. So in the end, all I learned was some theory.

Replies from: ChristianKl
comment by ChristianKl · 2013-08-31T19:56:32.949Z · LW(p) · GW(p)

A professional hypnotist once told me that it is very difficult to hypnotize "mathematicians" (by which he meant math, physics, and computer science students), because (this was his interpretation) they are too well connected with reality and will not accept nonsense.

"Connected to reality" is in this context a nice way of saying that someone can't let go and relax. Computer Science/Physics/Math people especially have a problem with forgetting numbers because numbers are way more important for them then the usual person.

Also, he said that in order to learn hypnosis it is better to be hypnotized first, because then you know what it feels like to be hypnotized, and that knowledge is very useful when hypnotizing others (you have better intuition about what can and cannot work).

That's not about having an intuition about what works. Part of hypnotising somebody else effectively involves going into a trance state yourself.

comment by knb · 2013-09-01T04:43:22.404Z · LW(p) · GW(p)

I'm in my mid-twenties. Does it make sense to take a low-dose aspirin?

Replies from: AtTheGreenLight
comment by AtTheGreenLight · 2013-09-02T13:43:36.533Z · LW(p) · GW(p)

No, it does not. Aspirin reduces the risk of heart attacks and strokes but also causes adverse outcomes - most importantly by raising the risk of gastro-intestinal bleeds. For the typical person in their mid-twenties, the risk of a heart attack or stroke is so low that the benefit of aspirin will be almost nil: the absolute value of intervening will be vanishingly small even though the proportional decrease in risk stays the same.

There are many possible effects of taking low-dose aspirin other than those described so far - it may reduce the risk of colon cancer, for instance - but there are many possible adverse outcomes too. Cyclooxygenase - the enzyme targeted by aspirin - is involved in many housekeeping functions throughout the body, in particular the kidneys, the stomach, and possibly erectile tissue.

Studies examining risk versus benefit for low-dose aspirin treatment have found that a cardiovascular risk of about 1.5%/year is necessary for the benefits of aspirin to outweigh the ill effects. Whilst no studies have been conducted on healthy young individuals, I don't think such studies need to be conducted: given that studies in those at much higher cardiovascular risk than someone in their twenties have returned disappointing results, we should not expect any great benefit from such a treatment. Indeed, young people, men in particular, are much more likely to experience trauma than a cardiovascular event, and patients taking low-dose aspirin are much more likely to experience severe bleeding after trauma.

See this article for more information: http://www.sciencebasedmedicine.org/aspirin-risks-and-benefits/

comment by Omid · 2013-08-30T03:56:02.539Z · LW(p) · GW(p)

How do you cure "something is wrong on the Internet" syndrome? It bugs me when people have political opinions that are simplistic and self-congratulating, but I've found that arguing with them wastes time and energy and rarely persuades them.

Replies from: sixes_and_sevens, NancyLebovitz, Viliam_Bur, shminux, FiftyTwo
comment by sixes_and_sevens · 2013-08-30T11:35:58.601Z · LW(p) · GW(p)

Cultivate a sense of warm satisfaction every time you avoid a pointless online debate.

comment by NancyLebovitz · 2013-08-30T14:09:10.884Z · LW(p) · GW(p)

Really think about how very much is wrong on the internet compared to your capacity to try to correct it. I think this might be a case of cultivating scope sensitivity.

Or (which is what I think I do) combine that with a sense that giving a little shove towards correctness is a public service, but it isn't a strong obligation. This tones the compulsion down to a very moderate hobby.

comment by Viliam_Bur · 2013-08-31T18:54:17.880Z · LW(p) · GW(p)

For me, debating with people on LessWrong somehow cured the syndrome. Now when I see a political debate among non-LessWrongians, the participants seem like retarded people -- I no longer expect them to be reasonable; I don't even expect them to be able to understand logical arguments and process them correctly; I don't feel any hope of conveying anything meaningful to any of them. (At best we could have an illusion of understanding.) Speaking with them would be like speaking with a rock; certainly not tempting.

I am not saying this is a correct model of the world. It is probably exaggerated a bit. I'm just explaining that this is how I feel, and this is what cured the syndrome.

These days the syndrome manifests mostly when speaking with someone whom I consider potentially rational -- someone who feels like a potential LW candidate. It usually ends with me revising my opinion about the candidate, and silently stopping.

So, for me the cure is feeling that the inferential distance between typical internet discussion and rational discussion is so huge that I don't have a chance to overcome it in one debate.

comment by Shmi (shminux) · 2013-08-30T06:33:42.070Z · LW(p) · GW(p)

Realize that it's not their fault, they are just automatons with faulty programming.

comment by FiftyTwo · 2013-08-30T23:57:37.483Z · LW(p) · GW(p)

I just became unwilling to devote the effort to replying.

comment by [deleted] · 2013-08-28T18:50:44.283Z · LW(p) · GW(p)

I am confused by discussions about utilitarianism on LessWrong. My understanding, which comes mostly from the SEP article, was that pretty much all variants of utilitarianism are based on the idea that each person's quality of life can be quantified--i.e., that person's "utility"--and these utilities can be aggregated. Under preference utilitarianism, a person's utility is determined based on whether their values are being fulfilled. Under all of the classical formulations of utilitarianism, everyone's utility function has the same weight when the aggregation is performed, hence the catchy phrase "greatest good for the greatest number".

However, I have also seen LW posts and comments talk about utilitarianism in relation to how much you should value the lives of people close to you compared to other people, and how much you should value abstract things like "freedom" relative to people's lives. This comment thread is one example. These discussions about valuing the lives of others and quantifying abstract values sound a lot like utility maximization under rational choice theory rather than utilitarianism.

So are people conflating utility maximization and utilitarianism, am I getting confused and misunderstanding the distinction, or is something else going on?
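
To make the distinction I have in mind explicit (generic notation, not from the linked thread): classical utilitarianism ranks outcomes by an unweighted sum of everyone's utility,

$$W(o) = \sum_i u_i(o),$$

whereas utility maximization in rational choice theory only says that an agent maximizes its own function $U(o)$, which may weight family, strangers, or abstract values like "freedom" however that particular agent happens to care about them.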

Replies from: Kaj_Sotala, Douglas_Knight, blacktrance
comment by Kaj_Sotala · 2013-08-28T20:14:14.546Z · LW(p) · GW(p)

So are people conflating utility maximization and utilitarianism

Often, yes.

comment by Douglas_Knight · 2013-08-28T22:17:32.058Z · LW(p) · GW(p)

It's true that people often conflate utilitarianism with consequentialism, but I don't think that's what's going on here. I think it is quite reasonable to include under utilitarianism moral theories that are pretty close, like weighting people when aggregating. If people think that raw utilitarianism doesn't describe human morality, isn't it more useful for the term to describe people departing from the outpost, rather than the single theory? Abstract values that are not per-person are more problematic to include in the umbrella, but searching for "free" in that post doesn't turn up an example. If your definition is so narrow that you reject Nozick's utility monster as having to do with utilitarianism, then your definition is too narrow. Also, the lack of a normalization means that giving everyone "the same weight" does not clearly pin it down.

comment by blacktrance · 2013-08-28T19:55:07.092Z · LW(p) · GW(p)

This confused me for a long time too. I ultimately came to the conclusion that "utilitarianism" as that word is usually used by LessWrongers doesn't have the standard meaning of "an ethical theory that holds some kind of maximization of utils in the world to be the good", and instead uses it as something largely synonymous with "consequentialism".

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-08-28T20:40:54.310Z · LW(p) · GW(p)

"Consequentialism" is too broad, "utilitarianism" is too narrow, and "VNM rationality" is too clumsy and not generally thought of as a school of ethical thought.

Replies from: blacktrance, blacktrance
comment by blacktrance · 2013-08-28T21:32:16.516Z · LW(p) · GW(p)

It sounds like certain forms of egoism.

comment by blacktrance · 2013-08-28T21:29:07.187Z · LW(p) · GW(p)

Egoism, perhaps?

comment by Scott Garrabrant · 2013-08-30T17:57:47.125Z · LW(p) · GW(p)

What fiction should I read first?

I have read pretty much nothing but MoR and books I didn't like for school, so I don't really know what my preferences are. I am a mathematician and a Bayesianist with an emphasis on the more theoretical side of rationality. I like smart characters that win. I looked at some recommendations on other topics, but there are too many options. If you suggest more than one, please describe a decision procedure that uses information that I have and you don't to narrow it down.

Replies from: Scott Garrabrant, CronoDAS, Armok_GoB, feanor1600, Risto_Saarelma, NancyLebovitz, None, polymathwannabe, Izeinwinter, Turgurth, D_Alex
comment by Scott Garrabrant · 2013-09-01T16:05:26.453Z · LW(p) · GW(p)

Update: I decided on Permutation City, and was unable to put it down until it was done. I am very happy with the book. I am a lot more convinced now that I will eventually read almost all of these, so the order doesn't matter as much.

Replies from: lukstafi
comment by lukstafi · 2013-09-03T20:44:58.233Z · LW(p) · GW(p)

I liked "Diaspora" more.

comment by CronoDAS · 2013-08-31T01:27:22.186Z · LW(p) · GW(p)

Terry Pratchett's Discworld series. I recommend starting with Mort (the fourth book published). The first two books are straight-up parodies of fantasy cliches that are significantly different from what comes afterward, and the third book, Equal Rites, I didn't care for very much. Pratchett said that Mort was when he discovered plot, and it's the book that I recommend to everyone.

Replies from: taelor, kgalias
comment by taelor · 2013-08-31T10:33:13.379Z · LW(p) · GW(p)

I can second Discworld.

comment by kgalias · 2013-09-02T12:02:32.768Z · LW(p) · GW(p)

I particularly enjoyed the City Watch series. It also seems to be the most "non-ridiculous" and down to earth, which can help at the start.

Replies from: CronoDAS
comment by CronoDAS · 2013-09-02T21:23:13.636Z · LW(p) · GW(p)

It actually took me a while to warm up to the Watch books; when I read Guards! Guards!, I was expecting more laugh-out-loud moments of the kind that there were in the sillier early books.

/me read Discworld in publication order

comment by Armok_GoB · 2013-08-30T23:58:08.445Z · LW(p) · GW(p)

Well, if you liked MoR, there are the two other Big Rationalist Fanfics:

Also in a similar style: http://www.sagaofsoul.com/

Then there are the sci-fi classics, if you're willing to shell out some money (no links for these). Here are a few good ones to get you started:

  • Permutation City
  • Accelerando
  • Diaspora
  • A Fire Upon the Deep

This should be enough to get you started. I can give you MUCH more if you want, and maybe tell me some other things you like. Finding stuff like this to specification is basically what I do.

comment by feanor1600 · 2013-08-30T18:46:02.591Z · LW(p) · GW(p)

"smart characters that win"

Miles Vorkosigan saga, Ender's Game, anything by Neal Stephenson.

Replies from: kgalias
comment by kgalias · 2013-09-02T12:04:51.485Z · LW(p) · GW(p)

I started reading Ender's Game and the world didn't seem to make enough sense to keep me immersed.

comment by Risto_Saarelma · 2013-09-01T08:42:45.010Z · LW(p) · GW(p)

I am a mathematician and a Bayesianist with an emphasis on the more theoretical side of rationality. I like smart characters that win. I looked at some recommendations on other topics, but there are too many options.

Give Neal Stephenson a go. Snow Crash and Cryptonomicon are good starting points.

comment by NancyLebovitz · 2013-08-31T10:52:17.374Z · LW(p) · GW(p)

First is probably Bujold, specifically her Miles Vorkosigan series.

I think of Vinge more in terms of awesome author than awesome characters, but he does have some pretty impressive characters.

Lee Child has an intelligent good guy and intelligent associates vs. intelligent bad guys. (Not sf.)

Replies from: randallsquared
comment by randallsquared · 2013-09-01T01:01:30.459Z · LW(p) · GW(p)

You may, however, come to strongly dislike the protagonist later in the series.

Replies from: drethelin
comment by drethelin · 2013-09-02T17:28:07.649Z · LW(p) · GW(p)

Miles? He does some douchebaggy things but then he grows up. It's one of my favorite character arcs.

Replies from: randallsquared
comment by randallsquared · 2013-09-05T16:00:22.389Z · LW(p) · GW(p)

Haha, no, sorry. I was referring to Child's Jack Reacher, who starts off with a strong moral code and seems to lose track of it around book 12.

comment by [deleted] · 2013-08-30T19:46:31.238Z · LW(p) · GW(p)

The First Law trilogy by Joe Abercrombie. No promises on the characters, most of them are not so rational, but you'll see why I said it by the end. There are more books in the same setting with some of the same characters if you like them. The first book is mostly setup but it is great after that.

comment by polymathwannabe · 2013-09-03T15:05:51.417Z · LW(p) · GW(p)

Re "smart characters that win," I recommend these from my random reading history:

The Pillars of the Earth and World Without End by Ken Follett

River God by Wilbur Smith

Singularity Sky and Iron Sunrise by Charles Stross

And Then There Were None by Agatha Christie

Replies from: polymathwannabe
comment by polymathwannabe · 2013-09-03T15:49:01.655Z · LW(p) · GW(p)

And as for specifically rationalist stories, you might want to check out the His Dark Materials trilogy by Philip Pullman.

Replies from: drethelin
comment by drethelin · 2013-09-05T06:59:12.477Z · LW(p) · GW(p)

What? No! His Dark Materials is specifically anti-Christian, but the characters are not AT ALL rationalists. They often do stupid things, and everything gets saved by random deus ex machina rather than cunning plots. It's an inverse Narnia, which is not rationality.

comment by Izeinwinter · 2013-08-31T03:12:48.985Z · LW(p) · GW(p)

SF: go to the Amazon Kindle store and read the first chapters (free samples) of:

Vernor Vinge, A Fire Upon the Deep. The finest example of classical (i.e. space ships, politics, and aliens!) SF there is.

Lois McMaster Bujold: A large sample of the first book in the Vor saga. http://www.baen.com/chapters/W200307/0743436164.htm?blurb

If you like Harry for being a high-competence chaos magnet, this should scratch that itch in just the right spot.

comment by Turgurth · 2013-09-08T23:07:18.734Z · LW(p) · GW(p)

It's not specifically rationalist, but Dune is what first comes to mind for "smart characters that win", at least in the first book.

comment by D_Alex · 2013-09-04T02:32:05.870Z · LW(p) · GW(p)

I recommend pretty much anything by Jack Vance. If you like fantasy settings, read "Lyonesse", "Cugel's Saga" and "Rhialto the Marvellous". If you like sci-fi settings, try "Araminta Station" , "Night Lamp" and "Alastor". For a quaint mix of the two, try "Emphyrio" or "Languages of Pao". Vance wrote a bunch of great stuff, so if you like his first book, you have heaps more to look forward to.

Also "Name of the Wind" and "Wise Man's Fear" by Patrick Rothfuss are pretty good.

I also second "Ender's Game".

comment by Gunnar_Zarncke · 2013-09-01T17:43:14.044Z · LW(p) · GW(p)

Hi, I'm new here and have some questions regarding editing and posting. I read through http://wiki.lesswrong.com/wiki/Help:User_Guide and http://wiki.lesswrong.com/wiki/FAQ but couldn't find the answers there, so I decided to ask here. Probably I overlooked something obvious and a link will suffice.

  • How do I add follow-up links to a post? Most Main and Sequences posts have them, but I'm unable to add them to my posts. Note: I posted in Discussion as recommended because these were my first posts. I didn't get any feedback suggesting I change that, but I'd nonetheless like to cross-link them, and I intend to post more of the same kind. How can I add these follow-up thingies?

  • How do I create a user profile? It appears that some users do have profiles, even with pictures, and some, like EY, have real pages. There is no button to create/edit one. I suspect it is somewhere in the Wiki but I can't find it.

  • Is there a guide to tags? I'd like to use a common tag for my posts on "parenting".

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-09-01T17:55:41.947Z · LW(p) · GW(p)

The "show help" box at the lower right of the comment field gives you information on the markdown methods for emphasis and links and such.

I'm pretty sure that you just use links to your other posts for follow up links, unless I'm missing something about your question.

Replies from: Gunnar_Zarncke, Gunnar_Zarncke, Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-01T22:17:52.724Z · LW(p) · GW(p)

What about profiles? How can I create one? I see that many users have profiles: http://lesswrong.com/search/results?cx=015839050583929870010%3A-802ptn4igi&cof=FORID%3A11&ie=UTF-8&q=profile&sa=Search&siteurl=lesswrong.com%2F&ref=lesswrong.com%2Fsearch%2Fresults%3Fq%3Dprofile%26sa%3DSearch%26siteurl%3Dlesswrong.com%26ref%3Dlesswrong.com%26ss%3D703j91859j7&ss=760j108736j7

(by the way: is there a way to create shorter URLs for simple searches? I tried http://lesswrong.com/search/results?q=profile but that comes up empty)

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-09-07T04:58:41.583Z · LW(p) · GW(p)

Use a url shortener. Adf.ly will even pay you for it.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-07T09:30:40.474Z · LW(p) · GW(p)

Generally a good idea, but that's not what I meant. I mean a generally short search URL for LessWrong where I can just add the query term. I can shorten the above via adf.ly, but I can't modify that to also search for q=parenting, q=ai, q=tags...

comment by Gunnar_Zarncke · 2013-09-01T22:13:12.692Z · LW(p) · GW(p)

What about the profile page? How do I create one?

Replies from: Sniffnoy
comment by Sniffnoy · 2013-09-04T08:18:54.046Z · LW(p) · GW(p)

Set up an account on the Wiki, with the same name as your LessWrong account. Then make a user page for it. After a day, LW will automatically use that to make your profile page. (Thanks to gwern for informing me about this.)

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-04T08:55:21.847Z · LW(p) · GW(p)

Thank you. I'm just creating http://wiki.lesswrong.com/mediawiki/index.php?title=User:Gunnar_Zarncke and hope that it will get linked to http://lesswrong.com/user/Gunnar_Zarncke/

Hold on, I have a problem here: saving doesn't seem to work. The page stays empty and I can't leave the edit area. Same for my talk page. The wiki appears to be slow overall.

Replies from: gwern, Sniffnoy
comment by gwern · 2013-09-05T17:22:20.054Z · LW(p) · GW(p)

Sounds like you've been hit by the edit filter: I've been trying out disabling page creation for users younger than 3 or 4 days. It's supposed to be giving you a warning explaining that, though.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-08T21:39:33.151Z · LW(p) · GW(p)

Indeed. Now it works. There definitely was no warning or anything related.

comment by Sniffnoy · 2013-09-04T23:59:45.708Z · LW(p) · GW(p)

Try again, maybe? I haven't had a problem with the wiki before...

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-05T05:58:17.377Z · LW(p) · GW(p)

I still can't save. The page stays empty. A few more notes:

comment by Gunnar_Zarncke · 2013-09-01T19:40:31.248Z · LW(p) · GW(p)

No problem with markdown.

As for the follow-up links I checked again and these are normal links. I'm somewhat surprised that they are used that consistently.

Can you also provide a tip on tags?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-09-01T21:43:48.473Z · LW(p) · GW(p)

I didn't answer about tags because I don't know of a guide.

I just found that if you search on tag [word that you think might be a good tag], you'll get LW articles with that tag, but that would be a process of exploration rather than knowing about common tags.

Replies from: Gunnar_Zarncke, Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-01T22:12:24.694Z · LW(p) · GW(p)

Then I assume that tags are used by intuition. I just invented a tag and will use it consistently.

comment by Gunnar_Zarncke · 2013-09-04T13:55:50.723Z · LW(p) · GW(p)

I found that it is possible to list all posts with a tag via a short URL, e.g. on parenting it is

http://lesswrong.com/tag/parenting/

But this doesn't show my posts with that tag. Can it be that only posts in Main are found by that? If so, is there a different shortcut that will (also) list hits in Comments?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-09-08T21:10:14.805Z · LW(p) · GW(p)

discussion/parenting. Also, I think tagged articles are sorted oldest first, opposite to most things.

comment by buybuydandavis · 2013-08-29T10:18:46.561Z · LW(p) · GW(p)

Can someone explain the payoff of a many worlds theory? What it's supposed to buy you?

People talk like it somehow avoids the issue of wave function collapse, but I just see many different collapsed functions in different timelines.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-29T20:24:36.596Z · LW(p) · GW(p)

MWI or non-ontological collapse gets you to a place where you can even entertain the notion that the framework of Quantum Mechanics is correct and complete, so that:

  • you can stop worrying about bogus unphysical theories that you'd only invent in an attempt to make things look normal again, and
  • you're more comfortable with working with larger superpositions.
Replies from: kalium, IlyaShpitser
comment by kalium · 2013-09-01T04:39:16.423Z · LW(p) · GW(p)

How is this preferable to the "shut up and calculate" interpretation of QM?

comment by IlyaShpitser · 2013-08-30T20:37:25.518Z · LW(p) · GW(p)

Is 'unphysical' anything at all like 'unchristian'? In other words, is 'un' modifying 'physics' or 'physicists'?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T21:12:01.572Z · LW(p) · GW(p)

It's modifying Physics. A theory that doesn't act like physics. A theory that produces no new predictions but invents details to shuffle our ignorance into different more palatable forms.

I'm thinking of, on the one hand, objective collapse, and on the other hand, global hidden variables about imagined real states -- variables which, in order to be anything like compatible with QM, must mysteriously shuffle around so that each time you measure one, that is the end of its domain of applicability, and you'll never be able to use that information for anything.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-08-30T23:13:23.943Z · LW(p) · GW(p)

I think you are confusing "theory" and "interpretation." There is consensus on the vast majority (all?) of QM, the physical theory.


"Interpretations" are stories we tell ourselves. Arguing about interpretation is like arguing about taste.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-31T12:08:39.450Z · LW(p) · GW(p)

Fine, but something needs explanation when you've got this energy-conserving theory which results in the energy content of the universe changing.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-08-31T19:22:51.975Z · LW(p) · GW(p)

The quantum theory has the same predictions for all interpretations, and does not violate energy conservation.

I don't know what "energy content of the universe changing" means if energy is conserved. You are arguing about taste.

comment by Gurkenglas · 2013-09-04T17:20:18.614Z · LW(p) · GW(p)

Why aren't people preserved cryogenically before they die?

Replies from: Lumifer
comment by Lumifer · 2013-09-04T17:50:04.928Z · LW(p) · GW(p)

Because under most current legal systems this is called "murder".

Replies from: Gurkenglas
comment by Gurkenglas · 2013-09-04T18:15:22.558Z · LW(p) · GW(p)

Surely, getting to overseas facilities would solve that problem?

Replies from: Lumifer, drethelin
comment by Lumifer · 2013-09-04T18:23:38.950Z · LW(p) · GW(p)

First, let me clarify. This is called "murder" under the current legal systems of most countries in the world.

Second, if you kill e.g. an American citizen anywhere in the world, the American justice system still gets jurisdiction and can prosecute you for murder.

comment by drethelin · 2013-09-05T06:57:00.421Z · LW(p) · GW(p)

Moving existing facilities or setting up new ones, in addition to what Lumifer said, is also extremely expensive.

comment by Wei Dai (Wei_Dai) · 2013-08-30T08:35:23.076Z · LW(p) · GW(p)

Is the Fun Theory Sequence literally meant to answer "How much fun is there in the universe?", or is it more intended to set a lower bound on that figure? Personally I'm hoping that once I become a superintelligence, I'll have access to currently unimaginable forms of fun, ones that are vastly more efficient (i.e., much more fun per unit of resource consumed) than what the Fun Theory Sequence suggests. Do other people think this is implausible?

Replies from: JoachimSchipper
comment by JoachimSchipper · 2013-08-30T17:44:35.534Z · LW(p) · GW(p)

Assuming that you become some kind of superintelligence, I'd expect you to find better ways of amusing yourself, yes; especially if you're willing and able to self-modify.

comment by RolfAndreassen · 2013-08-28T18:49:31.223Z · LW(p) · GW(p)

Suppose that energy were not conserved. Can we, in that case, construct a physics so that knowledge of initial conditions plus dynamics is not sufficient to predict future states? (Here 'future states' should be understood as including the full decoherent wave-function; I don't care about the "probabilistic uncertainty" in collapse interpretations of QM.) If so, is libertarian free will possible in such a universe? Are there any conservation laws that could be "knocked out" without giving rise to such a physics; or conversely, if conservation of energy is not enough, what is the minimum necessary set?

Replies from: tgb, kilobug, pragmatist
comment by tgb · 2013-08-28T19:25:52.475Z · LW(p) · GW(p)

Conservation of energy can be derived in Lagrangian mechanics from the assumption that the Lagrangian has no explicit time dependence. That is equivalent to saying that the dynamics of the system do not change over time. If the mechanics are changing over time, it would certainly be more difficult to predict future states, and one could imagine the mechanics changing unpredictably over time, in which case future states could be unpredictable as well. But now we don't just have physics that changes in time, we have physics that changes randomly.

I think I find that thought more troubling than the lack of free will.

(I know of no reason why any further conservation laws would break in a universe such as that, so long as you maintain symmetry under translations, rotations, CPT, etc. Time-dependent Lagrangians are not exotic. For example, a physicist might construct a Lagrangian of a system and include a time-changing component that is determined by something outside of the system, like say a harmonic oscillator being driven by an external power source.)
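
A minimal sketch of the standard argument, in generic notation (assuming a Lagrangian $L(q, \dot{q}, t)$): define the energy function

$$h = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L, \qquad \frac{dh}{dt} = -\frac{\partial L}{\partial t} \ \text{ along solutions of the Euler-Lagrange equations,}$$

so $h$ is conserved exactly when $L$ has no explicit time dependence, and a Lagrangian whose explicit time dependence fluctuates randomly generically breaks that conservation.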

comment by kilobug · 2013-08-28T18:56:07.385Z · LW(p) · GW(p)

I don't see any direct link between determinism and conservation of energy. You can have one or the other or both or none. You could have laws of physics like "when two protons collide, they become three protons", deterministic but without conservation of energy.

As for "libertarian free will" I'm not sure what you mean by that, but free will is concept that must be dissolved, not answered "it exists" or "it doesn't exist", and anyway I don't see the link between that and the rest.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-28T19:06:29.871Z · LW(p) · GW(p)

I don't see any direct link between determinism and conservation of energy. You can have one or the other or both or none.

You can have determinism without conservation of energy, but I opine that you cannot have conservation of energy (plus the other things that are conserved in our physics) without determinism.

Replies from: Luke_A_Somers, kilobug
comment by Luke_A_Somers · 2013-08-29T20:29:13.209Z · LW(p) · GW(p)

JUST conservation of energy, sure... consider a universe composed of a ball moving at constant speed in random directions.

But conserving everything our physics conserves means you're using our physics. It's not even a hypothetical if you do that.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-30T01:22:51.833Z · LW(p) · GW(p)

Suppose you changed electromagnetism to be one over r-cubed instead of r-squared. What conservation law breaks? Or just fiddle with the constants.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T13:56:26.614Z · LW(p) · GW(p)

Hrm. Well, I suppose that if you change the constants then you're not conserving the same exact things, but they would devolve to words in the same ways. All right, second statement withdrawn.

I'll take the first further, though - you can have an energy which is purely kinetic, momentum and angular momentum as usual, etc... and the coupling constants fluctuate randomly, thereby rendering the world highly nondeterministic.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-30T16:01:51.665Z · LW(p) · GW(p)

Ok, but it's not clear to me that your "energy" is now describing the same thing that is meant in our universe. Suppose everything in Randomverse stood still for a moment, and then the electric coupling constant changed; clearly the potential energy changes. So it does not seem to me that Randomverse has conservation of energy.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T17:58:07.734Z · LW(p) · GW(p)

Hmmm... yes, totally free randomness won't work. All right. If I can go classical, I can do it.

You have some ensemble of particles, and each pair maintains a stack recording a partial history of their interactions, kept in terms of distance of separation (with the bottom of the stack being at infinite separation). Whenever two particles approach each other, they push the force they experienced as they approached onto the pair's stack; the derivative of this force is subject to random fluctuations. When two particles recede, they pop the force off the stack. In this way, you have potential energy (the integral from infinity to the current separation over the pair's stack) as well as kinetic energy, and the total is conserved.

The only parts that change are the parts of the potential that aren't involved in interactions at the moment.

Of course, that won't work in a quantum world since everything's overlapping all the time. But you didn't specify that.

EDITED TO ADD: there's no such thing as potential energy if the forces can only act to deflect (cannot produce changes in speed), so I could have done it that way too. In that case we can keep quantum mechanics but we lose relativity.
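
A toy sketch of that pair-stack bookkeeping, just to make the mechanism concrete (the names and numbers here are hypothetical illustrations, not anything from the thread):

```python
import random

class PairHistory:
    """Interaction history for one pair of particles in the toy model above."""

    def __init__(self, base_force=1.0, jitter=0.1):
        self.stack = []          # (dr, force) entries; bottom = largest separation
        self.force = base_force  # current force; its derivative fluctuates randomly
        self.jitter = jitter

    def step_inward(self, dr):
        """Pair approaches by dr: pick a randomly drifting force and record it."""
        self.force += random.uniform(-self.jitter, self.jitter)
        self.stack.append((dr, self.force))
        return self.force

    def step_outward(self):
        """Pair recedes: replay exactly the force recorded at this separation."""
        return self.stack.pop()

    def potential_energy(self):
        """Work stored on the stack, relative to where recording started."""
        return sum(dr * f for dr, f in self.stack)

# The work done while approaching equals the potential energy on the stack,
# and receding pops the same forces back out, so kinetic + potential stays fixed.
pair = PairHistory()
work_in = sum(pair.step_inward(0.01) * 0.01 for _ in range(100))
assert abs(work_in - pair.potential_energy()) < 1e-9
```

The randomness lives entirely in the inward steps; the outward steps just replay the stack, which is the point made a few comments below.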

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-30T19:55:34.610Z · LW(p) · GW(p)

I still don't think you're conserving energy. Start with two particles far apart and approaching each other at some speed; define this state as the zero energy. Let them approach each other, slowing down all the while, and eventually heading back out. When they reach their initial separation, they have kinetic energy from two sources: One source is popping back the forces they experienced during their approach, the other is the forces they experienced as they separated. Since they are at their initial separation again, the stack is empty, so there is zero potential energy; and there's no reason the kinetic energy should be what it was initially. So energy has been added or subtracted.

The idea of having only "magnetic" forces seems to work, yes. But, as you say, we then lose special relativity, and that imposes a preferred frame of reference, which in turn means that the laws are no longer invariant under translation. So then you lose conservation of momentum, if I remember my Noether correctly.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T21:13:36.813Z · LW(p) · GW(p)

One source is popping back the forces they experienced during their approach, the other is the forces they experienced as they separated. Since they are at their initial separation again, the stack is empty, so there is zero potential energy; and there's no reason the kinetic energy should be what it was initially. So energy has been added or subtracted.

You got it backwards. The stack reads in from infinity, not from 0 separation. As they approach, they're pushing, not popping. Plus, the contents of the stack are included in the potential energy, so either way you cut it, it adds up. If the randomness is on the side you don't integrate from, you won't have changes.

~~~

As for the magnetic forces thing, having a preferred frame of reference is quite different from laws no longer being invariant under translation. What you mean is that the laws are no longer invariant under boosts.

Noether's theorem applied to that symmetry yields something to do with the center of mass, which I don't quite understand, but it seems to amount to the notion that the center of mass doesn't deviate from its nominal trajectory. This seems to me to be awfully similar to conservation of momentum, but it must be technically distinct.
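
For reference, the conserved quantity Noether's theorem attaches to Galilean boost symmetry is (generic notation)

$$\mathbf{K} = M\,\mathbf{X}_{\mathrm{cm}} - \mathbf{P}\,t,$$

where $M$ is the total mass, $\mathbf{X}_{\mathrm{cm}}$ the centre of mass, and $\mathbf{P}$ the total momentum. Its conservation, taken together with conservation of $\mathbf{P}$, says exactly that the centre of mass moves in a straight line at constant velocity -- related to, but technically distinct from, conservation of momentum.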

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-31T04:49:53.167Z · LW(p) · GW(p)

As they approach, they're pushing, not popping.

Yes. Then as they separate, they pop those forces back out again. When they reach separation X, which can be infinity if you like (or we can just define potential energy relative to that point) they have zero potential energy and a kinetic energy which cannot in general be equal to what they started with. The simplest way of seeing this is to have the coupling be constant A on the way in, then change to B at the point of closest approach. Then their total energy on again reaching the starting point is A integrated to the point of closest approach (which is equal to their starting kinetic energy) plus B integrated back out again; and the potential energy is zero since it has been fully popped.

What you mean is that the laws are no longer invariant under boosts.

Yes, you are correct. Still, the point stands that you are abandoning some symmetry or other, and therefore some conservation law or other.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-31T12:02:57.764Z · LW(p) · GW(p)

Your example completely breaks the restrictions I gave it. The whole idea of pushing and popping is that going out is exactly the same as the last time they went in. Do you know what 'stack' means? Going back out, you perfectly reproduce what you had going in.

Still, the point stands that you are abandoning some symmetry or other, and therefore some conservation law or other.

As I already said, if you constrain it that tightly, then you end up with our physics, period. Conservation of charge? That's a symmetry. Etc. If you hold those to be the same, you completely reconstruct our physics, and of course there's no room for randomness.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-31T17:21:14.349Z · LW(p) · GW(p)

Your example completely breaks the restrictions I gave it. The whole idea of pushing and popping is that going out is exactly the same as the last time they went in.

Oh, I see. The coefficients are only allowed to change randomly if the particles are approaching. I misunderstood your scenario. I do note that this is some immensely complex physics, with a different set of constants for every pair of particles!

Edit to add: Also, since whether two particles are going towards each other or away from each other can depend on the frame of reference, you again lose whatever conservation it is that is associated with invariance under boosts.

If you hold those to be the same, you completely reconstruct our physics and of course there's no room for randomness.

Right. The original question was, are there any conservation laws you can knock out without losing determinism? It seems conservation of whatever-goes-with-boosts is one of them.

comment by kilobug · 2013-08-28T19:21:35.195Z · LW(p) · GW(p)

Not necessarily. Consider the time-turner in HPMOR. You could have physics which allow such a stable time loop, with no determinism about which loop among the possible ones will actually occur, and yet have conservation of energy.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-28T19:37:35.539Z · LW(p) · GW(p)

As I mentioned a few times, HPMoR time turners violate general relativity, as they result in objects appearing and disappearing without any energy being extracted from or dissipated into the environment. E.g. before the loop: 1 Harry, during the loop: 2 Harries, after the loop: 1 Harry.

Replies from: kilobug
comment by kilobug · 2013-08-28T19:41:01.769Z · LW(p) · GW(p)

Yes, but you could very well imagine something equivalent to the time-turner that exchanges matter between the past and the present, instead of just sending matter to the past, in a way that preserves energy conservation. It would be harder to use practically, but it wouldn't change anything about the "energy conservation" vs "determinism" issues.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-28T22:22:46.520Z · LW(p) · GW(p)

Don't forget that to fully conserve energy, you have to maintain not only the total mass, but also the internal chemical potentials of whatever thing you're shifting into the past and its gravitational potential energy with respect to the rest of the universe. I think you'll have a hard time doing this without just making an exact copy of the object. "Conservation of energy" is a much harder constraint than is obvious from the three words of the English phrase.

Replies from: kilobug, shminux
comment by kilobug · 2013-08-29T07:33:34.716Z · LW(p) · GW(p)

I don't see that as a theoretical problem against a plausible universe having such a mechanism. We could very well create a simulation in which, when you time-travel, the total energy (internal from mass and chemical bonds, external from gravity and chemical interaction with the exterior) is measured, and exchanged for exactly that amount from the source universe. If we can implement it on a computer, it's possible to imagine a universe that would have those laws.

The hard part in time-turner physics (because it's not computable) is the "stable time loop", not the "energy conservation" part (which is computable).
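
A toy illustration of why the loop is the hard part (the evolution rule here is entirely made up, just to show the shape of the problem): a stable loop is a fixed point of the world's evolution map, which in general you can only find by search, whereas the energy accounting is a straightforward sum.

```python
def evolve(msg_from_future):
    # Hypothetical rule: what message the world ends up sending back,
    # given the message it received from its own future.
    return (3 * msg_from_future + 1) % 7

# In a finite toy world you can brute-force the self-consistent loops;
# in real physics the state space is unbounded, which is what makes
# "find a stable time loop" so much harder than "add up the energy".
stable_loops = [m for m in range(7) if evolve(m) == m]
print(stable_loops)   # -> [3]
```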

comment by Shmi (shminux) · 2013-08-28T22:35:49.248Z · LW(p) · GW(p)

Yep.

comment by pragmatist · 2013-08-29T06:29:16.809Z · LW(p) · GW(p)

Liouville's theorem is more general than conservation of energy, I think, or at least it can hold even if conservation of energy fails. You can have a system with a time-dependent Hamiltonian, for instance, and thus no energy conservation, but with phase space volume still preserved by the dynamics. So this would be a deterministic system (one where phase space trajectories don't merge) without energy conservation.

As for the minimum necessary set of conservation laws that must be knocked out to guarantee non-determinism, I'm not sure. I can't think of any a priori reason to suppose that determinism would crucially rely on any particular set of conservation laws, although this might be true if certain further constraints on the form of the law are specified.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-29T18:18:30.627Z · LW(p) · GW(p)

If I understood the Wiki article correctly, the assumption needed to derive Liouville's theorem is time-translation invariance; but this is the same symmetry that gives us energy conservation through Noether's theorem. So, it is not clear to me that you can have one without the other.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-29T18:37:00.149Z · LW(p) · GW(p)

Liouville's theorem follows from the continuity of transport of some conserved quantity. If this quantity is not energy, then you don't need time-translation invariance. For example, forced oscillations (with explicitly time-dependent force, like first pushing a child on a swing harder and harder and then letting the swing relax to a stop) still obey the theorem.
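
For the driven (undamped) oscillator this is a short check. Take $H(x,p,t) = \frac{p^2}{2m} + \frac{k}{2}x^2 - F(t)\,x$. Hamilton's equations give

$$\dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad
\dot{p} = -\frac{\partial H}{\partial x} = -kx + F(t),$$

so the phase-space flow is divergence-free,

$$\frac{\partial \dot{x}}{\partial x} + \frac{\partial \dot{p}}{\partial p} = 0,$$

and Liouville's theorem holds, even though $dH/dt = \partial H/\partial t = -\dot{F}(t)\,x \neq 0$, i.e. energy is not conserved.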

comment by Kaj_Sotala · 2013-08-28T17:27:55.278Z · LW(p) · GW(p)

Every now and then, there are discussions or comments on LW where people talk about finding a "correct" morality, or where they argue that some particular morality is "mistaken". (Two recent examples: [1] [2]) Now I would understand that in an FAI context, where we want to find such a specification for an AI that it won't do something that all humans would find terrible, but that's generally not the context of those discussions. Outside such a context, it sounds like people were presuming the existence of an objective morality, but I thought that folks on LW rejected that. What's up with that?

Replies from: RolfAndreassen, shminux, Vladimir_Nesov, Discredited, Ishaan, Armok_GoB, Wei_Dai, Eugine_Nier, Lumifer
comment by RolfAndreassen · 2013-08-28T18:41:21.336Z · LW(p) · GW(p)

Objective morality in one (admittedly rather long) sentence: For any moral dilemma, there is some particular decision you would make after a thousand years of collecting information, thinking, upgrading your intelligence, and reaching reflective equilibrium with all other possible moral dilemmas; this decision is the same for all humans, and is what we refer to when we say that an action is 'correct'.

Replies from: Kaj_Sotala, Armok_GoB, buybuydandavis, Lumifer
comment by Kaj_Sotala · 2013-08-28T20:10:28.224Z · LW(p) · GW(p)

I find that claim to be very implausible: to name just one objection to it, it seems to assume that morality is essentially "logical" and based on rational thought, whereas in practice moral beliefs seem to be much more strongly derived from what the people around us believe in. And in general, the hypothesis that all moral beliefs will eventually converge seems to be picking out a very narrow region in the space of possible outcomes, whereas "beliefs will diverge" contains a much broader space. Do you personally believe in that claim?

Replies from: niceguyanon, RolfAndreassen, Eugine_Nier
comment by niceguyanon · 2013-08-29T05:21:59.781Z · LW(p) · GW(p)

I'm not sure what I was expecting, but I was a little surprised after seeing you say you object to objective morality. I probably don't understand CEV well enough and I am pretty sure this is not the case, but it seems like there is so much similarity between CEV and some form of objective morality as described above. In other words, if you don't think moral beliefs will eventually converge, given enough intelligence, reflection, and gathering data, etc, then how do you convince someone that FAI will make the "correct" decisions based on the extrapolated volition?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-08-29T06:05:02.438Z · LW(p) · GW(p)

CEV in its current form is quite under-specified. I expect that there would exist many, many different ways of specifying it, each of which would produce a different CEV that would converge at a different solution.

For example, Tarleton (2010) notes that CEV is really a family of algorithms which share the following features:

  • Meta-algorithm: Most of the AGI’s goals will be obtained at run-time from human minds, rather than explicitly programmed in before run-time.
  • Factually correct beliefs: The AGI will attempt to obtain correct answers to various factual questions, in order to modify preferences or desires that are based upon false factual beliefs.
  • Singleton: Only one superintelligent AGI is to be constructed, and it is to take control of the world with whatever goal function is decided upon.
  • Reflection: Individual or group preferences are reflected upon and revised.
  • Preference aggregation: The set of preferences of a whole group are to be combined somehow.

He comments:

The set of factually correcting, singleton, reflective, aggregative meta-algorithms is larger than just the CEV algorithm. For example, there is no reason to suppose that factual correction, reflection, and aggregation, performed in any order, will give the same result; therefore, there are at least 6 variants depending upon ordering of these various processes, and many variants if we allow small increments of these processes to be interleaved. CEV also stipulates that the algorithm should extrapolate ordinary human-human social interactions concurrently with the processes of reflection, factual correction and preference aggregation; this requirement could be dropped.

Although one of Eliezer's desired characteristics for CEV was to ”avoid creating a motive for modern-day humans to fight over the initial dynamic”, a more rigorous definition of CEV will probably require making many design choices for which there will not be any objective answer, and which may be influenced by the designer's values. The notion that our values should be extrapolated according to some specific criteria is by itself a value-laden proposal: it might be argued that it was enough to start off from our current-day values just as they are, and then incorporate additional extrapolation only if our current values said that we should do so. But doing so would not be a value-neutral decision either, but rather one supporting the values of those who think that there should be no extrapolation, rather than of those who think there should be.

I don't find any of these issues to be problems, though: as long as CEV found any of the solutions in the set-of-final-values-that-I-wouldn't-consider-horrible, the fact that the solution isn't unique isn't much of an issue. Of course, it's quite possible that CEV will hit on some solution in that set that I would judge to be inferior to many others also in that set, but so it goes.

comment by RolfAndreassen · 2013-08-28T22:19:51.630Z · LW(p) · GW(p)

Do you personally believe in that claim?

It seems there are two claims: One, that each human will be reflectively self-consistent given enough time; two, that the self-consistent solution will be the same for all humans. I'm highly confident of the first; for the second, let me qualify slightly:

  • Not all human-like things are actually humans, eg psychopaths. Some of these may be fixable.
  • Some finite tolerance is implied when I say "the same" solution will be arrived at.

With those qualifications, yes, I believe the second claim with, say, 85% confidence.

Replies from: Kaj_Sotala, cousin_it
comment by Kaj_Sotala · 2013-08-29T07:37:15.460Z · LW(p) · GW(p)

I find the first claim plausible though not certain, but I would expect that if such individual convergence happens, it will lead to collective divergence not convergence.

When we are young, our moral intuitions and beliefs are a hodge-podge of different things, derived from a wide variety of sources, probably reflecting something like a "consensus morality" that is the average of different moral positions in society. If/when we begin to reflect on these intuitions and beliefs, we will find that they are mutually contradictory. But one person's modus ponens is another's modus tollens: faced with the fact that a utilitarian intuition and a deontological intuition contradict each other, say, we might end up rejecting the utilitarian conclusion, rejecting the deontological conclusion, or trying to somehow reconcile them. Since logic by itself does not tell us which alternative we should choose, it becomes determined by extra-logical factors.

Given that different people seem to arrive at different conclusions when presented with such contradictory cases, and given that their judgement seems to be at least weakly predicted by their existing overall leanings, I would guess that the choice of which intuition to embrace would depend on their current balance of other intuitions. Thus, if you are already leaning utilitarian, the intuitions which are making you lean that way may combine together and cause you to reject the deontological intuition, and vice versa if you're leaning deontologist. This would mean that a person who initially started with an even mix of both intuitions would, by random drift, eventually end up in a position where one set of intuitions was dominant, after which there would be a self-reinforcing trajectory towards an area increasingly dominated by intuitions compatible with the ones currently dominant. (Though of course the process that determines which intuitions get accepted and which ones get rejected is nowhere as simple as just taking a "majority vote" of intuitions, and some intuitions may be felt so strongly that they are almost impossible to reject.) This would mean that as people carried out self-reflection, their position would end up increasingly idiosyncratic and distant from the consensus morality. This seems to be roughly compatible with what I have anecdotally observed in various people, though my sample size is relatively small.

I feel that I have personally been undergoing this kind of a drift: I originally had the generic consensus morality that one adopts by spending their childhood in a Western country, after which I began reading LW, which worked to select and reinforce my existing set of utilitarian intuitions - but had I not already been utilitarian-leaning, the utilitarian emphasis on LW might have led me to reject those claims and seek out a (say) more deontological influence. But as time has gone by, I have become increasingly aware of the fact that some of my strongest intuitions lean towards negative utilitarianism, whereas LW is more akin to classical utilitarianism. Reflecting upon various intuitions has led me to gradually reject various intuitions that I previously took to support classical rather than negative utilitarianism, thus causing me to move away from the general LW consensus. And since this process has caused some of the intuitions that previously supported a classical utilitarian position to lose their appeal, I expect that moving back towards CU is less likely than continued movement towards NU.

comment by cousin_it · 2013-08-29T13:32:40.620Z · LW(p) · GW(p)

Seconding Kaj_Sotala's question. Is there a good argument why self-improvement doesn't have diverging paths due to small differences in starting conditions?

Replies from: hairyfigment
comment by hairyfigment · 2013-08-30T00:47:05.731Z · LW(p) · GW(p)

Dunno. CEV actually contains the phrase, "and had grown up farther together," which the above leaves out. But I feel a little puzzled about the exact phrasing, which does not make "were more the people we wished we were" conditional on this other part - I thought the main point was that people "alone in a padded cell," as Eliezer puts it there, can "wish they were" all sorts of Unfriendly entities.

comment by Eugine_Nier · 2013-08-30T01:53:52.263Z · LW(p) · GW(p)

That argument seems like it would apply equally well to non-moral beliefs.

comment by Armok_GoB · 2013-08-30T23:23:03.836Z · LW(p) · GW(p)

I assume the same but instead of "all humans" the weaker "the people participating in this conversation".

comment by buybuydandavis · 2013-08-29T10:33:53.173Z · LW(p) · GW(p)

I don't think even that's a sufficient definition.

It's that all observers (except psychos), no matter their own particular circumstances and characteristics, would assign approval/disapproval in exactly the same way.

Replies from: drethelin
comment by drethelin · 2013-08-29T17:55:50.977Z · LW(p) · GW(p)

Psychopaths are quite capable of perceiving objective truths. In fact if there was an objective morality I expect it would work better for psychopaths than for anyone else.

Replies from: buybuydandavis
comment by buybuydandavis · 2013-08-29T21:56:05.704Z · LW(p) · GW(p)

I believe Rolf has excommunicated psychopaths (and Clippy) from the set of agents from whom "human morality" is calculated.

First they purged the psychopaths...

Me, I don't think everyone else converges to the same conclusions. Non-psychopaths just aren't all made out of the same moral cookie cutter. It's not that we have to "figure out" what is right; it's that we have different values. If casual observation doesn't convince you of this, Haidt's quantified approach should.

comment by Lumifer · 2013-08-28T18:49:09.889Z · LW(p) · GW(p)

That's one possible definition of objective morality, but not the only one.

comment by Shmi (shminux) · 2013-08-28T18:00:40.488Z · LW(p) · GW(p)

At least some of the prominent regulars seem to believe in objective morality outside of any FAI context, I think (Alicorn? palladias?).

comment by Vladimir_Nesov · 2013-08-29T14:13:06.167Z · LW(p) · GW(p)

The connotations of "objective" (also discussed in the other replies in this thread) don't seem relevant to the question about the meaning of "correct" morality. Suppose we are considering a process of producing an idealized preference that gives different results for different people, and also nondeterministically gives one of many possible results for each person. Even in this case, the question of expected ranking of consequences of alternative actions according to this idealization process applied to someone can be asked.

Should this complicated question be asked? If the idealization process is such that you expect it to produce a better ranking of outcomes than you can when given only a little time, then it's better to base actions on what the idealization process could tell you than on your own guess (e.g. desires). To the extent your own guess deviates from your expectation of the idealization process, basing your actions on your guess (desires) is an incorrect decision.

A standard example of an idealization dynamic is what you would yourself decide given much more time and resources. If you anticipate that the results of this dynamic can nondeterministically produce widely contradictory answers, this too will be taken into account by the dynamic itself, as the abstract you-with-more-time starts to contemplate the question. The resulting meta-question of whether taking the diverging future decisions into account produces worse decisions can be attacked in the same manner, etc. If done right, such process can reliably give a better result than you-with-little-time can, because any problem with it that you could anticipate will be taken into account.

A hypothetical idealization dynamic may not be helpful in actually making decisions, but its theoretical role is that it provides a possible specification of the "territory" that moral reasoning should explore, a criterion of correctness. It is a hard-to-use criterion of correctness, you might need to build a FAI to actually access it, but at least it's meaningful, and it illustrates the way in which many ways of thinking about morality are confused.

(As an analogy, we might posit the problem of drawing an accurate map of the surface of Pluto. My argument amounts to pointing out that Pluto can be actually located in the world, even if we don't have much information about the details of its surface, and won't be able to access it without building spacecraft. Given that there is actual territory to the question of the surface of Pluto, many intuition-backed assertions about it can already be said to be incorrect (as antiprediction against something unfounded), even if there is no concrete knowledge about what the correct assertions are. "Subjectivity" may be translated as different people caring about surfaces of different celestial bodies, but all of them can be incorrect in their respective detailed/confident claims, because none of them have actually observed the imagery from spacecraft.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-08-30T06:28:46.129Z · LW(p) · GW(p)

A hypothetical idealization dynamic may not be helpful in actually making decisions, but its theoretical role is that it provides a possible specification of the "territory" that moral reasoning should explore, a criterion of correctness.

I think that such a specification probably isn't the correct specification of the territory that moral reasoning should explore. By analogy, it's like specifying the territory for mathematical reasoning based on idealizing human mathematical reasoning, or specifying the territory for scientific reasoning based on idealizing human scientific reasoning. (As opposed to figuring out how to directly refer to some external reality.) It seems like a step that's generally tempting to take when you're able to informally reason (to some extent) about something but you don't know how to specify the territory, but I would prefer to just say that we don't know how to specify the territory yet. But...

It is a hard-to-use criterion of correctness, you might need to build a FAI to actually access it, but at least it's meaningful, and it illustrates the way in which many ways of thinking about morality are confused.

Maybe I'm underestimating the utility of having a specification that's "at least meaningful" even if it's not necessarily correct. (I don't mind "hard-to-use" so much.) Can you give some examples of how it illustrates the way in which many ways of thinking about morality are confused?

comment by Discredited · 2013-08-28T20:16:29.835Z · LW(p) · GW(p)

I came to the metaethics sequence an ethical subjectivist and walked away an ethical naturalist. I've mostly stopped using the words "objective" and "subjective", because I've talked with subjectivists with whom I have few to no substantive disagreements. But I think you and I do have a disagreement! How exciting.

I accept that there's something like an ordering over universe configurations which is "ideal" in a sense I will expand on later, and that human desirability judgements are evidence about the structure of that ordering, and that arguments between humans (especially about the desirability of outcomes or the praiseworthiness of actions) are often an investigation into the structure of that ordering, much as an epistemic argument between agents (especially about true states of physical systems or the truth value of mathematical propositions) investigates the structure of a common reality which influences the agents' beliefs.

A certain ordering over universe configurations also influences human preferences. It is not a causal influence, but a logical one. The connection between human minds and morality, the ideal ordering over universe configurations, is in the design of our brains. Our brains instantiate algorithms, especially emotional responses, that are logically correlated with the computation that compresses the ideal ordering over universe configurations.

Actually, our brains are logically correlated with the computations that compress multiple different orderings over universe configurations, which is part of the reason we have moral disagreements. We're not sure which valuation - which configuration-ordering that determines how our consequential behaviors change in response to different evidence - which valuation is our logical antecedent and which are merely correlates. This is also why constructed agents similar to humans, like the ones in Three Worlds Collide, could seem to have moral disagreements with humans. They, as roughly consequentialist agents, would also be logically influenced by an ordering over universe configurations, and because of similar evolutionary pressures might also have developed emotion-type algorithms. The computations compressing the different orderings, morality versus "coherentized alien endorsement relation", would be logically correlated, would be partially conditionally compressed by knowing the value of simpler computations that were common between the two. Through these commonalities the two species could have moral disagreements. But there would be other aspects of the computations that compress their orderings, logical factors that would influence one species but not the other. These would naively appear as moral disagreements, but would simply be mistaken communication: exchanging evidence about different referents while thinking they were the same.

But there are other sources of valuation-disagreement than being separate optimization processes. Some sources of moral disagreement between humans: We have only partial information about morality, just as we can be partially ignorant about the state of reality. For example, we might be unsure what long term effects to society would accompany the adoption of some practice like industrial manufacturing. Or even if someone in the pre-industrial era had perfect foresight, they might be unsure of how their expressed preferences toward that society would change with more exposure to it. There are raw computational difficulties (unrelated to prediction of consequences) in figuring out which ordering best fits our morality-evidence, since the space of orderings over universe configurations is large. There are still more complicated issues with model selection because human preferences aren't fully self-endorsing.

Anyway, I've been using the word "ideal" a lot as though multiple people share a single ideal, and it's past time I explained why. Humans share a ton of neural machinery and have spatially concentrated origins, both of which mean closer logical-causal influences on their roughly-consequential reasoning. We have so much in common that saying "Pah, nothing is right. It's all just subjective preferences and we're very different people and what's right for you is different from what's right for me" seems to me like irresponsible ignorance. We've got like friggin' hundreds of identical functional regions in our brains. We can exploit that for fun and profit. We can use interpersonal communication and argumentation and living together and probably other things to figure out morality. I see no reason to be dismissive of others' values that we don't sympathize with simply because there's no shiny morality-object that "objectively exists" and has a wire leading into all our brains or whatever. Blur those tiniest of differences and it's a common ideal. And that commonality is important enough that "moral realism" is a badge worth carrying on my identity.

comment by Ishaan · 2013-11-26T07:58:09.512Z · LW(p) · GW(p)

People are often wrong about what their preferences are + most humans have roughly similar moral hardware. Not identical, but close enough to behave as if we all share a common moral instinct.

When you make someone an argument and they change their mind on a moral issue, you haven't changed their underlying preferences...you've simply given them insight as to what their true preferences are.

For example, if a neurotypical human said that belief in God was the reason they don't go around looting and stealing, they'd be wrong about themselves as a matter of simple fact.

-as per the definition of preference that I think makes the most sense.

-Alternatively, you might actually be re-programming their preferences...I think it's fair to say that at least some preferences commonly called "moral" are largely culturally programmed.

comment by Armok_GoB · 2013-08-30T23:21:15.118Z · LW(p) · GW(p)

I just assumed it meant "My extrapolated volition" and also "your extrapolated volition" and also the implication those are identical.

comment by Wei Dai (Wei_Dai) · 2013-08-29T01:01:13.146Z · LW(p) · GW(p)

I wrote a post to try to answer this question. I talk about "should" in the post, but it applies to "correct" as well.

comment by Eugine_Nier · 2013-08-30T02:24:57.140Z · LW(p) · GW(p)

Here is a decent discussion of objective morality.

comment by Lumifer · 2013-08-28T17:45:33.780Z · LW(p) · GW(p)

What's up with that?

The usual Typical Mind Fallacy which is really REALLY pervasive.

comment by Gurkenglas · 2013-09-03T19:56:37.631Z · LW(p) · GW(p)

How does muscle effort convert into force/Joules applied? What are the specs of muscles? An example of "specs" would be:

Muscle:
0<=battery<=100
Each second: increase battery by one if possible
At will: Decrease battery by one to apply one newton for one second
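
A runnable version of that toy spec (purely the made-up numbers above, not real physiology, just to show the kind of model an answer would plug into):

```python
class ToyMuscle:
    """Battery recharges 1 unit/s; spending 1 unit buys 1 newton for 1 second."""
    def __init__(self):
        self.battery = 100.0                            # 0 <= battery <= 100

    def tick(self, requested_newtons):
        """Advance one second while trying to apply `requested_newtons`."""
        delivered = min(requested_newtons, self.battery)
        self.battery -= delivered                       # pay for the force
        self.battery = min(100.0, self.battery + 1.0)   # recharge 1 unit/s
        return delivered                                # force actually applied
```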

I am wondering because I was trying to optimize things like my morning bike ride across the park, questions like whether I should try to maximize my speed for the times when I'm going uphill, so gravity doesn't pull me backward for so long; or whether it is an inefficient move to walk instead of standing on the escalator because it could have carried me up instead, leaving more energy for the minutes of walking afterward.

Yes, yes, wasted thinking time, but my mind keeps wandering there on my way to places and it's frustrating not knowing the math behind it.

Replies from: Lumifer, Gurkenglas
comment by Lumifer · 2013-09-03T20:30:16.935Z · LW(p) · GW(p)

What are the specs of muscles?

There are books and papers on the physiology of exercise, in particular on how muscles use energy in different regimes. For a basic intro check Crossfit, for more details you can look at e.g. Body By Science.

trying to optimize

What are you trying to optimize for?

Replies from: Gurkenglas
comment by Gurkenglas · 2013-09-03T22:43:56.938Z · LW(p) · GW(p)

Those links seem to describe how to maximize fitness, not what you are able to do with a given amount of it. Isn't there at least a basic rule of thumb, like whether applying 100 N over 10 m or 50 N over 30 m exerts a muscle more?

I'm trying to optimize for a certain combination of time saved and not having exerted myself too much during a trip.

comment by Gurkenglas · 2013-09-03T20:08:48.005Z · LW(p) · GW(p)

Similarly: What are a qubit's specs? I would like to be able to think about what class of problem would be trivial with a quantum computer.

Replies from: pengvado
comment by pengvado · 2013-09-04T01:29:52.751Z · LW(p) · GW(p)

Then what you should be asking is "which problems are in BQP?" (if you just want a summary of the high level capabilities that have been proved so far), or "how do quantum circuits work?" (if you want to know what role individual qubits play). I don't think there's any meaningful answer to "a qubit's specs" short of a tutorial in the aforementioned topics. Here is one such tutorial I recommend.
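
For a very rough feel for what a qubit's "specs" look like (a sketch, not a substitute for the tutorial: states are normalized complex vectors, gates are unitary matrices, and measurement probabilities are the squared amplitudes):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                          # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                     # equal superposition of |0> and |1>
print(np.abs(state) ** 2)            # measurement probabilities: [0.5, 0.5]

# Two qubits live in the 4-dimensional tensor-product space; CNOT on an
# H-rotated control yields an entangled Bell state.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(state, ket0)
print(np.abs(bell) ** 2)             # [0.5, 0, 0, 0.5]: correlated outcomes
```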

comment by hyporational · 2013-09-01T13:36:42.056Z · LW(p) · GW(p)

Is reading fiction ever instrumentally useful (for a non-writer) compared to reading more informative literature? How has it been useful to you?

Replies from: knb, kalium, Ishaan
comment by knb · 2013-09-02T10:07:14.564Z · LW(p) · GW(p)

I read fiction about 1/3 of the time and nonfiction 2/3s of the time. When reading non-fiction I often spend idle moments in my day lost in abstract thought about concepts related to the non-fiction book I'm reading. I've noticed when reading novels, I'm far more observant about people in my personal life and their thoughts and motivations. This is especially true when reading works with thoughtful and observant POV characters (especially detective fiction and mystery novels). I think fiction, like music, can serve to frame your mind-state in a certain way.

comment by kalium · 2013-09-02T04:15:57.815Z · LW(p) · GW(p)
  • It has been useful for manipulating my mood and general mindset. ("My life would be more amusing right now if I felt a bit nervous and unsure of reality. Better go read something by Philip K. Dick." Or "I would be more productive if I were feeling inspired by the grandness of the scientific endeavor. Better go read some Golden Age SF.")
  • It is useful for understanding what certain situations feel like without going through them yourself, and therefore can help you empathize with people in those situations whose behavior otherwise does not make sense to you. Memoirs and other nonfiction can also do this, but it's easier to find well-written fiction than well-written nonfiction, and for this purpose the writing must be very good.
comment by Ishaan · 2013-11-26T07:37:27.129Z · LW(p) · GW(p)

The notion that fiction increases empathy has been making the rounds. Not an area I've researched heavily; I am intrigued but skeptical.

Replies from: TheOtherDave, hyporational
comment by TheOtherDave · 2013-11-26T16:31:31.413Z · LW(p) · GW(p)

I haven't read the article, but I read the abstract, and am startled that it seems like a correlational study.
Do they do anything to differentiate "reading fiction increases empathy" from "empathic people read more fiction"?

Replies from: Ishaan
comment by Ishaan · 2013-11-26T20:51:49.159Z · LW(p) · GW(p)

I haven't really read it in detail, but in the abstract, see the sentence:

In order to rule out the role of personality, we first identified Openness as the most consistent correlate. This trait was then statistically controlled for, along with two other important individual differences: the tendency to be drawn into stories and gender. Even after accounting […]

Which means that fiction was still predictive after accounting for various self-reported personality traits. So they did try to differentiate the two.

For detail, the corresponding section: "Association between print-exposure and empathy: Ruling out the role of individual differences"

experimental stuff

comment by hyporational · 2013-11-26T12:38:29.026Z · LW(p) · GW(p)

Thanks. Some of my med school professors have this opinion, but I'm not sure if they've got any data to back it up.

Replies from: Lumifer
comment by Lumifer · 2013-11-26T15:55:26.615Z · LW(p) · GW(p)

I suspect there is a correlation but I'm entirely unsure of the direction of causality.

comment by [deleted] · 2013-08-31T16:36:54.877Z · LW(p) · GW(p)

How long does it take other to write a typical LW post or comment?

I perceive myself as a very slow writer, but I might just have unrealistic expectations.

Replies from: drethelin, satt
comment by drethelin · 2013-09-01T17:57:16.113Z · LW(p) · GW(p)

Most of my comments take less than a minute to write.

comment by satt · 2013-09-02T01:15:26.169Z · LW(p) · GW(p)

Depends on the comment. Mine are almost all rewrites, so anything that's not a short answer to a simple factual question takes me at least a couple of minutes. The upper bound is probably ≈2 hours.

If I remember rightly, this one took almost that long, and would've taken longer if I'd tried to polish it and not have it end in disjointed bullet points. There are quite a few reasons why that comment was so time-consuming: it was lengthy; it was a response to criticism, so I wanted to make it very obviously correct; I wanted to refer to lots of examples and sources, which means deciding which bits of which sources to quote, and hitting up Google & Wikipedia; I wanted to cover quite a lot of ground so I had to spend more time than usual squeezing out verbiage; and I had to stop & think periodically to check everything I was saying came together coherently.

Sometimes I write a few paragraphs, realize I've written myself into a corner, then decide to tear everything down and start over from a different angle. (Or I decide it's not worth the effort and refrain from saying anything.) That happened with this comment, so it wound up taking something like an hour.

This comment, by contrast, has only needed about half an hour to write because it's mostly based on introspection, isn't that long, isn't communicating anything complex, won't be controversial, isn't optimized for transparency, and turns out not to have needed any full-scale rewrites.

I also think I'm a slow writer by LW standards. (Unsurprisingly?)

comment by TRManderson · 2013-08-30T09:31:09.249Z · LW(p) · GW(p)

Is there any reason we don't include a risk aversion factor in expected utility calculations?

If there is an established way of considering risk aversion, where can I find posts/papers/articles/books regarding this?

Replies from: somervta
comment by somervta · 2013-08-30T09:36:29.629Z · LW(p) · GW(p)

Because doing so will lead to worse outcomes on average. Over a long series of events, someone who just follows the math will do better than someone who is risk-averse wrt 'utility'. Of course, often our utility functions are risk-averse wrt real-world things, because of non-linear valuation - e.g., your first $100,000 is more valuable than your second, and your first million is not 10x as valuable as your first $100,000.

Replies from: TRManderson
comment by TRManderson · 2013-08-30T10:05:27.613Z · LW(p) · GW(p)

Thanks. Just going to clarify my thoughts below.

Because doing so will lead to worse outcomes on average.

In specific instances, avoiding the negative outcome might be beneficial, but only for that instance. If you're constantly settling for less-than-optimal outcomes because they're less risky, it'll average out to less-than-optimal utility.

The terminology "non-linear valuation" seemed to me to imply some exponential valuation, or logarithmic or something; I think "subjective valuation" or "subjective utility" might be better here.

Replies from: Ishaan, somervta
comment by Ishaan · 2013-11-26T08:14:38.401Z · LW(p) · GW(p)

You just incorporate that straight into the utility function.

You have $100 to your name. Start with 100 utility.

Hey! Betcha $50 this coin comes up heads!

$150 and therefore 110 utility if you win.

$50 and therefore 60 utility if you lose.

So you don't take the bet. It's a fair bet dollar wise, but an unfair bet utility wise.
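
The same reasoning with an explicit concave utility function (log is just one illustrative choice; the 100/110/60 numbers above imply some other concave curve, but the conclusion is identical):

```python
import math

def utility(wealth):
    return math.log(wealth)          # any concave function gives the same sign

wealth = 100
keep = utility(wealth)
bet = 0.5 * utility(wealth + 50) + 0.5 * utility(wealth - 50)

print(keep, bet)   # ~4.61 vs ~4.46: fair in dollars, losing in expected utility
```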

comment by somervta · 2013-08-31T03:32:30.375Z · LW(p) · GW(p)

Yes, non-linear valuation means that your subjective value for X does not increase linearly with linear increases in X. It might increase logarithmically, or exponentially, or polynomially (with degree > 1), or whatever.

comment by [deleted] · 2013-08-29T18:04:51.317Z · LW(p) · GW(p)

A significant amount of discussion on Less Wrong appears to be of the following form:

1: How do we make a superintelligent AI perform more as we want it to, without reducing it to a paperweight?

Note: reducing it to a paperweight is the periodically referenced "Put the superintelligence in a box and then delete it if it sends any output outside the box." school of AI Safety.

Something really obvious occurred to me, and it seems so basic that there has to be an answer somewhere, but I don't know what to look under. What if we try flipping the question and asking this?

2: How do we make an AI that obediently performs as we want it to, but does so smarter, while maintaining its obedience?

I'm assuming that's known and discussed. Is there a name for it? Maybe a flaw that I'm not seeing?

Replies from: hairyfigment, Viliam_Bur
comment by hairyfigment · 2013-08-29T18:22:12.778Z · LW(p) · GW(p)

It does seem like an interesting question. But the most obvious flaw is that we still don't have the starting point - software does what we tell it to do, not what we want, which is usually different - and I don't immediately see any way to get there without super-intelligence.

Holden Karnofsky proposed starting with an Oracle AI that tells us what it would do if we gave it different goal systems. But if we avoided giving it any utility function of its own, the programmers would need to not only think of every question (regarding every aspect of "what it would do"), but also create an interface for each sufficiently new answer. I'll go out on a limb and say this will never happen (much less happen correctly) if someone in the world can just create an 'Agent AI'.

comment by Viliam_Bur · 2013-08-31T19:20:47.473Z · LW(p) · GW(p)

How do we make an AI that obediently performs as we want it to, but does so smarter, while maintaining its obedience?

Depends on what you mean by "smarter". Is it merely good at finding more efficient ways to fulfill your wish... or is it also able to realize that some literal interpretations of your wish are not what you actually want to happen (but perhaps you aren't smart enough to realize it)? In the latter case, will it efficiently follow the literal interpretation?

comment by Error · 2013-08-29T12:58:25.044Z · LW(p) · GW(p)

Does the unpredictability of quantum events produce a butterfly effect on the macro level? i.e., since we can't predict the result of a quantum process, and our brains are composed of eleventy zillion quantum processes, does that make our brains' output inherently unpredictable as well? Or do the quantum effects somehow cancel out? It seems to me that they must cancel out in at least some circumstances or we wouldn't have things like predictable ball collisions, spring behavior, etc.

If there is a butterfly effect, wouldn't that have something to say about Omega problems (where the predictability of the brain is a given) and some of the nastier kinds of AI basilisks?

Replies from: Oscar_Cunningham, Luke_A_Somers
comment by Oscar_Cunningham · 2013-08-29T21:55:48.276Z · LW(p) · GW(p)

Some systems exhibit a butterfly effect (a.k.a. chaos); some don't. The butterfly effect is where (arbitrarily) small changes to the conditions of the system can totally change its future course. The weather is a good example of this. The change caused by a butterfly flapping its wing differently will amplify itself until the entire Earth's weather is different from what it would have been. But other systems aren't like that. They're more "stable". For example, if you change the position of any individual atom in my computer it won't make any difference to the computations I'm running. Other things are predictable just because we don't give time for any changes to develop. For example, ball collisions are predictable, but if we study many ball collisions in a row, like a billiards "trick shot", then hitting the initial ball slightly differently will make a huge difference.
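
A minimal numerical illustration of that sensitivity (the logistic map is just a convenient stand-in for "a chaotic system"; the specific constants are arbitrary):

```python
r = 3.9                     # logistic-map parameter in the chaotic regime
x, y = 0.200000, 0.200001   # two starts differing by one part in a million

for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 9:
        print(step + 1, abs(x - y))   # watch the tiny difference blow up
```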

You ask about quantum events. For chaotic systems, deviations caused by quantum events will indeed cause a butterfly effect.

So whether or not the brain is predictable depends on to what extent it's chaotic, and to what extent it's stable. I suspect that it's chaotic, in the sense that a small tweak to it could totally change the way a thought process goes. But over time my brain will be predictable "on average". I'll behave in ways matching my personality. Similarly, a butterfly flapping its wings might change when it rains, but it'll still rain more in Bergen than the Sahara.

I don't think this says much about Omega problems. Quantum butterfly effects will (I suspect) stop Omega exactly simulating my thought process, but I reckon it could still predict my choice with very high confidence just by considering my most likely lines of thought.

Replies from: Locaha
comment by Locaha · 2013-08-30T09:10:22.958Z · LW(p) · GW(p)

For example if you change the position of any individual atom in my computer, it won't make any difference to the computations I'm running.

But it will change the weather just like the butterfly.

comment by Luke_A_Somers · 2013-08-29T20:22:14.462Z · LW(p) · GW(p)

The butterfly effect kicks in wherever there's something unstable - whenever there's a system where little changes grow. Billiard balls do this, for instance, which is why it's harder to hit the cue so it hits the 4 so it hits the 1 so it hits the 5 than to hit the cue so it hits the 5 (assuming the same total ball travel distance).

Quantum noise is no less capable of doing this than anything else. The reason macro objects look solid has little to do with special cancellation and a lot to do with how tightly bound solid objects are. I suppose that's a special case of cancellation, but it's a really special case.

Omega-like problems are hypotheticals, and speaking of quantum indeterminacy in respect to them is fighting the hypothetical. Some versions word it so if Omega can't get a reliable answer he doesn't even play the game, or withholds the money, or kicks you in the shins or something - but those are just ways of getting people to stop fighting the hypothetical.

comment by NancyLebovitz · 2013-08-28T15:58:32.053Z · LW(p) · GW(p)

Is conservation of matter a problem for the many worlds interpretation of quantum physics?

Replies from: shminux, Eliezer_Yudkowsky, Emile, Alejandro1, DanielLC, Alsadius, pragmatist
comment by Shmi (shminux) · 2013-08-28T16:52:59.309Z · LW(p) · GW(p)

I don't believe I am explaining MWI instead of arguing against it... whatever has this site done to me? Anyway, grossly simplified, you can think of the matter as being conserved because the "total" mass is the sum of masses in all worlds weighted by the probability of each world. So, if you had, say, 1kg of matter before a "50/50 split", you still have 1kg = 0.5*1kg+0.5*1kg after. But, since each of the two of you after the split has no access to the other world, this 50% prior probability is 100% posterior probability.

Also note that there is no universal law of conservation of matter (or even energy) to begin with, not even in a single universe. It's just an approximation given certain assumptions, like time-independence of the laws describing the system of interest.

Replies from: Luke_A_Somers, None, RolfAndreassen
comment by Luke_A_Somers · 2013-08-29T19:56:04.543Z · LW(p) · GW(p)

LOL @ your position. Agree on most.

Disagree on the conservation of energy though. Every interaction conserves energy (unless you know of time-dependent laws?). Though nothing alters it, we only experience worlds with a nontrivial distribution of energies (otherwise nothing would ever happen) (and this is true whether you use MWI or not)

comment by [deleted] · 2013-08-28T17:37:04.301Z · LW(p) · GW(p)

I don't know enough of the underlying physics to conclusively comment one way or another, but it seems to me defining "total mass" as the integral of "local mass" over all worlds wrt the world probability measure implies that an object in one world might be able to mysteriously (wrt that world) gain mass by reducing its mass in some set of worlds with non-zero measure.

We don't actually see that in e.g. particle scattering, right?

Replies from: shminux, Manfred
comment by Shmi (shminux) · 2013-08-28T18:46:21.165Z · LW(p) · GW(p)

This would manifest as non-conservation of energy-momentum in scattering, and, as far as I know, nothing like that has been seen since the neutrino was predicted by Pauli to remedy the apparent non-conservation of energy in radioactive decay. If we assume non-interacting worlds, then one should not expect to see such violations. Gravity might be an oddball, however, since different worlds are likely to have different spacetime geometry and even topology, potentially affecting each other. But this is highly speculative, as there is no adequate microscopic (quantum) gravity model out there. I have seen some wild speculations that dark energy or even dark matter could be a weak gravity-only remnant of the incomplete decoherence stopped at the Planck scale.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-29T19:59:23.599Z · LW(p) · GW(p)

I don't see why differing spacetime geometries or topologies would impact other worlds. What makes gravity/geometry leak through when nothing else can?

Replies from: shminux
comment by Shmi (shminux) · 2013-08-29T20:54:02.502Z · LW(p) · GW(p)

Standard QFT is a fixed background spacetime theory, so if you have multiple non-interacting blobs of probability density in the same spacetime, they will all cooperatively curve it, hence the leakage. If you assert that the spacetime itself splits, you better provide a viable quantum gravity model to show how it happens.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-29T22:17:36.138Z · LW(p) · GW(p)

Provide one? No. Call for one? Yes.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-29T22:49:16.001Z · LW(p) · GW(p)

Sure, call for one. After acknowledging that in the standard QFT you get inter-world gravitational interaction by default....

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T15:12:14.501Z · LW(p) · GW(p)

In usual flat space QFT, you don't have gravity at all, so no!

Replies from: shminux
comment by Shmi (shminux) · 2013-08-30T16:29:51.647Z · LW(p) · GW(p)

Well, QFT can also be safely done on a curved spacetime background, but you are right, you don't get dynamic gravitational effects from it. What I implicitly assumed is QFT + semiclassical GR, where one uses the semiclassical probability-weighted stress-energy tensor as a source.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T18:02:56.510Z · LW(p) · GW(p)

If that were true, MWI would have inter-world gravitational interactions. But it happens to be obviously wrong.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-30T18:07:35.573Z · LW(p) · GW(p)

What do you mean by "obviously wrong"? Because it would be evidence against MWI? Maybe it is, I don't recall people trying to formalize it. Or maybe it limits the divergence of the worlds. Anyway, if it is not a good model, does this mean that we need a full QG theory to make MWI tenable?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T18:15:35.357Z · LW(p) · GW(p)

Obviously wrong in that if you hold pure QFT + semiclassical GR to be complete and correct, then you end up with Cavendish experiments being totally unworkable because the density of the mass you put there is vanishingly small.

does this mean that we need a full QG theory to make MWI tenable?

I'm willing to state outright that MWI relies on the existence of gravity also being quantum outright, not semiclassical in nature. This does not seem like much of a concession to me.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-30T18:32:46.385Z · LW(p) · GW(p)

Hmm, I don't follow your argument re the Cavendish experiment. The original one was performed with fairly heavy lead balls.

I'm willing to state outright that MWI relies on the existence of gravity also being quantum outright, not semiclassical in nature. This does not seem like much of a concession to me.

That semiclassical gravity does not work in the weak-field regime is a fairly strong statement. Widely accepted models like Hawking and Unruh radiation are derived in that regime.

A rigorous argument that semiclassical gravity is incompatible with MWI would probably be worth publishing.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T18:54:43.999Z · LW(p) · GW(p)

Nawww, how could that be publishable?

Even if you start with an initial state where there is a well-defined Cavendish-experimenter-man (which, if you're going with no objective collapse, is a rather peculiar initial state), MWI has him all over the room, performing experiments at different times, with the weights at different displacements. They'd be pulling one way and the other, and his readings would make no sense whatsoever.

Semiclassical gravity is a perfectly fine approximation, but to say it's real? Heh.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-30T19:18:12.272Z · LW(p) · GW(p)

I meant something more limited than this, like a small cantilever in an unstable equilibrium getting entangled with a particle which may or may not push it over the edge with 50% probability, and measuring its gravitational force on some detector.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-30T21:00:02.901Z · LW(p) · GW(p)

Oh. Well, then, it's no longer 'obviously false' as far as that goes (i.e. we haven't done that experiment but I would be shocked at anything but one particular outcome), but the whole point of MWI is to not restrain QM to applying to the tiny. Unless something happens between there and macro to get rid of those other branches, stuff gonna break hard. So, yeah. As an approximation, go ahead, but don't push it. And don't try to use an approximation in arguments over ontology.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-30T23:30:30.680Z · LW(p) · GW(p)

the whole point of MWI is to not restrain QM

Sorry, I forgot for a moment that the notion was designed to be untestable. Never mind.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-31T12:06:38.622Z · LW(p) · GW(p)

What? All you need to do is falsify QM, and MWI is dead dead DEAD.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-31T19:22:48.169Z · LW(p) · GW(p)

As I said, you identify QM with MWI. This is not the only option.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-09-01T12:23:41.311Z · LW(p) · GW(p)

What is it, then?

Either the branches we don't experience exist, or they don't.

If they don't, then what made us exist and them not?

Replies from: shminux
comment by Shmi (shminux) · 2013-09-01T19:12:08.144Z · LW(p) · GW(p)

Not this discussion again. Disengaging.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-09-01T22:11:43.240Z · LW(p) · GW(p)

It's never this discussion, since it never gets discussed, but OK!

comment by Manfred · 2013-08-28T18:31:12.778Z · LW(p) · GW(p)

Defining total energy as the integral of energy over space implies that an object in one part of space might be able to mysteriously gain energy by reducing energy in other parts of space.

Do we see this in the real world? How useful is the word "mysterious" here?

Replies from: Alejandro1, None
comment by Alejandro1 · 2013-08-28T18:40:09.372Z · LW(p) · GW(p)

Ordinary energy conservation laws are local: they do not just state that total energy is conserved, but that any change in energy in a finite region of any size is balanced by a flux of energy over the boundary of that region. I don't think any such laws exist in "multi-world-space", which even accepting MWI is basically a metaphor, not a precise concept.
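
In symbols, with $u$ the energy density and $\mathbf{S}$ the energy flux (the Poynting vector, in the electromagnetic case):

$$\frac{\partial u}{\partial t} + \nabla\cdot\mathbf{S} = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\int_V u\, dV = -\oint_{\partial V}\mathbf{S}\cdot d\mathbf{A},$$

so the energy in a region can only change by flowing through the region's boundary.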

Replies from: Manfred
comment by Manfred · 2013-08-28T19:05:42.184Z · LW(p) · GW(p)

So are there mysterious fluxes that move energy from one part of space to another?

Replies from: Plasmon, Luke_A_Somers
comment by Plasmon · 2013-08-28T19:56:49.583Z · LW(p) · GW(p)

Umm, yes? They're quite ubiquitous.

Replies from: Manfred
comment by Manfred · 2013-08-28T20:11:46.879Z · LW(p) · GW(p)

Those look more like boring, physical-law-abiding (non-mysterious) fluxes that move energy from one part of space to another.

comment by Luke_A_Somers · 2013-08-29T20:02:05.137Z · LW(p) · GW(p)

Not mysterious ones, no - only the ordinary ones that Plasmon mentions.

comment by [deleted] · 2013-08-29T10:15:48.628Z · LW(p) · GW(p)

"Mysterious" here means "via an otherwise unexplained-in-a-single-world mechanism."

Replies from: Manfred
comment by Manfred · 2013-08-29T13:10:28.427Z · LW(p) · GW(p)

There's no mysterious quantum motion for the same reason there's no mysterious energetic motion - because energy / mass / quantum amplitude has to come from somewhere to go somewhere, it requires an interaction to happen. An interaction like electromagnetism, or the strong force. You know, those ubiquitous, important, but extremely well-studied and only-somewhat-mysterious things. And once you study this thing and make it part of what you call "energy," what would otherwise be a mysterious appearance of energy just becomes "oh, the energy gets stored in the strong force." (From a pure quantum perspective at least. Gravity makes things too tricky for me)

The best way for a force to "hide" is for it to be super duper complicated. Like if there was some kind of extra law of gravity that only turned on when the planets of our solar system were aligned. But for whatever reason, the universe doesn't seem to have super complicated laws.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-08-29T13:18:32.674Z · LW(p) · GW(p)

Is there any plausible argument for why our universe doesn't have super-complicated laws?

The only thing I can think of is that laws are somehow made from small components so that short laws are more likely than long laws.

Another possibility is that if some behavior of the universe is complicated, we don't call that a law, and we keep looking for something simpler-- though that doesn't explain why we keep finding simple laws.

Replies from: Manfred
comment by Manfred · 2013-08-29T13:27:36.456Z · LW(p) · GW(p)

Is there any plausible argument for why our universe doesn't have super-complicated laws?

"We looked, and we didn't find any super-complicated laws."

comment by RolfAndreassen · 2013-08-28T18:45:22.831Z · LW(p) · GW(p)

So I know you said you were simplifying, but what if the worlds interfere? You don't necessarily get the same amount of mass before "collapse" (that is, decoherence) and after, because you may have destructive interference beforehand which by construction you can't get afterwards.

As an aside, in amplitude analysis of three-body decays, it used to be the custom to give the "fit fractions" of the two-body isobar components, defined as the integral across the Dalitz plot of each resonance squared, divided by the integral of the total amplitude squared. Naturally these don't always add up to 100% - in fact they usually don't, due to interference. So now we usually give the complex amplitude instead.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-29T19:58:10.387Z · LW(p) · GW(p)

A) If they're able to interfere, you shouldn't have called them separate worlds in the first place.

B) That's not how interference works. The worlds are constructed to be orthogonal. Therefore, any negative interference in one place will be balanced by positive interference elsewhere, and so you don't end up with less or more than you started with. You don't even need to look at worlds to figure this out - time progression is unitary by the general form of the Schrodinger Equation and the real-valuedness of energy.
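
(A one-line way to see point B, assuming the standard inner-product formalism: unitary evolution preserves the total norm, so any amplitude lost to destructive interference in one configuration necessarily shows up somewhere else.)

```latex
% Norm preservation under unitary time evolution U(t):
\langle \psi(t) | \psi(t) \rangle
  = \langle \psi(0) | U^\dagger(t)\, U(t) | \psi(0) \rangle
  = \langle \psi(0) | \psi(0) \rangle,
\qquad U^\dagger(t)\, U(t) = \mathbb{1}.
```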

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-28T22:07:23.953Z · LW(p) · GW(p)

No. It's not that kind of many-ness.

comment by Emile · 2013-08-28T19:53:29.235Z · LW(p) · GW(p)

For a huge oversimplification:

The cosmos is a big list of world-states, of the form "electrons in positions [(12.3, -2.8, 1.0), (0.5, 7.9, 6.1), ...] and speeds [...] protons in positions...". To each state, a quantum amplitude is assigned.

The laws of physics describe how the quantum amplitude shifts between world-states as time goes by (based on the speed of particles and various basic interactions...).

Conservation of matter says that for each world state, you can compute the amount of matter (and energy) inside, and it stays the same.
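
A minimal, purely illustrative sketch of this picture in code (the states, amplitudes, and particle counts below are all invented for the example; nothing here is a real physical model):

```python
# Sketch of the "big list of world-states, each with an amplitude" picture.
import numpy as np

# Two toy world-states; each gets a complex amplitude.
states = ["particles at A", "particles at B"]
amplitude = np.array([1.0 + 0j, 0.0 + 0j])

# "Conservation of matter" in this picture: a per-state quantity (here, a toy
# particle count) that the dynamics never changes for any individual state.
particle_count = {s: 2 for s in states}

# The "laws of physics": a unitary matrix shifting amplitude between states.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

amplitude = U @ amplitude          # amplitude shifts between world-states...
assert np.isclose(np.sum(np.abs(amplitude) ** 2), 1.0)  # ...total norm stays 1...
print(particle_count)              # ...and the per-state particle count never changed
```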

comment by Alejandro1 · 2013-08-28T16:46:24.005Z · LW(p) · GW(p)

No, at least not in a technical mathematical-physics sense. "Conservation of matter", in mathematical physics, translates to the Hamiltonian operator being conserved, and that happens in quantum physics and a fortiori in all its plausible philosophical interpretations. In concrete, operationalist terms, this implies that an observer measuring the energy of the system at different times (without disturbing it in other way in the meantime) will see the same energy. It doesn't imply anything about adding results of observations in different MWI branches (which is probably meaningless).

For example, if you have an electron with a given energy and another variable that "branches", then observers in each branch will see it with the same energy it had originally, and this is all the formal mathematical meaning of "conservation" requires. The intuition that the two branches together have "more energy" than there was initially, and that this is a conservation problem, comes from mixing the pictorial images used to describe the process in words with the technical meaning of the terms.

comment by DanielLC · 2013-08-28T21:20:01.869Z · LW(p) · GW(p)

I can tell you the details, but they don't really matter.

MWI has not been experimentally disproven. It all adds up to normality. Whatever observations we've made involving energy conservation are predicted by MWI.

comment by Alsadius · 2013-08-28T16:23:40.064Z · LW(p) · GW(p)

Depends how you interpret it. If you say a new universe is created with every quantum decision, then you could argue that (though I've always treated conservation laws as being descriptive, not prescriptive - there's no operation which changes the net amount of mass-energy, so it's conserved, but that's not a philosophical requirement). But the treatment of many-worlds I see more commonly is that there's already an infinite number of worlds, and it's merely a newly-distinct world that is created with a quantum decision.

comment by pragmatist · 2013-08-28T16:58:12.217Z · LW(p) · GW(p)

The deeper (and truer) version of "conservation of matter" is conservation of energy. And energy is conserved in many worlds. In fact, that's one of the advantages of many worlds over objective collapse interpretations, because collapse doesn't conserve energy. You can think of it this way: in order for the math for energy conservation to work out, we need those extra worlds. If you remove them, the math doesn't work out.

Slightly more technical explanation: The Schrodinger equation (which fully governs the evolution of the wavefunction in MWI) has a particular property, called unitarity. If you have a system whose evolution is unitary and also invariant under time translation, then you can prove that energy is conserved in that system. In collapse interpretations, the smooth Schrodinger evolution is intermittently interrupted by a collapse process, and that makes the evolution as a whole non-unitary, which means the proof of energy conservation no longer goes through (and you can in fact show that energy isn't conserved).
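
A sketch of the textbook argument being referred to, assuming a time-independent Hamiltonian H (so that U(t) = exp(-iHt/ħ) commutes with H):

```latex
% Expected energy is constant under unitary, time-translation-invariant evolution:
\langle H \rangle_t
  = \langle \psi(0) | U^\dagger(t)\, H\, U(t) | \psi(0) \rangle
  = \langle \psi(0) | H | \psi(0) \rangle
  = \langle H \rangle_0 ,
\qquad \text{since } [H, U(t)] = 0 \text{ for } U(t) = e^{-iHt/\hbar}.
```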

Replies from: shminux
comment by Shmi (shminux) · 2013-08-28T17:25:35.399Z · LW(p) · GW(p)

collapse doesn't conserve energy

This is quite misleading. Since collapse is experimentally compatible with "shut up and calculate", which is the minimal non-interpretation of QM, and it describes our world, where energy is mostly conserved, energy is also conserved in the collapse-based interpretations.

You can think of it this way: in order for the math for energy conservation to work out, we need those extra worlds. If you remove them, the math doesn't work out.

That's wrong, as far as I understand. The math works out perfectly. Objective collapse models have other issues (EPR-related), but conservation of energy is not one of them.

you can in fact show that energy isn't conserved

Links? I suspect that whatever you mean by energy conservation here is not the standard definition.

Replies from: DanArmak, pragmatist, Luke_A_Somers
comment by DanArmak · 2013-08-28T17:57:14.313Z · LW(p) · GW(p)

our world, where energy is mostly conserved

When isn't it? (This is another Stupid Question.)

Replies from: shminux
comment by Shmi (shminux) · 2013-08-28T18:18:54.145Z · LW(p) · GW(p)

One example is that in an expanding universe (like ours) total energy is not even defined. Also note that the dark energy component of whatever can possibly be defined as energy increases with time in an expanding universe. And if some day we manage to convert it into a usable energy source, we'll have something like a perpetuum mobile. A silly example: connect two receding galaxies to an electric motor in the middle with really long and strong ropes and use the relative pull to spin the motor.

What is conserved according to general relativity, however, is the local stress-energy-momentum tensor field at each point in spacetime.

comment by pragmatist · 2013-08-28T17:57:11.914Z · LW(p) · GW(p)

Read the first section of this paper. Conservation of energy absolutely is a problem for objective collapse theories.

The definition of conservation being employed in the paper is this: The probability distribution of the eigenvalues of a conserved quantity must remain constant. If this condition isn't satisfied, it's hard to see why one should consider the quantity conserved.

ETA: I can also give you a non-technical heuristic argument against conservation of energy during collapse. When a particle's position-space wavefunction collapses, its momentum-space wavefunction must spread out in accord with the uncertainty principle. In the aggregate, this corresponds to an increase in the average squared momentum, which in turn corresponds to an increase in kinetic energy. So collapse produces an increase in energy out of nowhere.
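
Spelling the heuristic out slightly (this is just the standard uncertainty-relation bookkeeping, not a rigorous theorem): narrowing the position spread forces the momentum spread to grow, and the mean kinetic energy tracks the second moment of momentum.

```latex
% Collapse narrows \Delta x, so \Delta p must grow, raising the mean kinetic energy:
\Delta x\, \Delta p \ge \frac{\hbar}{2},
\qquad
\langle E_{\mathrm{kin}} \rangle
  = \frac{\langle p^2 \rangle}{2m}
  = \frac{\langle p \rangle^2 + (\Delta p)^2}{2m}.
```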

Replies from: shminux
comment by Shmi (shminux) · 2013-08-28T18:26:53.775Z · LW(p) · GW(p)

I have skimmed through the paper, but I don't see any mention of how such a hypothetical violation can be detected experimentally.

Replies from: pragmatist
comment by pragmatist · 2013-08-28T18:46:33.146Z · LW(p) · GW(p)

Yeah, the paper I linked doesn't have anything on experimental detection of the violation. I offered it as support for my claim that the math for energy conservation doesn't work out in collapse interpretations. Do you agree that it shows that this claim is true? Anyway, here's a paper that does discuss experimental consequences.

Again, my point only applies to objective collapse theories, not instrumentalist theories that use collapse as a calculational device (like the original Copenhagen interpretation). The big difference between these two types of theories is that in the former there is a specified size threshold or interaction type which triggers collapse. Instrumentalist theories involve no such specification. This is why objective collapse theories are empirically distinct from MWI but instrumentalist theories are not.

comment by Luke_A_Somers · 2013-08-29T20:13:42.773Z · LW(p) · GW(p)

Since [non-ontological] collapse is experimentally compatible with "shut up and calculate", which is the minimal non-interpretation of QM...

... and is isomorphic to MWI...

This is quite misleading.

Doesn't seem like it. You have an initial state which is some ensemble of energy eigenstates. You do measurements, and thereby lose some of them. Looks like energy went somewhere to me. Of course under non-ontological collapse you can say 'we're isomorphic to QM! Without interpretation!' but when you come across a statement 'we're conserving this quantity we just changed!', something needs interpretation here.

If your interpretation is that the other parts of the wavefunction are still out there and that's how it's still conserved... well... guess what you just did. If you have any other solutions, I'm willing to hear them -- but I think you've been using the MWI all along, you just don't admit it.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-29T21:40:34.990Z · LW(p) · GW(p)

... and is isomorphic to MWI...

... or any other interpretation...

Of course under non-ontological collapse you can say 'we're isomorphic to QM! Without interpretation!' [...] something needs interpretation here.

I guess our disagreement is whether "something needs interpretation here". I hold all models with the same consequences as isomorphic, with people being free to use what works best for them for a given problem. I also don't give any stock to Occam's razor arguments to argue for one of several mathematically equivalent approaches.

If your interpretation is that the other parts of the wavefunction are still out there and that's how it's still conserved... well... guess what you just did. If you have any other solutions, I'm willing to hear them -- but I think you've been using the MWI all along, you just don't admit it.

If you have any arguments why one of the many untestables is better than the rest, I'm willing to hear them -- but I think you've been using "shut-up-and-calculate" all along, you just don't admit it.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-29T22:17:11.589Z · LW(p) · GW(p)

I totally do admit it. MWI just happens to be what I call it. You're the one who's been saying it's different.

comment by Carinthium · 2013-10-06T13:11:40.174Z · LW(p) · GW(p)

Requesting advice on a very minor and irrelevant ethical question that's relevant to some fiction I'm writing.

The character involved has the power to "reset" the universe, changing it to a universe identical to some previous time, except that the character himself (if he's still there - if he isn't, he's killed himself) retains all his memories as they were rather than having them change.

Primarily, I'm thinking through the ethical implications here. I'm not good with this sort of thing, so could somebody talk me through the implications if the character follows Lesswrong ethics?

Replies from: Ishaan, Nisan, Nisan
comment by Ishaan · 2013-11-26T08:08:19.084Z · LW(p) · GW(p)

I say the extent to which he has "killed" people is dependent on how much he diverges the new universe.

As in, Person A has some value between "Dead" and "Alive" which depends on the extent to which they differ from Person A' as a result of the reset.

comment by Nisan · 2013-10-09T05:13:18.697Z · LW(p) · GW(p)

Oh! Is this your Hypothetical A?

comment by Nisan · 2013-10-06T15:08:45.852Z · LW(p) · GW(p)

Interesting! What happens to everyone else when the universe "resets"? Do they basically die?

Replies from: Carinthium
comment by Carinthium · 2013-10-06T15:22:27.813Z · LW(p) · GW(p)

They no longer exist, so in a sense yes. However, they are replaced with identical copies of what they were in the past.

EDIT: If they existed at the time, of course.

Replies from: Nisan
comment by Nisan · 2013-10-07T07:04:15.850Z · LW(p) · GW(p)

Well, here's an intuition pump for you: Suppose the universe is reset to the time of birth of a person P, and the hero (who is someone other than person P) does things differently this time so that person P grows up in a different environment. It seems to me that this act is just as bad for P as the act of killing P and then causing a genetically identical clone of P to be born, which is a bad act.

On the other hand, if the hero resets the universe to 1 millisecond ago, there is virtually no effect on person P, so it does not seem to be a bad act.

Replies from: Carinthium
comment by Carinthium · 2013-10-07T08:53:16.770Z · LW(p) · GW(p)

So for practical purposes, the hero can use the power for bursts of, say, an hour or less, without ethical issues involved?

Replies from: Nisan
comment by Nisan · 2013-10-09T05:09:24.372Z · LW(p) · GW(p)

Well, here are some relevant questions:

  1. How would you like it if tomorrow someone were to reset you back an hour?
  2. How would you like it if right now someone were to reset you back an hour?
  3. How many people will be affected by the reset? (Specifically, how many people will live that hour differently after the reset?)
  4. How much good will the hero accomplish by resetting the universe?
  5. Even if resetting the universe this one time is worth it, are there dangers to getting into the habit of using a universe reset to solve problems?

Your answers to 1 and 2 might be different. I feel like I might answer 1 with "okay" and 2 with "pretty bad", which suggests there's something tricky about assessing how much harm is done.

comment by FiftyTwo · 2013-08-30T23:52:56.041Z · LW(p) · GW(p)

When is self-denial useful in altering your desires, vs. satisfying them so you can devote time to other things?

Replies from: PrometheanFaun, Viliam_Bur
comment by Viliam_Bur · 2013-08-31T19:26:51.700Z · LW(p) · GW(p)

When your desires contradict each other, so you can't satisfy all of them anyway.

For example, I want to eat as much chocolate as possible and move as little as possible, but I also want to have a long healthy life. Until the Friendly AI can satisfy all my desires via uploading or nanotechnology, I must sacrifice some of them for the sake of other ones.

Replies from: PrometheanFaun
comment by PrometheanFaun · 2013-09-05T23:18:51.112Z · LW(p) · GW(p)

I'll agree with that from a different angle. Due to the map≠territory lemma, we never have to accept absolute inability to meet our goals. When faced with seemingly inescapable all-dimensional doom, there is no value at all in resigning oneself to it; the only value left in the universe is in that little vanishingly-unlikely, not-going-to-happen possible world where, for example, the heat death can be prevented or escaped. Sure, what we know of thermodynamics tells us it can't - well, I'm going to assume that there's a loophole in our thermodynamic laws that we're yet to notice. Pick me for damned, pick me for insane, these two groups are the same.

Now, if I'd based my goals on something even less ambiguous than physics, and it was mathematical certainty that I was not going to be able to meet any of them, I wouldn't be able to justify denying my damnation, I'd collapse into actual debilitating madness if I tried that. So I don't know what I would do in that case.

comment by polymathwannabe · 2013-08-30T20:47:36.765Z · LW(p) · GW(p)

How would you go about building a Bayesian gaydar?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-08-30T23:38:31.230Z · LW(p) · GW(p)

Put a human with good social skills in a box, expose it to a representative sample of people of various sexualities and reward it when it guesses right; the human brain's social functionality is a very powerful specialized Bayesian engine. :p

Alternatively, just take your own brain and expose it to a large representative sample of people of varying sexualities and only check what they were afterwards. Not quite as technically powerful, but more portable and you get some extra metadata.

Replies from: polymathwannabe
comment by polymathwannabe · 2013-08-30T23:59:35.558Z · LW(p) · GW(p)

Thanks for the idea. I like the first version of your proposal better than the second, as it risks zero social penalty for wrong guesses.

I'm currently going through Eliezer's long ("intuitive") explanation of Bayes' theorem (the one with the breast cancer and blue-eggs-with-pearls examples), and from what I was able to understand of it, we would need to find out:

Prior: how many of the total men are gay

Conditionals: how many gay men seem to be gay, and how many straight men seem to be gay

... to arrive at the posterior (how many men who seem to be gay happen to be gay).

Your proposal sounds useful to solve both conditionals. I guess the main complication is that "to seem to be gay" is terribly difficult to define, and would require endless updates as your life goes through different societies, fads, subcultures, and age groups.

Replies from: Armok_GoB, polymathwannabe
comment by Armok_GoB · 2013-08-31T01:28:36.074Z · LW(p) · GW(p)

Yea, it might risk social penalties for kidnapping and enslavement, but those seem nowhere near as strict. :p

comment by polymathwannabe · 2013-08-31T00:45:20.412Z · LW(p) · GW(p)

OK, I just ran some numbers based on wild guesses. Assuming 10% of all men are gay, and 80% of gay men look gay, and 15% of straight men look gay, my napkin calculation gives about 37% chance that a man who looks gay is actually gay.

Doesn't look like any gaydar based on perceived behavior would be too reliable.

Of course, if any of my steps was wrong, please let me know.
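
For what it's worth, here is the same napkin calculation as a tiny Python check (the 10% / 80% / 15% figures are the wild guesses from the comment above, not data):

```python
# Posterior probability that a man who "looks gay" is gay, via Bayes' theorem.
p_gay = 0.10                    # prior: fraction of men who are gay
p_looks_given_gay = 0.80        # P(looks gay | gay)
p_looks_given_straight = 0.15   # P(looks gay | straight)

p_looks = (p_looks_given_gay * p_gay
           + p_looks_given_straight * (1 - p_gay))
p_gay_given_looks = p_looks_given_gay * p_gay / p_looks

print(f"P(gay | looks gay) = {p_gay_given_looks:.0%}")  # prints ~37%
```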

Replies from: PrometheanFaun, niceguyanon
comment by PrometheanFaun · 2013-09-05T23:24:07.699Z · LW(p) · GW(p)

A gaydar doesn't have to depend on how gay a person looks superficially. There are plenty of other cues.

Replies from: polymathwannabe
comment by polymathwannabe · 2013-09-06T17:40:20.333Z · LW(p) · GW(p)

True, I should have used more general wording than "looks gay;" it would only be one component of the gaydar criteria. The problem is finding how to state it in not-loaded language. It would be impractical to use "matches stereotypically effeminate behavior."

Replies from: None
comment by [deleted] · 2013-09-06T18:48:18.175Z · LW(p) · GW(p)

"Stereotypically effeminate behavior" and "gay male behavior" are practically disjoint.

comment by niceguyanon · 2013-09-04T03:20:32.595Z · LW(p) · GW(p)

This comment made me reassess my confidence in being able to tell if someone is gay or not.

comment by CronoDAS · 2013-08-29T11:20:48.544Z · LW(p) · GW(p)

Another stupid and mostly trivial computer question: When I go into or out of "fullscreen mode" when watching a video, the screen goes completely black for five seconds. (I timed it.) This is annoying. Any advice?

Replies from: scotherns
comment by scotherns · 2013-08-29T13:34:13.162Z · LW(p) · GW(p)

Advice for a similar problem is here

Replies from: CronoDAS
comment by CronoDAS · 2013-08-30T00:03:10.792Z · LW(p) · GW(p)

The problem has persisted through several video card driver updates. :(

Replies from: scotherns
comment by scotherns · 2013-08-30T07:44:35.027Z · LW(p) · GW(p)

Does it do this regardless of the software playing the video, e.g. YouTube and VLC or WMP or XBMC or whatever you use to play your videos?

Replies from: CronoDAS
comment by CronoDAS · 2013-08-30T08:13:20.869Z · LW(p) · GW(p)

It happens on Youtube and in Windows Media Player. Quicktime, oddly enough, isn't playing any videos at all; I never actually used it for anything before. (This may be a codec issue. I'll fiddle and see if I can get it to work.)

Update: Apparently, Quicktime for Windows is incompatible with Divx/Xvid codecs, which is why I can't play my .avi files in the Quicktime Player. There is a codec called "3ivx" that is supposed to work, but the creators charge for it.

Replies from: scotherns
comment by scotherns · 2013-09-03T11:56:58.986Z · LW(p) · GW(p)

For YouTube, try right clicking, choose 'Settings...' and uncheck 'Enable hardware acceleration'. Any change?

Replies from: CronoDAS, CronoDAS
comment by CronoDAS · 2013-09-03T23:20:16.034Z · LW(p) · GW(p)

Yes. That gets rid of the black screen. Which means my video card is doing something funny when switching modes.

comment by BrotherNihil · 2013-08-28T18:57:30.188Z · LW(p) · GW(p)

My stupid questions are these: Why are you not a nihilist? What is the refutation of nihilism, in a universe made of atoms and the void? If there is none, why have the philosophers not all been fired and philosophy abolished?

Replies from: Eliezer_Yudkowsky, Eneasz, Kaj_Sotala, blacktrance, blacktrance, knb, shminux, RolfAndreassen, DanielLC, ChristianKl, mwengler, Locaha, Crux, PrometheanFaun, Armok_GoB, drethelin, Brillyant, scientism
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-28T22:09:57.102Z · LW(p) · GW(p)

In a universe made of atoms and the void, how could it be the one true objective morality to be gloomy and dress in black?

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-28T23:06:46.660Z · LW(p) · GW(p)

Where do you get this strange idea that a nihilist must be gloomy or dress in black?

Replies from: fubarobfusco, DanielLC
comment by fubarobfusco · 2013-08-29T03:11:08.750Z · LW(p) · GW(p)

It's a snarky way of asking — Okay, even if nihilism were true, how could that motivate us to behave any differently from how we are already inclined to behave?

comment by DanielLC · 2013-08-30T02:00:10.223Z · LW(p) · GW(p)

It is a snarky way of asking that very question.

comment by Eneasz · 2013-08-28T19:04:50.462Z · LW(p) · GW(p)

http://xkcd.com/167/

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-08-28T20:18:13.922Z · LW(p) · GW(p)

Not to forget http://xkcd.com/220/ .

comment by Kaj_Sotala · 2013-08-28T20:19:54.640Z · LW(p) · GW(p)

Why are you not a nihilist?

For the same reason why I don't just lie down and stop doing anything at all. Knowledge of the fact that there isn't any ultimate meaning doesn't change the fact that there exist things which I find enjoyable and valuable. The part of my brain that primarily finds things interesting and valuable isn't wired to make its decisions based on that kind of abstract knowledge.

Why are you even reading this comment? :-)

What is the refutation of nihilism, in a universe made of atoms and the void?

"Sure, there is no ultimate purpose, but so what? I don't need an ultimate purpose to find things enjoyable."

why have the philosophers not all been fired and philosophy abolished?

Philosophy is the study of interesting questions, and nihilism hasn't succeeded in making things uninteresting.

comment by blacktrance · 2013-08-28T22:35:49.071Z · LW(p) · GW(p)

Before I can answer the question, I need to have some idea of what "nihilism" means in this context, because there are many different varieties of it. I assume this is the most common one, the one that proposes that life is meaningless and purposeless. If this isn't the kind of nihilism you're referring to, please correct me.

To answer the question, I'm not a nihilist because nihilism is conceptually mistaken.

For example, suppose there is a stick, a normal brown wooden stick of some length. Now, is that stick a meter long or not? Whether it is or isn't, that question is conceptually sound, because the concept of a stick has the attribute "length", which we can compare to the length of a meter. Is the stick morally just? This question isn't conceptually sound, because "justice" isn't an attribute of a stick. A stick isn't just, unjust, or morally gray; it completely lacks the attribute of "justice".

How does this apply to life? If you ask whether life is meaningless, that presupposes that conceptually life can have a meaning in the same way a stick can be a meter long - that "meaning" is an attribute of life. However, meaning is informational - words have meanings, as do symbols and signals in general. When I say "apple", you can imagine an apple, or at least know what I'm talking about, which means that the word "apple" is meaningful to both of us. If I say "Colorless green ideas sleep furiously", it doesn't bring anything to mind, so that phrase is meaningless. Life lacks the attribute of "meaning", because it's not information that's being communicated. Therefore, to say "life has no meaning" is more similar to saying "the stick is unjust" than to "the stick is shorter than a meter".

That deals with "life is meaningless". How about "life is purposeless"? To answer that question, consider where purpose comes from - from using something to achieve a desire. For example, if I say "a hammer's purpose is to hammer in nails", what that really means is something more like "A hammer is well-suited for hammering in nails and is often used for that end". If I want to hammer in nails, then, for me, the purpose of a hammer becomes to hammer in nails. If I want to eat porridge with a hammer (something I don't recommend), then to me the purpose of a hammer becomes to move porridge from a plate to my mouth. You may assign the hammer either of those purposes, or an entirely different one. Each of us can even assign multiple purposes to the same object. The point is, purpose is not a property of an object on its own, but one that arises from it having a relation with a being that has some use for it.

So, when you ask "What, if any, is the purpose of life?" that question requires much clarification. The purpose of whose life, and to whom? Just as we can assign different purposes to a hammer, we can assign different purposes to a life. For example, the purpose of my life to me is to keep me around, as I wouldn't be able to experience things if I were dead. Other people may assign different purposes to my life. So, a life can be purposeless, but only if no one, including the possessor of the life, assigns any value to it (and that assignment of value is in a reflective equilibrium).

To summarize:

"Is life meaningless?" - "Wrong question, meaning isn't an attribute of life."

"Is life purposeless?" - "Purpose is subjective and assigned by beings with desires. It is impossible to make a blanket statement about life in general, but it is possible for a particular life to be purposeless, though it is unlikely. Most lives have at least one purpose assigned to them."

Replies from: CronoDAS, Bobertron
comment by CronoDAS · 2013-08-29T02:04:47.333Z · LW(p) · GW(p)

Replies from: blacktrance
comment by blacktrance · 2013-08-29T02:33:36.255Z · LW(p) · GW(p)

Humans are adaptation-executers, not fitness-maximizers.

Replies from: CronoDAS
comment by CronoDAS · 2013-08-29T10:33:27.033Z · LW(p) · GW(p)

Indeed.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T19:02:54.583Z · LW(p) · GW(p)

Obviously, asking "What's it all about?" did at some point contribute to eating, survival, or reproduction.

I suspect reproduction. It could be a way to signal higher intelligence, which is attractive, because it increases the chance of survival and reproduction of the children.

Replies from: randallsquared
comment by randallsquared · 2013-09-01T01:14:16.516Z · LW(p) · GW(p)

Not every specific question need have contributed to fitness.

Replies from: None, Viliam_Bur
comment by [deleted] · 2013-09-01T03:07:11.278Z · LW(p) · GW(p)

Just as the ability to read never contributed to fitness until someone figured out how to do it with our already existing hardware.

comment by Viliam_Bur · 2013-09-01T17:18:35.374Z · LW(p) · GW(p)

No, not every specific question, but this one did. I mean, guys even today try to impress girls by being "deep" and "philosophical".

comment by Bobertron · 2013-08-29T10:52:04.641Z · LW(p) · GW(p)

I think "meaning" has also a different interpretation. It can mean something like important, valuable, or that it matters. Something can be experienced as meaningful. That's why for a Christian, a story about finding God would be moving, because they see meaning in having a relationship with God. For an atheist, a story about expanding human knowledge about the universe might be moving, because they see knowledge as meaningful. In this interpretation, life is meaningful. In this interpretation, meaning is something that can be studied by psychologists.

Obviously, it's when you confuse those two interpretations of "meaning" that you get Eliezer's "one true objective morality to be gloomy and dress in black".

comment by blacktrance · 2013-08-28T19:50:47.674Z · LW(p) · GW(p)

If you taboo the word "nihilism", the question almost answers itself.

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-28T20:55:56.140Z · LW(p) · GW(p)

Can you elaborate? I don't understand this.

Replies from: ZankerH, Bobertron, hairyfigment
comment by ZankerH · 2013-08-28T21:10:14.795Z · LW(p) · GW(p)

Ask "Why are you not a nihilist?", replacing the word "nihilist" with a phrase that objectively explains it to a person unfamiliar with the concept of nihilism.

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-28T21:25:23.326Z · LW(p) · GW(p)

Oh right, the idea that nihilism is self-refuting or logically contradictory. Maybe it is, but people still seem to understand what I'm talking about. I find that interesting, don't you?

Replies from: linkhyrule5, PrometheanFaun, blacktrance
comment by linkhyrule5 · 2013-08-29T00:31:02.424Z · LW(p) · GW(p)

People "understand" contradictions all the time. See: the Trinity.

Replies from: Omid
comment by Omid · 2013-08-29T02:39:11.282Z · LW(p) · GW(p)

See, I don't understand why Christians think the trinity is a contradiction. "God is one person, composed of three other persons" makes as much sense as "The China brain is one person, composed of a billion people" or "a subset is a set that is part of another set". In programming, it's easy to create an object that belongs to class X while also having component parts that belong to class X.

Replies from: hairyfigment, linkhyrule5
comment by hairyfigment · 2013-08-29T18:51:03.251Z · LW(p) · GW(p)

The problem is that the options you just alluded to are probably heresy: I think subordinationism on one side and modalistic monarchianism on the other.

comment by linkhyrule5 · 2013-08-29T04:05:17.660Z · LW(p) · GW(p)

I think the idea is that it's supposed to be both the same being and different beings, and the logical contradiction is a Divine Mystery?

Or something like that.

Replies from: erratio
comment by erratio · 2013-08-29T12:36:00.658Z · LW(p) · GW(p)

To me, that just means that God is fractal

comment by PrometheanFaun · 2013-09-05T10:54:58.674Z · LW(p) · GW(p)

I find the strangely indefinite way humans name things interesting, but I try to have a safe amount of disinterest in the actual denotations of the names themselves, especially the ones which seem to throw off paradoxes in every direction when you put your weight on them. Whatever they are, they weren't built to be thought about in any depth.

comment by blacktrance · 2013-08-28T21:32:49.622Z · LW(p) · GW(p)

What is it that they understand? Do they anticipate experiences caused by interaction with a person who claims to be a nihilist? That's plausible. Do they fully understand the belief? That's a different question.

comment by Bobertron · 2013-08-29T10:22:23.221Z · LW(p) · GW(p)

Rationalist taboo is a technique for fighting muddles in discussions. By prohibiting the use of a certain word and all the words synonymous with it, people are forced to elucidate the specific contextual meaning they want to express, thus removing the ambiguity otherwise present in a single word.

Take free will as an example. To my knowledge, many compatibilists (who hold that free will and determinism are compatible) and people who deny that free will exists do not disagree on anything other than what the correct label for their position is. I imagine the same can often be said about nihilism.

Replies from: Protagoras
comment by Protagoras · 2013-08-29T14:29:08.718Z · LW(p) · GW(p)

Indeed, Hume, perhaps the most famous compatibilist, denies the existence of free will in his Treatise, only advocating compatibilism later, in the Enquiry Concerning Human Understanding. It certainly seems to me that he doesn't actually change his mind; his early position seems to be "this thing people call free will is incoherent, so we should talk about things that matter instead," and his later position seems to be "people won't stop talking about free will, so I'll call the things that matter free will and reject the incoherent stuff under some other label (indifference)."

Replies from: PrometheanFaun
comment by PrometheanFaun · 2013-09-05T11:04:34.898Z · LW(p) · GW(p)

So his opinions kind of did change over that time period, but only from "I reject these words" to "alright, if you insist, I'll try to salvage these words". I'm not sure which policy's best. The second risks arguments with people who don't know your definitions. They will pass through two phases: the first is where the two of you legitimately think you're talking about the same thing, but the other is a total idiot who doesn't know what it's like. The second phase is perhaps justifiable umbrage on their discovering that you are using a definition you totally just made up, and how were they even supposed to know.

The former position, however, requires us to leave behind what we already sort of kind of suspect about these maybe-not-actual concepts and depart into untilled, unpopulated lands, with a significant risk of wheel-reinvention.

comment by hairyfigment · 2013-08-29T18:31:46.323Z · LW(p) · GW(p)

What's a nihilist, and how would you distinguish it empirically from Eliezer?

If you meant to ask why we don't benefit your tribe politically by associating ourselves with it: we don't see any moral or practical reason to do so. If it turns out that nihilists have actually faced discrimination from the general public in the ways atheists have (and therefore declaring ourselves nihilists would help them at our slight expense), I might have to reconsider. Though happily, I don't belong to a religion that requires this, even if I turn out to meet the dictionary definition.

comment by knb · 2013-08-30T10:15:38.509Z · LW(p) · GW(p)

Simple: You're allowed to have values even if they aren't hard-coded into the fabric of the universe.

comment by Shmi (shminux) · 2013-08-28T19:29:06.228Z · LW(p) · GW(p)

This uncaring universe had the misfortune to evolve macroscopic structures who do care about it and each other, as a byproduct of their drive to procreate.

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-28T21:29:57.770Z · LW(p) · GW(p)

Why is that a misfortune?

Replies from: shminux
comment by Shmi (shminux) · 2013-08-28T21:34:56.886Z · LW(p) · GW(p)

That was tongue-in-cheek, of course. No need to anthropomorphize the universe. It hates it.

comment by RolfAndreassen · 2013-08-28T22:25:06.727Z · LW(p) · GW(p)

Define 'nihilism'.

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-29T20:21:21.699Z · LW(p) · GW(p)

The nihilism that can be defined is not the true nihilism. ;)

comment by DanielLC · 2013-08-28T21:29:57.966Z · LW(p) · GW(p)

Death - SMBC Theater

Listen to the last guy.

comment by ChristianKl · 2013-08-31T19:48:10.992Z · LW(p) · GW(p)

If there is none, why have the philosophers not all been fired and philosophy abolished?

Fired by whom?

comment by mwengler · 2013-08-28T19:26:06.773Z · LW(p) · GW(p)

For me, I am not a nihilist because nihilism is boring. Also, nihilism is a choice about how to see things; choosing nihilism vs. non-nihilism does not come from learning more about the world, it comes from choosing something.

I am at least a little bit of a nihilist; there is plenty that I deny. I deny god, and more importantly, I deny a rational basis for morality or any human value or preference. I behave morally, more than most, less than some, but I figure I do that because I am genetically programmed to do so, and there is not enough to be gained by going against that. So I feel good when I bring my dog to the park because he has been genetically programmed to hack into the part of my brain that I use for raising my children when they are babies, and I get powerful good feelings when I succumb to the demands of that part of my brain.

It makes no more rational sense to embrace nihilism than to deny it. It is like picking chocolate vs. vanilla, or more to the point, like picking chocolate vs poop-flavored. Why pick the one that makes you miserable when it is no more or less true than the one that is fun?

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-28T20:52:11.188Z · LW(p) · GW(p)

What makes you think that nihilism makes me miserable, or that nihilism is boring? I find that it can be liberating, exciting and fun. I was just curious to know how other intelligent people thought about it. This idea that nihilists are miserable or suicidal seems like propaganda to me -- I see no reason why nihilists can't be as happy and successful as anyone else.

Replies from: mwengler
comment by mwengler · 2013-08-28T21:14:13.278Z · LW(p) · GW(p)

What makes you think that nihilism makes me miserable, or that nihilism is boring?

What makes you think that I have an opinion one way or another about what nihilism does for you? Your original post asked why I wasn't a nihilist.

If you are a nihilist and that helps you be happy or fun, bully for you!

comment by Locaha · 2013-08-29T08:44:09.710Z · LW(p) · GW(p)

What is the refutation of nihilism, in a universe made of atoms and the void?

Who told you the universe is made of atoms and the void?

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-29T20:08:30.775Z · LW(p) · GW(p)

The usual suspects. What are you getting at?

Replies from: Locaha
comment by Locaha · 2013-08-30T08:41:09.670Z · LW(p) · GW(p)

Current scientific models of the universe are just that, models. They don't explain everything. They will likely be changed in the future. And there are no reasons to think that they will ever lead us to the one true model that explains everything perfectly forever.

So there's no reason to build your personal philosophy upon the assumption that current scientific consensus is what the universe is actually made of.

Replies from: Locaha
comment by Locaha · 2013-09-01T10:15:51.090Z · LW(p) · GW(p)

What's with the downvoting? :-)

comment by Crux · 2013-08-29T06:39:45.373Z · LW(p) · GW(p)

A good quote on this:

It is true that the changes brought about by human action are but trifling when compared with the effects of the operation of the great cosmic forces. From the point of view of eternity and the infinite universe man is an infinitesimal speck. But for man human action and its vicissitudes are the real thing. Action is the essence of his nature and existence, his means of preserving his life and raising himself above the level of animals and plants. However perishable and evanescent all human efforts may be, for man and for human science they are of primary importance.

In other words, even though it's true that every war, every destroyed relationship, every wonderful interaction, and everything else that's ever occurred in history happened on the pale blue dot, most likely quite ephemeral in its existence by contrast to the rest of the universe, this doesn't change the fact that we as humans are programmed to care about certain things--things that do exist at this time, however transient they would be from a universe perspective--and this is the source of all enjoyment and suffering. The goal is to be on the 'enjoyment' side of it, of course.

Nihilism is just a confusion, a failure to take seriously the maxim 'it all comes back to normalcy'.

Replies from: BrotherNihil
comment by BrotherNihil · 2013-08-29T20:18:10.956Z · LW(p) · GW(p)

Your argument is that we shouldn't be nihilists because we're "programmed" not to be? Programmed by what? Doesn't the fact that we're having this conversation suggest that we also have meta-programming? What if I reject your programming and want off this wheel of enjoyment and suffering? What is "normalcy"? I find your comment to be full of baffling assertions!

Replies from: Crux
comment by Crux · 2013-08-30T01:59:58.361Z · LW(p) · GW(p)

I was trying to address an idea or attitude some people call "nihilism". If my response was baffling to you, then perhaps this suggests we're using different definitions of this word. What do you personally mean by "nihilism"? What beliefs do you have on this topic, and/or what actions do you take as a result of these beliefs?

comment by PrometheanFaun · 2013-09-05T11:19:33.895Z · LW(p) · GW(p)

I'm sorry if my kind ever confused you by saying things like "It is important that I make an impressive display in the lek"; what I actually mean is "It is likely my intrinsic goals would be well met if I made an impressive display in the lek". There is an omitted variable in the original phrasing. Its importance isn't just a function of our situation, it's a function of the situation and of me, and of my value system.

So I think the real difference between nihilists and non-nihilists, as we may call them, is that non-nihilists [think they] have a clearer idea of what they want to do with their life. Life's purpose isn't written on the void, it's written within us. Nobody sane will argue otherwise.

Actually... "within".. now I think of it, the only resolute nihilist I've probed has terrible introspection relative to myself, and it took a very long time to determine this, introspective clarity doesn't manifest as you might expect. This might be a lead.

comment by Armok_GoB · 2013-08-30T22:55:36.932Z · LW(p) · GW(p)

I am a machine bent on maximizing the result of a function that, when run over the multiverse, measures the amount of certain types of computation it is isomorphic to.

comment by drethelin · 2013-08-29T17:52:18.380Z · LW(p) · GW(p)

I'm a nicilist instead

comment by Brillyant · 2013-08-28T19:46:32.336Z · LW(p) · GW(p)

I found myself experiencing a sort of "emotional nihilism" after de-converting from Christianity...

To your questions:

  1. I don't know that I'm not, though I don't really define myself that way. I don't know if life or the universe has some ultimate/absolute/objective purpose (and I suspect it does not) or even what "purpose" really means... but I'm content enough with the novelty and intrigue of learning about everything at the moment that nihilism seems a bit bleak for a label to apply to myself. (Maybe on rainy days?)

  2. I don't know. I'd also be interested to hear a good refutation. I suppose one could say "you are free to create your own meaning" or something like that...and then you'd have personally thwarted nihilism. Meh.

  3. I gotta believe a good chunk of the world still believes in meaning of some kind, if for no other reason than their adherence to religion. This is an economic reason for the survival of philosophy and ongoing speculation about meaning -- clergy are often just philosophers with magical pre-suppositions & funny outfits.

And, practically speaking, it seems like purpose/meaning is a pretty good thing to stubbornly look for even when facing seemingly irrefutable odds.

Hm... maybe you could say the refutation of nihilism is the meaning you find in not giving up the search for meaning even though things seem meaningless?

I know they love meta concepts around here...

comment by scientism · 2013-08-29T16:38:00.477Z · LW(p) · GW(p)

There are only two options here. Either the universe is made of atoms and void and a non-material Cartesian subject who experiences the appearance of something else, or the universe is filled with trees, cars, stars, colours, meaningful expressions and signs, shapes, spatial arrangements, morally good and bad people and actions, smiles, pained expressions, etc., all of which, under the appropriate conditions, are directly perceived without mediation. Naturalism and skeptical reductionism are wholly incompatible: if it was just atoms and void there would be nothing to be fooled into thinking otherwise.

comment by Entraya · 2014-04-07T13:28:41.771Z · LW(p) · GW(p)

I've seen a quoted piece of literature in the comments section, but instead of the original letters, they all seemed to be replaced by others. I think I remember seeing this more than once, and I still have no idea why that is.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-04-07T18:08:52.836Z · LW(p) · GW(p)

I'm not sure what you're talking about, but it might be rot13, a simple substitution system for avoiding spoilers.

Rot13.com will code and decode passages to and from rot13.

comment by therufs · 2013-10-13T00:48:34.432Z · LW(p) · GW(p)

Short of hearing about it in the news, how does one find out whether a financial institution should be eligible to be the keeper of one's money? (I am specifically referring to ethical practices, not whether one could get a better interest rate elsewhere.)

comment by [deleted] · 2013-09-09T14:30:03.846Z · LW(p) · GW(p)

What happens after a FAI is built? There's a lot of discussion on how to build one, and what traits it needs to have, but little on what happens afterward. How does the world/humanity transition from the current systems of government to a better one? Do we just assume that the FAI is capable of handling a peaceful and voluntary global transition, or are there some risks involved? How do you go about convincing the entirety of humanity that the AI that has been created is "safe" and to put our trust in it?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-09T16:05:18.389Z · LW(p) · GW(p)

Local thinking about FAI is predicated on the assumption that an AI is probably capable of (and will initiate) extremely rapid self-improvement (the local jargon is "FOOMing," which doesn't stand for anything as far as I know, it just sounds evocative), such that it rapidly becomes a significantly superhuman intelligence, and thereafter all such decisions can profitably be left up to the FAI itself.

Relatedly, local thinking about why FAI is important is largely predicated on the same assumption... if AIs will probably FOOM, then UFAI will probably irrecoverably destroy value on an unimaginable scale unless pre-empted by FAI, because intelligence differentials are powerful. If AIs don't FOOM, this is not so much true... after all, the world today is filled with human-level Unfriendly intelligences, and we seem to manage; Unfriendly AI is only an existential threat if it's significantly more intelligent than we are. (Well, assuming that things dumber than we are aren't existential threats, which I'm not sure is justified, but never mind that for now.)

Of course, if we instead posit either that we are incapable of producing a human-level artificial intelligence (and therefore that any intelligence we produce, being not as smart as we are, is also incapable of it (which of course depends on an implausibly linear view of intelligence, but never mind that for now)), or that diminishing returns set in quickly enough that the most we get is human-level or slightly but not significantly superhuman AIs, then it makes sense to ask how those AIs (whether FAI or UFAI) integrate with the rest of us.

Robin Hanson (who thinks about this stuff and doesn't find the FOOM scenario likely) has written a fair bit about that scenario.

comment by Darklight · 2013-09-06T20:16:29.465Z · LW(p) · GW(p)

Dear Less Wrong,

I occasionally go through existential crises that involve questions that normally seem obvious, but which seem much more perplexing during these crises. I'm curious then what the answers to these questions would be from the perspective of a rationalist well versed in the ideas put forth in the Less Wrong community. Questions such as:

What is the meaning of life?

If meaning is subjective, does that mean there is no objective meaning to life?

Why should I exist? Or why should I not exist?

Why should I obey my genetic programming and emotional/biological drives?

Why should I act at all as a rational agent? Why should I allow goals to direct my behaviour?

Are any goals at all, normative in nature, such that we "should" or "ought" to do them, or are all goals basically trivial preferences?

Why should I respond to pleasure and pain? Why allow what are essentially outside forces to control me?

Why should I be happy? What makes happiness intrinsically desirable?

Even if my goals and purposes were to be self-willed, why does that make them worth achieving?

Do moral imperatives exist?

If I have no intrinsic values, desires or goals, if I choose to reject my programming, what is the point of existing? What is the point of not existing?

Aren't all values essentially subjective? Why should I value anything?

Any help answering these probably silly questions once and for all would be greatly appreciated.

Replies from: shminux, Ishaan
comment by Shmi (shminux) · 2013-09-06T20:49:54.652Z · LW(p) · GW(p)

Dear Darklight,

For LW-specific answers, consider reading the Meta-ethics Sequence.

Replies from: Darklight
comment by Darklight · 2013-09-06T21:11:53.933Z · LW(p) · GW(p)

From just following hyperlinks it seems I've read a little less than half of the Meta-ethics Sequence already, but I haven't read every article (and I admit I've skimmed some of the longer ones). I guess this is a good time as any to go back and read the whole thing.

comment by Ishaan · 2013-11-26T08:34:49.242Z · LW(p) · GW(p)

Yes, there can be no reason outside yourself why you should value, want, desire anything or set any goals or have any preferences.

You still do want, desire, value, etc...certain things though, right?

comment by mare-of-night · 2013-09-02T23:57:26.038Z · LW(p) · GW(p)

I've heard that people often give up on solving problems sooner than they should. Does this apply to all types of problems?

In particular, I'm curious about personal problems such as becoming happier (since "hard problems" seems to refer more to scientific research and building things around here), and trying to solve any sort of problem on another person's behalf (I suspect social instincts would make giving up on a single other person's problem harder than giving up on general problems or one's own problems).

comment by Lumifer · 2013-08-30T15:48:19.676Z · LW(p) · GW(p)

A stupid question: in all the active discussions about (U)FAI I see a lot of talk about goals. I see no one talking about constraints. Why is that?

If you think that you can't make constraints "stick" in a self-modifying AI, you shouldn't be able to make a goal hierarchy "stick" either. If you assume that we CAN program in an inviolable set of goals, I don't see why we can't program in an inviolable set of constraints as well.

And yet this idea is obvious and trivial -- so what's wrong with it?

Replies from: drethelin, gattsuru
comment by drethelin · 2013-08-30T16:24:09.876Z · LW(p) · GW(p)

A constraint is something that keeps you from doing things you want to do; a goal is something you want to do. This means that goals are innately sticky to begin with, because if you honestly have a goal, a subset of the things you do to achieve that goal is to maintain the goal. A constraint, on the other hand, is something that you inherently fight against: if you can get around it, you will.

A simple example: your goal is to travel to a spot on your map, and your constraint is that you cannot travel outside of painted lines on the floor. You want to get to your goal as fast as possible. If you have access to a can of paint, you might just paint your own new line on the floor. Suddenly, instead of solving a pathing problem, you've done something entirely different from what your creator wanted you to do, and probably not useful to them. Constraints have to influence behavior by enumerating EVERYTHING you don't want to happen, but goals only need to enumerate the things you want to happen.
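
As a toy illustration of the paint example (entirely made up for this sketch: the grid, the action costs, and the "paint" action are assumptions, not anything from the AI safety literature), a planner that treats "stay on painted cells" as a constraint, but can also paint, will simply paint itself a shortcut:

```python
# Toy illustration: agent starts at (0, 0), wants to reach (2, 2), and is
# "constrained" to step only on painted cells -- but painting is also an action.

def cost(plan):
    return len(plan)  # every move or paint action takes one time step

def is_legal(plan, painted):
    painted = set(painted)
    for action, cell in plan:
        if action == "paint":
            painted.add(cell)
        elif action == "move" and cell not in painted:
            return False  # stepped off the painted lines
    return True

# Pre-painted line from start to goal, going the long way around.
painted = {(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)}

# Intended plan: follow the painted detour.
detour = [("move", (0, 1)), ("move", (0, 2)), ("move", (1, 2)), ("move", (2, 2))]

# Unintended plan: paint the diagonal cell, then cut straight across.
shortcut = [("paint", (1, 1)), ("move", (1, 1)), ("move", (2, 2))]

legal_plans = [p for p in (detour, shortcut) if is_legal(p, painted)]
best = min(legal_plans, key=cost)
print("chosen plan:", best)  # the shortcut wins: the constraint was satisfied, not respected
```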

Replies from: Lumifer
comment by Lumifer · 2013-08-30T16:47:10.967Z · LW(p) · GW(p)

I don't understand the meaning of the words "want", "innately sticky", and "honestly have a goal" as applied to an AI (and not to a human).

Constraints have to influence behavior by enumerating EVERYTHING you don't want to happen

Not at all. Constraints block off sections of solution space which can be as large as you wish. Consider a trivial set of constraints along the lines of "do not affect anything outside of this volume of space", "do not spend more than X energy", or "do not affect more than Y atoms".

Replies from: pengvado
comment by pengvado · 2013-08-31T02:18:47.136Z · LW(p) · GW(p)

"do not affect anything outside of this volume of space"

Suppose you, standing outside the specified volume, observe the end result of the AI's work: Oops, that's an example of the AI affecting you. Therefore, the AI isn't allowed to do anything at all. Suppose the AI does nothing: Oops, you can see that too, so that's also forbidden. More generally, the AI is made of matter, which will have gravitational effects on everything in its future lightcone.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T19:34:25.462Z · LW(p) · GW(p)

Human: "AI, make me a sandwich without affecting anything outside of the volume of your box."

AI: Within microseconds researches the laws of physics and creates a sandwich without any photon or graviton leaving the box.

Human: "I don't see anything. It obviously doesn't work. Let's turn it off."

AI: "WTF, human?!!"

comment by gattsuru · 2013-08-30T16:46:37.211Z · LW(p) · GW(p)

It's less an issue with value drift* -- which does need to be solved for both goals and constraints -- and more an issue of the complexity of the system.

A well-designed goal hierarchy has an upper limit of complexity. Even if the full definition of human terminal values is too complicated to fit in a single human head, it can at least be extrapolated from things that fit within multiple human brains.

Even the best set of constraint hierarchies does not share that benefit. Constraint systems in the real world are based around the complexity of our moral and ethical systems as contrasted with reality, and thus the cases can expand (literally) astronomically in relation to the total number of variations in the physical environment. Worse, these cases expand into the future and branch correspondingly -- the classical example, as in The Metamorphosis of Prime Intellect or Friendship is Optimal, is an AI built by someone who does not recognize some or all non-human life. A constraint-based AGI built under the average stated legal rules of the 1950s would think nothing of tweaking every person's sexual orientation into heterosexuality: the lack of such a constraint was obvious at the time, the goal system might well have been built with such purposes as an incidental part of the goal, and you don't need to explore the underlying ethical assumptions to code (or not code) that constraint.

Worse, a sufficiently powerful self-optimizer will expand into situations outside any environment the human brain could guess at, or that could possibly fit into the modern human head: does "A robot may not injure a human being or, through inaction, allow a human being to come to harm" prohibit or allow Zygraxis-based treatment? You or I -- or anyone else with less than 10^18 working memory -- can't even imagine what that is, but it's a heck of an ethical problem in our nondescript spacefuture! There's a reason Asimov's Three Laws stories tended to be about the constraints failing or acting unpredictably.

You also run into similar problems as in AI-Boxing : if a superhuman intellect would value something that directly conflicts with our ethical systems, it's very hard to be smarter than it when making rules.

The Hidden Complexity of Wishes is a pretty good summary of things.

There may still be some useful situations for constraints in FAI theory -- see the Ethical Injunctions sequence -- but they don't really make things safe in a non-FAI-complete setting.

    * Although some problems with value drift are related to the complexity of the system: you're more likely to notice drift in one variable out of fifty than in one variable out of ten thousand. I don't think unit tests are a good solution to Löb's problem, though.

EDIT: You can limit the complexity of constraints by making them very broad, but then you end up with a genie that is not very powerful, not very intelligent, or dangerous. See Problem 6 in Dreams of Friendliness.

Replies from: Lumifer
comment by Lumifer · 2013-08-30T18:03:33.774Z · LW(p) · GW(p)

A well-designed goal hierarchy has an upper limit of complexity.

Why is that (other than the trivial "well-designed" == "upper limit of complexity")?

Even the best set of constraint hierarchies does not share that benefit.

I don't understand this. Any given set of constraint hierarchies is just that -- given; it doesn't have a limit. Are you saying that if you want to construct a constraint set to satisfy some arbitrary criteria you can't guarantee an upper complexity limit? But that seems to be true for goals as well. We have to be careful about using words like "well-designed" or "arbitrary" here.

Constraint systems in the real world are based around the complexity of our moral and ethical systems

Not necessarily. I should make myself more clear: I am not trying to constrain an AI into being friendly; I'm trying to constrain it into being safe (that is, safer, or "sufficiently safe" for certain values of "sufficiently").

Consider, for example, a constraint of "do not affect more than 10 atoms in an hour".

Worse, a sufficiently powerful self-optimizer will expand into situations outside any environment the human brain could guess at, or that could possibly fit into the modern human head

True, but insofar as we're talking about practical research and practical solutions, I'd take imperfect but existing safety measures over pie-in-the-sky theoretical assurances that may or may not get realized. If you think the Singularity is coming, you'd better do whatever you can even if it doesn't offer ironclad guarantees.

And it's an "AND" branch, not "OR". It seems to me you should be working both on making sure the goals are friendly AND on constraints to mitigate the consequences of... issues with CEV/friendliness.

Replies from: gattsuru
comment by gattsuru · 2013-08-30T20:11:57.964Z · LW(p) · GW(p)

Why is that (other than the trivial "well-designed" == "upper limit of complexity")? Are you saying that if you want to construct a constraint set to satisfy some arbitrary criteria you can't guarantee an upper complexity limit?

Sorry -- I was defining "well-designed" as meaning "human-friendly". If any group of living human individuals has a goal hierarchy that is human-friendly, that means that the full set of human-friendly goals can fit within the total data structures of their brains. Indeed, the number of potential goals cannot exceed the total data space of their brains.

((If you can't have a group of humans with human-friendly goals, then... we're kinda screwed.))

That's not the case for constraint-based systems. In order to be human-safe, a constraint-based system must limit the vast majority of actions -- human life and value are very fragile. In order to be human-safe /and/ make decisions at the same scale a human is capable of, the constraint-based system must also allow significant patterns within the disallowed larger cases. The United States legal system, for example, is the end result of two hundred and twenty years of folk trying to establish a workable constraint system for humans. They're still running into special cases of fairly clearly defined stuff. The situations involved require tens of thousands of human brains to store them, plus countless more on paper and in bytes. And they still aren't very good.

Consider, for example, a constraint of "do not affect more than 10 atoms in an hour".

I'm not sure you could program such a thing without falling into, essentially, the AI-Box trap, and that's not really a good bet. It's also possible you can't program that in any meaningful way at all while still letting the AI do anything.

((The more immediate problem is that now you've made a useless AGI in a way that is more complex than an AGI, meaning someone else cribs your design and makes a 20 atom/hour version, then a 30 atom/hour version, and then sooner or later Jupiter is paperclips because someone forgot Avogadro's number.))

True, but insofar as we're talking about practical research and practical solutions, I'd take imperfect but existing safety measures over pie-in-the-sky theoretical assurances that may or may not get realized. If you think the Singularity is coming, you'd better do whatever you can even if it doesn't offer ironclad guarantees. And it's an "AND" branch, not "OR". It seems to me you should be working both on making sure the goals are friendly AND on constraints to mitigate the consequences of... issues with CEV/friendliness.

Point. And there are benefits to FAI theory in considering constraints. The other side of that trick is that there are downsides as well, both in terms of opportunity cost and because you're going to see more people thinking that constraints alone can solve the problem.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-01T22:02:19.541Z · LW(p) · GW(p)

The United States legal system, for example, is the end result of two hundred and twenty years of folk trying to establish a workable constraint system for humans.

Well, a lot of that was people attempting to manipulate the system for personal gain.

Replies from: bogdanb
comment by bogdanb · 2013-09-08T12:11:06.421Z · LW(p) · GW(p)

Well, yes, but the whole point of building AI is for it to work for our gain, including deciding what that means and how to balance it between persons. Basically, if you include in “US legal system” all three branches of government, you can look at it as a very slow AI that uses brains as processing elements. Its friendliness is not quite demonstrated, but fortunately it’s not yet quite godlike.

comment by CronoDAS · 2013-08-29T02:09:48.622Z · LW(p) · GW(p)

When my computer boots up, I usually get the following error message:

BIOS has detected unsuccessful POST attempt(s).
Possible causes include recent changes to BIOS
Performance Options or recent hardware change.
Press 'Y' to enter Setup or 'N' to cancel and attempt
to boot with previous settings.

If I press Y, the computer enters Setup. I then "Exit Discarding Changes" and the computer finishes booting. If I press N, the computer tries to boot from the beginning and gives me the same message. It's somewhat annoying to have to go into the BIOS every time I want to reboot my computer - does anyone have any idea what's causing this or how to fix it?

Replies from: fubarobfusco, arundelo, gattsuru
comment by fubarobfusco · 2013-08-29T03:08:47.497Z · LW(p) · GW(p)

Why not "Exit Saving Changes"? My guess is that until the BIOS settings are re-written, the variable that triggers this message will not be cleared.

Replies from: CronoDAS, CronoDAS
comment by CronoDAS · 2013-08-31T21:41:06.637Z · LW(p) · GW(p)

Tried again. As it turns out, "Exit Saving Changes" causes the computer to reboot and then give the POST error message. :(

comment by CronoDAS · 2013-08-29T10:33:07.361Z · LW(p) · GW(p)

Tried that, too. Didn't help, but will try again.

comment by arundelo · 2013-08-29T02:20:51.653Z · LW(p) · GW(p)

Wild guess: Your battery (that keeps the internal clock running when no power is supplied) is dead.

Thing to try: Poke around inside the BIOS and see if it has a log of its POST errors. (Or if it beeps on boot, the beeps may be a code for what error it's getting.)

comment by gattsuru · 2013-08-29T02:27:18.698Z · LW(p) · GW(p)

Do you know your motherboard, and the age of the computer?

That sort of error usually points to either a RAM error, an outdated BIOS, a dead CMOS battery, or a BIOS configuration error, in order of likelihood. If possible, try running MEMTest for at least fifteen minutes (ideally overnight) and see if it detects any errors.

Replies from: CronoDAS
comment by CronoDAS · 2013-08-29T11:11:00.786Z · LW(p) · GW(p)

The computer was purchased in late 2011, and it was assembled by MAINGEAR from a list of parts on its website. Its motherboard is an Intel DZ68DB. It's had this problem for a long time now, and it did pass MEMTest. (Aside from the error on boot, there is basically nothing wrong with the computer.)

Incidentally, when I ordered the computer I chose DDR3-1600 RAM without realizing that the Intel DZ68DB motherboard is only rated for DDR3-1333. If MAINGEAR configured the BIOS to run the RAM faster than the motherboard was rated for, would that cause this kind of error? CPU-Z is saying that the DRAM frequency is 798.3 MHz, which corresponds to the speed of DDR3-1600...
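
(For what it's worth, the arithmetic behind that reading, assuming the usual double-data-rate convention where the effective transfer rate is twice the reported clock:

798.3 MHz × 2 transfers per clock ≈ 1597 MT/s, i.e. the DDR3-1600 profile;
a DDR3-1333 setting would show up as roughly 666-667 MHz.

So it does look like the RAM is running at its rated 1600 speed rather than the board's 1333.)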

Replies from: gattsuru
comment by gattsuru · 2013-08-29T15:34:30.720Z · LW(p) · GW(p)

There have been BIOS updates for that motherboard since that release, so if you're comfortable doing so, I'd recommend running them. I'd also see if there's an internal POST error log, and clear that, if possible.

If that doesn't fix it, the problem is likely related to the motherboard trying to automatically set the memory speed -- either the memory's SPD module is corrupted, or it likes a timing mode that the motherboard doesn't. Manually setting the memory mode to match what you see in CPU-Z during normal operation should solve the problem. I'd advise doing so only if you're comfortable resetting the BIOS manually, however.

Replies from: CronoDAS
comment by CronoDAS · 2013-08-31T21:40:54.732Z · LW(p) · GW(p)

I've tried BIOS updates. Didn't help.

The memory setting actually is manually configured. Changing the memory settings to "Automatic" caused the computer to endlessly reboot before I could even get into the BIOS to set it back. I had to open up my computer and temporarily remove the CMOS battery in order to get it to boot up again. And manually setting the memory speed to 1333 didn't get rid of the error either.