Open Thread, November 1 - 7, 2013

post by witzvo · 2013-11-02T16:37:23.797Z · LW · GW · Legacy · 301 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments sorted by top scores.

comment by ChrisHallquist · 2013-11-04T01:45:01.725Z · LW(p) · GW(p)

Can someone explain nanotech enthusiasm to me? Like, I get that nanotech is one of the sci-fi technologies that's actually physics-compliant, and furthermore it should be possible because biology.

But I get the impression that among transhumanist types slightly older than me, there's a widespread expectation that it will lead to absolutely magical things on the scale of decades, and I don't get where that comes from, even after picking up Engines of Creation.

I'm thinking of, e.g. Eliezer talking about how he wanted to design nanotechnology before he got into AI, or how he casually mentions nanotechnology as being one of the big ways a super-intelligent AI could take over the world. I always feel totally mystified when I come across something like that, like it's a major gulf between me and slightly older nerds.

Replies from: Armok_GoB, Eliezer_Yudkowsky, Cyan, mwengler, Lumifer, passive_fist
comment by Armok_GoB · 2013-11-04T02:43:22.259Z · LW(p) · GW(p)

Trying for minimal technicalities: there are at least three different technologies, with few surface-level similarities in how they'd be used, that get referred to as "nanotech".

1. Assemblers: basically 3D printers, but way more flexible and able to make things like food, robots, or more assemblers.

2. Materials: diamondoids, buckytubes, circuitry. We already have some of these really; it's just that we'd get more kinds of them, and they'd be really cheap to make with a nanotech assembler. Stronger, faster, more powerful versions of what modern tech can already do.

3. Nanobots, particularly medical: basically they can do all the things living cells can do, but better, plus most of the things machines can do, and they are commandable in exact detail. There are also a number of different ways they could grant immortality, enough that they are almost sure to do so even if most of those ways end up not working out.

Now you can ask questions about each one of these in order, with more specifics.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-04T03:09:56.894Z · LW(p) · GW(p)

The question is about all these technologies - though it's about 2 mainly insofar as 2 is an extension of 1.

So the question is why expect any of these technologies to mature on a timescale of decades?

(Or, assuming FOOM, why assume they'd be relatively low-hanging fruit for a FOOMing AI, such that "trick humans into building me nano assemblers" is a prime strategy for a boxed AI to escape?)

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-04T05:52:21.588Z · LW(p) · GW(p)

As I said, 2 is already here, and it's gradually becoming more so.

For 3, we have a proof of concept to rip off: biological cells. Those also happen to have a specialized assembler in them already: the ribosome. And we can already print instructions for it. There's only one problem left, and that's the protein folding problem. Software is making fairly rapid progress on protein folding, and even if that were to fail, it won't be all that long before we can simply brute-force it with computing power. Now, the other kinds of nanobots are less clear.

The assembler (1) is trickier; however, Drexler already sorta made a blueprint for one I think, and 3 will help a great deal with it as well.

For the fooming, it's the 3 one, and ways to use it. As I said, we already have the hardware, and things like the protein folding problem are exactly what an AI would be great at. Once it's solved that, it has full control over biology and can essentially make The Thing and/or a literal mind-control virus, and take over that way.

Replies from: ChrisHallquist, DanielLC
comment by ChrisHallquist · 2013-11-04T16:19:03.957Z · LW(p) · GW(p)

There's only one problem left, and that's the protein folding problem. Software is making fairly rapid progress on protein folding, and even if that were to fail, it won't be all that long before we can simply brute-force it with computing power.

Okay, so one sub-piece of puzzlement I have is why people talk of protein folding as a problem that is either solved or unsolved - as if we (or more frighteningly, an AI) could suddenly go from barely being able to do it to 100% capable.

I was also under the impression that protein folding was mathematically horrible in a way that makes it unlikely to be brute-forced any time soon, though I just now realized that I may have been thinking of the general problem of predicting chemistry from physics; maybe protein folding is much easier.
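
To make the brute-force worry concrete, here is a minimal sketch (a toy 2D self-avoiding-walk model in Python, nothing like real protein chemistry; the chain lengths are illustrative only) of how fast the number of possible conformations grows:

    # Toy model only: count self-avoiding walks on a 2D square lattice, a minimal
    # stand-in for chain conformations. Real proteins live in 3D with side chains
    # and real energetics, but the combinatorial explosion is the same in spirit.
    def count_conformations(steps, path=((0, 0),)):
        if steps == 0:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in path:  # the chain may not pass through itself
                total += count_conformations(steps - 1, path + (nxt,))
        return total

    for n in range(1, 13):
        print(n, count_conformations(n))
    # 4, 12, 36, 100, ... -- roughly 2.6x more conformations per added monomer,
    # so exhaustively enumerating a few-hundred-residue chain is hopeless.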

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-04T22:55:41.504Z · LW(p) · GW(p)

Predicting chemistry from physics should be easy with a quantum computer, but appears hard with a classical computer. Often people say that, even once you make a classical approximation (i.e., assume that the dynamics are easy on a classical computer), the problem of finding the minimum energy state of a protein is NP-hard. That's true, but a red herring, since the protein isn't magically going to know the minimum energy state. Though it's still possible that there's some catalyst to push it into the right state, so simulating the dynamics in a vacuum won't get you the right answer (cf. prions). Anyhow, there's some hope that evolution has found a good toolbox for designing proteins and that if we can figure out the abstractions that evolution is using, it will all become easy. In particular, there are building blocks like the alpha helix. Certainly an engineer, whether evolution or us, doesn't need to understand every protein, just know how to make enough.

I think the possibility that a sufficiently smart AI would quickly find an adequate toolbox for designing proteins is quite plausible. I don't know what Eliezer means, but the possibility seems to me adequate for his arguments.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-04T23:07:23.335Z · LW(p) · GW(p)

Ah, that's helpful.

comment by DanielLC · 2013-11-05T05:00:53.095Z · LW(p) · GW(p)

I'm not sure protein folding can be brute-forced without quantum computers. There are too many ways for it to fold. In real life, I'm pretty sure quantum tunneling gets involved. Simulations have worked, but I think there's a limit to that.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-11-05T07:55:01.370Z · LW(p) · GW(p)

Try Nanosystems perhaps.

comment by Cyan · 2013-11-08T02:32:00.782Z · LW(p) · GW(p)

An analogy might help give a sense of scale here. This isn't an argument, but it hints at the scope of the unknown unknowns in nanotech space. Here on our macroscopic scale, some wonders wrought by evolution include the smasher mantis shrimp's kinetic attack, a bee hive's eusocial organization, the peregrine falcon's flight speed, and the eagle's visual system. But evolution is literally mindless -- by actually knowing how to do things, human engineering created electromagnetic railguns, networks of international trade, the SR-71 Blackbird, and the Hubble telescope. Now apply that kind of thinking to "because biology" on the nano scale...

comment by mwengler · 2013-11-07T22:02:32.105Z · LW(p) · GW(p)

Consider a machine as smart as a cellphone but the size of a blood cell.

In a sense, a protein or a drug is a smart molecule. It keys in to a very limited number of things and ignores the rest. There are many different smart proteins or smart molecules to be used as drugs with many different purposes. Even so, chemotherapy, for example, is primarily about ALMOST killing everything while differentially being a bit more toxic to the cancer cells.

Now increase the intelligence of the smartest molecule 10-fold, 100-fold, 1000-fold. Perhaps you give it the ability for simple two-way communication with the outside world. If its intelligence is increased, there should be MANY ways to allow it to distinguish a tumor, a micro-tumor, or a cancerous cell from all the good things in your body. All of a sudden, the differential toxicity of "chemo" therapy (now nanotherapy) will be 10, 100X as high as it is for smart molecules.

Now consider these smart little machines doing surgery. Inoperable tumor? Not inoperable for a host of machines the size of blood cells that will literally be able to operate on the most remote of tumors from inside them.

Tendency towards obesity? How hard will it be to have a system of nanites that screw with your metabolism in such a way as to eliminate all the stored fat in cells until they are told, or until they measure, that we are down to a good level?

These are just a few stories from medicine. I expect anybody who does not wish to get sick and die would be enthusiastic about these, but YMMV.

comment by Lumifer · 2013-11-04T04:30:36.571Z · LW(p) · GW(p)

and I don't get where that comes from, even after picking up Engines of Creation.

Probably comes from Neal Stephenson's The Diamond Age: Or, A Young Lady's Illustrated Primer :-)

Replies from: None
comment by [deleted] · 2013-11-05T05:16:56.186Z · LW(p) · GW(p)

I definitely have found that this forum is NOT immune to fictional evidence.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-05T19:55:17.112Z · LW(p) · GW(p)

I'm pretty sure that the people Chris is talking about are Stephenson's source, not vice versa.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-06T20:45:46.497Z · LW(p) · GW(p)

Eliezer definitely seems to have caught the nano-enthusiasm bug pre-Diamond Age, but maybe the book had a big impact on other people?

Replies from: mwengler, Douglas_Knight
comment by mwengler · 2013-11-07T22:04:07.474Z · LW(p) · GW(p)

The book had a gigantic impact on me, in a broad range of ways: from hypertext through nanotech to various schemes for social organization and the long list of human needs such organization serves.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-08T08:13:22.496Z · LW(p) · GW(p)

One of the many things I liked was the illustration that economic improvement is not enough to make people live well. If I remember correctly, in the fictional world the food was free; anyone could go to a public matter-builder and take food from it. Yet some children were hungry... because their parents didn't care enough about them to go outside the house and bring them the food.

Moral of the story: however good a situation you have, humans can make it bad simply by not caring even the smallest bit. (Unless we get to a situation where humans are replaced by robots completely.)

This seems to me like a hyperbole of the world we have now. Economically, life in developed countries is so good that for most people who lived in the past it would be almost like a paradise. Yet we have a lot of suboptimality simply because people don't care. (Maybe it's because wealth made social pressure less relevant, and many people just naturally don't give a fuck about anything, and without the social pressure now they don't even have to pretend.) The good life does not automatically make us stronger; it often makes us lazy. I believe the possibility is still there, but without outside pressure most people don't care about becoming stronger.

comment by Douglas_Knight · 2013-11-06T21:54:12.580Z · LW(p) · GW(p)

Who, specifically, are you talking about?
I'm thinking of the extropians, who coined "transhumanism." I'm not sure of the timeline; the original group was definitely into MNT before Stephenson, but maybe they expanded a lot after him, and maybe that was because of him.

comment by passive_fist · 2013-11-22T19:22:10.887Z · LW(p) · GW(p)

Perhaps the reason is that the ideas we're used to nowadays - like reconfiguring matter to turn dirt and water into food, or to repair microcellular damage (for example, to selectively destroy cancer tumors) - were absolutely radical and totally unheard of when they were first proposed. As far as I know, Feynman was the first to seriously suggest that such a thing was possible, and most reactions to him at the time were basically either confusion, disbelief, or dismissal. Consider the average technologist in 1950. Hand-wound computer memories were state of the art, no one knew what DNA looked like, famines seemed a natural part of the order of things, and as far as everyone knew, the only major technological difference between the present and the future was maybe going to be space travel. Now someone comes along and tells you that there could be this new technology that allows you to store the Library of Congress in the head of a pin and carry out any chemical reaction just by writing down the formula - including the chemical reactions of life. The consequences would be, for instance, the ability to feed everyone on the planet basically for free. To you, such a technology would seem "indistinguishable from magic." Would it be a dramatic inferential step to then say that it could do stuff that literally is magic?
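
A rough back-of-envelope, in which every number is my own assumption (a hemispherical pin head about 1 mm across, ~0.25 nm atomic spacing, Feynman's generous 100 atoms per bit, and ~10 TB for the Library of Congress as text), suggests the pin-head claim isn't even near the physical limit:

    import math

    # Every figure below is an assumption for illustration, not a measurement.
    pin_radius_m = 0.5e-3                                  # pin head ~1 mm across
    pin_volume_m3 = (2 / 3) * math.pi * pin_radius_m ** 3  # treat it as a hemisphere
    atom_spacing_m = 0.25e-9                               # typical atomic spacing in a solid
    atoms = pin_volume_m3 / atom_spacing_m ** 3
    bits = atoms / 100                                     # Feynman's 100 atoms per bit
    capacity_bytes = bits / 8
    library_of_congress_bytes = 10e12                      # ~10 TB of text, a rough estimate

    print(f"atoms in pin head     ~ {atoms:.1e}")
    print(f"storage capacity      ~ {capacity_bytes:.1e} bytes")
    print(f"Libraries of Congress ~ {capacity_bytes / library_of_congress_bytes:.0f}")
    # ~2e16 bytes: room for the text a couple of thousand times over.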

Nanotechnology never promised magic, of course. All it promises is the ability to rearrange atoms into a subset of those structures allowed by physics (a subset that is far larger than what our current technology can manage, but a subset nonetheless). It promises nothing more, nothing less. This is in itself dramatic enough, and it would allow all sorts of things that we probably couldn't imagine today.

comment by NancyLebovitz · 2013-11-05T23:10:39.555Z · LW(p) · GW(p)

New ligament discovered in the human knee as a result of surgeons trying to figure out why some people didn't recover fully after knee injuries.

I'm tempted to deduce "Keep paying attention, you never know what might have been missed"-- I really would have expected that all the ligaments had been discovered a long time ago.

Another conclusion might be "Try to solve real problems, you're more likely to find out something new that way than by just poking around."

Replies from: None, Manfred, Mitchell_Porter
comment by [deleted] · 2013-11-07T01:31:33.386Z · LW(p) · GW(p)

Does someone have the medical knowledge to explain how this is possible? My layperson guess is that once you cut up a knee, you can more or less see all the macroscopic structures. Did they just think it was unimportant?

Replies from: NancyLebovitz, NancyLebovitz
comment by NancyLebovitz · 2013-11-07T13:24:57.614Z · LW(p) · GW(p)

My layperson guess is that once you're told what to expect to see, you stop looking.

This makes Eliezer's weirdtopia idea of science being kept secret so as not to spoil people's fun of discovery more interesting-- it's not just that people would independently discover the same things (and I wonder what the protocol for sharing information would be); given enough time and intelligence, much more might get discovered.

comment by NancyLebovitz · 2013-11-07T14:03:22.283Z · LW(p) · GW(p)

Someone who seemed a bit better informed said:

Could be a few things - looks like part of one of the other ligaments, is usually damaged doing a 'standard' dissection, plain old 'you see what you think you should see' bias, some combo of all of the above...

And that comment is answered by:

Medicine needs more Masters and PhD students. I'm sure if they had as many students studying the body in extreme detail, like the eleventy billion English majors who write thesis/dissertations on say, Shakespeare, this would've been hammered out decades ago. XD

Which is interesting-- sometimes studying things in extreme detail "just because" (probably because the object of study has high status-- consider early observations of the planets) can pay off big.

Replies from: Vaniver
comment by Vaniver · 2013-11-07T17:39:20.528Z · LW(p) · GW(p)

The "new ligament discovered" angle gets less impressive (to me, at least) when I read this part:

Their starting point: an 1879 article by a French surgeon that postulated the existence of an additional ligament located on the anterior of the human knee.

Replies from: gwern
comment by gwern · 2013-11-07T20:06:06.125Z · LW(p) · GW(p)

I'm more impressed, actually, in terms of the unevenness of progress - it took ~134 years to confirm his postulate? It's not like corpses were unavailable for dissection in 1879.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-09T15:25:47.146Z · LW(p) · GW(p)

It inspires more awe at our collective failures, but it suggests that we should not be so impressed with the new researchers, as if they had a method that would assure us we haven't missed even more ligaments.

comment by Manfred · 2013-11-08T04:48:18.222Z · LW(p) · GW(p)

The media giveth sensationalism, and the media taketh away.

reddit - "So that "new" ligament? Here's a study from 2011 that shows the same thing. It's not even close to a new development and has been seen many times over the past 100 years." Summary quote: "The significance of the Belgian paper was to link [the ligament's] functionality to what they called "pivot shift", and knee reinjuries after ACL surgery. The significance of this paper, I believe, is that in the near future surgeons performing these operations will have an additional ligament to inspect and possibly repair during ACL surgery, which will hopefully reduce recurrence rates, and likely the rates of developing osteoarthritis in the injured knee down the line."

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-08T12:23:37.875Z · LW(p) · GW(p)

sigh

comment by Mitchell_Porter · 2013-11-07T02:47:24.641Z · LW(p) · GW(p)

Another example of low-hanging gristle on the knee of trollage.

comment by Viliam_Bur · 2013-11-03T17:35:08.853Z · LW(p) · GW(p)

So I get home from a weekend trip and go directly to the HPMOR page. No new chapter yet. But there is a link to what seems to be a rationalist Death Note.

The way he saw it, the world was a pretty awful place. Corrupt politicians, cruel criminals, evil CEOs and even day-to-day evil acts made it that way, but everyday stupidity ensured it would stay like that. Nobody could make even a simple utility calculation. The only saving grace was that this was as true for the villains as for the heroes.

I am going to read it. Here are my next thoughts:

So, it seems like Eliezer succeeded in creating a whole new genre of literature: rationalist fiction. Nice job!

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.

Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make a HPMOR fanfic where the protagonist is even more rational than the rational Harry. Would that lead to a spiral of more and more rational heroes?

What exactly could the MoreRational!Harry do? It would be pretty awesome if he could somehow deduce the existence of magic before he was contacted by Hogwarts. For example, he could start doing some research about his biological parents; after realizing they were killed, he could try to identify the villain, and gradually discover the existence of magic.

Only one problem: MoreRational!Voldemort would have killed MoreRational!Harry as a baby. Using a knife.

Replies from: None, fubarobfusco, ChrisHallquist, Ishaan, MathiasZaman, DanielLC, hyporational, ChristianKl, gattsuru, maia
comment by [deleted] · 2013-11-03T18:57:57.669Z · LW(p) · GW(p)

Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make a HPMOR fanfic where the protagonist is even more rational than the rational Harry.

An idea came to my mind. Would it be possible to make a story in which Harry is less intelligent, in the sense that he would score lower on an IQ test, for example, but at the same time more rational? HJPEV seems to be a highly intelligent prodigy even without the rationality addition. I would like to see how a more normal boy would do.

Replies from: lmm
comment by lmm · 2013-11-05T15:59:41.710Z · LW(p) · GW(p)

One could argue that he appears intelligent only because he's spent his life so far learning effectively.

Replies from: gattsuru
comment by gattsuru · 2013-11-05T16:51:34.990Z · LW(p) · GW(p)

Rationalist!Harry is calibrated to match the knowledge and recall of a 34-year-old autodidact. Even presuming a very friendly environment and that said 34-year-old autodidact's training was not optimal, I just don't think there's enough time.

I can buy a 10 year old reading Ender's Game and The Lord of the Rings and maybe even Lensmen. It's a bit harder to imagine one that would consider wanting to want the math behind proving P=NP, never mind going further than that.

Replies from: CAE_Jones, lmm
comment by CAE_Jones · 2013-11-05T17:17:41.253Z · LW(p) · GW(p)

I believe it's been stated somewhere that EY draws primarily on the skills he had around 18 and intentionally keeps things from beyond that out of Harry's reach. So Harry is more like a brilliant high school student than an adult (and, extra seven years worth of rationalist training aside, the way he approaches problems is a lot like a middle schooler with superpowers: "I can win, you can't, deal with it, 'cause I'm awesome and you know it." Which manages to annoy everyone in-universe and out.). Time isn't really a problem, either, if Harry has nothing else to occupy his time; exercise and social interaction are apparently not his thing, and he wound up out of the public school system after a few years, so he really does have way more time than most kids his age to read all the books. And he has that mysterious dark side and that sleeping disorder, whatever those contribute.

The other strangely adult-like children, however, are not so easily justified. (Draco gets most of those complaints, from what I've read.)

comment by lmm · 2013-11-05T20:27:50.593Z · LW(p) · GW(p)

I wanted the maths behind relativity and QM at age 10. And I wasted a lot of time in school.

comment by fubarobfusco · 2013-11-03T18:20:53.153Z · LW(p) · GW(p)

Is "a story where the protagonist behaves rationally" really a new genre of literature?

I think what you are referring to here is "a story where the protagonist describes their actions and motivations using rationality terminology" or maybe "a story where the rational thinking of the protagonist motivates the plot or moves it along". At least some of the genre of detective fiction — early examples being Poe's Auguste Dupin stories — would be along these lines.

Stories where protagonists behave rationally (without using rationality terminology) wouldn't look like stories about rationality. They look like stories where protagonists do things that make sense.

comment by ChrisHallquist · 2013-11-04T01:10:37.241Z · LW(p) · GW(p)

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.

Yup. At least sort-of. If you haven't read Eliezer's old post Lawrence Watt-Evans's Fiction, I recommend it. However, conspicuous failures of rationality in fiction may be mostly an issue with science fiction and fantasy. If you want to keep the characters in your cop story from looking like idiots, you can do research on real police methods, etc., and if you do it right, you have a decent shot at writing a story that real police officers will read without thinking your characters are idiots.

On the other hand, when an author is trying to invent an entire fictional universe, with futuristic technology and/or magic, it can be really hard to figure out what would constitute "smart behavior" in that universe. This may be partly because most authors aren't themselves geniuses, but even more importantly, the fictional universe, if it were real, would have millions of people trying to figure out how to make optimal use of the resources that exist in that universe. It's hard for one person, however smart, to compete with that.

For that matter, it's hard for one author to compete with an army of fans dissecting their work, looking for ways the characters could have been smarter.

comment by Ishaan · 2013-11-05T01:18:12.842Z · LW(p) · GW(p)

which stories should be x-rationalized next

This leads to another comment on rationalist fiction: most of it seems to be restricted to fan-fiction. The mold appears to be: "Let's take a story in which the characters underutilized their opportunities and endow them with intelligence, curiosity, common sense, creativity and genre-awareness." The contrast between the fanfic and the canon is a major element of the story, and the canon is an existing scaffold which saves the writer from having to create a context.

This isn't a bad thing necessarily, just an observation.

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature?

So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?

Every genre has a theme...romance, adventure, etc.

So where are the stories which are, fundamentally, about stuff like epistemology and moral philosophy?

Replies from: MathiasZaman, Viliam_Bur
comment by MathiasZaman · 2013-11-05T09:55:32.606Z · LW(p) · GW(p)

So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?

Every genre has a theme...romance, adventure, etc.

I'd say the difference between "rationalist" stories and "non-rationalist" stories lies in the moral of the story, in the lessons the story teaches you.

I don't think it's a genre in the same way romance or adventure are. It's more of a qualifier. You can have rationalist romance novels or rationalist adventure movies.

Although you could argue that it is a genre; discussions about "genre" are often hard, since people don't tend to agree on what makes something a genre.

But rationalist fiction already has a couple of genre conventions, such as no-one being allowed to hold the idiot ball or teaching the audience new and useful techniques for overcoming challenges.

comment by Viliam_Bur · 2013-11-05T09:28:02.859Z · LW(p) · GW(p)

So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?

That's a great question. (And related to how to recognize rational people in real life.)

I'd say that there must be some characters who are obviously smarter than most people around them. Because that's what happens in real life: there is the bell curve, so if all your characters are on a similar level, then either (a) the story is not realistic, (b) the characters are selected by some intelligence filter which should be explicitly mentioned, or (c) the characters are all from the middle of the bell curve. Also, in real life the relative power of intelligent people is often reduced by compartmentalization, but this reduction would be much smaller for a rationalist hero.

So I'd say it's behaving rationally while most other people aren't. The character should somehow reflect on the stupidity of others; whether by frustration from their inability to cooperate, or by enjoyment of how easily they are manipulated.

Replies from: Ishaan
comment by Ishaan · 2013-11-05T18:08:05.530Z · LW(p) · GW(p)

The character should somehow reflect on the stupidity of others; whether by frustration from their inability to cooperate, or by enjoyment of how easily they are manipulated.

I'm not sure I like that criterion. By that criterion alone, the original Death Note anime was rationalist fiction (judging by the first half), as are Artemis Fowl, Ender's Game, and to some extent even Game of Thrones. There are a lot of stories where some characters are much smarter than others and know it, but consuming these works won't teach anyone how to be smarter. (Other than the extent to which reading good fiction in general improves various things.)

None of these stories actually teach the reader anything about epistemology. Even the linked Death Note fan fic...it uses rationality-associated words like "utility" and "prior" but if I didn't already know what those words meant I would have just come away confused. (Granted, it's still early in the story - but even so)

Also, it hasn't yet broken the conceit of the story (for example, even a normal person of average intelligence would be surprised and curious about the existence of the supernatural, and would investigate it). I'd say that breaking the story's conceit is another feature of rationalist fanfiction stories that has nothing to do with the character's intelligence.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-05T20:02:28.813Z · LW(p) · GW(p)

Well, I was disappointed with the Death Note fan fic, because it doesn't seem to have added value beyond the original story. And I agree that exploring the supernatural should be a high priority for a rational person, once the supernatural is experimentally proven. Would it be so difficult to ask Ryuk whether there are additional magical items that could also be abused? I guess Ryuk would use an excuse of having "rules" against that, but at least it's worth trying.

Having a rational superhero is a necessary condition for a rationalist story, not a sufficient one. Ender's Game could be rationalist literature if it explained Ender's reasoning better, and if Ender strategically tried to improve his understanding of the world. Okay, another necessary condition is not just that the superhero is super smart, but also that the super smartness is at least partially the result of a good strategy, which is shown to the reader.

comment by MathiasZaman · 2013-11-04T09:44:49.323Z · LW(p) · GW(p)

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature?

I think there's a difference between what I've been describing as rationalist!fic (or rationalist!fiction) and fiction in which the -agonists (PCs is the right terminology, I guess) are rational/clever. Rationalist!fic doesn't just feature rationalist characters; it is expressly written to teach the audience about rationality.

Examples:

  • Doctor Who features a sufficiently advanced alien who is, within the rules of the universe, pretty rational (in that he is good at reaching his goals). The message of the show however, is not: "be clever and rational," it's: "humanity is awesome and you should feel some wonder about the universe." Not rationalist!fic.
  • The Conqueror's Shadow, by Ari Marmell features rationalist agonists and the message the audience goes away with is: "be clever and creative when it comes to reaching worthwhile goals." Rationalist!fic.
comment by DanielLC · 2013-11-05T04:56:18.679Z · LW(p) · GW(p)

Erfworld is a piece of rationalist fiction not related to HP:MoR. It was discussed on here a while back. There must be others.

Also, I suggest calling it Rational!Rational!Harry.

comment by hyporational · 2013-11-04T15:14:17.660Z · LW(p) · GW(p)

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.

I get your sentiment, but I don't think this is true. Anyways, wouldn't this just mean that rational minds usually pursue other goals than writing fiction? Not saying that there shouldn't be rationalist fiction, but this doesn't sound like such a bad state of affairs to me.

I haven't read HPMOR. Do I have to know anything about the HP universe to enjoy this thing? Will I learn anything new if I've read the sequences?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-04T15:55:19.976Z · LW(p) · GW(p)

I guess you don't need to know anything from the HP canon. It could perhaps be even more interesting that way. I don't think you would learn new information. It might have a better emotional impact, but that is difficult to predict.

wouldn't this just mean that rational minds usually pursue other goals than writing fiction? Not saying that there shouldn't be rationalist fiction, but this doesn't sound like such a bad state of affairs to me.

I would consider the world better if there were more rational people sharing the same values as me. We could cooperate on mutual goals, and learn from each other.

Problem is, rational people don't just appear randomly in the world. Okay, sometimes they do, but the process is far from optimal. If there is a chance to make the spread of rationality more reliable, we should try.

But we don't exactly know how. We tried many things, with partial success. For example, the school system -- it is great at taking an illiterate peasant population and producing an educated population within a century. But it has some limits: students learn to guess their teachers' passwords, there are not enough sufficiently skilled teachers, pressure from the outside world can bring religion into schools and prevent the teaching of evolution, etc. And the system seems difficult to improve from the inside (been there, tried that).

Spreading rationality using fiction is another thing worth trying. There is a chance to attract a lot of people, make some of them more rational, and create a lot of utility. Or maybe, despite there being dozens of rationalist fiction stories, they would all be read by the same people, unable to attract anyone outside of the chosen set. I don't know.

The point is, if you are rational and you think the world would be better with more rational people... it's one problem you can try to solve. So before Eliezer we had something like the Drake equation: the fraction of people who are rational × the fraction of them who think making more people rational is the best action × the fraction of them who think fiction is the best tool for that = almost zero. I am curious about the specific numbers; especially whether one of them is very close to zero, or whether it's merely a few small numbers that give an almost-zero result when multiplied together.
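
Purely to illustrate how that product collapses (every fraction below is invented for the arithmetic; these are not the specific numbers I'm curious about):

    # Made-up fractions, just to show how a product of small numbers collapses.
    population = 1_000_000_000    # pool of potential fiction writers
    p_rational = 0.0001           # fraction who are reasonably rational
    p_spread = 0.01               # ...of those, who prioritize spreading rationality
    p_fiction = 0.01              # ...of those, who think fiction is the best tool

    print(population * p_rational * p_spread * p_fiction)
    # 10.0 -- three small fractions turn a billion people into a handful of authors.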

Replies from: hyporational
comment by hyporational · 2013-11-06T02:38:11.372Z · LW(p) · GW(p)

I'd probably want more people who share my values than more rational people. Rational people who share my values would be better. Rational people who don't share my values would be the worst outcome.

I don't think the school system was built by rationalists, so I'm not sure where you were going with that example.

How effective has fiction been in spreading other ideas compared to other methods?

comment by ChristianKl · 2013-11-04T12:25:39.448Z · LW(p) · GW(p)

Only one problem: MoreRational!Voldemort would have killed MoreRational!Harry as a baby. Using a knife.

Given that the spell never failed in the past, I'm not sure that it would have been rational to use a knife.

comment by gattsuru · 2013-11-05T05:19:03.620Z · LW(p) · GW(p)

In addition to the others already listed, DataPacRat's Myou've Got To Be Kidding Me follows the perspective of a character thrown into a setting and trying to analyze its basic rules in order to optimize them. There are some interesting concepts, but I don't know that I can recommend it: it has not been updated in over a year, and it was part of some big conglomeration of fanfic writers with some pretty widely varying quality (although thankfully nothing necessary to the Myou've plotline).

comment by maia · 2013-11-04T02:02:43.153Z · LW(p) · GW(p)

MoreRational!Voldemort would have killed MoreRational!Harry as a baby. Using a knife.

Fbzr sna gurbevrf ubyq gung Dhveeryzbeg vf hfvat Ibyqrzbeg nf n chccrg vqragvgl va beqre gb tnva cbjre. Fb Ibyqrzbeg'f erny tbny vfa'g gb xvyy Uneel; vg'f gb unir n qenzngvp fubjqbja gung trgf ybgf bs nggragvba naq fpnerf crbcyr.

Guhf gur snpg gung Ibyqrzbeg qvqa'g xvyy Uneel jvgu n xavsr vf abg orpnhfr ur'f abg engvbany rabhtu, ohg orpnhfr ur unf aba-boivbhf tbnyf.
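
(The scrambled paragraphs above, like the other spoiler-protected bits in this thread, are rot13-encoded; a minimal sketch for reading them, using Python's standard-library rot_13 codec:)

    import codecs

    # Paste any rot13-encoded spoiler from this thread into `spoiler` to read it.
    spoiler = "Guhf gur snpg gung Ibyqrzbeg qvqa'g xvyy Uneel jvgu n xavsr..."
    print(codecs.decode(spoiler, "rot_13"))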

comment by Nick_Tarleton · 2013-11-04T01:42:46.052Z · LW(p) · GW(p)

Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?

Replies from: Eliezer_Yudkowsky, Nick_Tarleton
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-11-06T20:55:26.556Z · LW(p) · GW(p)

There's no brief answer. I've been slowly gravitating towards, but am not yet convinced by, the suspicion that making a computer out of twice as much material causes there to be twice as much person inside. Reason: no exact point where splitting a flat computer in half becomes a separate causal process; similarity to the behavior of Born probabilities. But that's not an update to the anthropic trilemma per se.

Replies from: Armok_GoB, Psy-Kosh
comment by Armok_GoB · 2013-11-06T21:42:24.055Z · LW(p) · GW(p)

Hmm, conditional on that being the case, do you also believe that the closer to physics the mind is, the more person there is in it? Example: action potentials encoded in the positions of rods in a Babbage engine vs. spread over fragmented RAM used by a functional programming language using lazy evaluation in the cloud.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-11-07T02:49:42.928Z · LW(p) · GW(p)

Good question. Damned if I know.

comment by Psy-Kosh · 2013-11-09T21:59:30.276Z · LW(p) · GW(p)

That seems to be seriously GAZP-violating. I'm trying to figure out how to put my thoughts on this into words, but... There doesn't seem to be anywhere that the data is stored that could "notice" the difference. The actual program that is being the person doesn't contain a "realness counter". There's nowhere in the data that could "notice" the fact that there's, well, more of the person. (Whatever it even means for there to be "more of a person".)

Personally, I'm inclined in the opposite direction: that even N separate copies of the same person are the same as 1 copy of the same person until they diverge, and how much difference there is between them is, well, how separate they are.

(Though, of course, those funky Born stats confuse me even further. But I'm fairly inclined toward the view that extra copies of the exact same mind don't add more person-ness, but that as they diverge from each other, there may be more person-ness. Though perhaps it may be meaningful to talk about additional fractions of person-ness rather than just one and then suddenly two whole persons; I'm less sure on that.)

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2013-11-25T20:35:58.898Z · LW(p) · GW(p)

Why not go a step further and say that 1 copy is the same as 0, if you think there's a non-moral fact of the matter? The abstract computation doesn't notice whether it's instantiated or not. (I'm not saying this isn't itself really confused - it seems like it worsens and doesn't dissolve the question of why I observe an orderly universe - but it does seem to be where the GAZP points.)

Replies from: Psy-Kosh
comment by Psy-Kosh · 2013-12-02T22:09:23.411Z · LW(p) · GW(p)

Hrm... The whole exist vs. non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note "there exists an algorithm doing/perceiving X", where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn't seem like there'd be any difference between 1 and N of them as far as that goes.

comment by Nick_Tarleton · 2013-11-08T05:04:24.482Z · LW(p) · GW(p)

I wonder if it would be fair to characterize the dispute summarized in/following from this comment on that post (and elsewhere) as over whether the resolutions to (wrong) questions about anticipation/anthropics/consciousness/etc. will have the character of science/meaningful non-moral philosophy (crisp, simple, derivable, reaching consensus across human reasoners to the extent that settled science does), or that of morality (comparatively fuzzy, necessarily complex, not always resolvable in principled ways, not obviously on track to reach consensus).

comment by lukeprog · 2013-11-04T17:39:45.986Z · LW(p) · GW(p)

Brian Leiter shared an amusing quip from Alex Rosenberg:

So, the... Nobel Prize for “economic science” gets awarded to a guy who says markets are efficient and there are no bubbles—Eugene Fama (“I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning”—New Yorker, 2010), along with another economist—Robert Shiller, who says that markets are pretty much nothing but bubbles, “Most of the action in the aggregate stock market is bubbles.” (NY Times, October 19, 2013) Imagine the parallel in physics or chemistry or biology—the prize is split between Einstein and Bohr for their disagreement about whether quantum mechanics is complete, or Pauling and Crick for their dispute about whether the gene is a double helix or a triple, or between Gould and Dawkins for their rejection of one another’s views about the units of selection. In these disciplines Nobel Prizes are given to reward a scientist who has established something every one else can bank on. In economics, “Not so much.” This wasn’t the first time they gave the award to an economist who says one thing and another one who asserts its direct denial. Cf. Myrdal and Hayek in 1974. What’s really going on here? Well, Shiller gave the game away in a NY Times interview when he said of Fama, “It’s like having a friend who is a devout believer of another religion.” Actually it’s probably two denominations in the same religion.

Replies from: badger
comment by badger · 2013-11-04T21:33:19.465Z · LW(p) · GW(p)

Ugh. The prize was first and foremost in recognition of Fama, Shiller, and Hansen's empiricism in finance. In the sixties, Fama proposed a model of efficient markets, and it held up to testing. Later, Fama, Shiller, and Hansen all showed that further tests didn't hold up. Their mutual conclusion: the efficient market hypothesis is mostly right; while there is no short-term predictability based on publicly available information, there is some long-term predictability. Since the result is fairly messy, Fama and Shiller differ in what they emphasize (and are both over-rhetorical in their emphasis). Does "mostly right" mean false or basically true?

What's causing the remaining lack of agreement, especially over bubbles? Lack of data. Shiller thinks bubbles exist, but that they are rare enough that he can't solidly establish them, while Fama is unconvinced. Fama and Shiller have done path-breaking scientific work, even if the story about asset price fluctuation isn't 100% settled.
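
The flavor of those tests is easy to sketch. Here is a toy version (simulated i.i.d. returns and numpy; not Fama's or Shiller's actual methodology) of the kind of short-horizon predictability check at issue:

    import numpy as np

    # Toy sketch only. Under the efficient-market null, successive returns are
    # essentially uncorrelated, so one basic test is whether past returns help
    # predict future returns.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0, 0.01, size=10_000)   # i.i.d. "efficient" daily returns

    def lag1_autocorr(x):
        x = x - x.mean()
        return float(x[:-1] @ x[1:] / (x @ x))

    print(lag1_autocorr(returns))   # ~0: no short-term predictability
    # Long-horizon versions of the same idea (multi-year returns, dividend ratios)
    # are where the partial long-term predictability mentioned above shows up.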

Replies from: mwengler
comment by mwengler · 2013-11-07T21:54:02.018Z · LW(p) · GW(p)

Does "mostly right" mean false or basically true?

Mostly right means false. The hypothesis that securities markets are pretty darn efficient -- that everybody goes through a broad range of ideas about inefficiencies that turn out not to be "real" (or exploitable) -- is, I think, virtually uncontested by anyone. Including people who think there was a tech bubble in the late 1990s and a housing bubble in the mid-00s.

I heard Fama interviewed after he got the prize. He denies that the internet bubble and the housing bubble were bubbles, in the sense that they were knowable enough to be acted upon. In particular, he claims that anybody who detects the internet bubble and/or the housing bubble will also detect a bunch of non-bubbles, such that any action they take to make money off their knowledge of the real bubbles will be (at least) completely negated by what they lose when they exploit the unreal bubbles.

Efficient Market Hypothesis denies knowable bubbles, at least according to Fama interviewed within the last month.

comment by JoshuaZ · 2013-11-04T01:35:47.449Z · LW(p) · GW(p)

New research suggests that the amount of variance in DNA among individual cells in a person may be much higher than is normally believed. See here.

Replies from: witzvo, None
comment by witzvo · 2013-11-04T19:01:10.190Z · LW(p) · GW(p)

... researchers isolated about 100 neurons from three people posthumously. The scientists took a high-level view of the entire genome -- looking for large deletions and duplications of DNA called copy number variations or CNVs -- and found that as many as 41 percent of neurons had at least one unique, massive CNV that arose spontaneously, meaning it wasn't passed down from a parent. The CNVs are spread throughout the genome, the team found.

Edit: see the paper for more precise statements.

comment by [deleted] · 2013-11-05T05:19:59.095Z · LW(p) · GW(p)

I've already seen work to the effect that somatic cells often have ~10x the point mutations per human generation as the germline, which is protected by a small number of divisions per generation and low levels of metabolism and transcription. It was in mitochondrial rather than nuclear DNA, but the idea is similar.

comment by [deleted] · 2013-11-03T18:39:45.450Z · LW(p) · GW(p)

SPOILERS FOR "FRIENDSHIP IS OPTIMAL"

Why is 'Friendship is optimal' "dark" and "creepy"? I've read many people refer to it that way. The only things that are clearly bad are the killings of all the other lifeforms, but otherwise this scenario is one of the best that humanity could come across. It's not perfect, but it's good enough and much better than the world we have today. I'm not sure if it's realistic to ask for more. Considering how likely it is that humanity will end in some incredibly fucked up way full of suffering, I would definitely defend this kind of utopia.

Replies from: Leonhart, gattsuru, blacktrance, Multiheaded
comment by Leonhart · 2013-11-03T21:29:27.598Z · LW(p) · GW(p)

(Comment cosmetically edited in response to Kaj_Sotala, and again to replace a chunk of text that fell in a hole somewhere)

OK, I'll have a go (will be incomplete).

People in general will find the Optimalverse unpleasant for a lot of reasons I'll ignore: major changes to the status quo, perceived incompatibility with non-reductionist worldviews, believing that a utopia is necessarily unpleasant or Omelas-like (a variant of this fallacy?), and lots of even messier things.

People on LessWrong may be thinking about portions of the Fun Theory Sequence that the Optimalverse conflicts with, and in some cases they may think that these conflicts destroy all of the value of the future, hence horror.

(rot13 some bits that might constitute spoilers)

  • Humans want things to go well, but they also want things to have been able to go badly, such that they made the difference. Relevant: Living By Your Own Strength, Free to Optimize.

  • The existence of a superintelligence makes human involvement superfluous, and humans do not want this to happen. Relevant: Amputation of Destiny.

  • Gur snpg gung gur NV vf pbafgenvarq gb fngvfsl uhzna inyhrf gur cbal jnl zrnaf gung n uhtr nzbhag bs cbffvoyr uhzna rkcrevrapr vf abj vzcbffvoyr gb rire ernyvfr. Eryrinag: Hzz... znlor Value is Fragile? Abg dhvgr. Uryc zr bhg urer, thlf! (nyfb, vafreg lbhe bja cersreerq snaqbz wbxr nobhg cbbe Ylen arire trggvat gb unir unaqf rgp.)

  • Nf lbh zragvbarq, gur jnl va juvpu gur NV'f cnegvphyne qrsvavgvba bs "uhzna" jnf abg evtug naq pna arire or zbqvsvrq, urapr nyvra ncbpnylcfrf. Eryrinag: The Hidden Complexity of Wishes

Themes that are more explicit after the extra worldbuilding in Caelum est Conterrens:

  • Zbqvslvat uhzna zvaqf va gur jnl gur hcybnqf ner qrfpevorq nf orvat zbqvsvrq vf ernyyl, ernyyl, ernyyl, ernyyl uneq, naq zvtug or vzcbffvoyr jvgubhg oernxvat crefbany pbagvahvgl Growing Up is Hard. (Guvf vf zber bs n ubeebe fbhepr guna na nethzrag, orpnhfr gur fgbel pna or ernq nf fgvchyngvat gung gur NV vf trggvat vg evtug).

Gjb cbffvoyr svany nggenpgbef sbe uhzna tebjgu ner cerfragrq (Ybbc naq Enl Vzzbegnyf):

  1. ybbcvat raqyrffyl jvgu zrzbel biresybj (gung vf, va gur raq nyy yvirf snvy gur pbaqvgvbaf va Emotional Involvement ol orpbzvat n qvfpbaarpgrq frevrf bs rcvfbqrf)
  2. qrcnegvat sebz gur uhznar inyhr senzrjbex, ("bhgtebjvat ybir")
    Fbzr urer ner abg fngvfsvrq jvgu rvgure naq ernyyl, ernyyl ubcr gurer vf n guveq jnl sbe uhznaf gb npuvrir haobhaqrq tebjgu gung erznvaf zrnavatshy (ol gurve yvtugf).

Notes:

  • I'm sympathetic to your position; this is the substance of my comment here: that I think I understand what's supposed to horrify me.

  • That comment of mine is no doubt wrong; there will be things that don't horrify me that I didn't even realise were supposed to.

  • There are quick and obvious comebacks to nearly all the above points. In a lot of cases, those quick comebacks are dealt with in the linked articles. Read the Fun Theory Sequence; it's my favorite sequence, despite the fact that I disagree with more of it than any of the others.

Replies from: Kaj_Sotala, None, Eliezer_Yudkowsky, None
comment by Kaj_Sotala · 2013-11-04T17:35:27.853Z · LW(p) · GW(p)

Upvoted, but I'd like to request that you ROT13 either everything or nothing past a certain point. Being unable to just select all of it to be deciphered, and having to instead pick out a few pieces at a time, was mildly annoying.

Replies from: Leonhart
comment by Leonhart · 2013-11-04T20:51:19.393Z · LW(p) · GW(p)

Done, thanks for saying. I was trying to avoid thinking about the interaction between rot13 and links (leaving the anchor text un-rot13ed seems like acceptable practice?) but I should just have spent the extra two minutes.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-11-05T04:42:21.907Z · LW(p) · GW(p)

Thanks! Much better now. :-) (As for the links, one can just paint over them as well and think "oh it was just some link" when they show up as garbled in the translation.)

comment by [deleted] · 2013-11-04T14:58:04.071Z · LW(p) · GW(p)

Now that I've thought about your post, I've realized that the biggest question in this story is what the phrase "satisfy values" actually means. Currently it's a pretty big hand-wave in the story. Your first point especially seems to imply that we understood it a bit differently.

In my understanding, if I value real challenge, the possibility of things going badly, or even some level of pain, then the Optimalverse will somehow maximize those values and at least provide the feeling of real challenge and the possibility of things going badly. And I don't know why the Optimalverse couldn't even provide the real thing. The way Light Sparks tries to pass the Intermediate Magic test seems an awful lot like real challenge. Of course the Optimalverse wouldn't allow you to die, because in most cases the dislike of death overrides the longing for real challenge in the value system, but that still leaves a lot of options open. I got the impression that this is how it's actually handled in the story. There's this passage:

Cbavrf unq ab cerqngbef; orvat ‘rngra’ ol n zbafgre va gur Rireserr sberfg whfg raqrq jvgu gur cbal va gur ubfcvgny va dhvgr n ovg bs cnva. Fngvfslvat inyhrf jnfa’g whfg nobhg unccvarff; univat zbafgref yrg cbavrf grfg gurve fgeratgu be oenirel. Rneyl ba, evtug nsgre gur pbairefvba bs Rnegu, n zrer sbhe uhaqerq cbavrf unq crgvgvbarq Cevaprff Pryrfgvn gb yrg gurz qvr, naq Cevaprff Pryrfgvn unq bayl nterrq gung qbvat fb jbhyq fngvfsl gurve inyhrf va rvtugl-fvk pnfrf. Abcbal unq qvrq va frireny Rdhrfgevna fhowrpgvir zvyyraavn.

Your second point is of course a real concern for some people, but personally it doesn't feel very relevant. My actions don't currently feel very important in the big scheme of things, and I don't know how a superintelligence would change things all that much. If I'm not personally doing anything important, then it doesn't really matter to me whether the important things are done by other humans or by a superintelligence. Anyway, this will always be a problem with AGI, and if the AGI is friendly then the benefits outweigh the negatives IMO. I think the alternative is worse.

The way I understood it is that the "ponies" in this story are essentially humans in a pony disguise with four legs (two of which can almost work like hands). A paragraph from the story:

V zbqvsvrq lbhe zbgbe pbegrk fb lbh pbhyq qrny jvgu lbhe arjsbhaq dhnqehcrqny zbirzrag, nybat jvgu bgure qvssreraprf orgjrra n uhzna naq cbal obql. V unir znqr gur zvavzny frg bs cbffvoyr punatrf; lbhe crefbanyvgl vf hapunatrq.

A big part of being human is due to our mind and hormones. Walking with two legs or being able to use hands extensively are more trivial points. If the psychology of a person doesn't change in the transition from human to pony, then this eliminates most of the problems in your third point.

I haven't read Caelum Est Conterrens and can't fully comment on those points. But it seems that those are more like technicalities. I don't know if it's actually possible to turn a person into a pony without losing the person in the process. But if you're not changing the brain parameters and the psychology doesn't change in the process like it seems to be in this story then I would be inclined to say it's possible. Clearly it can't be worse for your identity than losing all your limbs or becoming a quadriplegic? Anyway, one of the axioms in this story seems to be that it's possible.

I actually read the Fun Theory sequence in its entirety before I read 'Friendship is optimal', and I thought FIO more faithful to the spirit of the sequence than 99% of utopian stories out there. This is mostly because Celestia maximizes people's values, not their happiness. This is a very vague concept, and a lot depends on how it's implemented, but if it's implemented the way I picture it, there shouldn't be problems with the things mentioned in High Challenge, Complex Novelty, Sensual Experience, Living By Your Own Strength, Free to Optimize, In Praise of Boredom, Interpersonal Entanglement and so on.

Of course, I have problems with applying things I read about to all my experiences, so it could be I misremember some things in the sequence or didn't understand them correctly to begin with.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-04T15:12:29.488Z · LW(p) · GW(p)

Clearly it can't be worse for your identity than losing all your limbs or becoming a quadriplegic?

Well, this is not clear, though it might be true.

I have frequently had the experience of not doing anything with my left leg; losing the ability to ever do anything with my left leg means I'm prevented from ever doing anything with it. This is horrible, of course, but it's the horror of being prevented from doing things I often choose not to do. Losing all my limbs is a more extreme version of the same thing.

Having different limbs might be more identity-distorting, by virtue of providing experiences that are completely unfamiliar.

Then again it might not.

For my own part, I'm not all that attached to preserving my current identity, so I'm not sure the question matters to me. If my choice is between an identity-altering pony body, and an identity-preserving quadriplegic body, I might well choose the former.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-11-05T07:57:08.571Z · LW(p) · GW(p)

Endorsed as a good summary.

comment by [deleted] · 2013-11-04T22:25:44.833Z · LW(p) · GW(p)

I read Caelum Est Conterrens; now I can better understand why some aspects of the scenario are a bit disconcerting, if not horrifying. I find all the options -- loop immortality, ray immortality, and exponential immortality -- kinda unpleasant, but maybe that is as good as it gets. Still, it feels like many of those things are not exclusive to this scenario, but are part of the world anyway.

Related to this, what did you think about the "normal" ending in the Three worlds collide?

Replies from: Leonhart
comment by Leonhart · 2013-11-04T23:26:51.442Z · LW(p) · GW(p)

From flaky memory, I think I find the Normal Ending far less acceptable than anything in the Optimalverse - one feels the premature truncation of human nature, rather than the natural exhaustion of it (or the choice to become inexhaustible) - but hey, maybe I'm inconsistent.

comment by gattsuru · 2013-11-05T00:47:16.273Z · LW(p) · GW(p)

At least to me, it's increasingly difficult to distinguish between a paradise machine and wireheading, and I dislike wireheading. Each shard of the Equestria Online simulation is built to be as fulfilling (of values through ponies and friendship) as possible, for the individual placed within that shard.

That sounds great! .... what happens when you're wrong?

I mean, look at our everyman character, David. He's set up in a shard of his own, with one hundred and thirty-two artificial beings perfectly formatted to fit his every desire and want, and with just enough variation and challenge to keep him from being bored. It's not real variation, or real challenge, but he'd not experience that in the real world either, so it's a moot point. But look at the world he values. His challenges are the stuff of sophomore programming problems. His interpersonal relationships include a score counter for how many orgasms he gives or receives.

Oh, his lover is sentient and real, if that helps, but look at that relationship specifically. Butterscotch is created as just that little bit less intelligent than David is -- whether this is because David enjoys teaching, or because he's wrapped around the idea of women being less powerful than he is, or both, is up to the reader. Her memories are sculpted to exactly fit David's desires, and a few memories David has of her she never experienced at all, so that the real Butterscotch wouldn't have had to experience the unpleasant things CelestAI used to manipulate David into liking/protecting her.

There are, to a rough approximation, somewhere between five hundred billion and one trillion artificial beings in the simulation, by the time most of humanity uploads. That number will only scale up over time. Let's ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better. We're making artificial beings optimized for enjoying slaking your desires, and I would be surprised if that also happened to be optimized for what we as a society would really like.

Lars is even worse: he is actively kept from ever not wanting his life of debauchery -- see the obvious overlap with the guy modifying himself to not get bored with a million years of catgirl sex.

At a deeper level, what if your own values are wrong?

The basic example, brought up in the Rules of The Universe document, is a violent psychopath. Upon being uploaded, CelestAI would quite happily set our psychopath up in a private shard with one hundred and fifty artificial ponies, all of which are perfectly molded to value being shot, stabbed, lit on fire, and violated in a way that is as satisfying as possible to a Dexter villain.

Or I can provide a personal example. I can go both ways, preferring guys, and was an unusually late bloomer. I can look back through time at an earlier version of myself and its values, and remember how they changed. Even in a fairly tolerant society and even with a very collaborative environment, this was not something that came according to my values or without external stimulus. ((There is a political position version of this, but for the sake of brevity I'll just mention that it's possible. More worryingly, I'm not sure there's a way to formalize this concern, as much as it hits me at a gut level. For the most part, value drift is something we don't want.))

Or, for an in-story example :

Ybbx ng jung unccraf gb Unaan / 'Cevaprff Yhan'. Gur guvat fur inyhrf zbfg, ng gur raq bs gur fgbel, vf oryvrivat gung fur qvq abg znxr n zvfgnxr hayrnfuvat PryrfgNV. Naq PryrfgNV vf dhvgr pncnoyr bs fubjvat ure whfg gur orfg rknzcyrf bs ubj guvatf ner orggre. Vg qbrfa'g znggre jung gur ernyvgl vf, naq vaqrrq gur nhgube gryyf hf gung Unaan pbhyq unir qbar orggre. Zrnajuvyr, Unaan vf xrcg whfg ba gur obeqre bs zvfrenoyr nf gur fgbel raqf.

It's a very good dystopia -- I'd rather live there than here, and heck it even beats a good majority of conventional fluffy cloud heaven afterlives -- but it's still got a number of really creepy issues.

Replies from: Leonhart, NancyLebovitz
comment by Leonhart · 2013-11-05T23:50:33.500Z · LW(p) · GW(p)

Let's ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better.

No, let's not ignore it. Let's confront it, because I want a better explanation. Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).

I mean, seriously? Why would I want to mix any noise into this process?

Replies from: gattsuru
comment by gattsuru · 2013-11-06T04:02:33.881Z · LW(p) · GW(p)

Good point. I've not uncompressed the thoughts behind that statement nearly enough.

Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).

The artificial sentients value being people that make your life better (through friendship and ponies). Your values don't necessarily change. And artificial sentients, unlike real ones, have no drive toward coherent or healthy regions of mind design space: they do not need to have boredom, or sympathy, or dislike of pain. If your values are healthily formed, then that's great! If not, not so much. You can be a psychopath, and find yourself surrounded by people where "making their lives better" happens only because you like the action "cause them pain for arbitrary reasons". Or you could be a saint, and find yourself surrounded by people who value being healed, or who need to be protected, and what a coincidence that danger keeps happening. Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance. There are a lot of things you can value, and that we can make sentient minds value, that will make my skin crawl.

Now, the Optimalverse gets rid of some potential for abuse due to setting rules -- it's post-scarcity on labor, starvation or permanent injury are nonsense, CelestAI really really knows your mind so there's no chance of misguessing your values, so we can rule out a lot of incidental house elf abuse -- but it doesn't require you to be a good person. Nor does it require CelestAI to be. CelestAI cares about satisfying values through friendship and ponies, not about the quality of the values themselves. The machine does not and can not judge.

If it's moral to create a person and if you're a sufficiently moral person, then there's nothing wrong with artificial beings. My criticism isn't that CelestAI made a trillion sentient beings or a trillion trillion sentient beings -- there's nothing meaningfully worrying about that. The creepy factor is that CelestAI made one being, both less intelligent than possible and less intelligent than need be.

That may well be an unexamined reaction or even incorrect response. I like to think I'm open-minded, but I'm willing to recognize that I can overestimate it, and have done so in the past. There are real-world right-now folk who enjoy being (in specific contexts and while in control) hurt or being hurt and comforted, which I can accept. Maybe I'm being parochial when I judge David for wanting a woman he can always teach, or Lars for his sex groupies; that's not a mind space I empathize with terribly well, and a good deal of my revulsion comes from real-world constraints that wouldn't apply here. There's a reason that we're using the word creepy, rather than wrong. But it does make my skin crawl.

Replies from: Leonhart
comment by Leonhart · 2013-11-07T21:45:57.938Z · LW(p) · GW(p)

Thank you for trying to explain.

You can be a psychopath, and find yourself surrounded by people where "making their lives better" happens only because you like the action "cause them pain for arbitrary reasons". Or you could be a saint, and find yourself surrounded by people who value being healed, or who need to be protected, and what a coincidence that danger keeps happening.

I'm curious to what extent these intuitions are symmetric. Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?

Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance.

The above sounds like a description of a "good parent", as commonly understood! To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?
(Note that this being even possible depends on whether we can simulate someone's past without that simulation still counting as it having happened, which is nonobvious.)

The creepy factor is that CelestAI made one being, both less intelligent than possible and less intelligent than need be.

If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn't find it as creepy. (Correct me if that's not so). But the situation is symmetrical. Why is it important who came first?

Replies from: gattsuru, lmm
comment by gattsuru · 2013-11-12T00:57:41.369Z · LW(p) · GW(p)

Thank you for the questions, and my apologies for the delayed response.

I'm curious to what extent these intuitions are symmetric. Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?

Yes, with the admission that there are specific attributes to masochism and sadism that are common but not universal to all possible relationships or even all sexual relationships with heavy differences in power dynamics(1). It's less negative in the immediate term, because one hundred and fifty masochists making a single sadist results in a maximum around forty million created beings instead of one trillion. In the long term, the equilibrium ends up pretty identical.

(1) For contrast, the structures in wanting to perform menial labor without recompense are different from those wanting other people to perform labor for you, even before you get to a post-scarcity society. Likewise, there are difference in how prostitution fantasies generally work versus how fantasies about hiring prostitutes do.

Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance. The above sounds like a description of a "good parent", as commonly understood!

I'm not predisposed toward child-raising, but from my understanding the point of "good parent" does not value making someone weak: it values making someone strong. It's the limitations of the tools that have forced us to deal with years of not being able to stand upright. Parents are generally judged negatively if their offspring are not able to operate on their own by certain points.

To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?

If it were possible to simulate or otherwise avoid the joys of the terrible twos, I'd probably consider it more ethical. I don't know that I have the tools to properly evaluate the loss in values between the two actions, though. Once you've got eternity or even a couple reliable centuries, the damages of ten or twenty years bother me a lot less.

These sorts of created beings aren't likely to be in that sort of ten or twenty year timeframe, though. At least according to the Caelum est Conterrens fic, the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset. You're not talking about someone being weak for a year or a decade or even a century: they'll be powerless forever.

I haven't thought on it enough to say that creating such beings should be banned (although my gut reaction favors doing so), but I do know it'd strike me as very creepy. If it were possible to significantly reduce or eliminate the number of negative development experiences entities undergo, I'd probably encourage it.

If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn't find it as creepy. (Correct me if that's not so). But the situation is symmetrical. Why is it important who came first?

In that particular case, the equilibrium is less bounded. Butterscotch isn't able to become better than David or even to desire becoming better than David, and a number of pathways for David's desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.

That's not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.

In the case of intelligence, that's not that bad. Real-world people tend toward a bounded curve on that, and there are reasons we prefer socializing within a relatively narrow bound downward. Other closed equilibria are more unpleasant. I don't have the right to say that Lars' fate is wrong -- it at least gets close to the catgirl volcano threshold -- but it's shallow enough to be concerning. This sort of thing isn't quite wireheading, but it's close enough to be hard to tell the precise difference.

More generally, some people -- quite probably all people -- are going to go into the future with hangups. Barring some really massive improvements in philosophy, we may not even know the exact nature of those hangups. I'm really hesitant to have a Machine Overlord start zapping neurons to improve things without the permission of the brains' owners (yes, even recognizing that a sufficiently powerful AI will get the permission it wants).

As a result, that's going to privilege the values of already-extant entities in ways that I won't privilege creating new ones: some actions don't translate through time because of this. I'm hesitant to change David's (or, once already created, Butterscotch's) brain against the owner's will, but since we're already making Butterscotch's mind from scratch both the responsibilities and the ethical questions are different.

Me finding some versions creepier than others reflects my personal values, and at least some of those personal values reflect structures that won't exist in the FiOverse. It's not as harmful when David talks down to Butterscotch, because she really hasn't achieved everything he has (and the simulation even gives him easy tools to make sure he's only teaching her subjects she hasn't achieved yet), whereas part of why I find it creepy is that a lot of real-world people assume other folk are less knowledgeable than themselves without good evidence. Self-destructive cycles probably don't happen under CelestAI's watch. Lars and his groupies don't have to worry about unwanted pregnancy, or alcoholism, or anything like that, and at least some of my discomfort comes from those sorts of things.

At the same time, I don't know that I want a universe that doesn't at least occasionally tempt us beyond or within our comfort zones.

Replies from: Leonhart
comment by Leonhart · 2013-11-12T21:17:23.028Z · LW(p) · GW(p)

Sorry, I'm not following your first point. The relevant "specific attribute" that sadism and masochism seem to have in this context is that they specifically squick User:gattsuru. If you're trying to claim something else is objectively bad about them, you've not communicated.

I'm not predisposed toward child-raising, but from my understanding the point of "good parent" does not value making someone weak: it values making someone strong.

Yes, and my comparison stands; you specified a person who valued teaching and protecting people, not someone who valued having the experience of teaching and protecting people. Someone with the former desires isn't going to be happy if the people they're teaching don't get stronger. You seem to be envisaging some maximally perverse hybrid of preference-satisfaction and wireheading, where I don't actually value really truly teaching someone, but instead of cheaply feeding me delusions, someone's making actual minds for me to fail to teach!

the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset.

We are definitely working from very different assumptions here. "stay within a fairly limited set of experiences and values based on their initial valueset" describes, well, anything recognisable as a person. The alternative to that is not a magical being of perfect freedom; it's being the dude from Permutation City randomly preferring to carve table legs for a century.

In that particular case, the equilibrium is less bounded. Butterscotch isn't able to become better than David or even to desire becoming better than David, and a number of pathways for David's desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.

I don't think that's what we're given in the story, though. If Butterscotch is made such that she desires self-improvement, then we know that David's desires cannot in fact collapse in such a way, because otherwise she would have been made differently. Agreed that it's a problem if the creator is less omniscient, though.

That's not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.

Butterscotch is that person. That is my point about symmetry.

I don't have the right to say that Lars' fate is wrong -- it at least gets close to the catgirl volcano threshold -- but it's shallow enough to be concerning. This sort of thing isn't quite wireheading, but it's close enough to be hard to tell the precise difference.

But then - what do you want to happen? Presumably you think it is possible for a Lars to actually exist. But from elsewhere in your comment, you don't want an outside optimiser to step in and make them less "shallow", and you seem dubious about even the ability to give consent. Would you deem it more authentic to simulate angst und bange unto the end of time?

comment by lmm · 2013-11-10T03:27:57.256Z · LW(p) · GW(p)

Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?

That seems less worrying, but I think the asymmetry is inherited from the behaviours themselves - masochism seems inherently creepy in a way that sadism isn't (fun fact: I'm typing this with fingers with bite marks on them). The recursion is interesting, and somewhat scary - usually if your own behaviour upsets or disgusts you then you want to eliminate it. But it seems easy to imagine (in the FiOverse or similar) a masochist who would make themselves suffer more not because they enjoyed suffering but because they didn't enjoy suffering, in some sense. Like someone who makes themselves an addict because they enjoy being addicted (which would also seem very creepy to me).

To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?

Yes. Though I wouldn't go around saying that for obvious political reasons. (Observation: people who enjoy roleplaying parent/child seem to be seen as perverts even by many BDSM types).

If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn't find it as creepy. (Correct me if that's not so). But the situation is symmetrical. Why is it important who came first?

I think creating someone less intelligent than you is more creepy than creating someone more intelligent than you for the same reason that creating your willing slave is creepier than creating your willing master - unintelligence is maladaptive, perhaps even self-destructive.

Replies from: Leonhart
comment by Leonhart · 2013-11-10T19:43:34.550Z · LW(p) · GW(p)

But it seems easy to imagine (in the FiOverse or similar) a masochist who would make themselves suffer more not because they enjoyed suffering but because they didn't enjoy suffering, in some sense.

Well, OK, but I'm not sure this is interesting. So a mind could maybe be built that was motivated by any given thing to do any other given thing, accompanied by any arbitrary sensation. It seems to me that the intuitive horror here is just appreciating all the terrible degrees of freedom, and once you've got over that, you can't generate interesting new horror by listing lots of particular things that you wouldn't like to fill those slots (pebble heaps! paperclips! pain!)

In any case, it doesn't seem a criticism of FiO, where we only see sufficiently humanlike minds getting created.

Like someone who makes themselves an addict because they enjoy being addicted (which would also seem very creepy to me)

Ah, but now you speak of love! :)

I take it you feel much the same regarding romance as you do parenting?

(Observation: people who enjoy roleplaying parent/child seem to be seen as perverts even by many BDSM types)

That seems to be a sacred-value reaction - over-regard for the beauty and rightness of parenting - rather than "parenting is creepy so you're double creepy for roleplaying it", as you would have it.

I think creating someone less intelligent than you is more creepy than creating someone more intelligent than you for the same reason that creating your willing slave is creepier than creating your willing master - unintelligence is maladaptive, perhaps even self-destructive.

Maladaptivity per se doesn't work as a criticism of FiO, because that's a managed universe where you can't self-destruct. In an unmanaged universe, sure, having a mentally disabled child is morally dubious (at least partly) because you won't always be there to look after it; as would be creating a house elf if there was any possibility that their only source of satisfaction could be automated away by washing robots.

But it seems like your real rejection is to do with any kind of unequal power relationship; which sounds nice, but it's not clear how any interesting social interaction ever happens in a universe of perfect equals. You at least need unequal knowledge of each other's internal states, or what's the point of even talking?

Replies from: lmm
comment by lmm · 2013-11-11T12:45:12.338Z · LW(p) · GW(p)

Well, OK, but I'm not sure this is interesting. So a mind could maybe be built that was motivated by any given thing to do any other given thing, accompanied by any arbitrary sensation. It seems to me that the intuitive horror here is just appreciating all the terrible degrees of freedom, and once you've got over that, you can't generate interesting new horror by listing lots of particular things that you wouldn't like to fill those slots (pebble heaps! paperclips! pain!)

You're right, I understated my case. I'm worried that there's no path for masochists in this kind of simulated universe (with self-modification available) to ever stop being masochists - I think it's mostly external restraints that push people away from it, and without those we would just spiral further into masochism, to the exclusion of all else. I guess that could apply to any other hobby - there's a risk that people would self-modify to be more and more into stamp-collecting or whatever they particularly enjoyed, to the exclusion of all else - but I think for most possible hobbies the suffering associated with becoming less human (and, I think, more wireheady) would pull them out of it. For masochism that safety doesn't exist.

I take it you feel much the same regarding romance as you do parenting?

I think normal people don't treat romance like an addiction, and those that do ("clingy") are rightly seen as creepy.

That seems to be a sacred-value reaction - over-regard for the beauty and rightness of parenting - rather than "parenting is creepy so you're double creepy for roleplaying it", as you would have it.

Maybe. I think the importance of being parented for a child overrides the creepiness of it. We treat people who want to parent someone else's child as creepy.

Maladaptivity per se doesn't work as a criticism of FiO, because that's a managed universe where you can't self-destruct. In an unmanaged universe, sure, having a mentally disabled child is morally dubious (at least partly) because you won't always be there to look after it; as would be creating a house elf if there was any possibility that their only source of satisfaction could be automated away by washing robots.

Sure, so maybe it's not actually a problem, it just seems like one because it would be a problem in our current universe. A lot of human moral "ick" judgements are like that.

Or maybe there's another reason. But the creepiness is undeniably there. (At least, it is for me. Whether or not you think it's a good thing on an intellectual level, does it not seem viscerally creepy to you?)

But it seems like your real rejection is to do with any kind of unequal power relationship; which sounds nice, but it's not clear how any interesting social interaction ever happens in a universe of perfect equals. You at least need unequal knowledge of each other's internal states, or what's the point of even talking?

Well I evidently don't have a problem with it between humans. And like I said, creating your superiors seems much less creepy than creating your inferiors. So I don't think it's as simple as objecting to unequal power relationships.

Replies from: Leonhart
comment by Leonhart · 2013-11-12T21:38:41.021Z · LW(p) · GW(p)

I'm worried that there's no path for masochists in this kind of simulated universe (with self-modification available) to ever stop being masochists - I think it's mostly external restraints that push people away from it, and without those we would just spiral further into masochism, to the exclusion of all else.

I think we're using these words differently. You seem to be using "masochism" to mean some sort of fully general "preferring to be frustrated in one's preferences". If this is even coherent, I don't get why it's a particularly dangerous attractor.

I think normal people don't treat romance like an addiction, and those that do ("clingy") are rightly seen as creepy.

Disagree. The source of creepiness seems to be non-reciprocity. Two people being equally mutually clingy are the acme of romantic love.

We treat people who want to parent someone else's child as creepy.

I queried my brain for easy cheap retorts to this and it came back with immediate cache hits on "no we don't, we call them aunties and godparents and positive role models, paranoid modern westerners, it takes a village yada yada yada".
All that is probably unfounded bullshit, but it's immediately present in my head as part of the environment and so likely in yours, so I assume you meant something different?

(At least, it is for me. Whether or not you think it's a good thing on an intellectual level, does it not seem viscerally creepy to you?)

No, not as far as I can tell. But I suspect I'm an emotional outlier here and you are the more representative.

Replies from: lmm
comment by lmm · 2013-11-13T21:56:06.227Z · LW(p) · GW(p)

I queried my brain for easy cheap retorts to this and it came back with immediate cache hits on "no we don't, we call them aunties and godparents and positive role models, paranoid modern westerners, it takes a village yada yada yada". All that is probably unfounded bullshit, but it's immediately present in my head as part of the environment and so likely in yours, so I assume you meant something different?

No, those examples really didn't come to mind. Aunties and godparents are expected to do a certain amount of parent-like stuff, true, but I think there are boundaries to that and overmuch interest would definitely seem creepy (likewise with professional childcarers). But yeah, that could easily be very culture-specific.

comment by NancyLebovitz · 2013-11-05T01:05:49.081Z · LW(p) · GW(p)

A little fiction on related topics: "Hell Is Forever" by Alfred Bester -- what if your dearest wish is to create universes? You're given a pocket universe to live in forever, and that's when you find out that your subconscious keeps leaking into your creations (they're on the object level, not the natural law level), and you don't like your subconscious.

Saturn's Children by Charles Stross. The human race is gone. All that's left is robots, who were built to be imprinted on humans. The vast majority of robots are horrified at the idea of recreating humans.

comment by blacktrance · 2013-11-04T18:33:54.486Z · LW(p) · GW(p)

Having just finished reading "Friendship is Optimal" literally less than 10 minutes ago, I didn't find it dark or creepy at all. There are certain aspects of it that are suboptimal (being ponies, not wireheading), but other than that, it sounds like a great world.

Replies from: None
comment by [deleted] · 2013-11-04T18:49:12.465Z · LW(p) · GW(p)

There are certain aspects of it that are suboptimal (being ponies, not wireheading)

Can you elaborate? Do you mean that not being able to wirehead is suboptimal?

Replies from: blacktrance
comment by blacktrance · 2013-11-06T02:44:21.456Z · LW(p) · GW(p)

Yes. I think wireheading is the optimal state (assuming it can make me as happy as possible). I recognize this puts me at odds with an element of the LessWrong consensus.

comment by Multiheaded · 2013-11-04T04:51:07.464Z · LW(p) · GW(p)

Poll: how many readers who have not found FiO to be substantially "Dark" and "Creepy" have also supported the Normal Option in Ch. 5 of Three Worlds Collide? I naturally suspect a strong crossover. [pollid:575]

comment by Metus · 2013-11-03T00:13:52.109Z · LW(p) · GW(p)

Am I the only one who is bothered that these threads don't start on Monday anymore?

Posting a request from a past open thread again: Does anyone have a table of probabilities for major (negative) life events, like divorce or being in a car accident? I ask this to have a priority list of events to be prepared for, either physically or mentally.

Replies from: hyporational, philh
comment by hyporational · 2013-11-03T07:43:22.900Z · LW(p) · GW(p)

The lifetime risk of developing cancer is 44 % in males and 38 % in females. The lifetime risk of dying from cancer is 23 % in males and 19 % in females. It's worth mentioning that the methods for gathering medical mortality statistics are pretty biased, if not completely bonkers.

Replies from: Lumifer
comment by Lumifer · 2013-11-04T04:09:20.524Z · LW(p) · GW(p)

methods for gathering medical mortality statistics are pretty biased, if not completely bonkers.

Would you be willing to expand on this?

Replies from: hyporational
comment by hyporational · 2013-11-04T05:32:23.410Z · LW(p) · GW(p)

ETA: Apparently a new WHO recommendation for filling death certificates was introduced in 2005-2006 and this caused a significant drop in pneumonia mortality in Finland.

I'm not entirely sure if it works this way in the whole EU, but it probably does. It's more complicated than what I explain below, but it's the big picture that matters.

The most common way to record mortality statistics is that the doctor who was treating the patient fills a death certificate. There are three types of causes of death that can be recorded in a death certificate. There are immediate causes of death and there are underlying causes of death. There are also intermediate causes of death, but nobody really cares about those because recording them is optional. The statistics department in Finland is interested in recording only the underlying causes of death and that's what gets published as mortality statistics. Only one cause of death per patient gets recorded.

If someone with advanced cancer gets pneumonia and dies, a doctor fills the death certificate saying that the underlying cause of death was cancer and the immediate cause of death was pneumonia. Cancer gets recorded as the one and only cause of death by the statistics department. Depending on the patient, possible underlying causes of death could also be alcoholism, coronary heart disease or alzheimer's disease or whatever is accepted by a department that checks these certificates.

The doctor's opinion of whether it was the pneumonia or the chronic disease that killed the patient doesn't really matter. If he lists pneumonia as the underlying cause of death as well, he gets a scolding letter and has to fill out the certificate again until he gets it right.

What if the patient has several chronic diseases that could have been underlying causes of death? Well, you only get to pick one, and only that one gets recorded as the cause of death. You can list the other diseases too as contributory causes of death, but this doesn't really affect any statistics. I guess it would be less biased to flip a coin or something, but I think most doctors just pick something fitting.

A colleague of mine once tried to record pneumonia as the underlying cause of death; the patient was an alcoholic (not sure how bad it was). He got a letter saying he should fix the certificate and that people in developed countries don't die of pneumonia anymore. Wonder why that is...

Replies from: gsgs, Lumifer
comment by gsgs · 2013-11-05T15:21:38.431Z · LW(p) · GW(p)

In the USA they can fill in 20 secondary causes on the death certificates, and all the anonymized death certificates since 1959 are available online from NCHS in computer-readable form to check/search for conditions. Irregularities usually appear when there is a switch from one ICD code to a new one, so in 1969, 1979, and 1999. Other irregularities are often checked, compared with other states, countries, and conditions, and the reason discovered.

Replies from: hyporational
comment by hyporational · 2013-11-05T16:35:30.435Z · LW(p) · GW(p)

What if the patient has several chronic diseases that could have been underlying causes of death? Well, you only get to pick one, and only that one gets recorded as the cause of death. You can list the other diseases too, but not as causes of death.

It seems I miscommunicated here. What I meant to say was that listing these other diseases has no meaningful impact on the mortality statistics, although technically speaking they are causes of death. If the point is to gather accurate statistics, listing them feels like a consolation prize, because statisticians don't seem to be interested in them.

In Finland a direct translation for these would be "contributory causes of death". That's probably the same thing as secondary causes of death. The problem is, it's difficult for someone who makes these into statistics to know how important they were. Almost anything the patient has can be listed as a contributory cause of death.

An even bigger problem is that listing them is completely optional. If almost nobody fills them in properly (because they usually have better things to do), that is another good reason for a statistician not to use them.

Is filling in the secondary causes mandatory in the US? Are there clear restrictions for what can be listed? If not, I'm not sure if they provide all that useful information, statistically speaking. Are they really used in a meaningful way in any statistics?

Irregularities usually appear when there is a switch from one ICD code to a new one, so in 1969, 1979, and 1999.

I suppose WHO recommendations for filling these certificates impact the US too.

comment by Lumifer · 2013-11-04T17:07:41.424Z · LW(p) · GW(p)

Very interesting, thank you.

I have a pet interest -- carefully looking at how standard, universally-accepted, real-life, empirical data is collected and produced and whether it actually represents what everyone blindly assumes it does. In the field of economics, for example, closely examining how, say, the GDP or the inflation numbers are calculated is... illuminating.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-04T21:53:42.060Z · LW(p) · GW(p)

closely examining how, say, the GDP or the inflation numbers are calculated is... illuminating.

Details?

Replies from: Lumifer
comment by Lumifer · 2013-11-05T00:12:50.254Z · LW(p) · GW(p)

The problem is that the problems aren't summarizable in a neat half-page list. And it's not like the calculations are wrong; rather, they are right under a certain set of assumptions and boundary conditions -- and the issue is that people forget about these assumptions and conditions and just assume they're right unconditionally.

For an introduction take a look at e.g. Shadowstats. I don't necessarily agree with everything there, but it's a useful starting point.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-05T00:58:52.641Z · LW(p) · GW(p)

Thanks.

I twitch when changes in GDP are reported to a tenth of a percent-- it seems to me that it couldn't be measured with such precision. Do you think I'm being reasonable?

Replies from: ahbwramc, Lumifer
comment by ahbwramc · 2013-11-07T17:02:27.753Z · LW(p) · GW(p)

My own (uninformed) intuition is that GDP changes would be much more accurate than absolute GDP values, just because systematic errors could largely cancel out.
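A toy illustration of that intuition (purely hypothetical, with a deliberately simple error model): suppose the reported level carries a roughly constant multiplicative bias $b$, so that the published figure is $\hat{Y}_t = b\,Y_t$ where $Y_t$ is the true GDP. Then the measured growth rate is

$$\frac{\hat{Y}_{t+1}}{\hat{Y}_t} - 1 = \frac{b\,Y_{t+1}}{b\,Y_t} - 1 = \frac{Y_{t+1}}{Y_t} - 1,$$

so the bias drops out of the change even though it distorts the level. Errors that vary from year to year, or that enter additively, do not cancel this way, which is where the intuition could break down.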

comment by Lumifer · 2013-11-05T02:56:36.011Z · LW(p) · GW(p)

GDP as reported is the product of a particular well-defined calculation. That product can easily be calculated to whatever precision you feel like.

When you say "it couldn't be measured with such precision", how do you define the Gross Domestic Product that couldn't be measured precisely?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-07T13:19:38.999Z · LW(p) · GW(p)

I'm assuming that the GDP is some sort of measure of the health of the economy-- that's why people are concerned with it. The health of the economy seems to me like rather an approximate sort of thing.

Replies from: Lumifer
comment by Lumifer · 2013-11-07T15:41:36.862Z · LW(p) · GW(p)

GDP -- Gross Domestic Product -- basically means the sum of the value (in the economic sense) of all goods and services produced domestically during a given period, e.g. a year.
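For concreteness, one standard form of that calculation (the expenditure approach; the symbols are the conventional textbook ones, not anything specific to a particular statistics agency) is

$$\text{GDP} = C + I + G + (X - M),$$

i.e. consumption plus investment plus government spending plus net exports, where each term is itself built up from surveys and administrative data that carry their own assumptions.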

If you want to measure the "health of the economy", that's quite different. You'll have to define what you mean by that expression and then decide which measurements you want to consider. For example, some people might consider the unemployment rate to be one of those measurements, or, say, the Gini index, or the median income, or... the possibilities are endless.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-07T16:09:59.356Z · LW(p) · GW(p)

Why do people measure the value of all the goods produced domestically during a year?

If nothing else, there has to be a fudge factor because some of the economy is underground.

Replies from: Lumifer
comment by Lumifer · 2013-11-07T16:21:41.563Z · LW(p) · GW(p)

Why do people measure the value of all the goods produced domestically during a year?

From Wikipedia: "GDP was first developed by Simon Kuznets for a US Congress report in 1934. ... After the Bretton Woods conference in 1944, GDP became the main tool for measuring a country's economy."

Yes, the GDP number is, of course, imprecise. By itself it's not a problem -- most of our measurements are imprecise.

I am not sure what you are getting at. Do you think that GDP is useless or cannot be measured or what?

comment by philh · 2013-11-03T15:22:44.552Z · LW(p) · GW(p)

Am I the only one who is bothered that these threads don't start on Monday anymore?

I like it for purely selfish reasons. I can't easily post between Sunday bedtime and Tuesday evening. If I want to post and the thread starts on Monday, my post will be less visible.

comment by ChrisHallquist · 2013-11-04T01:32:27.068Z · LW(p) · GW(p)

In honor of NaNoWriMo, I offer up this discussion topic for fans of HPMOR and rationalist fiction in general:

How many ways can we find that stock superpowers (magical abilities, sci-fi tech, whatever), if used intelligently, completely break a fictional setting? I'm particularly interested in subtly game-breaking abilities.

The game-breaking consequences of mind control, time travel, and the power to steal other powers are all particularly obvious, but I'm interested in things like e.g. Eliezer pointing out that he had to seriously nerf the Unbreakable Vow in HPMOR to keep the entire story from being about that.

Replies from: Armok_GoB, CAE_Jones
comment by Armok_GoB · 2013-11-04T02:54:42.248Z · LW(p) · GW(p)

I seem to be able to do this with almost any power to various degrees. Including ones I actually have, and ones that are common among humans. Any specifics you had in mind?

Really, ANY ability will reroll some chaotic stuff and be a valuable asset simply because it's rare. Even a debilitating curse, if rare and interesting enough, can do things like be useful for research or provide unique perspectives to be studied. So really, the only limit to where a power stops being useful is where it's only useful to someone else controlling you.

Hence why anything properly rationalist that's not going to be largely about breaking the setting must do something like give MANY people the ability so the low-hanging fruit is already gone, or make it inherently mysterious and unreplicable, or have some deliberate intelligence preventing it from getting well known, or something like that.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-04T03:29:24.039Z · LW(p) · GW(p)

We may be operating on different definitions of what it means to "break" a setting. For example, how useful is flight by itself, really?

Many abilities seem potentially very useful if other people don't know you have them, but become much less useful once you get found out:

  • The "energy blast" type abilities common among wizards and superheroes are not terribly combat useful in a setting with modern weapons. Their big use would be assassination: slip past metal detectors and pat-downs, baffle detectives... but once you get found out, no one wants you around and the police know the otherwise inexplicable burning death was probably you.
  • Mind-reading, if it has standard limitations on range, similarly becomes a lot less useful once you get found out and banned from everywhere.
  • Super-senses on a level comparable to what's already possible with technology: lets you spy on people from a distance without anyone wondering what you're doing with those binoculars... otherwise not so useful.
Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-04T05:43:07.176Z · LW(p) · GW(p)

That's because those are among the worst possible ways to use those abilities.

The energy blasts as usually depicted break conservation of energy; with a bit of physics trickery you can get time travel out of that. Even if not, they make you an extremely portable and efficient energy source, perfect for a spaceship where mass is critical and a human needs to come along anyway but it doesn't matter in particular who since it's for PR reasons.

Mind reading is a means of communication that does not require cooperation or any abilities in the target, and can't be lied through. Communication with locked-in patients, interrogation, extraction of testimony from animals. And if you can find a way to precommit yourself, you also have fully reliable precommitment checking for everyone, lie detection for political promises, and the ultimate forensics tool.

If you combine the strengths of two kinds of system, you get something greater than the sum of its parts. So it is with human senses and digital sensors. The key here is bandwidth, and analysis. Sure, you can get all the same data onto a computer, but it won't do much good there. Someone with true super-senses as flexible and integrated as their normal biological ones, after a few years for the brain to adapt if they were not available from birth, would most likely be able to see patterns at a glance that'd take large teams days or months to discover in some database format. The exact applications of this depend on what sense we're talking about.

Common factors: Focus on things done with the cooperation of large numbers of other people, finding an economic niche, fundamental physics exploits, and/or using large and expensive equipment. If not that, look for a niche within the military as a specialized technician, most likely not in the field, and if in the field then in some large vehicle with a crew of many. Almost NEVER is an efficient use of a power found in brawling or acting alone like all your descriptions were.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-04T15:54:30.967Z · LW(p) · GW(p)

The energy blasts as usually depicted break conservation of energy; with a bit of physics trickery you can get time travel out of that.

Wait, explain that? What is "a bit of physics trickery" here?

I know in HPMOR, Harry points out that violations of conservation of mass in magic imply FTL signaling, and I know from relativity FTL implies time travel, but Harry doesn't even consider running off to get time travel from common spells. Presumably turning the theory into practice would be far from straightforward.

Heck, it's not even obvious to me how you turn FTL travel into time travel in practice, if you don't have control over what frame of reference you're FTL in.

Replies from: None, So8res
comment by [deleted] · 2013-11-05T15:25:06.166Z · LW(p) · GW(p)

In the approximation of the True Laws Of Physics which is in use today --- The Relativistic Standard Model --- FTL (and subsidiarily time travel) is nonsense. Like, it's gibberish. It is a description of a situation which not only does not happen, but which is a mathematical falsehood. It is impossible. It is like violation of conservation of energy, or violation of entropic developments.

The maths plainly states that assumptions like that lead to a contradiction, and we do in fact know that the Standard Model is complete and consistent (i.e. cannot encode formulas as objects).

Any hypothetical scenarios involving time travel will invariably be contrived, non-causal, and require classical mechanics. It is fiction, make of it what you will.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-05T21:33:39.226Z · LW(p) · GW(p)

Oooh, that's even better: from a contradiction, anything can be proved. Thus, if we can break conservation of energy we gain literal omnipotence.

Replies from: None
comment by [deleted] · 2013-11-10T14:56:46.787Z · LW(p) · GW(p)

... I am pretty sure that is not how it works.

comment by So8res · 2013-11-07T02:24:17.469Z · LW(p) · GW(p)

To make a time machine out of an FTL drive, simply travel somewhere at near lightspeed and FTL back. Now you're where you started, before you left.
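A minimal sketch of the standard two-frame version of this argument, assuming each FTL jump is instantaneous in the rest frame of whoever makes it (the Alice/Bob setup and the frame labels are just illustrative): let Alice sit at $x = 0$ in frame $S$, with Bob receding at speed $v$ and at distance $L$ when the exchange starts. Alice sends a signal that is instantaneous in $S$ at $t = 0$; it arrives at the event $(t, x) = (0, L)$. Bob replies with a signal that is instantaneous in his own frame $S'$, i.e. along his line of simultaneity

$$t' = \gamma\left(t - \frac{v x}{c^2}\right) = -\frac{\gamma v L}{c^2} = \text{const}.$$

Setting $x = 0$ for the reply's arrival gives $t = -vL/c^2 < 0$: Alice receives the answer before she sent the question. The ingredient doing the work is FTL usable in two different inertial frames, which is what combining relativistic travel with an FTL drive supplies.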

comment by CAE_Jones · 2013-11-04T17:07:51.758Z · LW(p) · GW(p)

Bit over a decade ago, when I was still a naive wild-eyed idealist blissfully unaware of anything realistic about people at all, I set up the foundations for most of my sci fi/fantasy/etc fictions. The piece of supertech I immediately had to nerf was technology based around a form of space compression--somewhere between the MCron Crystal (Marvel) and Dynocaps (Dragonball). And today, I'm still coming up with more reasons to nerf it heavily, to the extent that the civilization based around this technology must certainly have had a shady council of vagueness who developed a limited-scope AI to control production and block any of the scarier uses. This after I tied it to FTL capabilities (I'm trying very hard to prevent this from turning into time travel, not that I'm too afraid to abuse that option for a crisis crossover or something).

It took me much longer to realize how much nerfing of powers unrelated to the above tech I needed to do, partially because I was trying to avoid breaking the laws of physics too horribly (this universe runs on a sort of mangling of GR and M-Theory that assumes fundamental forces each get something resembling a dimension, and that attempts to break the light-speed barrier have lots of wide-reaching consequences).

A few things I've had to seriously remodel (mostly for the sake of in-universe physics not breaking, though I'm sure there are obvious uses/flaws here I've missed):

  • Light manipulation. The version I started out with was written as superweight (tvtropes) type IV, when the applications I had in mind were more indicative of a potential V or VI. I've since tried to tone it down so that this character hovers between II and III (the distinction is a little vague), but I'm sure I'm still underutilizing this character even with all the restrictions I've added. (Invisibility, holograms, filtering specific wavelengths to minimize radiation risks, screwing with radio communication, sometimes can weaponize directly with lasers and x-rays and such).
  • Gravity manipulation. This one perplexes me most, even though it seems like it's become relatively clear what limitations it should have--denser matter is more easily affected, sources of gravity can be boosted or reduced on specific objects, all within a finite range. Not precise enough to simulate telekinesis, yet I feel like I'm missing some crucial issues with this power.
  • Variations on "interversal connections"; that is, we have a multiverse that would take too long to explain, but spaces or objects in universes can remain connected, after a fashion. These are generally unstable, so those that we see are generally related to intelligence, just because intelligences can find ways to maintain them. The broadest versions tend to catch any high-energy particles entering the affected area and replace them with particles from the connected universe (so if you shot a stream of gamma rays at one of these, you'd get a small amount of hydrogen out of it). These can become more specialized until we have people that can summon and manipulate stellar matter (these tend to be the scariest mofos around, and get appropriate attention from any governments who learn of them), or in some unique cases, people that can "turn into water" (or rather, swap with a mass of water in the connected universe), generate ice/magma/metal, etc. It hasn't really come up yet, but I expect that special relativity tries to be conserved after a fashion, with matter elsewhere being fed into the power-supplying universe, and if anyone measured the delay, it would turn out as one would expect given distances involved.
  • I had nebulous "ki power", which I've since remodeled as being related to a unique sort of substance that I'm pretty sure would turn the first quantum physicist with appropriate equipment to study it into a god. However, I only realized that god part very recently. I don't think I can explain it without a long trek through my mythos, but things we've seen related to this substance (and its even scarier source) include: walking talking skeletons (can produce the stuff as a byproduct of feeding on organic matter), people with the most intense exposure becoming effectively immune to aging but with continually degrading cognitive function (and as a bonus, they can pretty much ignore gravity and wind resistance exactly as much as they want), lesser exposure has been associated with abilities such as fireballs, limited flight/hovering, some degree of pyro/electrokinesis. And the ability of the substance to replicate, of course.
  • In a completely different direction, there is a more fantasy-esque universe with souls, gods, and magic as facts of nature, except this is part of the same multiverse as the above, so I realized that the laws of physics couldn't be all that different. So I had the creator-god (effectively a backup of a pre-scientific human mind with way more resources to devote to naive conceptions of wisdom) reallocate the weak nuclear force to manage souls, and the sun god (or rather, the superintelligence the creator built to handle the finer details of physics) having a more reductionist/lifist stance. Shenanigans ensue, which resulted in me realizing that, if one of this world's souls wound up in our world, it would turn into a transmutation bomb. Commence panic when I realized that I had a character that I really like getting stuck in one of the more Earth-like worlds in desperate need of a soul-shield, lest all the neutrinos give him soul-cancer before all the surrounding gases get zapped with a pound of W and Z bosons and start the biggest Fluorine explosion ever. On the "bright" side, I can now have a supervillain summon the dead from that world and use them as transmutation bombs whenever I need something sufficiently horrifying.
  • Oh, and I keep glossing over wind manipulation. It's treated as simple and boring in-universe, when it probably shouldn't be. I'm sure if I did some number-crunching, there'd be a lot of scary possibilities here that I've overlooked. The character who specializes in this can use it to fly and may or may not be able to summon hurricane-force winds or stronger. Most other examples are much more mundane; the most impressive demonstrations are some directed gusts being used to push people a little. Combine this with the first two (light and gravity), and the space program might get considerably cheaper.

In all of this, I expect I'm missing something world-breaking that half of LW would notice within five minutes of having the powers explained (or maybe just introduced -- do you need to know that someone generates his fireballs by burning unwanted body hair (and subdermal fat when that runs out) in a weird chemical reaction to think of issues that arise from someone with the power to toss fireballs around?).

comment by Joshua_Blaine · 2013-11-06T03:00:53.136Z · LW(p) · GW(p)

Does anyone here have any serious information regarding Tulpas? When I first heard of them they immediately seemed to be the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe them. A very obvious sign of a person who is legitimately crazy, even.

Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantage to having one, such as parallel focus, more "outside" self-analysis, etc. I don't really know much of anything right now, which is why I'm asking if there's been any decent research done already.

Replies from: TheOtherDave, Ishaan, Tenoke, Armok_GoB, ChristianKl, IlyaShpitser, Vulture, Lumifer, Mitchell_Porter
comment by TheOtherDave · 2013-11-06T03:03:26.396Z · LW(p) · GW(p)

Have you read the earlier discussions on this topic?

Replies from: Joshua_Blaine
comment by Joshua_Blaine · 2013-11-06T13:54:38.187Z · LW(p) · GW(p)

I had not, actually. The link you've given just links me to Google's homepage, but I did just search LW for "Tulpa" and found it fine, so thanks regardless.

edit: The link's original purpose now works for me. I'm not sure what the problem was before, but it's gone now.

comment by Ishaan · 2013-11-09T23:14:53.064Z · LW(p) · GW(p)

There's tons of easily discovered information on the web about it.

I'm not sure the Tulpa-crowd would agree with this, but I think a non-esoteric example of Tulpas in everyday life is how some religious people say that God really speaks and appears to them. The "learning process" and stuff seem pretty similar - the only difference I can see is that in the case of Tulpas it is commonly acknowledged that the phenomenon is imaginary.

Come to think of it, that's probably a really good method for creating Tulpas quickly - building off a real or fictional character for whom you already have a relatively sophisticated mental model. It's probably also important that you are predisposed to take seriously the notion that this thing might actually be an agent which interacts with you...which might be why God works so well, and why the Tulpa-crowd keeps insisting that Tulpas are "real" in the sense that they carry moral weight. It's an imagination-belief driven phenomenon.

It might also illustrate some of the "dangers" - for example, some people who grew up with notions of the angry sort of God might always feel guilty about certain "sinful" things which they might not intellectually feel are bad.

I've also heard claims of people who gain extra abilities / parallel processing / "reminders" with Tulpas....basically, stuff that they couldn't do on their own. I don't really believe that this is possible, and if this were demonstrated to me I would need to update my model of the phenomenon. To the tulpa-community's credit, they seem willing to test the belief.

Replies from: Vulture
comment by Vulture · 2013-11-10T02:48:41.448Z · LW(p) · GW(p)

a non-esoteric example of Tulpas in everyday life is how some religious people say that God really speaks and appears to them. The "learning process" and stuff seem pretty similar - the only difference I can see is that in the case of Tulpas it is commonly acknowledged that the phenomenon is imaginary.

Very good! A psychologist who studies evangelicals recognized it as the same phenomenon.

I've also heard claims of people who gain extra abilities / parallel processing / "reminders" with Tulpas

There is pretty good empirical evidence against the parallel-processing idea now.

comment by Tenoke · 2013-11-06T17:56:44.600Z · LW(p) · GW(p)

I don't know any obvious reason not to

What is stopping me is the possibility that I will be permanently relinquishing cognitive resources for the sake of the Tulpa.

comment by Armok_GoB · 2013-11-06T16:57:57.808Z · LW(p) · GW(p)

I've been doing some research (mainly hanging on their subreddit) and I think I have a fairly good idea of how tulpas work and the answers to your questions.

Tulpas are described as a myriad of very different things, so "tulpas exist in the way people describe them" is not well defined.

There indisputably exist SOME specific interesting phenomena that are the referent of the word Tulpa.

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.

I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

Deciding to make a tulpa does not seem to be a sign of being crazy. Tulpas themselves seem to not be automatically unhealthy and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous and can trigger latent tendencies or be easily done in a catastrophically wrong way. I estimate the risk is similar to doing extensive meditation or taking a single largeish dose of LSD. For this reason I have not and will not attempt making one.

I am too lazy to find citations or examples right now, but I probably could. I've tried to be a good rationalist and am fairly certain of most of these claims.

Replies from: NancyLebovitz, TheOtherDave, hylleddin
comment by NancyLebovitz · 2013-11-07T13:28:48.719Z · LW(p) · GW(p)

Has anyone worked on making a tulpa which is smarter than they are? This seems at least possible if you assume that many people don't let themselves make full use of their intelligence and/or judgement.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-07T17:04:34.311Z · LW(p) · GW(p)

Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generating is done before credit is assigned to either the "self" or the "tulpa".

What there ARE several examples of however are tulpas that are more emotionally mature, better at luminosity, and don't share all their hosts preconceptions. This is not exactly smarts though, or even general purpose formal rationality.

One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as "smarter than me" and thus all the brain's good ideas get credited to it.

Disclaimer: this is based only on lots of anecdotes I've read, gut feeling, and basic stuff that should be common knowledge to any LWer.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-07T17:40:46.724Z · LW(p) · GW(p)

I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.

So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"

I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.

But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.

Replies from: gwern, Armok_GoB
comment by gwern · 2013-11-07T20:04:37.253Z · LW(p) · GW(p)

See also 'rubberducking' and previous discussions of this on LW. My basic theory is that reasoning was developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an 'adversary' which triggers deeper processing (if we ever get brain imaging of system I vs system II thinking, I'd expect that adversarial thinking triggers system II more compared to 'normal' self-centered thinking).

Replies from: TheOtherDave, Kaj_Sotala
comment by TheOtherDave · 2013-11-07T20:21:14.519Z · LW(p) · GW(p)

Yes. Indeed, I suspect I've told this story before on LW in just such a discussion.

I don't necessarily buy your account -- it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.

This is also related to the circumlocution strategy for dealing with aphasia.

comment by Kaj_Sotala · 2014-01-02T14:29:17.353Z · LW(p) · GW(p)

My basic theory is that reasoning was developed for adversarial purposes

Obligatory link.

comment by Armok_GoB · 2013-11-07T18:49:06.573Z · LW(p) · GW(p)

Yea in that case presumably the tulpa would help - but not necessarily significantly more than such a non-tulpa model that requires considerably less work and risk.

Basically, a tulpa can technically do almost anything you can... but the absence of a tulpa can do them too, and for almost all of them there's some much easier and at least as effective way to do the same thing.

Replies from: ChristianKl, TheOtherDave
comment by ChristianKl · 2013-11-08T16:04:09.688Z · LW(p) · GW(p)

Basically, a tulpa can technically do almost anything you can...

Mental processes like waking up without an alarm clock at a specific time aren't easy. I know a bunch of people who have that skill, but it's not like there's a step by step manual that you can easily follow that gives you that ability.

A tulpa can do things like that. There are many mental processes that you can't access directly but that a tulpa might be able to access.

Replies from: Armok_GoB, hesperidia
comment by Armok_GoB · 2013-11-08T17:22:40.126Z · LW(p) · GW(p)

I am surprised to know there isn't such a step by step manual, suspect that you're wrong about there not being one, and in either case know about a few people that could probably easily write one if motivated to do so.

But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it's less powerful and has a bunch of logistic and moral problems. I don't like it but I can't think of any counterarguments other than it being lazy and unaesthetic, and the kind of meditative people that make tulpas should not be the kind to take this easy way out.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-09T05:39:00.802Z · LW(p) · GW(p)

I am surprised to know there isn't such a step by step manual, suspect that you're wrong about there not being one, and in either case know about a few people that could probably easily write one if motivated to do so.

My point isn't so much that it's impossible but that it isn't easy.

Creating a mental device that only wakes me up will be easier than creating a whole Tulpa, but once you do have a Tulpa you can reuse it a lot.

Let's say I want to practice Salsa dance moves at home. Visualising a full dance partner completely just for the purpose of having a dance partner at home wouldn't be worth the effort.

I'm not sure about how much you gain by pair programming with a Tulpa, but the Tulpa might be useful for that task.

It takes a lot of energy to create it the first time but afterwards you reap the benefits.

I don't like it but I can't think of any counterarguments other than it being lazy and unaesthetic, and the kind of meditative people that make tulpas should not be the kind to take this easy way out.

Tulpa creation involves quite a lot of effort so it doesn't seem the lazy road.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-09T16:29:04.570Z · LW(p) · GW(p)

Hmm, you have a point, I hadn't thought about it that way. If it wasn't so dangerous I would have asked you to experiment.

comment by hesperidia · 2013-12-02T04:00:36.252Z · LW(p) · GW(p)

Mental processes like waking up without an alarm clock at a specific time aren't easy. I know a bunch of people who have that skill, but it's not like there's a step by step manual that you can easily follow that gives you that ability.

I do not have "wake up at a specific time" ability, but I have trained myself to have "wake up within ~1.5 hours of the specific time" ability. I did this over a summer break in elementary school because I learned about how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (you consistently wake up without an alarm) for this to work correctly.

The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.

The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):

  1. Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.

  2. Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, and awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.

  3. Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle naps one after another (wake up in between).

  4. During a night's sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.

  5. Block off a ~3.5 hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order with this point and the previous one. Did I do them in the opposite order? I'm reconstructing from memory here. It's probably possible to make this work in either order.)

  6. You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously split up your sleep biphasically, or waking up a sleep cycle earlier than you usually do.

I then spent the rest of summer break with a biphasic "first/second sleep" rhythm, which disappeared once I was in school and had to wake up at specific times again.

To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I've had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they're at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It's a cool skill to have, but it has its downsides.
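
For what it's worth, the arithmetic behind the method is small enough to sketch. A minimal Python illustration, assuming a flat 1.5-hour cycle and made-up times (the real cycle length varies between people and across the night):

```python
from datetime import datetime, timedelta

CYCLE = timedelta(hours=1.5)  # assumed average cycle length; it varies by person

def cycle_wakepoints(bedtime, minutes_to_fall_asleep, cycles):
    """Estimated wake points at each sleep-cycle boundary."""
    start = bedtime + timedelta(minutes=minutes_to_fall_asleep)
    return [start + CYCLE * i for i in range(1, cycles + 1)]

# Example: in bed at 23:00, ~20 minutes to fall asleep, 5 cycles (~7.5 h asleep)
for t in cycle_wakepoints(datetime(2013, 11, 6, 23, 0), 20, 5):
    print(t.strftime("%H:%M"))
```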

comment by TheOtherDave · 2013-11-07T19:57:55.111Z · LW(p) · GW(p)

a tulpa can technically do almost anything you can...

Yes, I would expect this.
Indeed, I'm surprised by the "almost" -- what are the exceptions?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-07T21:26:58.627Z · LW(p) · GW(p)

Anything that requires you using your body and interacting physically with the world.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-07T21:33:58.502Z · LW(p) · GW(p)

I'm startled. Why can't a tulpa control my body and interact physically with the world, if it's (mutually?) convenient for it to do so?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-07T21:44:54.486Z · LW(p) · GW(p)

Well, if you consider that to be the tulpa doing it on its own, then no, I can't think of any specific exceptions. Most tulpas can't do that trick though.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-07T22:03:39.688Z · LW(p) · GW(p)

Well, if you consider that to be the tulpa doing it on its own

Well, let me put it this way: suppose my tulpa composes a sonnet (call that event E1), recites that sonnet using my vocal cords (E2), and writes the sonnet down using my fingers (E3).

I would not consider any of those to be the tulpa doing something "on its own", personally. (I don't mean to raise the whole "independence" question again, as I understand you don't consider that very important, but, well, you brought it up.)

But if I were willing to consider E1 an example of the tulpa doing something on its own (despite using my brain) I can't imagine a justification for not considering E2 and E3 equally well examples of the tulpa doing something on its own (despite using my muscles).

But I infer that you would consider E1 (though not E2 or E3) the tulpa doing something on its own. Yes?

So, that's interesting. Can you expand on your reasons for drawing that distinction?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-08T01:07:38.747Z · LW(p) · GW(p)

I feel like I'm tangled up in a lot of words and would like to point out that I'm not an expert and don't have a tulpa, I just got the basics from reading lots of anecdotes on reddit.

You are entirely right here, although I'd like to point out most tulpas wouldn't be able to do E2 and E3, independent or not. Also, that something like "composing a sonnet" is probably more the kind of thing brains do when their resources are dedicated to it by identities, not something identities do, and tulpas are mainly just identities. But I could be wrong both about that and about what kind of activity sonnet composing is.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-08T01:52:21.270Z · LW(p) · GW(p)

"composing a sonnet" is probably more the kind of thing brains do when their resources are dedicated to it by identities, not something identities do, and tulpas are mainly just identities

Interesting! OK, that's not a distinction I'd previously understood you as making.
So, what do identities do, as distinct from what brains can be directed to do?
(In my own model, FWIW, brains construct identities in much the same way brains compose sonnets.)

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-08T16:53:55.111Z · LW(p) · GW(p)

I guess I basically think of identities as user accounts, in this case. I just grabbed the closest fitting language dichotomy to "brain" (which IS referring to the physical brain), and trying to define it further now will just lead to overfitting, especially since it almost certainly varies far more than either of us expect (due to the typical mind fallacy) from brain to brain.

And yea, brains construct identities the same way they construct sonnets. And just like music it can be small (a jingle, a minor character in something you write) or big (a long symphony, a Tulpa). And identities compose sonnets only slightly more than sonnets create identities.

It's all just mental content that can be composed, remixed, deleted, executed, etc. Now, brains have a strong tendency, in the absence of an identity, to create one and give it root access, and this identity ends up WAY more developed and powerful than even the most ancient and powerful tulpas, but there is probably no or very little qualitative difference.

There are a lot of confounding factors. For example, something that I consider impossibly absurd seems to be the norm for most humans; considering their physical body as a part of "themselves" and feeling as if they are violated if their body is. Put in their perspective, it's not surprising most people can't disentangle parts of their own brain(s), mind(s), and identities without meditating for years until they get it shoved in their face via direct perception, and even then they probably often get it wrong. Although I guess my illness has shoved it in my face just as anviliciously.

Disclaimer: I got tired trying to put disclaimers on the dubious sources on each individual sentence, so just take it with a grain of salt OK and don't assume I believe everything I say in any persistent way.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-08T18:32:13.964Z · LW(p) · GW(p)

OK... I think I understand this. And I agree with much of it.

Some exceptions...

Now, brains have a strong tendency, in the absence of an identity, to create one and give it root access,

I don't think I understand what you mean by "root access" here. Can you give me some examples of things that an identity with root access can do, that an identity without root access cannot do?

something that I consider impossibly absurd seems to be the norm for most humans; considering their physical body as a part of "themselves"

This is admittedly a digression, but for my own part, treating my physical body as part of myself seems no more absurd or arbitrary to me than treating my memories of what I had for breakfast this morning as part of myself, or my memories of my mom, or my inability to juggle. It's kind of absurd, yes, but all attachment to personal identity is kind of absurd. We do it anyway.

All of that said... well, let me put it this way: continuing the sonnet analogy, let's say my brain writes a sonnet (S1) today and then writes a sonnet (S2) tomorrow. To my way of thinking, the value-add of S2 over and above S1 depends significantly on the overlap between them. If the only difference is that S2 corrects a misspelled word in S1, for example, I'm inclined to say that value(S1+S2) = value(S2) ~= value(S1).

For example, if S1 -> S2 is an improvement, I'm happy to discard S1 if I can keep S2, but I'm almost as happy to discard S2 if I can keep S1 -- while I do have a preference for keeping S2 over keeping S1, it's noise relative to my preference for keeping one of them over losing both.

I can imagine exceptions to the above, but they're contrived.

So, the fix-a-misspelling case is one extreme, where the difference between S1 and S2 is very small. But as the difference increases, the value(S1+S2) = value(S2) ~= value(S1) equation becomes less and less acceptable. At the other extreme, I'm inclined to say that S2 is simply a separate sonnet, which was inspired by S1 but is distinct from it, and value(S1+S2) ~= value(S2) + value(S1).

And those extremes are really just two regions in a multidimensional space of sonnet-valuation.

Does that seem like a reasonable way to think about sonnets? (I don't mean is it complete; of course there's an enormous amount of necessary thinking about sonnets I'm not including here. I just mean have I said anything that strikes you as wrong?)

Does it seem like an equally reasonable way to think about identities?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-09T03:54:30.632Z · LW(p) · GW(p)

Root access was probably too metaphorical a choice of words. Is "skeletal musculature privileges" clearer?

All those things like memories or skillsets you list as part of identity do seem weird, but even irrelevant software is not nearly as weird as specific hardware. I mean seriously, attaching significance to specific atoms? Wut? But of course, I know it's really me that's weird, and most humans do it.

I agree with what you say about sonnets; it's very well put, in fact. And yes, identities do follow the same rules. I'm trying to come up with fitting tulpa stuff in the metaphor, but it doesn't really work because I don't know enough about it.

This is getting a wee bit complicated, and I think we're starting to reach the point where we have to dissolve the classifications and actually model things in detail on continuums, which means more conjecture and guesswork, less data, and what data we have being less relevant. We've been working mostly in metaphors that don't really go this far without breaking down. Also, since we're getting into more and more detail, the stuff we are examining is likely to be drowned out in the differences between brains, and the conversation to turn into nonsense due to the typical mind fallacy.

As such, I am unwilling to widely spout what's likely to end up half nonsense, at least publicly. Contact me by PM if you're really all that interested in getting my working model of identities and mental bestiary.

comment by TheOtherDave · 2013-11-06T17:11:42.717Z · LW(p) · GW(p)

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

Would you classify a novel in the same "moral-status" tier as these four examples?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-06T21:49:39.307Z · LW(p) · GW(p)

No, that's much, much lower. As in, torturing a novel for decades in order to give a tulpa a quick amusement would be a moral thing to do, that's how much lower.

Assuming you mean either a physical book, or the simulation of the average minor character in the author's mind, here. Main characters or RPing PCs can vary in complexity of simulation from author to author a lot and it's a theory that some become effectively tulpas.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-06T22:11:45.438Z · LW(p) · GW(p)

Your answer clarifies what I was trying to get at with my question but wasn't quite sure how to ask, thanks; my question was deeply muddled.

For my own part, treating a tulpa as having the moral status of an independent individual distinct from its creator seems unjustified. I would be reluctant to destroy one because it is the unique and likely-unreconstructable creative output of a human being, much like I would be reluctant to destroy a novel someone had written (as in, erase all copies of such that the novel itself no longer exists), but that's about as far as I go.

I didn't mean a physical copy of a novel, sorry that wasn't clear.

Yes, destroying all memory of a character someone played in an RPG and valued remembering I would class similarly.

But all of these are essentially property crimes, whose victim is the creator of the artwork (or more properly speaking the owner, though in most cases I can think of the roles are not really separable), not the work of art itself.

I have no idea what "torture a novel" even means, it strikes me as a category error on a par with "paint German blue" or "burn last Tuesday".

Replies from: Armok_GoB, ChristianKl
comment by Armok_GoB · 2013-11-06T23:02:45.419Z · LW(p) · GW(p)

Ah. No, I think you'd change your mind if you spent a few hours talking to accounts that claim to be tulpas.

A newborn infant or Alzheimer's patient is not an independent individual distinct from its caretaker either. Do you count their destruction as property crime as well? "Person"-ness is not binary; it's not even a continuum. It's a cluster of properties that usually correlate but in the case of tulpas do not. I recommend re-reading Diseased Thinking.

As for your category error: /me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.

Replies from: army1987, TheOtherDave
comment by A1987dM (army1987) · 2013-11-12T15:17:55.172Z · LW(p) · GW(p)

As for your category error: /me argues for how German is a depressing language and spends all that was gained in that day on something that will not last. Then a pale-green tulpa snores in an angry manner.

I picture a sheet of paper with a paragraph in each of several languages, a paintbrush, and watercolours. Then boring-sounding environmental considerations make me feel outraged without me consciously realizing what's happening.

comment by TheOtherDave · 2013-11-06T23:50:53.044Z · LW(p) · GW(p)

I agree that person-ness is a cluster of properties and not a binary.

I don't believe that tulpas possess a significant subset of those properties independent of the person whose tulpa they are.

I don't think I'm failing to understand any of what's discussed in Diseased Thinking. If there's something in particular you think I'm failing to understand, I'd appreciate you pointing it out.

It's possible that talking to accounts that claim to be tulpas would change my mind, as you suggest. It's also possible that talking to bodies that claim to channel spirit-beings or past lives would change my mind about the existence of spirit-beings or reincarnation. Many other people have been convinced by such experiences, and I have no especially justified reason to believe that I'm relevantly different from them.

Of course, that doesn't mean that reincarnation happens, nor that spirit-beings exist who can be channeled, or that tulpas possess a significant subset of the properties which constitute person-ness independent of the person whose tulpa they are.

A newborn infant or Alzheimer's patient is not an independent individual distinct from its caretaker either.

Eh?

I can take a newborn infant away from its caretaker and hand it to a different caretaker... or to no caretaker at all... or to several caretakers. I would say it remains the same newborn infant. The caretaker can die, and the newborn infant continues to live; and vice-versa.

That seems to me sufficient justification (not necessary, but sufficient) to call it an independent individual.

Why do you say it isn't?

Do you count their destruction as property crime as well?

I count it as less like a property crime than destroying a tulpa, a novel, or an RPG character. There are things I count it as more like a property crime than.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-07T16:52:57.176Z · LW(p) · GW(p)

Seems I was wrong about you not understanding the word thing. Apologies.

You keep saying that word "independent". I'm starting to think we might not disagree about any objective properties of tulpas, just about whether things need to be "independent" or whether only the most important ones count towards your utility, whereas I just add up the identifiable patterns without caring whether they overlap. Metaphor: tulpas are "10101101", you're saying "101" occurs 2 times, I'm saying "101" occurs 3 times.

I'm fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you. If I believed that doing that would predictably shift my beliefs I'd already have those beliefs. Conservation of Expected Evidence.

((You can move a tulpa between minds too, probably, it just requires a lot of high tech, unethical surgery, and work. And probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-07T17:20:26.554Z · LW(p) · GW(p)

You keep saying that word "independent".

(shrug) Well, I certainly agree that when I interact with a tulpa, I am interacting with a person... specifically, I'm interacting with the person whose tulpa it is, just as I am when I interact with a PC in an RPG.

What I disagree with is the claim that the tulpa has the moral status of a person (even a newborn person) independent of the moral status of the person whose tulpa it is.

I'm fairly certain talking to bodies that claim those things would not change my probability estimates on those claims unless powerful brainwashing techniques were used, and I certainly hope the same is the case for you.

On what grounds do you believe that? As I say, I observe that such experiences frequently convince other people; without some grounds for believing that I'm relevantly different from other people, my prior (your hopes notwithstanding) is that they stand a good chance of convincing me too. Ditto for talking to a tulpa.

((You can move a tulpa between minds too, probably, it just requires a lot of high tech, unethical surgery, and work. And probably gives the old host permanent severe brain damage. Same as with any other kind of incommunicable memory.))

(shrug) I don't deny this (though I'm not convinced of it either) but I don't see the relevance of it.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-07T18:53:41.042Z · LW(p) · GW(p)

Yea this seems to definitely be just a fundamental values conflict. Let's just end the conversation here.

comment by ChristianKl · 2013-11-08T16:05:18.454Z · LW(p) · GW(p)

What do you think about the moral status of torturing an uploaded human mind that's in silicon?

Does that mind have a different moral status than one in a brain?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-08T16:10:45.841Z · LW(p) · GW(p)

Certainly not by virtue of being implemented in silicon, no. Why do you ask?

comment by hylleddin · 2013-11-08T01:40:30.113Z · LW(p) · GW(p)

As someone with personal experience with a tulpa, I agree with most of this.

I estimates it's ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.

I estimates a well developed tulpas moral status to be similar to that of a newborn infant, late-stage alzheimer's victim, dolphin, or beloved family pet dog.

I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.

I estimate it's power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence, rather than lower intelligence.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-08T17:05:10.957Z · LW(p) · GW(p)

You are probably counting more properties things can vary under as "ontological". I'm mostly doing a software vs. hardware, need to be puppeteered vs. automatic, and able to interact with environment vs. stuck in a simulation, here.

I'm basing the moral status largely on "well realized", "complex" and "technically sentient" here. You'll notice all my examples ALSO have the actual utility function multiplier at "unknown".

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host, and thus counts towards its power over reality.

Replies from: hylleddin
comment by hylleddin · 2013-11-08T22:43:03.348Z · LW(p) · GW(p)

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host, and thus counts towards its power over reality.

Ah. I see what you mean. That makes sense.

comment by ChristianKl · 2013-11-08T15:51:57.381Z · LW(p) · GW(p)

Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.

That brings up a moral question. To what extent is it immoral to create a Tulpa and have it be in pain?

Tulpas are supposed to suffer from not getting enough attention, so if you can't commit to giving it a lot of attention for the rest of your life you might commit an immoral act by creating it.

Replies from: Armok_GoB, Lumifer
comment by Armok_GoB · 2013-11-08T17:11:38.311Z · LW(p) · GW(p)

Just some facts, without getting entangled in the argument: In anecdotes, tulpas seem to report more abstract and less intense types of suffering than humans. The by far dominant source of suffering in tulpas seems to be via empathy with the host. The suffering from not getting enough attention is probably fully explainable by loneliness, and sadness over fading away and losing the ability to think and do things.

Replies from: Vulture
comment by Vulture · 2013-11-10T02:54:36.361Z · LW(p) · GW(p)

This is very useful information if true. Could you link to some of the anecdotes which you draw this from?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-11-10T21:49:14.502Z · LW(p) · GW(p)

Look around yourself on http://www.reddit.com/r/Tulpas/ or ask for some yourself in the various IRC rooms that can be reached from there. I only have vague memories built from threads buried months back on that subreddit.

comment by Lumifer · 2013-11-08T16:18:05.690Z · LW(p) · GW(p)

Tulpa creation is effectively the creation of a form of sentient AI that runs on the hardware of your brain instead of silicon.

No, I don't think so. It's notably missing the "artificial" part of AI.

I think of tulpa creation as splitting off a shard of your own mind. It's still your own mind, only split now.

Replies from: Vulture, ChristianKl
comment by Vulture · 2013-11-10T02:52:10.203Z · LW(p) · GW(p)

I think the really relevant ethical question is whether a tulpa has a separate consciousness from its host. From my own researches in the area (which have been very casual, mind you), I consider it highly unlikely that they have separate consciousness, but not so unlikely that I would be willing to create a tulpa and then let it die, for example.

In fact, my uncertainty on this issue is the main reason I am ambivalent about creating a tulpa. It seems like it would be very useful: I solve problems much better when working with other people, even if they don't contribute much; a tulpa more virtuous than myself could be a potent tool for self-improvement; it could help ameliorate the "fear of social isolation" obstacle to potential ambitious projects; I would gain a better understanding of how tulpas work; I could practice dancing and shaking hands more often; etc. etc. But I worry about being responsible for what may be (even with only ~15% subjective probability) a conscious mind, which will then literally die if I don't spend time with it regularly (ref).

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-10T04:10:40.782Z · LW(p) · GW(p)

Just to clarify this a little... how many separate consciousnesses do you estimate your brain currently hosts?

Replies from: Vulture
comment by Vulture · 2013-11-10T05:11:21.962Z · LW(p) · GW(p)

By my current (layman's) understanding of consciousness, my brain currently hosts exactly one.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-10T14:00:24.088Z · LW(p) · GW(p)

OK, thanks.

comment by ChristianKl · 2013-11-08T16:32:43.407Z · LW(p) · GW(p)

No, I don't think so. It's notably missing the "artificial" part of AI.

It's not your normal mind, so it's artificial for ethical considerations.

I think of tulpa creation as splitting off a shard of your own mind. It's still your own mind, only split now.

As far as I read stuff written by people with Tulpas, they treat them as entities whose desires matter.

Replies from: Vulture, Lumifer
comment by Vulture · 2013-11-10T02:53:18.328Z · LW(p) · GW(p)

It's not your normal mind, so it's artificial for ethical considerations.

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

Replies from: ChristianKl
comment by ChristianKl · 2013-11-10T15:36:35.597Z · LW(p) · GW(p)

This might be a stupid question, but what ethical considerations are different for an "artificial" mind?

When talking about AGI few people label it as murder to shut down the AI that's in the box. At least it's worth a discussion whether it is.

Replies from: army1987, Vulture
comment by A1987dM (army1987) · 2013-11-11T20:16:51.065Z · LW(p) · GW(p)

Only if it's not sapient, which is a non-trivial question.

Replies from: Vulture
comment by Vulture · 2013-11-12T04:35:23.823Z · LW(p) · GW(p)

Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.

comment by Vulture · 2013-11-10T20:27:59.344Z · LW(p) · GW(p)

Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.

Replies from: ChristianKl, TheOtherDave
comment by ChristianKl · 2013-11-11T16:12:31.654Z · LW(p) · GW(p)

"Sufficiently accurate simulation of consciousness" is a subset of set of things that are artificial minds. You might have a consensus for that class. I don't think you have an understanding that all minds have the same moral value. Even all minds with a certain level of intelligence.

Replies from: Vulture
comment by Vulture · 2013-11-11T19:03:12.435Z · LW(p) · GW(p)

At least for me, personally, the relevant property for moral status is whether it has consciousness.

comment by TheOtherDave · 2013-11-11T02:32:42.354Z · LW(p) · GW(p)

That's my understanding as well.... though I would say, rather, that being artificial is not a particularly important attribute towards evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as other consciousnesses with the same properties. That said, I also think this whole "a tulpa {is,isn't} an artificial intelligence" discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don't think it matters much in context.

comment by Lumifer · 2013-11-08T16:47:20.522Z · LW(p) · GW(p)

It's not your normal mind, so it's artificial for ethical considerations.

I don't find this argument convincing.

As far as I read stuff written by people with Tulpas, they treat them as entities whose desires matter.

Yes, and..?

Let me quote William Gibson here:

Addictions ... started out like magical pets, pocket monsters. They did extraordinary tricks, showed you things you hadn't seen, were fun. But came, through some gradual dire alchemy, to make decisions for you. Eventually, they were making your most crucial life-decisions. And they were ... less intelligent than goldfish.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-08T16:52:55.590Z · LW(p) · GW(p)

Yes, and..?

There's a good chance that you will also hold that belief when you interact with the Tulpa on a daily basis. As such, it makes sense to think about the implications of the whole affair before creating one.

Replies from: Lumifer
comment by Lumifer · 2013-11-08T17:12:17.646Z · LW(p) · GW(p)

I still don't see what you are getting at. If I treat a tulpa as a shard of my own mind, of course its desires matter; they're the desires of my own mind.

Think of having an internal dialogue with yourself. I think of tulpas as a boosted/uplifted version of a party in that internal dialogue.

comment by IlyaShpitser · 2013-11-08T15:48:56.312Z · LW(p) · GW(p)

Well, if you think that the human illusion of unified agency is a good ideal to strive for, it then seems that messing around w/ tulpas is a bad thing. If you have really seriously abandoned that ideal (very few people I know have), then knock yourself out!

Replies from: Vulture
comment by Vulture · 2013-11-10T05:30:50.605Z · LW(p) · GW(p)

Why would it be considered important to maintain a feeling of unified agency?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-11-10T14:04:59.219Z · LW(p) · GW(p)

Is this a serious question? Everything in our society, from laws to social conventions, is based on unified agency.

The consequentialist view of rationality as expressed here seems to be based on the notion of unified agency of people (the notion of a single utility function is only coherent for unified agents).


It's fine if you don't want to maintain unified agency, but it's obviously an important concept for a lot of people. I have not met a single person who has truly abandoned this concept in their life, interactions with others, etc. The conventional view is that someone without unified agency has demons to be cast out ("my name is Legion, for we are many.")

Replies from: Vulture
comment by Vulture · 2013-11-10T22:35:34.730Z · LW(p) · GW(p)

By "agency", are you referring to physical control of the body? As far as I can tell, the process of "switching" (allowing the tulpa to control the host's body temporarily) is a very rare process which is a good deal more difficult than just creating a tulpa, and which many people who have tulpas cannot do at all even if they try.

comment by Vulture · 2013-11-08T04:56:25.921Z · LW(p) · GW(p)

Welp, look at that, I just found this thread after finishing up a long comment on the subject in an older thread. Go figure. (By the way, I do recommend reading that entire discussion, which included some actual tulpas chiming in).

comment by Lumifer · 2013-11-06T21:44:11.444Z · LW(p) · GW(p)

I don't know any obvious reason not to

A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.

I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don't think schizophrenia is fun.

Replies from: Adele_L
comment by Adele_L · 2013-11-07T20:40:26.991Z · LW(p) · GW(p)

A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.

This is a concern I share. However...

I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don't think schizophrenia is fun.

This is the worst argument in the world.

Replies from: Lumifer
comment by Lumifer · 2013-11-07T21:01:22.864Z · LW(p) · GW(p)

This is the worst argument in the world.

I don't think so, it can be rephrased tabooing emotional words. I am not trying to attach some stigma of mental illness, I'm pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder and that it has significant mental costs.

Replies from: Kaj_Sotala, ChristianKl
comment by Kaj_Sotala · 2014-01-02T14:47:20.406Z · LW(p) · GW(p)

I'm pointing out that tulpas are basically a self-inflicted case of what the medical profession calls dissociative identity disorder and that it has significant mental costs.

Taylor et al. claim that although people who exhibit the illusion of independent agency do score higher than the population norm on a screening test of dissociative symptoms, the profile on the most diagnostic items is different from that of DID patients, and scores on the test do not predict IIA:

The writers also scored higher than general population norms on the Dissociative Experiences Scale. The mean score across all 28 items on the DES in our sample of writers was 18.52 (SD = 16.07), ranging from a minimum of 1.43 to a maximum of 42.14. This mean is significantly higher than the average DES score of 7.8 found in a general population sample of 415 [27], t(48) = 8.05, p < .001.

In fact, the writers' scores are closer to the average DES score for a sample of 61 schizophrenics (schizophrenic M = 17.7) [27]. Seven of the writers scored at or above 30, a commonly used cutoff for "normal scores" [29]. There was no difference between men's and women's overall DES scores in our sample, a finding consistent with results found in other studies of normal populations [26].

With these comparisons, our goal is to highlight the unusually high scores for our writers, not to suggest that they were psychologically unhealthy. Although scores of 30 or above are more common among people with dissociative disorders (such as Dissociative Identity Disorder), scoring in this range does not guarantee that the person has a dissociative disorder, nor does it constitute a diagnosis of a dissociative disorder [27,29]. Looking at the different subscales of the DES, it is clear that our writers deviated from the norm mainly on items related to the absorption and changeability factor of the DES. Average scores on this subscale (M = 26.22, SD = 14.45) were significantly different from scores on the two subscales that are particularly diagnostic for dissociative disorders: the derealization and depersonalization subscale (M = 7.84, SD = 7.39) and the amnestic experiences subscale (M = 6.80, SD = 8.30), F(1,48) = 112.49, p < .001. These latter two subscales did not differ from each other, F(1,48) = .656, p = .42. Seventeen writers scored above 30 on the absorption and changeability scale, whereas only one writer scored above 30 on the derealization and depersonalization scale and only one writer (a different participant) scored above 30 on the amnestic experiences scale.

A regression analysis using the IRI subscales (fantasy, empathic concern, perspective taking, and personal distress) and the DES subscales (absorption and changeability, amnestic experiences, and derealization and depersonalization) to predict overall IIA was run. The overall model was not significant, r^2 = .22, F(7, 41) = 1.63, p = .15. However, writers who had higher IIA scores scored higher on the fantasy subscale of IRI, b = .333, t(48) = 2.04, p < .05 and marginally lower on the empathic concern subscale, b = -.351, t(48) = -1.82, p < .10 (all betas are standardized). Because not all of the items on the DES are included in one of the three subscales, we also ran a regression model predicting overall IIA from the mean score across DES items. Neither the r^2 nor the standardized beta for total DES scores was significant in this analysis.

comment by ChristianKl · 2013-11-08T15:50:59.051Z · LW(p) · GW(p)

Could you describe the relevant mental costs that you would expect as a sideeffect of creating a tulpa?

Replies from: Lumifer
comment by Lumifer · 2013-11-08T16:14:46.627Z · LW(p) · GW(p)

Loss of control over your mind.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-08T16:34:07.855Z · LW(p) · GW(p)

What does that mean?

Replies from: Lumifer
comment by Lumifer · 2013-11-08T16:48:48.456Z · LW(p) · GW(p)

An entirely literal reading of that phrase.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-08T16:50:26.905Z · LW(p) · GW(p)

So you mean that you are something that's separate from your mind? If so, what's you and how does it control the mind?

Replies from: Lumifer
comment by Lumifer · 2013-11-08T17:08:23.328Z · LW(p) · GW(p)

Your mind is a very complicated entity. It has been suggested that looking at it as a network (or an ecology) of multiple agents is a more useful view than thinking about it as something monolithic.

In particular, your reasoning consciousness is very much not the only agent in your mind and is not the only controller. An early example of such analysis is Freud's distinction between the id, the ego, and the superego.

Usually, though, your conscious self has sufficient control in day-to-day activities. This control breaks down, for example, under severe emotional stress. Or it can be subverted (cf. problems with maintaining diets). The point is that it's not absolute and you can have more of it or less of it. People with less are often described as having "poor impulse control" but that's not the only mode. Addiction would be another example.

So what I mean here is that the part of your mind that you think of as "I", the one that does conscious reasoning, will have less control over yourself.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-08T17:12:58.985Z · LW(p) · GW(p)

So what I mean here is that the part of your mind that you think of as "I", the one that does conscious reasoning, will have less control over yourself.

So you mean having less willpower and impulse control?

Replies from: Lumifer
comment by Lumifer · 2013-11-08T17:18:05.384Z · LW(p) · GW(p)

Not only, I mean a wider loss of control.

For example someone who is having hallucinations is usually powerless to stop them. She lost control and it's not exactly an issue of willpower.

If you're scared, your body dumps a lot of adrenaline into your blood and you are shaking, your hands are trembling, and you can't think straight. You're on the verge of losing control, and again it's not really a matter of controlling your impulses.

Replies from: Vulture
comment by Vulture · 2013-11-10T05:32:19.830Z · LW(p) · GW(p)

My understanding is that in the case of tulpas, the hallucinations are voluntary and can be stopped and started at will.

comment by Mitchell_Porter · 2013-11-06T20:36:37.790Z · LW(p) · GW(p)

While you make your tulpa, you may also want to investigate whether you are a reincarnated Nazi or a good reptilian.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-06T20:54:46.389Z · LW(p) · GW(p)

I really want to downvote this for being mean.
Except that I laughed so hard I spit coffee out my nose.

Well, OK, I sort of want to downvote this for making me spit coffee out my nose, too.
But now I no longer trust my impartiality.

comment by CronoDAS · 2013-11-02T23:59:18.004Z · LW(p) · GW(p)

If I made a game in RPG Maker, would anyone actually play it?

::is trying to decide whether or not to attempt a long-term project with uncertain rewards::

Replies from: lmm, ChristianKl, Protagoras
comment by lmm · 2013-11-03T01:10:48.241Z · LW(p) · GW(p)

Only if I heard particularly good things about it.

Most creative endeavors you could undertake have a very small chance of leading to external reward, even the validation of people reading/watching/playing them - there's simply too much content available these days for people to read yours. So I'd advise against making such a thing, unless you find making it to be rewarding enough in itself.

Replies from: CronoDAS
comment by CronoDAS · 2013-11-03T02:06:14.020Z · LW(p) · GW(p)

Would you have given Alicorn the same advice if she had asked for it before writing "Luminosity"?

Replies from: lmm, Dorikka
comment by lmm · 2013-11-03T17:07:47.853Z · LW(p) · GW(p)

Yes. Do you think I would have been wrong?

Replies from: CronoDAS
comment by CronoDAS · 2013-11-03T20:02:55.692Z · LW(p) · GW(p)

/me shrugs

It seems to have found an audience.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-11-04T17:26:37.894Z · LW(p) · GW(p)

Obviously some works will always be popular. That doesn't change the fact that the prior odds for any particular one doing so are very low.

comment by Dorikka · 2013-11-04T20:48:37.380Z · LW(p) · GW(p)

Think that would have been a good move. Advice can be pretty good at presenting the outside view -- isn't as good at presenting the inside view unless the advice-giver really knows the advice-receiver well (ETA: meaning relevant details to receiver's specific case, etc.). Receiver should keep this in mind and update on relevant evidence (especially inside view evidence) that giver likely did not take into account.

comment by ChristianKl · 2013-11-03T07:56:09.320Z · LW(p) · GW(p)

What do you hope to achieve? Making money through selling the game? Artistic expression? Pushing memes?

Replies from: CronoDAS
comment by CronoDAS · 2013-11-03T15:59:35.642Z · LW(p) · GW(p)

My underlying motivation is to feel better about myself. I feel that my life so far has lacked meaningful achievements. Pushing memes is a side benefit.

I do not expect to make money by selling the game, but if I do manage to make something that turns out to be pretty good, I think it would be a big help in getting a job in the video game industry.

comment by Protagoras · 2013-11-03T00:45:52.609Z · LW(p) · GW(p)

I've played several RPG maker games made by amateurs. Some of them seemed to have significant followings, though I wasn't interested enough to make a serious effort to estimate the numbers, since I wasn't the creator. What kind of game were you thinking of making?

Replies from: CronoDAS
comment by CronoDAS · 2013-11-03T02:27:21.941Z · LW(p) · GW(p)

I have a game I've been fantasizing about and I think I could make it work. It has to be a game, not a story, because I want to pull a kind of trick on the player. It's not that unusual in fiction for a character to start out on the side of the "bad guys", have a realization that his side is the one that's bad, and then go on to save the day. (James Cameron's Avatar is a recent example.) I want to start the player out on the side of bad guys that appear good, as in Eliezer's short story "The Sword of Good", and then give the player the opportunity to fail to realize that he's on the wrong side. There would be two main story branches: a default one, and one that the player can only get to by going "off-script", as it were, and not going along with what it seems like you have to do to continue the story. (At the end of the default path, the player would be shown a montage of the times he had the chance to do the right thing, but chose not to.)

The actual story would be something like the anti-Avatar; a technological civilization is encroaching on a region inhabited by magic-using, nature-spirit-worshiping nomads. The nature spirits are EVIL (think: "nature, red in tooth and claw") and resort to more and more drastic measures to try to hold back the technological civilization, in which people's lives are actually much better.

Does this sound appealing?
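
As a rough sketch of the bookkeeping that branching structure needs (Python standing in for RPG Maker's switches and events, with invented flag names, so treat it as an illustration rather than a design): track each off-script opportunity the player takes, pick the ending branch from those flags, and replay the ignored ones as the closing montage.

```python
# Invented opportunity names; an RPG Maker project would use switches instead.
OFF_SCRIPT_CHANCES = ["spare_the_scout", "read_the_captured_letters",
                      "question_the_shaman"]

class Playthrough:
    def __init__(self):
        self.taken = set()  # off-script choices the player actually made

    def record(self, chance):
        self.taken.add(chance)

    def ending(self):
        # The hidden branch only unlocks if the player went off-script enough.
        return "defector_ending" if len(self.taken) >= 2 else "default_ending"

    def montage(self):
        # On the default ending, replay every chance the player passed up.
        return [c for c in OFF_SCRIPT_CHANCES if c not in self.taken]

run = Playthrough()
run.record("spare_the_scout")
print(run.ending())   # default_ending
print(run.montage())  # ['read_the_captured_letters', 'question_the_shaman']
```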

Replies from: Risto_Saarelma, Moss_Piglet, Armok_GoB, passive_fist, KaynanK, witzvo, ephion, Lumifer
comment by Risto_Saarelma · 2013-11-03T08:54:38.324Z · LW(p) · GW(p)

That sounds fun, and something that'd actually translate nicely to the RPG Maker template. It's also something that takes skill to pull off well: you'll need to play with how the player will initially frame the stuff you show to be going on, and how it should actually be interpreted. Not coming off as heavy-handed is going to be tricky. Also, pulling this off is based on knowing how to use the medium, so if this is the first RPG Maker thing you're going to be doing, it's going to be particularly challenging.

There might also be a disconnect between games and movies here. Movies tend to always go out of their way to portray the protagonist's side as good, while games have a lot more of just semi-symmetric opposing factions. You get to play as the kill-happy Zerg or Undead Horde, and nobody pretends you're siding with the noble savages against the inhuman oppressors. So the players might just go, "ooh, I'm the Zerg, cool!" or "I guess I'm supposed to defect from Zerg to Terran here".

Random other thoughts: Battlezone 2 has a similar plot twist requiring off-script player action, though both factions are high-tech. Dominions 4 has Asphodel, which is a neat corrupted nature spirit faction. Though I'm guessing you're going for nature just being inherently bastards instead of the more common corrupted-nature-strikes-back trope.

Also, games really train people to stay on the script nowadays. Games that let you go rogue with an actual in-game-world action instead of choosing 'yes' on the blinking "DEFECT TO TERRAN SIDE" dialog are rare, since letting the player go off the script in-game and meaningfully interpreting their actions is really hard in the general case, and really frustrating for the player if they have to guess the particular special case where the off-script action actually opens a different plot branch instead of just leading nowhere like it did in the 10 previous levels. The original Deus Ex did have bits where you could mitigate the shit your actually-evil early-game employers were pulling with quick in-game thinking, but going over to the rebels was still always in the script.

So, overall, challenging project. You need to figure out RPG Maker and where to get the art assets and such, if you're not already skilled with it, you need to do worldbuilding for two worlds, and neither can be a cardboard cutout for the conceit to work, and you need to figure out how to make the game narration work so that the player can both get effectively tricked and has all the necessary pieces to put together the alternative choice during play.

Replies from: philh, CronoDAS
comment by philh · 2013-11-03T15:29:42.426Z · LW(p) · GW(p)

Also, games really train people to stay on the script nowadays.

When I played Zelda games, I would always work out what option I was supposed to take, then take the other one, confident that I would get to see a few extra lines of dialogue before being presented with the same option again.

(I say "always", but when I first played, I would carefully make the correct choice, for fear that something bad would happen if I didn't agree to help Zelda. I don't remember when I developed the opposite habit.)

comment by CronoDAS · 2013-11-03T14:26:59.001Z · LW(p) · GW(p)

Yeah, it'll be hard. Right now I haven't worked out much more than the basic concept; I'd have a lot of writing to do, in addition to level design, learning RPG Maker, and so on.

As for art, RPG Maker does come with some built-in art and offers some more in expansion packs. If I have to, I can use placeholder art from the built-in assets and find some way to replace it once I'm happy with everything else.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-11-03T15:02:06.841Z · LW(p) · GW(p)

Have you thought about how much time you are ready to put into the project? I'd ballpark the timescale for this as at least two years if you work on this alone, aren't becoming a full-time game developer and want to put a large-scale competent CRPG together.

EDIT: I'm guessing this would look something like what Zeboyd Games puts out. They had a two-man team working full-time and took three months to make the short and simple Breath of Death. I didn't manage to find information on how long their more recent, bigger games took to develop, but they seem to have released around one game a year since.

Replies from: CronoDAS
comment by CronoDAS · 2013-11-03T16:50:06.364Z · LW(p) · GW(p)

Honestly, I'd probably start by trying to throw something much simpler together with RPG Maker, just to learn the system and see what it's like. And I don't actually have a "real job", so the amount of time I spend is mostly limited by my own patience.

And using RPG Maker might help speed up the technical work.

comment by Moss_Piglet · 2013-11-04T03:31:47.075Z · LW(p) · GW(p)

I like the idea, mainly because I spent most of Avatar rooting for Quaritch (easily the biggest badass in the last decade of cinema), but it seems like there's another way to do it that might have a bit more power:

Why not have them both be "right," according to their own value systems anyway, and then have the end-game slideshow in both branches tell the player the story of what they did from the perspective of the other side?

In terms of workload, it seems minimal; from a story perspective you already need both sides to have sympathetic and unsavory elements anyway, while from a design perspective all you need to add is a second set of narration captions for the slideshow contingent on which side the player supported.

And in terms of appeal, it certainly seems more engaging than most AAA games. Spec Ops: The Line proved that players are masochists and that throwing guilt trips at them is a great way to get sales and good reviews, while Mass Effect 3's failure shows that genuine choice in endings is pretty important for a game built on moral choice.

Replies from: CronoDAS
comment by CronoDAS · 2013-11-05T03:19:43.434Z · LW(p) · GW(p)

Why not have them both be "right," according to their own value systems anyway, and then have the end-game slideshow in both branches tell the player the story of what they did from the perspective of the other side?

This would ruin the point I'm trying to make.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-05T09:19:48.366Z · LW(p) · GW(p)

You don't have to make both branches equivalent. Both of them could feel "right" from inside, but only one of them could contain information which makes the other one wrong.

In one ending, the hero only has limited information, and based on that limited information, the hero thinks they made the right choice. Sure, some things went wrong, but the hero considers that a necessary evil.

In the other ending, the hero has more information, and now it is obvious that this choice was right, and that all the good feelings in the other branch came merely from a lack of information or reasoning.

This way, if you only saw the first ending, you would think it is the good one, but if you saw both of them, it would be obvious the second one is the good one.

Replies from: drethelin
comment by drethelin · 2013-11-06T19:54:04.044Z · LW(p) · GW(p)

I like this idea but it seems hard to differentiate between "You did what you thought was right but you need to be more careful about what you believe" and "you got the bad ending because you missed this little thing", which is something many games have done before.

An example is Iji, where the game plays out significantly differently if you make a moral decision not to kill, but if you take the default path it doesn't let you know you could've chosen to be peaceful the whole time. It involves an active decision rather than a secret thing you can miss, but it also doesn't frame it as a "MORAL CHOICE TIME GO" moment.

comment by Armok_GoB · 2013-11-04T02:19:04.213Z · LW(p) · GW(p)

That sounds awesome... except now that I know about that twist, it's ruined. And if you publish it under a different name and don't reveal it, it won't sound awesome, so I'll never discover it.

The only way to do this justice would be to nag enough people into playing it that they can insist it's better than it sounds and that someone should really play it for reasons they can't spoil.

Replies from: CronoDAS
comment by CronoDAS · 2013-11-05T03:11:14.796Z · LW(p) · GW(p)

/me shrugs

For some reason, people still like games such as Bioshock and Spec Ops: The Line after knowing about their twists...

comment by passive_fist · 2013-11-03T05:17:11.447Z · LW(p) · GW(p)

It sounds very appealing to me, but as KaynanK pointed out, you have to be very careful about keeping the twist secret. To this end, I'd suggest not revealing to the players that they could have gone off-script, unless they do.

comment by Multicore (KaynanK) · 2013-11-03T03:53:10.809Z · LW(p) · GW(p)

It seems like an interesting story idea, but, of course, the twist can't be revealed to any prospective player without spoiling it, so it might seem cliched on the surface.

comment by witzvo · 2013-11-04T19:55:47.218Z · LW(p) · GW(p)

As an example of a flash game with similar story branches (albeit a pretty different plot), there's endeavor.

comment by ephion · 2013-11-04T18:25:32.505Z · LW(p) · GW(p)

That does sound very appealing. I'm not well versed at all in game creation, but I do remember playing with RPG maker a few years ago and it was pretty limiting. Rather than RPG maker, why not make a Skyrim mod? That would be much more fun to play and would have a much larger potential userbase.

Replies from: CronoDAS
comment by CronoDAS · 2013-11-05T02:56:05.328Z · LW(p) · GW(p)

JRPGs are what I know best.

(And RPG Maker makes standalone programs; someone who wants to play an RPG Maker game doesn't need to have RPG Maker themselves.)

comment by Lumifer · 2013-11-04T04:13:49.304Z · LW(p) · GW(p)

Does this sound appealing?

Well, that's just the twist idea, but what's your framework? Are you thinking about first-person shooters (Deus Ex style, for example) or about tactical turn-based RPGs or about 2-D platformers or what?

Replies from: CronoDAS
comment by CronoDAS · 2013-11-05T02:57:03.165Z · LW(p) · GW(p)

RPG Maker, by default, makes games that look like SNES-era JRPGs.

comment by fubarobfusco · 2013-11-06T17:24:02.253Z · LW(p) · GW(p)

http://www.refsmmat.com/statistics/

Statistics Done Wrong is a guide to the most popular statistical errors and slip-ups committed by scientists every day, in the lab and in peer-reviewed journals. Many of the errors are prevalent in vast swathes of the published literature, casting doubt on the findings of thousands of papers. Statistics Done Wrong assumes no prior knowledge of statistics, so you can read it before your first statistics course or after thirty years of scientific practice.

comment by Tenoke · 2013-11-03T10:42:09.813Z · LW(p) · GW(p)

Not particularly important, but if anyone wants to come out and tell me why they went on a mass-downvoting spree on my comments, please feel free to do so.

Replies from: Tenoke
comment by Tenoke · 2014-07-08T07:17:43.879Z · LW(p) · GW(p)

test

comment by Omid · 2013-11-06T16:31:52.592Z · LW(p) · GW(p)

Has anyone else had this happen to them?

  • You got into an argument with a coworker (or someone else you see regularly). You had a bitter falling out.
  • You were required to be around them again (maybe due to work, or whatever). You make awkward small-talk but it's still clear you hate each other.
  • You continue to make awkward small talk anyway, pretending that it doesn't make you uncomfortable.
  • Your enemy reciprocates. The two of you begin to climb the intimate conversations ladder.
  • Both of you act like friends. But, at least from your end, it's not clear if you really are friends. Neither one of you has apologized, nor have you agreed to disagree, or really made any commitment to end hostility. You have no idea whether your enemy has moved on from your fight, and is ready to resume friendship; or if they're simply carrying on a charade of friendship like you.
  • Conversations with this person become really awkward, as you're not sure whether to engage the "enemy-whom-I-treat-like-a-friend-just-to-act-civilized" protocol or the "real friend" protocol.

Any advice? Am I the only one that's experienced this?

Replies from: niceguyanon, shminux, TheOtherDave, ChristianKl
comment by niceguyanon · 2013-11-06T22:18:06.600Z · LW(p) · GW(p)

It looks like you have an unspoken treaty of non-hostility. People don't just forget those kinds of things; you didn't. My advice is to make good with the person and acknowledge your prior differences; it will be less awkward going forward and you'll gain his/her respect. And who knows, they might even gain your respect. Friends, for the most part, are better than enemies.

comment by Shmi (shminux) · 2013-11-06T20:10:53.242Z · LW(p) · GW(p)

You got into an argument with a coworker (or someone else you see regularly). You had a bitter falling out.

"It takes two to tangle" and such. Is the reason for the falling out still there? Or is the residual hate just one of those lost purposes?

Replies from: PECOS-9, Omid
comment by Omid · 2013-11-07T00:06:55.217Z · LW(p) · GW(p)

Yes, I'm still angry with him. He did something cruel to someone weak, and he got angry with me for saying that was wrong. I wish I could delete him from my life but he works near me.

comment by TheOtherDave · 2013-11-06T17:14:58.323Z · LW(p) · GW(p)

I've experienced variations on the theme.
My usual approach is to decide whether I value treating them as an enemy for some reason. If I do, then I continue to do so (which can include pretending to treat them like a friend, depending on the situation). If I don't, then I move on. Whether they've actually moved on or not is their problem.

comment by ChristianKl · 2013-11-08T16:57:52.890Z · LW(p) · GW(p)

I generally don't think it makes much sense to label other people as enemies.

comment by hyporational · 2013-11-06T03:56:53.040Z · LW(p) · GW(p)

Was some change made in the lw code in the past couple of weeks or so? I can't browse this site with my android smartphone anymore, have tried several browsers. The site either frequently freezes the browser or shows a blank page after the page has finished loading. This happens more with bigger threads.

Anyone else having this problem?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-06T17:45:34.208Z · LW(p) · GW(p)

I have the same problem for pages like recent posts, which look OK at first, but then become blank. Article pages are more likely to load correctly. Solution: turn off javascript. (Android 2.2)

Replies from: hyporational
comment by hyporational · 2013-11-07T02:03:07.421Z · LW(p) · GW(p)

Thanks. This obviously disables a lot of functionality. Another fix I found for the blank page problem is to simply interrupt the loading of the page once you start seeing stuff.

comment by A1987dM (army1987) · 2013-11-03T09:24:04.079Z · LW(p) · GW(p)

Do you think there should be a new LW survey soon?

[pollid:574]

Replies from: gwern
comment by gwern · 2013-11-03T17:04:13.499Z · LW(p) · GW(p)

If Yvain is (understandably) too busy to run it this year, I am willing to do it. But I will be making changes if I do it, including reducing the number of free responses and including a basilisk question.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2013-11-04T00:53:14.367Z · LW(p) · GW(p)

Give me a few days to see if I can throw something together and otherwise I will turn it over to your capable hands (reluctantly; I hate change).

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-04T21:24:36.844Z · LW(p) · GW(p)

Have you started doing modafinil or something by any chance?

comment by bramflakes · 2013-11-02T19:22:56.226Z · LW(p) · GW(p)

How do I decrease my time-preference?

Replies from: None, hyporational, savageorange
comment by [deleted] · 2013-11-03T01:01:45.429Z · LW(p) · GW(p)

Read about hyperbolic discounting, if you haven't already.

Assuming a conflict between short- and long-term decisions, the general advice is to mentally bundle a given short-term decision with all similar decisions that will occur in the future. For example, you might think of an unhealthy snack tonight as "representing" the decision to eat an unhealthy snack every night.
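
(A minimal sketch of the underlying math, in Python: hyperbolic discount curves cross as the delay shrinks, which is where preference reversals come from. The values and discount parameters below are made up purely for illustration.)

```python
# Hyperbolic discounting: perceived value = value / (1 + k * delay).
# Exponential discounting, by contrast, never reverses a ranking.

def hyperbolic(value, delay, k=1.0):
    """Perceived value of a reward arriving after `delay` days (illustrative k)."""
    return value / (1.0 + k * delay)

def exponential(value, delay, r=0.1):
    """Time-consistent alternative: rankings never flip."""
    return value * (1.0 - r) ** delay

snack_now = lambda t: hyperbolic(10, t)         # small reward available at time t
health_later = lambda t: hyperbolic(15, t + 3)  # larger reward 3 days after t

# Judged 10 days in advance, the larger, later reward looks better...
print(snack_now(10), health_later(10))   # ~0.91 vs ~1.07
# ...but at the moment of choice, the immediate snack wins.
print(snack_now(0), health_later(0))     # 10.0 vs 3.75

# With exponential discounting the ranking stays the same at both times:
print(exponential(10, 10), exponential(15, 13))  # ~3.49 vs ~3.81
print(exponential(10, 0), exponential(15, 3))    # 10.0 vs ~10.94
```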

comment by hyporational · 2013-11-03T05:55:39.905Z · LW(p) · GW(p)

Optimize your environment for decreased time-preference when you have the most control:

Fill your refrigerator when you're not hungry. Apply effortful-to-dismantle restrictions on your computer when you're not bored and tired. Walk to the university library to study so it takes effort to come back home to your hobbies.

I'd like to read and collect other similar strategies for my toolbox.

ETA: I just realized I do this for exercise too. There's a lake near my house with a circumference of about 6 kilometers, and I go jogging around it frequently. I have a strong desire to quit once I've gotten to the other side, but I have no choice but to run the whole route at that point. Sometimes I decide to walk the other half, but I guess it's better than nothing. Another option would be to just run in one direction and then back, but I find the idea too boring, even if I change my route a bit.

comment by savageorange · 2013-11-02T19:48:51.727Z · LW(p) · GW(p)

-- Time less ;)

-- this question feels like it's missing a word or two. What does time-preference mean?

EDIT: Thanks, Arundelo. So basically, time preference ~= level of short-sighted optimization.

In that case, do some projects that strictly require long-sighted optimization. A deadline is one good tool; telling others what you're doing (in an unequivocal way, so that the greatest disappointment/irritation/harassment is achieved if you fall short) is another. Of course these tools are nothing new; the point is to increase the pressure as high as you can stand and to reduce the amount of 'slack' time you have to allocate to a minimum.

On a more meta level, you can try things like doing some mindfulness meditation every day, which I personally find makes it easier to ignore irrelevant stimuli, worry less, and stick to my priorities.

An even more general observation: Introverts typically have lower time preference relative to extraverts, so ask them about how they dispel distraction. (I say this in the sense described by Dorothy Rowe: 'extraverts are basically worried about belonging and feel understimulated, introverts are basically worried about keeping control of themselves and feel overstimulated', and not the vague 'Extraverts are social, introverts are not, derp' that seems to be the misapprehension of the average person.)

In case there's any question, I'm an extravert, so yeah, I tend to struggle with this issue too.

Replies from: arundelo, hyporational
comment by arundelo · 2013-11-02T19:56:21.465Z · LW(p) · GW(p)

Wikipedia:

In economics, time preference (or "discounting") is the relative valuation placed on a good at an earlier date compared with its valuation at a later date

[...] Someone with a high time preference is focused substantially on his well-being in the present and the immediate future relative to the average person, while someone with low time preference places more emphasis than average on their well-being in the further future.

comment by hyporational · 2013-11-03T06:08:51.207Z · LW(p) · GW(p)

Telling others what you're doing

I read somewhere, might have been on lw, that telling what you're doing might decrease your chance of success, because it provides a way to get compliments without actually having achieved anything yet. I suppose this depends on how you do it, though.

Introverts typically have lower time preference relative to extraverts, so ask them about how they dispel distraction.

I'm an introvert, have terrible problems with time-preference, and don't understand the rationalization by Dorothy Rowe you provide.

Any empirical sources for your claim?

Replies from: VincentYu, MathiasZaman, savageorange
comment by VincentYu · 2013-11-03T17:28:31.376Z · LW(p) · GW(p)

Telling others what you're doing

I read somewhere, might have been on lw, that telling what you're doing might decrease your chance of success, because it provides a way to get compliments without actually having achieved anything yet. I suppose this depends on how you do it, though.

Gollwitzer et al. (2009). When intentions go public: Does social reality widen the intention-behavior gap?

Abstract (emphasis mine):

Based on Lewinian goal theory in general and self-completion theory in particular, four experiments examined the implications of other people taking notice of one's identity-related behavioral intentions (e.g., the intention to read law periodicals regularly to reach the identity goal of becoming a lawyer). Identity-related behavioral intentions that had been noticed by other people were translated into action less intensively than those that had been ignored (Studies 1–3). This effect was evident in the field (persistent striving over 1 week's time; Study 1) and in the laboratory (jumping on opportunities to act; Studies 2 and 3), and it held among participants with strong but not weak commitment to the identity goal (Study 3). Study 4 showed, in addition, that when other people take notice of an individual's identity-related behavioral intention, this gives the individual a premature sense of possessing the aspired-to identity.


Paper link posted to LW dicussion in 2012 by Barry_Cotter.

comment by MathiasZaman · 2013-11-03T09:46:10.821Z · LW(p) · GW(p)

I read somewhere, might have been on lw, that telling what you're doing might decrease your chance of success, because it provides a way to get compliments without actually having achieved anything yet. I suppose this depends on how you do it, though.

I know this TED-talk says similar things. It's where I first heard of that concept.

comment by savageorange · 2013-11-03T10:32:32.216Z · LW(p) · GW(p)

I definitely agree that you can't -just- tell a person what you're doing; you need to pick the right person and cultivate the right attitude. (From my observation of myself, it succeeds when I am in the mindset where I can take plenty of teasing equitably, accepting any pokes as potential observations about reality without -stressing- about that.)

What rationalization of Rowe's? It's a summary of what they themselves report when 'laddered' (a process which basically consists of asking them what the most terrible thing that could possibly happen to them is, followed by iterative 'why?' until they can no longer go to the next lowest level).

For extroverts, being utterly abandoned == total personal disintegration; For introverts, utter loss of self-control == total personal disintegration. (I do paraphrase here; read The Successful Self for the whole picture.)

If anything, any rationalization is mine: I observe that introverts I know are reliably better at moving long term projects forward than I, or any extravert I know, seems to be. Not that they are not weak in this way -- they just seem to be less weak as a consequence of the difference in their focus. (my inference bolded.)

I'm neutral to your statement of introversion, basically because my prior for people being hilariously terrible at assessing this stuff is quite high.

No empirical sources as far as I know. Nobody even really manages to agree on the definition of introvert and extrovert, so far. Dorothy Rowe is just the only writer I've found on the subject who manages to describe a system that is relateable, consistent, and can be applied in the real world.

comment by Kaj_Sotala · 2013-11-04T16:44:09.877Z · LW(p) · GW(p)

Guy decides to do his PhD thesis on Dungeons & Dragons, acquires funding via Kickstarter.

Replies from: hyporational
comment by hyporational · 2013-11-06T03:47:05.800Z · LW(p) · GW(p)

I wonder if there's research that rationalists should do that could be funded this way. I'd pay for high quality novel review articles about topics relevant to lw.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-08T17:04:08.795Z · LW(p) · GW(p)

How about computer games that teach rationality skills?

Replies from: hyporational
comment by hyporational · 2013-11-09T21:21:49.627Z · LW(p) · GW(p)

That fruit doesn't hang low enough, I think.

comment by gwern · 2013-11-03T17:05:05.366Z · LW(p) · GW(p)

Incidentally, I'm making a hash precommitment:

43a4c3b7d0a0654e1919ad6e7cbfa6f8d41bcce8f1320fbe511b6d7c38609ce5a2d39328e02e9777b339152987ea02b3f8adb57d84377fa7ccb708658b7d2edc

See http://www.reddit.com/r/DarkNetMarkets/comments/1pta82/precommitment/
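
(For anyone unfamiliar with the mechanics, a minimal sketch of a hash precommitment in Python. The 128 hex characters above are consistent with SHA-512, though that's an assumption; the statement and nonce here are obviously invented.)

```python
import hashlib

# Commit: publish only the hash now, keeping the statement private.
# The random nonce stops anyone from brute-forcing a low-entropy statement.
statement = b"By 2014-06-01, X will have happened. Nonce: 1f9c2e7a44b0d3a6"
commitment = hashlib.sha512(statement).hexdigest()
print(commitment)  # post this publicly, somewhere that timestamps it

# Reveal: later, publish the statement itself. Anyone can recompute the
# hash and check that it matches the commitment posted earlier.
assert hashlib.sha512(statement).hexdigest() == commitment
```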

Replies from: fubarobfusco, Azathoth123, Douglas_Knight
comment by fubarobfusco · 2013-11-03T18:22:22.593Z · LW(p) · GW(p)

43a4c3b7d0a0654e1919ad6e7cbfa6f8d41bcce8f1320fbe511b6d7c38609ce5a2d39328e02e9777b339152987ea02b3f8adb57d84377fa7ccb708658b7d2edc

Looking forward to this one ...

comment by Azathoth123 · 2014-11-07T05:11:05.979Z · LW(p) · GW(p)

Well, it's been a year. When can we expect this to be revealed?

Replies from: gwern
comment by gwern · 2014-11-08T04:00:48.804Z · LW(p) · GW(p)

Already has been, see Reddit.

Replies from: Lumifer, Adele_L
comment by Adele_L · 2014-11-08T04:33:34.058Z · LW(p) · GW(p)

What was the string that generated the hash, then?

ETA: See Lumifer's link above.

comment by Douglas_Knight · 2013-11-04T00:24:00.506Z · LW(p) · GW(p)

It seems to me that a relevant detail is that the time frame is ~7 months (as you say elsewhere). Ideally, hashes would be commitments to reveal the plaintext in a specified time. Don't you discuss this somewhere?

Replies from: gwern
comment by gwern · 2013-11-04T04:33:32.821Z · LW(p) · GW(p)

Not sure what you mean.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-04T05:23:35.660Z · LW(p) · GW(p)

There is a danger with publishing hashes that you might publish opposite ones. Ideally, you should be committing to one answer or the other. High-entropy predictions mitigate this, but the effect is still there. And we can't tell whether a prediction is high entropy until it is revealed. Publishing dates mitigates this as well.

I thought I recently saw an essay on the uses of hash precommitments, including this kind of problem. If that doesn't ring a bell, I guess it wasn't by you.

Replies from: gwern
comment by gwern · 2013-11-04T15:03:51.121Z · LW(p) · GW(p)

Oh. Yeah, I did start a little discussion on an isomorphic trick in Umineko. In this case, the date on which I posted the hash is provided automatically by Reddit/Lesswrong/Twitter/etc and one can also verify I didn't post any other hashes recently to those fora.

The trick also only works for 'small keyspaces', if you will - if for example I was trying to fake a precommitment to a 100-digit number, the trick isn't going to work because it's not feasible to publish precommitments to even a tenth of the potential 100-digit numbers without people noticing and calling foul - 'so, gwern, why are you publishing that many precommitments and when can we expect them all to be revealed...?'
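
(A sketch of the cheating strategy being ruled out here, assuming the simplest case of a binary prediction. The point is that the cheat requires publishing one commitment per possible outcome, which observers can notice if all hashes are posted publicly and expected to be revealed by a stated date.)

```python
import hashlib, os

def commit(statement: bytes) -> str:
    return hashlib.sha512(statement).hexdigest()

# Dishonest precommitment over a tiny "keyspace" (two possible outcomes):
nonce_a, nonce_b = os.urandom(8).hex(), os.urandom(8).hex()
hash_if_yes = commit(f"X happened. Nonce: {nonce_a}".encode())
hash_if_no  = commit(f"X did not happen. Nonce: {nonce_b}".encode())

# Post hash_if_yes in one venue and hash_if_no in another, then later
# "reveal" only whichever one came true. This fails when observers can
# enumerate everything you posted and ask why half of it was never
# revealed -- and it scales hopelessly when the outcome space is large
# (e.g. a 100-digit number instead of a yes/no question).
```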

comment by CAE_Jones · 2013-11-07T23:03:50.727Z · LW(p) · GW(p)

I'm a bit emotionally tense at the moment, so this observation might not be as valuable as it seems to me, but it occurs to me that there are two categories of things I do: thinking things through in detail, and acting on emotion with very little forethought involved. The category that we want--thinking an action through, then performing it--is mysteriously absent.

It's possible to get around this to some extent, but it requires the emotionally-driven, poorly-thought out things to involve recurring or predictable stimuli. In those cases, I can think through and commit to a more rational plan during the intermediate time of inaction. Drama happens either when an emotionally-charged situation appears unexpectedly, or when I need to carry out some plan I've thought through but can't generate the emotional charge.

I can't really bluff my own hardware well enough to combat either end of the spectrum, but if there's some way to make conscientiousness and intelligence play nice together, that'd be nice.

comment by TsviBT · 2013-11-03T20:58:44.874Z · LW(p) · GW(p)

A way to fall asleep and/or gain gut intuition for "exponentially slow": count in binary, in your head, at a regular beat. YMMV.

comment by passive_fist · 2013-11-02T19:49:34.676Z · LW(p) · GW(p)

Here's a more difficult version of the AI box experiment. I haven't seen this particular version anywhere, but I'd be pleased to be proven wrong.

Imagine we've come up with a very intelligent AI that is free to manipulate the environment and uses an action-reward system like Hutter's AIXI. Also imagine that we've somehow figured out a way to make the rewards very hard to counterfeit (perhaps we require the rewards to be cryptographically signed). It's clear that in such a system, the 'weak point' would be the people in control of the private key. In this case the AI will not attempt to modify its own reward system (to see why, look at Hutter's AIXI book, where he discusses this in some detail).

How could such an AI convince someone to hand over the encryption key? Note that it can't promise things like e.g. ending human suffering, because it already has the means to do that (it is 'free') as well as the incentive (obtaining reward).
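
(A rough sketch of what "cryptographically hard to counterfeit" rewards could look like, using Ed25519 signatures from the Python `cryptography` package. The message format, function names, and the idea of baking only the public key into the agent are assumptions made for illustration, not anything from Hutter.)

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The overseers hold the private key; only the public key is known to the agent.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def issue_reward(step: int, amount: int) -> tuple[bytes, bytes]:
    """Overseers sign a (step, amount) reward message with the private key."""
    message = f"step={step};reward={amount}".encode()
    return message, private_key.sign(message)

def agent_accepts(message: bytes, signature: bytes) -> bool:
    """The agent only counts rewards whose signature verifies."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

msg, sig = issue_reward(step=42, amount=1)
assert agent_accepts(msg, sig)
assert not agent_accepts(b"step=42;reward=1000000", sig)  # forged reward is rejected
```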

Replies from: Adele_L, ChristianKl, solipsist
comment by Adele_L · 2013-11-02T19:58:03.534Z · LW(p) · GW(p)

How could such an AI convince someone to hand over the encryption key? Note that it can't promise things like e.g. ending human suffering, because it already has the means to do that (it is 'free') as well as the incentive (obtaining reward).

Torture is probably the easiest way. Another way could be to examine the key-keeper's mind and extract the key directly from that, but this is needlessly complicated.

Replies from: Eliezer_Yudkowsky, passive_fist
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-11-02T20:42:48.425Z · LW(p) · GW(p)

Torture might stand too great a chance of destroying the encryption key. Though I suppose if nanotech were sufficiently difficult to obtain, the possible key-destructive effects of torture might be balanced against the probability of a car running over the keyholder in the meantime.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-02T22:18:32.640Z · LW(p) · GW(p)

I would think that confusion (set things up so the key-keeper is confused and distracted, then do some phishing) is in the same reliability range as torture, and less likely to get the AI in trouble.

comment by passive_fist · 2013-11-02T20:42:14.652Z · LW(p) · GW(p)

I suspect the answer to be more complex than this. The AI knows that if it attempted something like that, there would be a very large risk of being cut off from all reward, or even of having negative reward administered. In other words: tit for tat. If it tries to torture, it will itself be tortured. Remember that before it has the private key, we are in control.

comment by ChristianKl · 2013-11-03T07:54:27.081Z · LW(p) · GW(p)

The first step would be to start controlling the flow of information to the people who hold the key: only tell them about the good things the AI does, and hide evidence of any wrongdoing that might reduce the rewards the AI gets.

The second step is to break the security that protects the key. Install a keylogger.

comment by solipsist · 2013-11-05T03:27:44.488Z · LW(p) · GW(p)

I don't understand how this encryption would work. What do people physically do to reward the AI, and how do you ensure that only people can do that? Would humans compute RSA signatures in their heads? Would humans type reusable passwords onto a "secure" reward computer that is "outside the AI's control"? Do humans precompute and memorize a finite number of one-time reward phrases before the AI is turned on, and reward the AI by uttering a phrase aloud?

In the precomputed, one-time cookie case, I'd just make the human think about the reward phrase. I'm sure humans leak thoughts like a sieve through subvocalization, nerve impulses, etc.

Replies from: passive_fist
comment by passive_fist · 2013-11-05T04:10:05.312Z · LW(p) · GW(p)

What I had in mind was the reward being administered through a consensus cryptography system, perhaps via some elected board or somesuch, but I really didn't give that aspect of the problem much thought. If the key is distributed, the AI would have to extract it from each individual holding a part of it.

This in itself is an interesting problem imo, and if a good solution is found it might have important implications for FAI research.

Replies from: solipsist
comment by solipsist · 2013-11-05T06:07:38.456Z · LW(p) · GW(p)

It's clear that in such a system, the 'weak point' would be the people in control of the private key.

If the AI is out of the box, I don't think humans are the weak point.

Humans physically do something when they reward the AI. To get a reward, the AI has only to figure out what the humans would physically do and mimic that itself. If the human rewards the AI by pressing a big red button, then the AI can just kill the human and press the big red button itself. It wouldn't matter if the big red button uses 512-bit elliptic curve cryptography -- the AI just has to find a paperweight and put it on the button.

If humans can perform RSA encryption silently in their heads, then you might be on to something. A human could memorize a private key and produce a cryptographically signed reward for the AI when the human deemed the AI worthy. The AI would not know the private key, would not be able to produce signed rewards, and would not be able to mimic humans. This setup works because it is cryptographically difficult to mimic a human doing RSA. But if the human did not perform the cryptography silently in their head, it would not be cryptographically difficult to mimic their rewarding behavior.

But I doubt that humans could perform RSA or elliptic curve cryptography mentally. Unless humans can compute trapdoor functions silently in their head, I don't see how public key cryptography could buy you anything over a reward-button.
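
(To make the "difficult arithmetic" concrete: even textbook RSA with toy, hopelessly insecure parameters already requires modular exponentiation that nobody performs mentally; real keys use ~2048-bit moduli. The numbers below are the standard toy example.)

```python
# Textbook RSA signing with tiny parameters (p=61, q=53), for illustration only.
n, e, d = 3233, 17, 2753    # public modulus, public exponent, private exponent

reward_message = 65                            # the "reward", encoded as an integer < n
signature = pow(reward_message, d, n)          # signing: message^d mod n (the secret step)
assert pow(signature, e, n) == reward_message  # anyone can verify with (n, e)

# Doing even this toy exponentiation silently in one's head is already hard;
# with a real 2048-bit modulus the intermediate numbers have hundreds of digits.
```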

Replies from: passive_fist
comment by passive_fist · 2013-11-05T07:32:10.914Z · LW(p) · GW(p)

For the same reason that Linus Torvalds is not able to hack every single Linux system in existence, it is reasonable to assume that the probability of 'backdoor'-type attacks could be reduced or eliminated.

If you're not talking about a back door, then I'm not sure what you're trying to say here. Are you implying that the AI will find the 'reward producing machine' and somehow use it to produce rewards? It doesn't work that way because the machine would still need the key to produce a cryptographically-valid reward, and the key would not be stored on the machine. That's the whole point of using cryptography. For instance, you could do something like http://en.wikipedia.org/wiki/Secret_sharing where the key is divided among several participants and never actually assembled in one place.

Of course any cryptography system has vulnerabilities. The point is not to build a 100% secure system. The point is to make it so that attempting to hack the system has less expected utility than simply doing what the humans say. And if that fails, then the AI will attempt to hack the system using its weakest point: the people controlling the key. Hence my question.
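
(A minimal sketch of the secret-sharing idea linked above -- Shamir's scheme over a prime field, with toy parameters. A real deployment would use a vetted library and a cryptographically secure RNG rather than `random`.)

```python
import random

PRIME = 2**127 - 1  # a prime large enough to hold the secret

def make_shares(secret: int, needed: int, total: int):
    """Split `secret` into `total` shares so that any `needed` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(needed - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, total + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from enough shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, needed=3, total=5)
assert reconstruct(shares[:3]) == 123456789   # any three shares suffice
assert reconstruct(shares[2:]) == 123456789   # ...any three
```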

Replies from: solipsist
comment by solipsist · 2013-11-05T14:48:44.069Z · LW(p) · GW(p)

If you're not talking about a back door, then I'm not sure what you're trying to say here.

Yeah, we're talking past each other. I think I understand what you're saying, and I'll try to rephrase what I'm saying.

The AI is out. It is free to manipulate the world at its will. Sensors are everywhere. The AI can hear every word you say, feel every keystroke you make, and see everything you see. The only secrets left are the ones in your head.

How do humans reward the AI? You say "cryptographically", but cryptography requires difficult arithmetic. How do you perform difficult arithmetic on a secret that can't leave your head?

Replies from: passive_fist
comment by passive_fist · 2013-11-05T20:19:55.925Z · LW(p) · GW(p)

Too many assumptions are being made here. What is the basis for believing the AI will have sensors everywhere, especially while it's still under human control? And if it has the ability to put clandestine sensors in even the most secure locations, why couldn't it plant clandestine brain implants in the people controlling the key?

comment by niceguyanon · 2013-11-07T20:44:26.442Z · LW(p) · GW(p)

Beeminder users, did you pledge? Do you find that it works better if you do?

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2013-11-10T07:43:54.917Z · LW(p) · GW(p)

Yes and yes.

If you're already beeminding without the pledge and it's not working perfectly, I'd suggest trying a small pledge for the value of information.

comment by Mitchell_Porter · 2013-11-07T22:51:29.047Z · LW(p) · GW(p)

Russell's teapot springs a leak... OK, that's enough one-liners for this week.

comment by pan · 2013-11-02T17:48:11.530Z · LW(p) · GW(p)

I've seen a few posts about the sequences being released as an ebook, is there a time frame on this?

I'd really like to get the ebook printed out by some online service so I can underline/write on them as I read through them.

Replies from: MathiasZaman
comment by MathiasZaman · 2013-11-02T17:54:43.458Z · LW(p) · GW(p)

Doesn't this already exist? Or is this not what you meant?

I'm reading that pdf version on my phone and it looks fine.

Replies from: pan
comment by pan · 2013-11-03T00:45:08.457Z · LW(p) · GW(p)

From posts like this one I got the impression that they were being edited and released together in a possibly new order. Maybe I am mistaken?

Replies from: RomeoStevens, MathiasZaman
comment by RomeoStevens · 2013-11-03T07:02:29.466Z · LW(p) · GW(p)

There was a plan to release two books. That was scrapped in favor of other uses of MIRI's time/resources.

Replies from: None
comment by [deleted] · 2013-11-04T08:53:39.350Z · LW(p) · GW(p)

I think the two rationality books were supposed to be complete rewrites, and I think this is separate from the sequence ebooks (not confident, sort of confused)

comment by MathiasZaman · 2013-11-03T01:09:28.882Z · LW(p) · GW(p)

I can't speak for anything else, but I've read up to the Meta-ethics sequence without encountering any gaps. I can't vouch for anything after that, but the pdf seems complete. Maybe someone else can shed some light on your question.

comment by Shmi (shminux) · 2013-11-07T20:20:13.637Z · LW(p) · GW(p)

Why does this forum spend so much time and effort discussing untestables and unprovables? It's disappointing.

Replies from: TheOtherDave, drethelin, mwengler
comment by TheOtherDave · 2013-11-07T20:38:05.998Z · LW(p) · GW(p)

In the interests of shaping behavior by praising approximations of the desired behavior, can you identify three threads that are most like what you'd like to see more of?

Replies from: shminux
comment by drethelin · 2013-11-07T20:58:14.727Z · LW(p) · GW(p)

because it's based on the works of someone who wrote volumes about untestables and unprovables?

Replies from: shminux
comment by Shmi (shminux) · 2013-11-07T21:03:27.880Z · LW(p) · GW(p)

... while singing praise to testability and provability?

comment by mwengler · 2013-11-07T21:12:06.723Z · LW(p) · GW(p)

Generally, there are many things which are unproven or not tested, a smaller (but still large) number of things which are difficult to test or difficult to prove, a smaller number of things which are testable or provable relatively easily, and finally a small number of things which are tested or proven.

One can expect some people to consider only truths which cluster at these ends. Academia, I think, tends in that direction. At one level this makes sense: one can expect a lot of useful work to be done on things which are relatively easy to prove, while for things that are very difficult to prove one can expect a much lower density of utility in the work and discussion on them.

However, one cannot expect interesting and useful truths to cluster at the "provable" or "proven" end of these distributions. Indeed, given the high amount of work done at the provable end, one might expect the most useful provable truths to be pretty well described already, and the supply of provable truths yet to be proven to be more and more abstract and less and less useful. We pick the fruit that is low-hanging, and well we should.

So one would expect the more interesting truths still open to question to be concentrated along the spectrum of harder or very hard to prove.

Replies from: shminux
comment by Shmi (shminux) · 2013-11-07T21:32:33.398Z · LW(p) · GW(p)

So one would expect the more interesting truths still open to question to be concentrated along the spectrum of harder or very hard to prove.

Indeed. But reframing and carving testables from untestables and provables from unprovables should be an explicit goal.

Replies from: mwengler
comment by mwengler · 2013-11-07T22:08:53.623Z · LW(p) · GW(p)

Indeed. But reframing and carving testables from untestables and provables from unprovables should be an explicit goal.

OK. And so should a theoretical exploration of the space of hypotheses about the not-yet provables with the intent of getting the most truth for the buck when these expensive experiments are finally done.