Stupid Questions April 2015

post by Gondolinian · 2015-04-02T21:29:36.177Z · LW · GW · Legacy · 146 comments

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy; everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

146 comments

Comments sorted by top scores.

comment by ArisKatsaris · 2015-04-02T21:49:40.484Z · LW(p) · GW(p)

I've been a bit out of touch from the community the past year or so, so I think I've rather missed things about the "Future of Life Institute", which mostly came to my attention because I think Elon Musk gave that big donation to it.

I don't quite understand the precise connection of FLI with everything else. How does it relate to MIRI/LessWrong/CFAR/FHI, historically/in the present/in its planned future?

Replies from: Manfred
comment by Manfred · 2015-04-03T15:10:28.118Z · LW(p) · GW(p)

Best way to find out is to ask the LWer Vika, who I'm pretty sure was the driving force (Max Tegmark probably had something to do with it too). I think their niche is to be a more celebrity-centered face of existential risk reduction (compared to FHI), but they've also made some moves to try to be a host of discussions, and this grant really means that now they have to play funding agency.

Replies from: Vika
comment by Vika · 2015-04-06T22:52:44.925Z · LW(p) · GW(p)

I'm flattered, but I have to say that Max was the driving force here. The real reason FLI got started was that Max finished his book in the beginning of 2014, and didn't want to give that extra time back to his grad students ;).

MIRI / FHI / CSER are research organizations that have full-time research and admin staff. FLI is more of an outreach and meta-research organization, and is largely volunteer-run. We think of ourselves as sister organizations, and coordinate a fair bit. Most of the FLI founders are CFAR alumni, and many of the volunteers are LWers.

comment by efim · 2015-04-03T15:12:35.452Z · LW(p) · GW(p)

I am reposting my question from the February thread since it got no response last time:

Just now I noticed a fundraiser from CFAR. I checked their 'about' pages and everything I could find on their long-term goals.

Somehow I thought that they were going to release their materials for free use sometime in the future. (It did seem like a strange though pleasant thing.) But I couldn't find anything about it this time around.

Was I mistaken about this prospect of publicly available lessons from the CFAR curriculum?

comment by [deleted] · 2015-04-03T08:42:26.104Z · LW(p) · GW(p)

Is Occam's Razor a useful heuristic because we can observe a certain 'energy frugality' in nature? More complex hypotheses are possibly correlated with a higher energy demand and are thus less likely to happen.

Replies from: Squark, Ishaan
comment by Squark · 2015-04-06T19:28:07.494Z · LW(p) · GW(p)

Amusing idea, but I don't think there is any relation. For example, the discovery of nuclear structure strongly lowered the complexity of our description of nature but implied a huge amount of previously unknown available energy.

comment by Ishaan · 2015-04-06T07:36:55.252Z · LW(p) · GW(p)

My personal epistemology says no, and that Occam's Razor is generally useful no matter which universe you find yourself in regardless of how it is structured.

Aren't there physics equations describing processes which are believed not to be driven by thermodynamics, and which are nevertheless still simple and elegant?

comment by polymathwannabe · 2015-04-06T14:11:51.524Z · LW(p) · GW(p)

For those of you who know real-life coding: I started watching CSI: Cyber and I'm hooked. I'm loving it. But is it rubbish?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-04-11T11:46:55.265Z · LW(p) · GW(p)

I never watched CSI: Cyber. That said:

Yes.

:)

comment by DataPacRat · 2015-04-02T23:15:55.426Z · LW(p) · GW(p)

Which cryonicist to thaw?

Say that, in thirty-plus years, you're still alive and I've been cryonically preserved for a while. What could I have done during my life to convince you to apply your finite resources to resurrect me, rather than someone else?

Would it make a difference if the only potentially available resurrection method was destructive mind uploading, for which a vitrified brain would happen to be an ideal test subject?

Replies from: DanielLC, ChristianKl, advancedatheist, Richard_Kennaway, Richard_Kennaway, eeuuah
comment by DanielLC · 2015-04-03T00:05:59.680Z · LW(p) · GW(p)

Setting up a trust fund to pay whoever resurrects you would help.

Replies from: Dorikka
comment by Dorikka · 2015-04-04T07:48:37.322Z · LW(p) · GW(p)

Curious whether you would basically need to architect the terms assuming that your resurrectors are unfriendly.

Replies from: DataPacRat
comment by DataPacRat · 2015-04-04T08:37:30.931Z · LW(p) · GW(p)

I'm not sure what you mean by 'architect', but as I don't believe there are any current trust funds set up in quite this way, it would likely require designing a custom legal instrument. In which case, not only would I recommend involving a contract lawyer to handle the pitfalls of terminology, but also spending some time working out the game-theory aspects of the payout to avoid perverse incentives of the more likely scenarios - eg, you don't want to incentivize would-be resurrectors to bring you back early and with brain damage, when it's plausible that waiting a bit longer would be in your own interests.

Alternately, the terms of the trust fund may be less important than choosing an executor willing to interpret those terms in the way you meant, rather than the way they were written. ... Which, should your first choice pass away or retire, brings up a whole host of other issues about how to choose replacements.

Replies from: DanielLC
comment by DanielLC · 2015-04-04T17:47:01.598Z · LW(p) · GW(p)

I don't believe there are any current trust funds set up in quite this way

Do you know why there aren't? There are trust funds set up so that the interest pays for the cost of being cryopreserved. I would have assumed that they'd have clauses in them where, once the person is thawed, the money goes to whoever thawed them. We don't want people kept frozen just so they can get money from those trust funds.

Replies from: DataPacRat
comment by DataPacRat · 2015-04-04T22:02:03.729Z · LW(p) · GW(p)

interest pays for the cost of being cryopreserved

... Um, are you sure? For the cryo organizations I'm aware of, there /is/ no continuing cost of being cryopreserved for the individual - it's all up-front cost, with the funding going to the organization so /it/ can handle those continuing costs.

comment by ChristianKl · 2015-04-03T17:13:39.817Z · LW(p) · GW(p)

The first attempts at reviving are going to focus on testing the resurrection method. I'm not sure if you want to be in that bunch.

If you do want to be, then it's important for the people who resurrect you to check whether your personality is intact or has changed.

If you filled out a personality test every month and it gave stable values before your death, it would be interesting to check whether your personality stays stable afterwards.

Having a large Anki deck containing cards whose information should be in your mind would also be useful for that purpose.

Replies from: Gurkenglas, None, DataPacRat
comment by Gurkenglas · 2015-04-15T09:09:58.919Z · LW(p) · GW(p)

A personality change might simply be because of the new, futuristic environment. One could control for this by bringing personality-stable people from a poor, underdeveloped country into civilisation.

comment by [deleted] · 2015-04-04T21:49:41.741Z · LW(p) · GW(p)

My understanding was that written personality tests tend to have low accuracy although I could easily be wrong in that belief. I think video recordings might be more useful.

comment by DataPacRat · 2015-04-03T22:21:47.595Z · LW(p) · GW(p)

a personality test

As an alternative, what would you think of assuming a certain degree of advance in computation and psychology, and making arrangements to store every bit of digital data I've ever typed, or decided was worth storing in my personal e-library?

Replies from: ChristianKl
comment by ChristianKl · 2015-04-04T10:05:43.809Z · LW(p) · GW(p)

More data is likely better when you want to check whether anything in the mind is lost.

comment by advancedatheist · 2015-04-03T04:07:19.239Z · LW(p) · GW(p)

People who ask this sort of question assume that the cryonics era just comes and goes in a few decades. I find it more likely that cryonics or its successor technologies will become part of mainstream medicine indefinitely. If you have an illness or injury (probably some new kind of pathology we haven't seen yet) that the health care providers in, say, the 22nd Century don't know how to treat, they would put you in some kind of biostasis for attempted revival in, say, the 24th Century, when medicine has advanced enough to know what to do.

So why would people in the 22nd Century want to revive and rejuvenate and transhumanize people from the 21st Century? Well, they might return the favor for their resuscitators in the 24th Century.

comment by Richard_Kennaway · 2015-04-03T07:20:16.555Z · LW(p) · GW(p)

Say that, in thirty-plus years, you're still alive and I've been cryonically preserved for a while. What could I have done during my life to convince you to apply your finite resources to resurrect me, rather than someone else?

Say that, in thirty-plus years, you're still hale and hearty and I've been seriously ill for a while. What could I have done during my life so far to convince you to apply your finite resources to heal me, rather than someone else?

Replies from: DataPacRat
comment by DataPacRat · 2015-04-03T07:50:34.692Z · LW(p) · GW(p)

Given that it's questionable whether I'm going to have enough finite resources to bring my aging cat to the vet in the near future, and that I live in Canada, with its single-payer health care system, it's a somewhat more complicated question than it may seem. Given past evidence, some minimal qualifications might involve me knowing that you exist, and knowing that I was able to help you, and knowing that the help I could provide would make a difference (this latter being one of the harder qualifications to satisfy). Given all of /that/... one potential qualification might be the possibility for future reciprocation, either directly, or by being part of a shared, low-population social group in which your future contribution could still end up benefiting me - such as, say, the two of us being part of a literally-one-in-a-million group working together to try to find some way to permanently cheat death.

There are probably other answers, including ones that I don't recognize due to my limited knowledge of human psychology and my finite insight into my own motivations... but that one seems to have some measure of plausibility.

comment by Richard_Kennaway · 2015-04-03T07:18:41.894Z · LW(p) · GW(p)

Say that, in thirty-plus years, you're still alive and I've been cryonically preserved for a while. What could I have done during my life to convince you to apply your finite resources to resurrect me, rather than someone else?

You signed a contract allowing people developing resuscitation technology to use you as one of their first experimental human revivals.

ETA: Someone has to be the first attempted revival, but I suspect that it may be last in, first out. The later you get frozen, the better the freezing technology, and the sooner the technology to reverse it will be developed. By the time people can be frozen and thawed routinely, there will still be vaults full of corpsicles that no-one knows how to revive yet. All the people in Alcor today might eventually be written off as beyond salvaging.

Replies from: DataPacRat
comment by DataPacRat · 2015-04-03T07:41:24.900Z · LW(p) · GW(p)

Hm... under current law, the cryonically preserved are considered dead, and thus any contracts they signed are no more enforceable than a contract with a graveyard to perform one form of burial instead of another. The existing cryonics companies have standardized contracts. I can't think of any way to create the contract you describe. Do you have any further details in mind?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-04-03T08:48:28.354Z · LW(p) · GW(p)

I wasn't concerned with the legal details, which will vary from time to time and place to place. At the moment, what obligates Alcor to keep their bodies frozen?

And there are wills. You can already will your body to medical research.

Replies from: DataPacRat
comment by DataPacRat · 2015-04-03T09:07:36.776Z · LW(p) · GW(p)

vary from time to time

The legal regime that cryonics has operated under has been reasonably stable for the past thirtyish years, with some minor quibbles about registering as a cemetery or not. What reasons lead you to believe that the relevant laws will undergo any more significant changes in the next thirtyish years?

what obligates Alcor to keep their bodies frozen?

At least in part, the fact that the directors are also members, and desire for their own bodies to be kept frozen after they die.

You can already will your body to medical research.

Legally, that's essentially what the wills of cryonicists already do. (In Ontario, the relevant statute is the 'Trillium Gift of Life Act'.)

comment by eeuuah · 2015-04-03T01:06:46.272Z · LW(p) · GW(p)

You would need to be able to provide value for me - so you would need to have skills (or the ability to gain skills) that are still expensive and in demand, and society would need to give me an enforceable right to extract that value from you. Slavery or indentured servitude, perhaps.

Replies from: DataPacRat, DanielLC
comment by DataPacRat · 2015-04-03T04:00:13.253Z · LW(p) · GW(p)

Slavery or indentured servitude, perhaps.

If I may ask, are you yourself a cryonicist who might end up facing the question from either side?

Provide value

You seem to be assuming that immediate economic value is the only value worth considering; was this your intent?

enforceable right

Does this criterion apply to present-day questions that are in vaguely the same ballpark? That is, do you choose who to help based on whether or not you can force them to pay you?

Replies from: eeuuah
comment by eeuuah · 2015-04-10T00:03:51.527Z · LW(p) · GW(p)

Does this criterion apply to present-day questions that are in vaguely the same ballpark? That is, do you choose who to help based on whether or not you can force them to pay you?

Good point here - I don't usually have any mechanism to force people to pay me. I usually choose to help based on how likely I think I am to get what I want out of it. A few examples:

  • I help my employer accomplish their goals very often, because I think they will pay me.
  • I help my friends with things because so far they have cooperated and helped me with things in return.
  • Sometimes I help strangers with their problems with no expectation to get anything back from them. When I do, it's usually because we're part of a shared community and I am looking after my reputation.
  • If it costs me close enough to nothing, I try to help other people so I can maintain a positive self image.

You seem to be assuming that immediate economic value is the only value worth considering; was this your intent?

I'm not sure what you mean by economic value. If you mean money, no. I think that humans value many things. I could certainly see a respected artist being revived even if the reviver could not directly tax the artist's production.

If I may ask, are you yourself a cryonicist who might end up facing the question from either side?

I'm not a cryonicist at this time. I do think there's a pretty good chance that either cryonics, brain uploading, or something similar will see some people from my lifetime recreated in some form after their deaths.

comment by DanielLC · 2015-04-03T03:33:38.084Z · LW(p) · GW(p)

It's already legal to perform a medical procedure to save someone's life without their consent if they're not capable of consenting, and then demand payment. You could still go bankrupt, but that causes problems so if you're capable of repaying you probably would.

Replies from: eeuuah
comment by eeuuah · 2015-04-10T00:04:42.401Z · LW(p) · GW(p)

That's slightly terrifying, but I guess it makes sense as an incentive to perform life-saving medical interventions.

comment by [deleted] · 2015-04-03T08:50:06.616Z · LW(p) · GW(p)

Can comforting lies be justified in certain circumstances or do the downsides of this thinking habit always outweigh its benefits? (Example: Someone takes homeopathic remedies to cure pain and benefits from the placebo effect.)

Replies from: Artaxerxes, MathiasZaman, DanielLC
comment by Artaxerxes · 2015-04-03T16:37:16.915Z · LW(p) · GW(p)

Consequentialist ethics would suggest the answer is yes, but in your example perhaps a better result would be getting the same placebo effect benefits from some kind of treatment or remedy that might actually work in itself, beyond placebo. Indulging woo isn't necessary to get positive expectation health benefits.

Replies from: Gondolinian
comment by MathiasZaman · 2015-04-04T21:24:09.410Z · LW(p) · GW(p)

Knowing about the placebo effect doesn't stop the placebo effect from kicking in.

Anyway, I'd say that there are moments when comforting lies may be worth it, but I don't trust my ability to know when those moments are happening, and it would hurt my overall believability if I was found out.

Replies from: Squark
comment by Squark · 2015-04-06T19:19:34.005Z · LW(p) · GW(p)

Knowing about the placebo effect doesn't stop the placebo effect from kicking in.

Especially if you know that knowing about the placebo effect doesn't stop the placebo effect from kicking in.

comment by DanielLC · 2015-04-04T03:50:09.311Z · LW(p) · GW(p)

I'd say that there are times when it's worth having comforting lies, but you can't figure out when if you're under the effect of comforting lies, so you should follow the strategy of never listening to comforting lies.

comment by [deleted] · 2015-04-19T18:32:35.951Z · LW(p) · GW(p)

Does anybody want to write a rat!BatmanBegins fic set right after the movie ended? I think it would be a great opportunity to explore several issues we have become accustomed to in HPMOR. The premise is: Batman learns that Ras'al'Ghul (sorry if misspelled) was trying to develop an industrial-strength technique to produce the psychedelic gas. (Basically, to have the poppy in an in vitro culture, maybe modify it genetically, and have an almost fail-proof way to obtain an unlimited and cheap substance.) RaG wasn't himself a specialist in this, so he hired a lab to work out the protocol. The lab team must include at least 1 person to operate a gas chromatograph/mass spectrometer, 1 to tinker with the culture medium, 1 statistician, 1 specializing in plant secondary metabolites and (realistically, no less than) 1 assistant.

Now, Batman doesn't know whether RaG has ever succeeded in this scheme, and cannot just check using his (rather conspicuous) personas, but he has an inventor friend. So he buys the lab for W Corp and waits for evidence of culpability/innocence/... He gets to overhear them discuss the seemingly impossible phenomenon of the honest Commissioner and from their hypotheses can at least conclude they are capable of looking for alternatives - as he himself should have when somebody approached him to train him just out of the goodness of their heart.

In reality, every member on the team has had some misgivings about the use of their project, and sabotaged it in subtle ways, but seeing as this is Gotham and nobody quite knows what happened to RaG, they mistrust all outsiders.

comment by artemium · 2015-04-08T07:05:27.841Z · LW(p) · GW(p)

Recently I became active in the EA (effective altruism) movement, but I'm kind of stuck on the issue of animal welfare. While I agree that animals deserve ethical treatment and that the world would be a better place if we found a way to completely eliminate animal suffering, I do have some questions about some practical aspects.

  • Is there any realistic scenario where we could expect the entire world population to convert to a non-meat diet, considering cultural, agricultural and economic factors?

  • Would it be better if, instead of trying to convert billions of people to become vegetarians/vegans, we invested more in synthetic meat research and other ways to make meat eating non-dependent on animals?

  • How highly should we prioritize animal welfare in comparison to other EA issues like world poverty and existential risks?

  • How does the EA community view meat-eaters in general; is there a strong bias against them? Is this a big issue inside the movement?

Disclosure: I am (still) a meat-eater, and at this point it would be really difficult for me to make consistent changes to my eating habits. I was raised in a meat-eating culture and there are almost no cheap and convenient vegetarian/vegan food options where I live. Also, my current workload prevents me from spending more time on cooking.

I do feel kind of bad though, and maybe I'm not trying hard enough. If you have some good suggestions for how I can make some common-sense changes towards a less animal-dependent diet, that might be helpful.

Replies from: None
comment by [deleted] · 2015-04-14T12:38:27.127Z · LW(p) · GW(p)

1) Hardly, but then again, what minimum % of the world population do you expect to be convincible? It doesn't have to be everybody.

2) What are the minuses of this technology? Illegal trade in real meat would thrive, for example, and those animals would live in even worse conditions.

3) I think poverty might contribute to meat consumption, if we're speaking not about starving people but about, say, large families with minimal income. Meat makes making nutritious soups easy.

comment by Error · 2015-04-04T15:08:34.864Z · LW(p) · GW(p)

Where can I find recipe listings that 1. are relatively quick to make (because time is precious), 2. have ingredients that cannot be used as finger food (I have no self-control), and 3. are easily adaptable for picky eaters (there's a huge array of things I just can't abide eating)?

Replies from: ChristianKl
comment by ChristianKl · 2015-04-05T23:16:27.844Z · LW(p) · GW(p)
  1. have ingredients that cannot be used as finger food (I have no self-control)

There's no problem with eating vegetables as finger food anyway.

comment by curioux · 2015-04-03T17:52:44.742Z · LW(p) · GW(p)

Suppose you became deeply religious as a young adult and married someone of the same religion with a traditional promise to be loyal to them until death. Divorce was unthinkable to your spouse and you had repeatedly reassured them that you fully meant to keep your promise to never leave them, no matter what changes the future brought. You are now no longer religious and remaining married to this person makes you miserable in ways you are sure you can't fix without betraying who you currently are. Is it moral to leave your partner? Why or why not? (Don't worry, this is a hypothetical situation.)

Replies from: Unknowns, falenas108, Good_Burning_Plastic, Epictetus, Dorikka, Manfred, Vladimir_Nesov, advancedatheist, shminux
comment by Unknowns · 2015-04-04T16:31:25.325Z · LW(p) · GW(p)

No, since "no matter what changes the future brought" includes changes of religion.

Replies from: Jiro
comment by Jiro · 2015-04-05T17:12:44.130Z · LW(p) · GW(p)

Does it? It literally does, but you probably weren't thinking that at the time.

Replies from: Viliam
comment by Viliam · 2015-04-07T12:54:01.254Z · LW(p) · GW(p)

Does it? It literally does

Good. :D

but you probably weren't thinking that at the time

Maybe a good method to evaluate the strength of this objection would be to invent many other scenarios that people are not thinking about when they speak about "no matter what changes the future brings", and ask how they feel about those other scenarios. Then use them as an outside view for the change of religion.

Replies from: Jiro
comment by Jiro · 2015-04-07T16:17:40.414Z · LW(p) · GW(p)

Divorcing someone because of a change in religion brings up two points at once. The first is that they should divorce because a marriage between people with different beliefs doesn't work. The second is that he should find divorce acceptable because he no longer believes in a religion that says divorce is unacceptable.

Assuming we're talking about the second, then a scenario that does not involve a change in religion would be something like "I didn't realize it, but my religion says God is okay with divorce after all". That's implausible without something else going on, but it's possible that his religious leaders changed their minds, or that he misunderstood some points of his religion (for instance, perhaps his religion doesn't consider a secular divorce to be a divorce, and finds those acceptable). I would say that under those circumstances, yes, he probably would be okay with divorce.

So the answer is yes, you can break your promise.

Replies from: Jiro
comment by Jiro · 2015-04-08T14:41:56.973Z · LW(p) · GW(p)

Is there some reason why this was modded down aside from saying things that go against people's ideas about morality?

comment by falenas108 · 2015-04-05T17:00:58.859Z · LW(p) · GW(p)

ETHICAL INJUNCTION:

Any moral reasoning that results in "...and I will be miserable for the rest of my life" that is not extremely difficult to prevent and has few other tradeoffs is probably not correct, no matter how well-argued.

comment by Good_Burning_Plastic · 2015-04-03T19:01:32.833Z · LW(p) · GW(p)

Assuming they only married me because they knew I was never going to leave them, no it isn't.

Replies from: DanielLC
comment by DanielLC · 2015-04-04T03:37:22.329Z · LW(p) · GW(p)

Being someone who keeps their word can have value, but sometimes it doesn't. If someone kidnaps you and then forces you to promise to give them all your money when they release you, it's bad. If they knew you wouldn't keep your word, they wouldn't have kidnapped you. That's why contracts made under duress aren't binding. I don't think duress is the only reason to break a promise. Another one is that you were stupid. You don't want to make promises you'll later regret, so if someone doesn't accept your promise because they predict you'll come to regret it, that's good.

Replies from: Jiro
comment by Jiro · 2015-04-04T16:05:07.444Z · LW(p) · GW(p)

The kidnapper should precommit to kidnap a fixed number of people regardless of their propensity to keep to contracts made under duress. Like many precommitments, this harms the kidnapper if he actually has to follow through with it under unfavorable circumstances (he may know that nobody keeps such contracts, in which case he's precommitted to kidnapping people for no profit at all). However, it reduces the measure of worlds with such unfavorable characteristics, thus financially benefitting the kidnapper on average--if you know the kidnapper has made this precommitment, you can no longer use the reasoning you just used above, and so you will obey contracts made under duress.

Replies from: DanielLC
comment by DanielLC · 2015-04-04T18:21:14.384Z · LW(p) · GW(p)

Being kidnapped isn't that big a deal. Are you saying that he should just kill everyone who isn't known to keep contracts made under duress? If "he" is a large organized crime syndicate or a government or something, that might work, but there's no way one person could kill enough people to make it worthwhile to start paying people to kidnap you just because he might be the one getting paid. He'd have to cooperate on the prisoner's dilemma with all the other kidnappers, who are themselves defecting from the rest of society. Why would he do that?

There's a reason for the idea of fairness. Consider the ultimatum game. There's a Nash equilibrium for every strategy where one player will accept no less than x points and the other no less than 1-x points. It seems like you could demand 1-ɛ and they'd have to accept the ɛ because it's better than nothing, but by the same logic you wouldn't be able to ask for more than ɛ because they'd demand 1-ɛ. So you pick a Schelling point and demand that much. You demand half. They demand half. You agree to split it evenly. If they demand more than the Schelling point, you give them nothing. If there's some reason that the Schelling point isn't completely obvious, you might give them some benefit of the doubt and probabilistically accept so you don't both get nothing, but you make it unlikely enough that them demanding more than the Schelling point is not a viable strategy. This is what fairness is. It's why you shouldn't agree to unfair deals, even if the alternative is no deal.
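A minimal sketch of that probabilistic-acceptance strategy (illustrative only; the function names and the exact acceptance rule are assumptions, not anything specified above):

```python
# Sketch: the responder accepts fair demands outright and accepts unfair ones
# only with enough probability that demanding more than the Schelling point
# never pays in expectation.

def acceptance_probability(proposer_share, fair_share=0.5):
    """Probability the responder accepts a given demand."""
    if proposer_share <= fair_share:
        return 1.0  # at or below the Schelling point: always accept
    # Above the fair share, scale acceptance down so the proposer's expected
    # payoff never exceeds what the fair demand would have given them.
    return fair_share / proposer_share

def expected_proposer_payoff(proposer_share, fair_share=0.5):
    return proposer_share * acceptance_probability(proposer_share, fair_share)

if __name__ == "__main__":
    for demand in (0.5, 0.6, 0.8, 0.99):
        print(demand, round(expected_proposer_payoff(demand), 3))
    # Every demand above 0.5 yields an expected payoff of exactly 0.5, so greed
    # gains nothing; any extra penalty would make it strictly worse.
```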

Replies from: Jiro
comment by Jiro · 2015-04-05T17:09:38.345Z · LW(p) · GW(p)

The point is that "if they knew, they wouldn't have kidnapped you" is defeated by a precommitment to kidnap people whether they know or not. They don't have to kill anyone to do this.

Replies from: DanielLC
comment by DanielLC · 2015-04-05T18:30:10.321Z · LW(p) · GW(p)

Being kidnapped, promising to pay ransom, being released, and not paying is better than being kidnapped, promising to pay ransom, being released, and paying. Keeping your word gives no advantage.

Replies from: Jiro
comment by Jiro · 2015-04-06T01:13:00.773Z · LW(p) · GW(p)

Precommitment is relevant in a second way here. You have to (before being released) precommit to pay ransom after being released. Once you are released, your precommitment would force you to pay the ransom afterwards.

If you are incapable of rewiring your brain so that you will pay the ransom, there could instead be laws recognizing that contracts made under duress are valid. That would have the effect of precommitting.

This precommitment is disadvantageous in the sense that being released without it is better than being released with it, but it also increases your chance of surviving to be released rather than being shot for not having any ransom. Precommitments tend to work like that--precommitting to do an action that can only harm you in a particular situation can be overall advantageous because it alters the odds of being in that situation.

Replies from: DanielLC
comment by DanielLC · 2015-04-06T06:17:59.402Z · LW(p) · GW(p)

Currently, laws do not enforce contracts made under duress. How frequently are people murdered in protest of this?

comment by Epictetus · 2015-04-05T06:43:47.152Z · LW(p) · GW(p)

I don't consider it moral for two people to make each other suffer for years instead of admitting their mistake and moving on with their lives. That's the result of pride, not forbearance. Still worse if one party suffers while the other remains pleased.

If there are severe practical obstacles to divorce then that's one thing, but even then there are ways around that. It's nothing unusual for a couple to separate while remaining married. For example, Warren Buffett had such an arrangement for nearly 30 years--until his wife died.

So now I'm praying for the end of time
To hurry up and arrive
'Cause if I gotta spend another minute with you
I don't think that I can really survive
I'll never break my promise or forget my vow
But God only knows what I can do right now
I'm praying for the end of time, It's all that I can do
Praying for the end of time so I can end my time with you

--Meatloaf

comment by Dorikka · 2015-04-04T07:40:34.866Z · LW(p) · GW(p)

File this under "things that could probably be said better, but which might be better said than not said given I won't action it for later".

Whenever I see a post or question of the type "is X moral", I have an instinctual aversive reaction because such questions seem to leave so much that still needs to be asked, and the important questions are not even addressed, so even taking a potshot at the question requires wheeling some rather heavy equipment up to do some rather heavy digging as to the values, priorities, risk tolerance, etc of the person asking the question.

Re "the important questions are not even addressed": Fundamentally, are you trying to satisfice or maximize here? Are you trying to figure out the "optimal" action per those values that you group in the "morality" category, or are you trying to figure out which actions have an acceptable impact in terms of those values (such that you're then going to choose between the acceptable possibilities with a different set of values?) Once the meta's taken care of, what are the actual things that you value? Inferential distance is often pretty humongous in this regard, so more explicit often is better.

Maybe a more concrete example will be useful. If I ask you "what computer should I buy?", I should not take an immediate answer seriously with no further info, because I know you have no way of knowing what my decision criteria are (and it's kinda hard for your recommendation to align with them by chance.) As such, I would probably want to give you a decent amount of information regarding my relevant preferences if I ask for such a recommendation...am I going to play games? Office work? Might even be useful to specify the type of games I'm playing and whether graphics are a biggie for me, etc.

When I don't see this type of info flow occurring, it feels like a charade, because if I were the one asking the question I would have to discard any answers that I got in the absence of such info about preferences, etc.

Again, apologies for going meta + possibly abrasive tone at the same time. Just trying to help discussions like this get started off on the right foot, as it feels like I see them more and more lately. Probably tapping out.

ETA punctuation.

comment by Manfred · 2015-04-04T00:57:02.867Z · LW(p) · GW(p)

This sounds like a place where Kantian ethics would give the right answer. I think there is some point at which it would be stupid to not seek divorce, and some point at which the promise you made is indeed more important, and the thing that differentiates those two states is not whether you want a divorce now, but which procedure it would be better for people to follow - the one that has you stay married here, or the one that has you divorce here.

Replies from: Larks
comment by Larks · 2015-04-05T14:09:25.301Z · LW(p) · GW(p)

Kantian ethics would almost definitely say to never divorce. Kantianism is not the same as Rule Utilitarianism!

Replies from: Creutzer, Manfred
comment by Creutzer · 2015-04-06T19:52:19.789Z · LW(p) · GW(p)

Even if we ignore for a moment the fact that Kantian ethics doesn't say anything because it's not well-defined, it's not at all clear to me that this is true. As it stands, your statement sounds like it's based more on popular impressions of what Kantian ethics is supposedly like than an actual attempt at Kantian reasoning.

comment by Manfred · 2015-04-05T15:41:49.036Z · LW(p) · GW(p)

Okay, thanks :)

comment by Vladimir_Nesov · 2015-04-03T18:01:09.804Z · LW(p) · GW(p)

The issue is with the decision, so asking "Is it moral?" is a potentially misleading framing because of the connotations of "moral" that aren't directly concerned with comparing effects of alternative actions. So the choice is between the scenario where a person made promises etc. and later stuck with them while miserable, and the scenario where they did something else.

Replies from: curioux
comment by curioux · 2015-04-03T18:09:41.608Z · LW(p) · GW(p)

I'm asking what would make you justify leaving or staying.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2015-04-03T22:30:14.405Z · LW(p) · GW(p)

"Justify" has a similar problem. Justifications may be mistaken, even intentionally so. Calling something a justification emphasizes persuasion over accuracy.

comment by advancedatheist · 2015-04-04T02:26:48.351Z · LW(p) · GW(p)

Suppose you became deeply religious as a young adult

This assumes that different kinds of religiosity tend to converge on similar ethics about marital commitments and fidelity. You could become "deeply religious" in a way which allows for divorce or outside relationships.

This also assumes that your religion's doctrine on these matters remains stable over many generations. If your religious community accepts 22nd+ Century medicine and permits its members to seek treatment for engineered negligible senescence and superlongevity, then you could live long enough to see your religion undergo a Reformation-like event which allows for a more flexible view of marriage and sexual relationships.

I think I've mentioned this before, but I find Ridley Scott's portrayal of Future Christians in the film Prometheus interesting. The space ship's archaeologist character, Elizabeth Shaw (played by Swedish actress Noomi Rapace), wears a cross and professes christian beliefs at a time when christianity has apparently gone into decline and christians have become relatively uncommon. Yet as a single christian woman she has a sexual relationship with a man on the ship, which suggests that christian sexual morality during that religion's long twilight will tend to converge with secular moral views.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-04-04T21:40:47.372Z · LW(p) · GW(p)

First two paragraphs seem reasonable. To the third though:

I think I've mentioned this before, but I find Ridley Scott's portrayal of Future Christians in the film Prometheus interesting. The space ship's archaeologist character, Elizabeth Shaw (played by Swedish actress Noomi Rapace), wears a cross and professes christian beliefs at a time when christianity has apparently gone into decline and christians have become relatively uncommon. Yet as a single christian woman she has a sexual relationship with a man on the ship, which suggests that christian sexual morality during that religion's long twilight will tend to converge with secular moral views.

Many, many self-identified Christians from pretty much all denominations have premarital sex. See e.g. here. And this isn't a new thing; even among the Puritans this was not uncommon (in their case we can tell based on extremely short times between many marriages and when children had their births recorded).

comment by shminux · 2015-04-03T19:01:22.132Z · LW(p) · GW(p)

Identity may be continuous, but it is not unchanging. You are not the person you were back then and are not required to be bound by their precommitments. No more than by someone else's precommitments. To be quasi-formal, the vows made back then are only morally binding on the fraction of your current self which is left unchanged from your old self. Or something like that.

Replies from: Jayson_Virissimo, DanielLC, torekp
comment by Jayson_Virissimo · 2015-04-03T22:24:49.747Z · LW(p) · GW(p)

Would you not object to your neighbor's refusal to return the set of tools you lent him on account of his having had a religious conversion?

Replies from: shminux
comment by shminux · 2015-04-04T05:17:55.446Z · LW(p) · GW(p)

What religion would compel you to do that?

Replies from: ChristianKl
comment by ChristianKl · 2015-04-04T10:09:28.929Z · LW(p) · GW(p)

Then don't make it a set of tools but a money loan. He switches to Islam and now thinks that interest on loans is immoral.

comment by DanielLC · 2015-04-04T03:44:10.289Z · LW(p) · GW(p)

Imagine you're elected leader of a country. The last leader defended against an invasion by putting the country into debt. If he hadn't done that, the country would now be under control of the other country's totalitarian regime. You can pay the debt, but if you don't nobody can force you. Should you repay the debt? Are you bound by the precommitments of your predecessor?

Replies from: Jiro
comment by Jiro · 2015-04-04T15:57:28.114Z · LW(p) · GW(p)

A country that is known to elect new leaders cannot credibly precommit to paying back a loan unless it is in a situation that is robust against new leaders refusing to pay back the loans. So you would in fact be bound by the precommitments of your predecessor whether you wanted to be or not, though the exact mechanism can vary depending on exactly what made the precommitment credible.

Replies from: DanielLC, Larks
comment by DanielLC · 2015-04-04T17:59:10.235Z · LW(p) · GW(p)

Suppose the mechanism is that they're electing people that care about the country. Would this mechanism work? Would you and the other leaders consistently pay back loans?

Replies from: Jiro
comment by Jiro · 2015-04-05T00:14:22.212Z · LW(p) · GW(p)

If the mechanism didn't work, then the precommitment wouldn't be credible, and the people making the loans would have known that there is no credible precommitment.

Replies from: DanielLC
comment by DanielLC · 2015-04-05T00:48:00.026Z · LW(p) · GW(p)

And thus the country will fall. Since the leaders care about the country, they'd rather pay back some loans than let it fall, so the mechanism will work, right?

comment by Larks · 2015-04-05T14:04:34.394Z · LW(p) · GW(p)

That's highly misleading. Empirically, many countries have successfully raised debt, and paid it back, despite debt-holders having no defense against a new leader wanting to default.

Replies from: gjm
comment by gjm · 2015-04-05T15:45:24.066Z · LW(p) · GW(p)

I think one defence those debt-holders have is that those countries have traditions of repaying debts.

Another is that, regardless of whether you're formally committed to repaying loans, if you default on one then you or your successors are going to get much worse terms (if any) for future loans. So a national leader who doesn't want to screw the country over is going to be reluctant to default.

comment by torekp · 2015-04-05T15:19:35.168Z · LW(p) · GW(p)

Derek Parfit, on identity, talks about psychological connectedness (examples: recalling memories, continuing to hold a belief or desire, acting on earlier intentions), and continuity, which is the ancestral of connectedness. It sounds like you are saying that commitments should be binding based primarily on connectedness, not on continuity. But this has certain disadvantages. If I take the suggested attitude, I will be a less attractive partner to make deals and commitments with.

(I didn't downvote your comment BTW. But I bet my worries are similar to those of whoever did.)

Replies from: shminux
comment by shminux · 2015-04-05T18:06:34.728Z · LW(p) · GW(p)

Ah, yes, connectedness is indeed what I meant. Thanks! My point was that, while legal commitments transcend connectedness, moral need not.

comment by Salemicus · 2015-04-03T12:23:58.680Z · LW(p) · GW(p)

The amount of fossil fuels extracted in a year is equal to the amount of fossil fuels burned in a year (give or take reserves, which will even out in the long run). So if fossil fuel extraction were reduced, CO2 emissions would be reduced, regardless of any taxes, cap-and-trade, alternative energy sources, etc that may or may not be in effect. Indeed, the only way that traditional environmental measures such as the above can reduce carbon emissions is if their effect on fossil fuel prices eventually causes less extraction.

Therefore it seems logical that the best way to reduce CO2 emissions is to pay fossil fuel extractors to reduce their extraction rate. This should not cost the extractors too much because they will still own the resources and will be able to monetise them eventually. But environmentalists do not favour such subsidies to e.g. Saudi Arabia and when I have brought up this suggestion to environmentalists they have looked at me funny and suggested the issue was complicated, but never provided any direct reason why this should be a bad idea. This makes me think I am missing something obvious, that this is a silly idea.

Is there academic literature on this or similar concepts? Why isn't this a good idea for reducing CO2 emissions?

Replies from: Luke_A_Somers, JoshuaZ, Dorikka, Toggle, ChristianKl, Epictetus
comment by Luke_A_Somers · 2015-04-03T14:58:19.571Z · LW(p) · GW(p)

If you pay Saudi Arabia to produce less, then someone else will produce more unless you pay them not to, too. And any of them could secretly overproduce the lower quota.

AND once you've lowered the supply, then the price will rise, making the number of potentially profitable oil-producing states rise, increasing the number of people you need to pay off, and increasing the amount you need to pay each one.

comment by JoshuaZ · 2015-04-03T13:17:26.971Z · LW(p) · GW(p)

The amount of fossil fuels extracted in a year is equal to the amount of fossil fuels burned in a year (give or take reserves, which will even out in the long run).

Fossil fuels are used for purposes other than burning. They are used in making plastic, in making fertilizer and in synthesizing chemicals as well.

comment by Dorikka · 2015-04-03T12:56:09.631Z · LW(p) · GW(p)

Deferment (producing later instead of now) is really costly if you are using a reasonable discount rate, so this plan would be quite expensive. I think your plan would also constrain supply, raising the oil/gas price and making the cost even higher.

If you want to ballpark costs, try deferring whatever fraction you like of US oil production for, say, 10 years. Try a discount rate of 7-8% and figure out the costs per year. I would assume an oil price of at least $80/bbl if you are trying to estimate costs on a reasonable timescale.
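A rough sketch of that ballpark (illustrative only; the ~4.5 million bbl/day figure and the treatment of deferment cost as a present-value gap are assumptions, not part of the comment):

```python
# Sketch: the cost of deferring production is the gap between revenue
# received now and the present value of the same revenue received later.

def deferment_cost(barrels_per_day, price_per_bbl=80.0,
                   discount_rate=0.075, years_deferred=10):
    annual_revenue = barrels_per_day * 365 * price_per_bbl
    present_value_later = annual_revenue / (1 + discount_rate) ** years_deferred
    return annual_revenue - present_value_later  # value lost per deferred year of output

if __name__ == "__main__":
    # e.g. deferring ~4.5 million bbl/day, roughly half of 2015 US crude output
    cost = deferment_cost(4_500_000)
    print(f"~${cost / 1e9:.0f}B of present value lost per year of deferred output")
```

At these assumed numbers the gap comes out to tens of billions of dollars per deferred year of output, before any price feedback.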

comment by Toggle · 2015-04-03T16:25:44.206Z · LW(p) · GW(p)

If nothing else, because it would be prohibitively expensive. Globally, something like 70 million barrels of oil are produced per day. The total value of all barrels produced in a year varies depending on the price of oil, but at a highish but realistic $100/bbl, you're talking about two and a half trillion US dollars per year. If you were to reduce the supply by introducing a 'buyer' (read: a subsidy to defer production) for some large percentage of those barrels, then the price would go even higher; this project would probably cost more than the entire global military budget combined, with no immediate practical or economic benefits.
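A quick back-of-the-envelope check of that figure, using only the numbers stated above:

```python
# Rough sanity check of the "two and a half trillion" figure quoted above.
barrels_per_day = 70_000_000
price_per_bbl = 100  # the "highish but realistic" price assumed above
annual_value_usd = barrels_per_day * price_per_bbl * 365
print(annual_value_usd / 1e12)  # 2.555, i.e. about two and a half trillion dollars per year
```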

comment by ChristianKl · 2015-04-03T13:16:46.915Z · LW(p) · GW(p)

The main thing you want to do when you want to reduce fossil fuel extraction is to outlaw fracking and make it harder by vetoing pipeline bills.

Paying Saudi Arabia to lower extraction rates while at the same time increasing fracking production makes no sense.

Replies from: DanielLC
comment by DanielLC · 2015-04-04T03:46:51.059Z · LW(p) · GW(p)

Paying Saudi Arabia to lower extraction rates while at the same time increasing fracking production makes no sense.

Are you saying that's true in general, or that it just so happens that Saudi Arabia drilling is more cost-effective than fracking?

Replies from: ChristianKl
comment by ChristianKl · 2015-04-04T10:58:59.603Z · LW(p) · GW(p)

I don't know what "true in general" means here.

Replies from: DanielLC
comment by DanielLC · 2015-04-04T17:51:23.626Z · LW(p) · GW(p)

It sounds like a thought-terminating cliche. Sort of like saying that we should solve all our problems on Earth before we start exploring space. If Saudi Arabia's marginal oil is less cost-effective than fracking, then it's better for them to stop extracting as much and for us to extract more. Are you trying to say that we should stop our own production first regardless, or that fracking has the lowest cost-effectiveness and we should worry about fracking before drilling?

Replies from: ChristianKl
comment by ChristianKl · 2015-04-04T22:48:40.543Z · LW(p) · GW(p)

Changing oil extraction rates is a complex political issue where price isn't the only variable that matters. Neither of the statements you made matches the one I made above.

comment by Epictetus · 2015-04-05T07:02:14.843Z · LW(p) · GW(p)

It costs a lot of money and only defers the problem. Extracting less coal and less oil doesn't do much to address increasing energy demands. You'll get some decrease when the price goes up from restricting supply, but once things stabilize it's going to continue rising.

Basically, you're temporarily reducing emissions without addressing the circumstances that brought about high emissions in the first place.

comment by hoofwall · 2015-04-11T11:33:03.763Z · LW(p) · GW(p)

Sorry, I've never been here before and know nothing about this place, and all the other "stupid questions" here seem super formal, so I feel really out of place, but: how common is it for the users on this site, the likes of whom likely all refer to themselves as rationalists, to be misanthropes?

I hate humans. I hate humans so much. I used to think I could change them. I used to think every human who exhibited behavior I found to be inferior was simply ignorant of true rationality. Mine is a very long story that I no longer want to tell, but it was months of thinking I could change every mind I found inferior before I came to the conclusion that humans are worthless and that they've simply devolved to the lowest common denominator, to the point where they retain not the capacity to grasp the objective breadth of rationality in this universe unless they lack the very things that make them human.

I have extremely strong opinions on everything I've cared to question, the likes of which I wish to express formally before I die but I hate humans so much. I wouldn't be doing it for the human. I am probably technically depressed at the moment and have been for a long time and was just wondering how many self-proclaimed rationalists consider themselves misanthropes, or at least exhibit misanthropic views...

Replies from: jimrandomh, monsterzero, hairyfigment, ZT5, None
comment by jimrandomh · 2015-04-12T05:05:58.434Z · LW(p) · GW(p)

I hate humans.

If this is representative of your usual conversation style, then everyone above a certain level of competence will correctly infer that they should avoid you. This will leave you with conversation partners that are far below average. Your other statements make me think that this has, in fact, happened.

I used to think I could change them.

This is a difficult skill. The first step, if you truly want to change someone, is to establish mutual respect. If they think you don't like them, they will resist all attempts to influence them. This is definitely the right strategic move on their part, and even if they don't think about strategy at all, their instincts and emotions will guide them to it. If you think that you should be able to convince people of things, with this style of writing or this style of writing translated into speech, then you have misunderstood the nature of social interaction and you need to study the basics with the humility of a beginner.

Replies from: hoofwall
comment by hoofwall · 2015-04-12T05:09:39.446Z · LW(p) · GW(p)

Mate, you can argument-by-assertion-fallacy your opinions all you want, but it appears to me that my opinions are correct, and that all conceivable dissenting opinions are incorrect, and I can explain why. If pure logic cannot convince others to think like me, then does it not mean that the only way to have them conform to what I want them to do is by compromising the integrity of my beliefs, and attempting to affect them via playing with their emotionality directly? That is irrational behavior. I would rather be correct and affirm my superiority the right way than try to manipulate people irrationally, because my rhetoric will fall on deaf ears.

comment by monsterzero · 2015-04-11T18:10:32.027Z · LW(p) · GW(p)

Our culture typically presents rationality as opposed to emotion; I believe that a disproportionate number of misanthropes are drawn to rationality for that reason.

However, logic is meaningless without having an underlying goal, and goals are generally determined by one's emotions. What are your goals?

I find that thinking of other people as inferior or irrational is not particularly helpful in accomplishing my objectives. I feel less stress and make more progress by recognizing that other people have different goals than I do.

It is possible to get others (even "irrational" others) to help you accomplish your goals by offering to help them with theirs.

Replies from: hoofwall
comment by hoofwall · 2015-04-11T23:00:31.297Z · LW(p) · GW(p)

Sorry, before I mention my personal goals I just want to say that I disagree with the notion that logic is meaningless without being founded on an underlying goal... Logic as I understand it is by definition merely a method of thinking, or the concept of sequencing thought to reach conclusions, and determining why some of such sequences are right. I believe logic in itself- according to the second definition I proposed- tends to the end of a goal, and that goal is rationality. Naturally, without having anything to sequence logic is nothing and has no breadth, but in this universe where the breadth of the construct "logic" is contingent on the human's ability to sequence data it should inherently have a goal, at least today as the human appears, and that goal should be rationality, in my opinion. I believe assuming your proposal is correct would mean assuming "logic" as you used it in your proposal is simply defined as a method of thinking, and not its more fundamental meaning, which I proposed.

My goal is simply to express in my lifetime my views on everything... I do not feel I can change the world. I do not feel I can simply approach every human I encounter and explain to them why I believe my opinions to be correct and all conceivable dissenting opinions to be wrong. I will just express myself in my own way one day and that will be it... I created an account on this website more or less randomly for me because I was recommended going here once, a while ago.

I do not believe that "stress" in itself is something to be considered when it comes to one's method of forcing the world to tend to the end they want to... I will explain what I mean. Please excuse any possible argument by assertion fallacies henceforth... converting everything to E-Prime is tiring, but I do believe opinions have to actually be defended to be rational... If I ever simply assert that I believe something is true, that is a mistake, as I meant to rationalize its breadth in its entirety to believe it has the capacity to be defended and inherently rebut all conceivable dissenting arguments...

Obviously, the human's understanding of rationality is a consequence of themselves, to some extent. That is not to say that rationality so defined is entirely a consequence of the human and that the human literally created a portion of this universe that retains the properties of "rationality"... What I mean is, humans appear to feel emotion, and humans appear to correlate their understanding of the concepts of "good" and "evil" to what they perceive to be positive and negative emotion, respectively. Fundamentally, every human who retains the standard properties of the human lives through their own emotionality and their idea of good and evil is founded on that very thing.

Ugh... I just realized if I expound my philosophy any further I will be affirming for the first time since posting here my opinions which many will probably disagree with but basically I think that "stress" if "stress" is defined as pain(negative emotion) entirely in the head, meaning it is simply perception, ascribing emotion to certain things and feeling pain as a result, it is entirely a consequence of perception and can be manipulated to become pleasure... Perhaps it will be a certain iteration of masochism, and perhaps actually enduring perceived stress in reality will have consequences on the outside world as distinguished from your own psyche, possibly prompting an entire lifestyle change but "stress" should be irrelevant if its properties can just be totally changed with a different opinion, in my opinion.

When it comes to me, I believe so strongly that all who disagree with me are wrong that it seems extremely unnecessary to saturate my believing their being wrong with something else in an attempt to make me cope with my own emotionality. I believe there are other ways to cope with oneself than compromising on one's own beliefs. I just correlate things to good or evil freely, at face value. I really wouldn't make progress insofar as inciting a revolution is concerned by tolerating what I believe to be wrong, either. Perhaps by "goals" you mean something other than forcing the world to tend to its most rational end as you perceive it.

About your last sentence: I don't believe in manipulating via anything other than argument to entice others to do as you wish... If it is something less than a true reason to think, which I believe can only be conveyed via argument of some sort, it will be blind conformity, and any society or standard based on that is doomed to conceive notions as worthless as the one it was founded on, making it inferior to what it could and, I believe, should be. Also, it's interesting that misanthropes are drawn to reason. I kind of expected it, but I've had bad experiences with self-proclaimed misanthropes retaining the human property I hate, rendering their sub-ostracization asinine in my eyes... I probably rambled a lot in this post, sorry. I don't know what type of reply I would expect to this, if any. Thanks for reading if you did.

comment by hairyfigment · 2015-04-11T18:35:42.346Z · LW(p) · GW(p)

Possibly I don't understand your situation ("devolved" doesn't make sense to me except as Star Trek syence, a word I just invented based on the name SyFy. It could be a more polite version of 'syfyces'.)

But I find it useful to remind myself that humans have no evolutionary reason to be perfectly rational. I tell myself that if any future I hope for comes to pass, the people there will see us (at the present time) as particularly foolish children who, rather horribly, age and die before growing up.

Replies from: hoofwall
comment by hoofwall · 2015-04-11T23:09:54.858Z · LW(p) · GW(p)

Sorry, I suppose I misused the word "devolve"... I've seen others use it as I have in my post here, so I thought it was okay, but I suppose not. Perhaps they misused it, and if so I should not be tolerating the arbitrary and blatant misuse of words. What I meant by that word, though, was simply falling in stature. I used the word to express that I believe humans have fallen in stature to the point that they cannot fall any further, and that the humans who roam the earth today will continue to breed and forge the world they want without changing very much in the next few generations, if ever.

I just realized this site has a quoting feature. That makes responding to posts SO much easier...

But I find it useful to remind myself that humans have no evolutionary reason to be perfectly rational.

Yes... I believe the same thing. One does not have to provide to anything a rational reason to copulate, and to breed. One does not need to provide to anything a rational reason to live, to kill, to force the world to tend to the end one wants, or anything. Humans appear to simply do. Naturally, through generations of the human simply doing, and doing as they please, they have perhaps become incapable of actually questioning whether or not simply doing is right, but what do I know? This is just a theory, and not one I can prove with sheer logic. Even if I fancied doing so it would be a waste of time... It would be a far better use of my effort to simply deduce and affirm what it means to be right, and what it means to be wrong. Whether or not the human has the capacity to truly be rational, and what caused rationality and being human to be mutually exclusive if they are, can be questioned later...

comment by Victor Novikov (ZT5) · 2015-04-11T17:46:06.714Z · LW(p) · GW(p)

I self-describe as a rationalist and I don't like humans that much at all. Don't know how common this is.

I like humans well enough when:
- I can have a sensible interaction with them
- Or they are willing to accommodate my needs without needing an explanation for everything
- Or I can manage their irrationality with a strategy that has a low cost to myself

Otherwise, I don't like humans very much or at all. Maybe disappointed? I wouldn't say hate (though the thought does come up).

I have been depressed. I've learned to deal with it, and I don't feel I'm depressed now, though I am probably at risk for depression.

Mostly I try to do things for myself. And to put myself in a position where I won't depend on any individual human for anything vital, and to have resources for as much self-reliance as possible.

comment by [deleted] · 2015-04-11T15:51:27.859Z · LW(p) · GW(p)

[Do you self-identify as a misanthrope:]{yes}{no}{just show me the results}

comment by philh · 2015-04-07T15:59:19.717Z · LW(p) · GW(p)

If I cook a fixed amount of raw rice (or couscous, or other things in that genre) in a variable amount of water, what difference does the amount of water make to calories, nutrition, satiety, whatever?

For example, if I want to eat fewer calories, could I cook less rice in more water to get something just as filling but less calorific?
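One way to make the calorie part concrete: the water itself contributes no calories, so the total calories are fixed by the dry rice, and extra water only lowers the calories per gram of the cooked dish (whether that actually helps satiety is the separate question the replies get into). A rough sketch, assuming a typical label value of about 365 kcal per 100 g of dry white rice and that the rice absorbs essentially all of the cooking water:

```python
# Rough sketch: more water doesn't change total calories, only energy density.
# Assumes ~365 kcal per 100 g of dry white rice (typical label value) and that
# essentially all of the cooking water ends up absorbed.

DRY_RICE_G = 100
KCAL_PER_100G_DRY = 365  # assumed label value

def cooked_energy_density(water_g):
    """kcal per gram of cooked rice, if all the water is absorbed."""
    total_kcal = DRY_RICE_G / 100 * KCAL_PER_100G_DRY
    cooked_mass = DRY_RICE_G + water_g
    return total_kcal / cooked_mass

for water in (150, 200, 300):
    print(f"{water} g water -> {cooked_energy_density(water):.2f} kcal/g cooked")
# Total calories stay at ~365 kcal either way; only the kcal/g of the
# finished dish changes.
```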

Replies from: kalium, Illano, polymathwannabe
comment by kalium · 2015-04-12T20:11:35.114Z · LW(p) · GW(p)

This doesn't answer your question, but if you conclude that adding water is likely to make rice more filling per calorie (I have no idea whether it will), the dish you want is called congee, and searching for that should yield many delicious recipes.

comment by Illano · 2015-04-08T18:50:29.388Z · LW(p) · GW(p)

I don't know about varying the amount of water. But if you want to eat fewer calories of rice, there was an article that came out recently saying that the method you use to prepare it could affect the amount of calories your body actually absorbed from it.

comment by polymathwannabe · 2015-04-07T18:14:59.940Z · LW(p) · GW(p)

More water will also absorb a greater portion of water-soluble vitamins.

Replies from: philh
comment by philh · 2015-04-07T20:58:01.025Z · LW(p) · GW(p)

Does that mean I get more vitamins (e.g. because the vitamins were biologically unavailable in the rice, but available in the water) or fewer (e.g. because the reverse, or if a significant amount of water boils off)?

Replies from: kalium, polymathwannabe
comment by kalium · 2015-04-12T20:12:17.046Z · LW(p) · GW(p)

Water loss through boiling shouldn't make a difference, as the vitamins are not volatile and will not boil off with it.

comment by polymathwannabe · 2015-04-07T21:30:44.833Z · LW(p) · GW(p)

I'm not sure. The rice is supposed to absorb (most of) the water you cook it in, which complicates giving an answer.

to get something just as filling but less calorific?

I hear shirataki was invented specifically for that purpose.

comment by EphemeralNight · 2015-04-06T06:59:05.603Z · LW(p) · GW(p)

Okay, so I have a "self-healing" router that ostensibly reboots itself once a week to "allow channel switching" and to "promote network health", and given that this seems to NOT mess up my internet access in one of several ways every tuesday morning only MOST of the time, it has been causing me stress absurdly out of proportion with the actual danger (of being without internet access/my ONLY link to the outside world, for a short time).

So, my question is, what the HECK does "channel switching" or "promoting network health" even mean, and is it actually important enough that I shouldn't just flat out disable my router's "self-healing" feature?

Replies from: ChristianKl, sixes_and_sevens
comment by ChristianKl · 2015-04-06T11:29:05.102Z · LW(p) · GW(p)

In Germany, most internet connection contracts have a clause that requires the connection to be re-established regularly, which gets you a new IP address. It's in the contracts because the changing IP address makes it harder to run a server behind a home connection.

The advantage of a changing IP address is that it's a lot harder for random websites to track you.

It makes sense for the router to choose a time in the night when the connection isn't being used to do the reconnecting. Otherwise the ISP would choose the timing on its own, which might be worse.

If your router does this when you aren't sleeping, however, see if disabling the feature helps.

Replies from: EphemeralNight
comment by EphemeralNight · 2015-04-06T12:47:10.995Z · LW(p) · GW(p)

I think you may have misunderstood. I'm talking about my router, which is a separate device from my modem. I have never observed the router rebooting to fix a problem, and have on several occasions observed the reboot to cause a problem. I just want to know if there is something nonobvious going on that will cause problems if the router does not reboot once a week, keeping in mind that it is a separate device from the cable modem.

comment by sixes_and_sevens · 2015-04-11T14:08:25.501Z · LW(p) · GW(p)

"Channel-switching" is referring to the wireless channel. Modern wireless routers will "intelligently" select a wireless channel to communicate over, taking into account features of their environment. For example, if there's high competition with other wireless transmissions on one wireless channel, they'll switch to a less contested one.

"Promoting network health" is a bit of a nebulous thing to say about a home network served by a single router. As a pragmatic observation, rebooting a router can solve a variety of problems it might be experiencing. Most home users can't distinguish local machine problems from problems with the network, and automatic periodic rebooting of the router probably prevents a lot of support calls. If you're happy with rebooting your own router as and when you see fit, I don't see why you shouldn't turn this feature off.

comment by [deleted] · 2015-04-14T12:26:25.295Z · LW(p) · GW(p)

Do other people like clothes when they buy them and dislike them after putting them in the wardrobe? I mean, I personally think this is true for myself because my relatives like to give me clothes as easy gifts, and I always feel like they remind me that I am a child, which is why I learned to smile and say thank you regardless of whether I like something. Lately, when I have to buy something for myself, I just wander and use the Force. How do you unlearn such a habit (it seems wasteful and ungrateful not to accept a gift, but also wasteful and stupid not to learn to choose)?

Replies from: ChristianKl
comment by ChristianKl · 2015-04-19T20:21:36.397Z · LW(p) · GW(p)

The path I see is to develop more specific preferences and to articulate why you prefer certain clothing over other clothing. Tell your relatives what kind of clothing you like.

You can even say, "I have made a decision to move to a new style...", if you don't want to make them feel bad about past gifts.

The clothes I pick often depend on my mood and on the social context I expect during the day. If the emotional state in which you buy clothes radically differs from the state in which you choose clothes from your wardrobe, that can lead to a disconnect.

Have you analysed why you prefer certain clothing over other clothing?

Replies from: None
comment by [deleted] · 2015-04-21T06:39:22.168Z · LW(p) · GW(p)

Thank you, I'll try. I prefer warm to pretty in winter, and khaki/colourful (depending on whether I am with my kid or not) to neat. I prefer pants to skirts. Generally, I like only a narrow subset of how I can look, and get annoyed when people tell me to be more flexible. My thinking goes like 'can't they see I have already defined myself and have no wish to follow aging conventions?' (I'm 29.) It's a bug and not a feature, but I find other people's clothing so... hopeless, I guess, so muted, that I can't remember the last time I envied someone.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-21T11:04:08.878Z · LW(p) · GW(p)

I like only a narrow subset of how I can look, and get annoyed when people tell me to be more flexible.

If that's true, it's likely that you could be more specific than pants > skirts and khaki/colorful.

On the other hand, those seem like pretty straightforward rules for your relatives: don't gift her skirts, and make sure it's either khaki or colorful.

comment by skilesare · 2015-04-07T02:37:02.559Z · LW(p) · GW(p)

We have an economic system with N actors. Each actor has its own utility function that it uses to try to spend/invest money in areas that will grow. The system as a whole doesn't know these functions and the nodes can't see them internally. They just make a judgment and spend/invest. If they spend in an area that grows, more money comes back to them via an agent in the system that redistributes cash, as it flows, to the originators of the cash in a node.

For example, if N1 pays a dollar to N2 for a bottle of wine, N1 gets a share in N2. As cash flows through N2, little bits get funneled back to N1. So if N2 becomes the next big wine maker, many bits will flow to N1, and it will be rewarded for having sent money to N2 early on.

Does it follow from Bayes' theorem that if I keep passing cash through this system, then over time the measured success rate will oscillate around the actual success rate of each node's utility function? In this scenario, if you fail you get your cash back slowly over time; if you succeed you get it back more quickly.

I'm anticipating that a set of actors in this situation would end up in an economy where the level of wealth of each node converges on its true ability to create value.

If I'm totally misinterpreting, I'd love some pointers to good info to read.
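To make the setup easier to poke at, here is a deliberately toy sketch of the mechanism described above, with every parameter (number of actors, backflow share, growth multiplier) invented for illustration: each actor has a hidden "hit rate", a good spend grows the recipient's inflow, and a fixed slice of every inflow is routed back to earlier payers in proportion to what they spent. It doesn't settle the convergence question, but it is the kind of model one could experiment with:

```python
# Toy sketch of the backflow ("pref") idea: hidden per-actor hit rates, value
# creation on good spends, and a fixed slice of every inflow routed back to the
# recipient's earlier payers. All parameters are assumptions for illustration.

import random

random.seed(0)

N_ACTORS = 20
ROUNDS = 2000
BACKFLOW = 0.05       # share of each inflow routed back to earlier payers (assumed)
GROWTH = 1.5          # multiplier when a spend "creates value" (assumed)
SPEND_FRACTION = 0.1

skill = [random.uniform(0.1, 0.9) for _ in range(N_ACTORS)]  # hidden hit rates
cash = [100.0] * N_ACTORS
shares = [dict() for _ in range(N_ACTORS)]  # shares[j][i] = total i has spent into j

def receive(j, amount):
    """Credit node j with an inflow, routing a slice back to j's earlier payers."""
    total = sum(shares[j].values())
    kickback = BACKFLOW * amount if total > 0 else 0.0
    cash[j] += amount - kickback
    for payer, s in shares[j].items():
        cash[payer] += kickback * s / total

for _ in range(ROUNDS):
    i, j = random.randrange(N_ACTORS), random.randrange(N_ACTORS)
    if i == j or cash[i] <= 0:
        continue
    spend = SPEND_FRACTION * cash[i]
    cash[i] -= spend
    shares[j][i] = shares[j].get(i, 0.0) + spend
    # a "good" spend creates extra value at the recipient
    inflow = spend * (GROWTH if random.random() < skill[i] else 1.0)
    receive(j, inflow)

ranked = sorted(range(N_ACTORS), key=lambda k: cash[k], reverse=True)
print("hit rates of the five richest:", [round(skill[k], 2) for k in ranked[:5]])
print("hit rates of the five poorest:", [round(skill[k], 2) for k in ranked[-5:]])
```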

Replies from: ChristianKl, gjm
comment by ChristianKl · 2015-04-08T19:27:58.760Z · LW(p) · GW(p)

If N2 basically has to pay a tax on money being channeled through N2 after the node has been in use for a while, why doesn't it instead create an N3 node to use as a conduit for payments?

Replies from: skilesare
comment by skilesare · 2015-04-08T22:13:45.432Z · LW(p) · GW(p)

Great question. The 'bits' in the system I'm proposing are based on a system-wide demurrage, or 'decay rate', on currency. Simply switching to a different node doesn't change the decay on cash you hold, so there isn't an incentive to create a new node. On the positive side, existing customers have a loyalty factor: N1 will be more likely to buy the same commodity from N2 than from a random Nx. This behavior has a limited life, though, because diminishing returns eventually catch up, and suddenly the benefit of being one of the first contributors to Nx is greater than the loyalty to N2.

This gives a lifespan to legal entities and increases turnover, thus increasing the likelihood of more fit entities emerging (if you assume that entities can share information across generations).

You basically get the attractiveness of youth, the steadiness of adulthood, and the slow decline to oblivion (and with this an increased incentive to figure out immortality by creating enough value to outrun the diminishing returns).

Replies from: ChristianKl
comment by ChristianKl · 2015-04-09T11:16:01.621Z · LW(p) · GW(p)

I don't think the question you asked above is answerable at the level of detail you're speaking at. And I don't think what you're saying is true.

It's quite hard to believe that "There isn't an incentive to create a new node" and "younger companies offering equal goods and services will become more attractive to the general public than old established corporations" can both be true.

You also say "If someone pays from a Hypercapital account to your hypercapital account, there is no fee," and yet you say your system is built on Bitcoin, which does include fees.

Replies from: skilesare
comment by skilesare · 2015-04-09T13:35:54.114Z · LW(p) · GW(p)

I did ask it in the stupid questions thread. :)

I think that both can be true and yet still have real results. Take humans, reproduction, and marriage. Typically a man is fertile for more years than a woman. We see in marriage a tension between staying loyal to the wife of one's youth and moving on to a more fertile partner. I don't have statistics in front of me, but over history the tendency is to stay loyal. Patrimonialism has a profound evolutionary basis, and my theory is that you can use that built-in bias to form a sustainable system where legal entities have life spans instead of immortality. If the life span is too short, then it is useless.

As far as the fees go, Bitcoin's fees are non-zero but very close to zero, and many alternate payment schemes can be constructed. Typical CC transactions are 3%... much higher than the roughly $0.05 needed for a BTC transaction. There are also ways to convince miners to mine your transactions even though no BTC fees are provided.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-09T14:21:57.537Z · LW(p) · GW(p)

I don't have statistics in front of me, but over history the tendency is to stay loyal.

If you look at how companies try to evade paying taxes, that's a bad assumption. Companies usually do whatever they can to legally avoid paying taxes, instead of paying more tax than necessary out of loyalty to the government.

As far as I can tell from your current setup, all the "pref" seems to flow back from N2 to N1 when decay is paid. The person who owns N2 can create an N3 and transfer all the money from N2 to N3. That way N2 never pays any decay fees, and N2 gets part of the decay fees that N3 pays and can refunnel them to N3.

As far as the fees go, Bitcoin's fees are non-zero but very close to zero

There are people who argue that bitcoin fees should be $0.41 per transaction (http://www.coindesk.com/new-study-low-bitcoin-transaction-fees-unsustainable/). Even the 4 cents that currently exist can still matter.

While the fees might be less than average CC transaction fees, they are not zero. Claiming that they are zero suggests that you are not clear about how bitcoin works at that level.

many alternate payment schemes can be constructed

Yes, Ripple manages to work with much lower fees, but you seem to want to use a blockchain-based model.

Replies from: skilesare
comment by skilesare · 2015-04-09T14:56:58.249Z · LW(p) · GW(p)

I've tried to set up a system where tax avoidance is reduced or eliminated. Because the transaction system will reject transactions that don't pay the fee when people use their cash, they are stuck with the decision to participate in the system or not. Once the cash is in the system, they must pay the tax or the tax will be taken from them (using BTC multi-sig, where the decay-charging authority is held accountable to charge the fee only on delinquent accounts).

N2 can certainly set up Nx and move all the cash over there. Let's use a real example.

N1 spends $100 with N2. N2 wants to avoid the decay (but the system always charges at least one day of decay during a transaction), so they move the cash to Nx. The transaction occurs and $0.003 goes back to N1. Now the cash is in Nx. What are they going to do with it there? If they let it sit for 30 days they will be auto-charged a decay fee of about $0.10. This flows to N2.

Even if N2 is proactive and sends it on to N3 immediately, $0.0000032 will flow back to N1. A small amount to be sure, but over time these small amounts add up.

And if Nx uses the cash to develop something that brings in far more cash than went in, the amounts get much bigger.

That is beside the point, though, because we want to avoid entirely the situation where N2 tries to devalue N1's benefits by passing cash to a shell corporation Nx.

Nothing can keep someone from just passing cash on and on and on to accounts it owns except rule of law and accountability. Accountability can be observed in the blockchain and bad actors identified. Rule of law comes later. (I try to cover this in STH. Statutory Theft - https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/the_pattern_language/sth_statutory_theft.md )

Re: fees - I don't have a great solution to this other than offering miners a share of future pref payments for any mined items that they charge no fee for. This involves them taking on risk, but it also provides substantial long-term rewards.

All of this goes much deeper than the original question, which I think is now best framed as: 'does having a backflow of cash based on the amount spent enhance the information we can get out of an economic system, compared to the standard capitalistic model of today?' If we add too many things in, we end up in a conjecture-bias situation.

Once I answer the first question in the affirmative, I can move on to whether the implementations of the system are rational or not. If achieving the former is a priority, there likely exists an implementation that can achieve it. At least, I think so.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-09T19:36:32.660Z · LW(p) · GW(p)

(I try to cover this in STH. Statutory Theft - https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/the_pattern_language/sth_statutory_theft.md )

You don't say anything about who is supposed to have the power to enforce that statute.

It's also not quite clear in what way having a shell corporation is illegal in your system. Even if you have a fixed rule that a single individual can only own one node, people can move money to their family.

Even if N2 is proactive and sends it on to N3 immediately, $0.0000032 will flow back to N1. A small amount to be sure, but over time these small amounts add up.

Have you done any math to show that they add up?

Also in a system like Bitcoin where it costs $0.04 to do a transaction, are you sure you can transfer $0.0000032 effectively?

That is beside the point, though, because we want to avoid entirely the situation where N2 tries to devalue N1's benefits by passing cash to a shell corporation Nx.

All of this goes much deeper than the original question, which I think is now best framed as: 'does having a backflow of cash based on the amount spent enhance the information we can get out of an economic system, compared to the standard capitalistic model of today?' If we add too many things in, we end up in a conjecture-bias situation.

Economic systems work by their agents trying to maximize returns. That means that if there is a way in your system to maximize returns that you didn't anticipate, a calculation based only on the ways you did anticipate is worthless.

If you want to have a mathematical answer you have to be clear about your assumptions.

Replies from: skilesare
comment by skilesare · 2015-04-09T21:57:36.027Z · LW(p) · GW(p)

You don't say anything about who is supposed to have the power to enforce that statute.

I say a lot about it in my book. The system relies on Rule of law: https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/the_pattern_language/law_rule_of_law.md

And yes, we limit citizens to one account, and legal entities and governments have different kinds of accounts with different restrictions.

https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/hyper_capitalism/citizen_accounts.md https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/hyper_capitalism/legal_entity_accounts.md https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/hyper_capitalism/state_accounts.md

Have you done any math to show that they add up?

I've run a computer model in a closed system. I present the results here: https://vimeo.com/user17783424/review/115279592/1bb88f885d

Also in a system like Bitcoin where it costs $0.04 to do a transaction, are you sure you can transfer $0.0000032 effectively?

Yes. It can just be a few satoshis to one output, with the rest (the bigger values) going somewhere else. If the amounts are too small they can be kept off-chain.

Economic systems work by their agents trying to maximize returns. That means that if there is a way in your system to maximize returns that you didn't anticipate, a calculation based only on the ways you did anticipate is worthless.

Thus the need to experiment and try to blow the thing up. I agree 100%.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-09T23:11:15.626Z · LW(p) · GW(p)

And yes, we limit citizens to one account, and legal entities and governments have different kinds of accounts with different restrictions.

If a citizen creates a legal entity, doesn't he effectively get a second account?

The wine seller creates a legal entity, "wine shop", and transfers the money from it into his citizen account whenever the shop gets any money.

Yes. It can just be a few satoshi's to an output with the rest(the bigger values) going somewhere else. If the amounts are too small they can be kept off chain.

Of course you can transfer a few satoshis. On the other hand, that doesn't save you from paying bitcoin fees. The bitcoin blockchain is incapable of doing cheap micropayment transactions.

A corporation may choose to issue citizen accounts to anyone alive. We think that geographic restrictions still make sense, but they are not a requirement.

That sounds like a corporation could issue a citizen account to someone who already has an account.

In general if you do have to trust a government to enforce rule of law, why use the expensive bitcoin system where trust relies on the blockchain?

I've run a computer model in a closed system. I present the results here: https://vimeo.com/user17783424/review/115279592/1bb88f885d

The assumption that the businessman doesn't do anything with his money is unrealistic. It also doesn't make sense to assume a 3-person economy. It would make more sense to run a model economy with 10,000 participants and explicit assumptions about how the market participants interact with each other, via an open Python script, including a miner who gets his $0.04 for every transaction.

Replies from: skilesare
comment by skilesare · 2015-04-10T01:28:02.573Z · LW(p) · GW(p)

If a citizen creates a legal entity, doesn't he effectively get a second account?

The wine seller creates a legal entity, "wine shop", and transfers the money from it into his citizen account whenever the shop gets any money.

Yes... this is possible. I would expect a legal entity to pay its employees. The legal entity also benefits from what is paid out to employees. The string of accounts, and how many links away an account is, will be a short-term concern but not a long-term concern. In addition, wine buyers can hold the wine maker responsible for making this decision if they feel it is defrauding them in some way. The theory is that the market will tend toward vendors that use cash to invest further in the industry vs. immediately shifting money out of the industry.

Of course you can transfer a few satoshis. On the other hand, that doesn't save you from paying bitcoin fees. The bitcoin blockchain is incapable of doing cheap micropayment transactions.

I can put together some transactions I guess... but I promise it is possible.

Transaction 1:
Inputs: 0xA $5.02, 0xSystem $0.04
Outputs: 0xB $5.00, 0xDecayAuthority $0.02, 0xMinerFee $0.04

Later that day:

Transaction 2:
Inputs: 0xDecayAuthority $15.04
Outputs (this one has 10 outputs): 0xA' $0.02, 0xn1...n99 $14.98, 0xMinerFee $0.04

That sounds like a corporation could issue a citizen account to someone who already has an account.

When I first wrote this I was envisioning a system that anyone could implement, with possibly a number of entities setting up different currencies.

In general if you do have to trust a government to enforce rule of law, why use the expensive bitcoin system where trust relies on the blockchain?

Only because the banking system is more expensive and because there is a significant amount of technology that allows the actual transfer of real value on BTC. I'd prefer to have a proprietary system, but that takes more time and more money. This system is based on the existence of a public ledger, and BTC has one of those.

The assumption that the businessman doesn't do anything with his money is unrealistic. It also doesn't make sense to assume a 3-person economy. It would make more sense to run a model economy with 10,000 participants and explicit assumptions about how the market participants interact with each other, via an open Python script, including a miner who gets his $0.04 for every transaction.

Yes. This was my first computer model. I did a second one that added a government and taxation. The next step is to write some much more detailed Monte Carlo simulations with many more actors that make rational and irrational decisions and do their best to sink the economy.

Thanks for the comments; they really do help me see what needs reinforcement and which ideas are weak.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-10T11:30:07.946Z · LW(p) · GW(p)

The string of accounts, and how many links away an account is, will be a short-term concern but not a long-term concern.

Why? The one-day decay of 10%/365 doesn't do much. Transactions themselves aren't strongly taxed. What's taxed is letting money sit, and those taxes go one level back in the chain.

Outputs (this one has 10 outputs): 0xA' $0.02, 0xn1...n99 $14.98, 0xMinerFee $0.04

Transactions cost fees based on the amount of data they contain. If you batch up 10 transactions into 1 transaction, the cost doesn't become the cost of a single small transaction.
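To illustrate the size-based fee point with rough rule-of-thumb numbers (roughly 10 bytes of overhead, ~148 bytes per input and ~34 bytes per output for a legacy transaction, and a made-up fee rate): batching ten payments is cheaper than ten separate transactions, but still well above the fee for one small transaction:

```python
# Rough sketch: bitcoin fees scale with transaction size, not with the number
# of payments. Size figures are the usual rule-of-thumb estimates for legacy
# transactions; the fee rate is an assumption, not a quote.

def approx_size_bytes(n_inputs, n_outputs):
    return 10 + 148 * n_inputs + 34 * n_outputs

FEE_RATE = 50  # satoshis per byte (assumed)

def fee_satoshis(n_inputs, n_outputs):
    return approx_size_bytes(n_inputs, n_outputs) * FEE_RATE

print(fee_satoshis(1, 2))       # one payment plus change
print(fee_satoshis(1, 11))      # ten payments plus change, batched into one tx
print(10 * fee_satoshis(1, 2))  # the same ten payments sent separately
```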

Only because the banking system is more expensive and because there is a significant amount of technology that allows the actual transfer of real value on BTC.

The banking system can internally move money for nearly zero fees. Stocks get traded in a way where a 0.1% transaction tax would have major repercussions.

Our banking system costs money because it does things like fraud protection. Anybody can charge back any credit card payments made with their card.


This whole discussion reminds me of how Eliezer interacts with people who put forward AGI designs. When the AGI designer is vague, it's impossible to show specifically how the AGI will take over its own utility function. When it comes down to the math and Eliezer shows them how the AGI will overtake its utility function, the person just says, "Well, that's not exactly what I meant..." and they are never really convinced.

I'm not certain that what you propose can't work, but it seems you are making a lot of assumptions that things will just work out, without having thought it through on a deeper level.

Replies from: skilesare
comment by skilesare · 2015-04-10T12:12:32.964Z · LW(p) · GW(p)

Any suggestions on how to think about it at a deeper level? I'm new around here, just trying to get my head around some of these ideas.

A couple of points: if your one-year decay rate is 12% and you have $1 billion in the system, you will decay $120 million that will flow back into the system. Yes, it will matter how close you are to the nodes of activity; that is the point. This updates our decision function when engaging in commerce. The question isn't "is this the most affordable apple?", it is "is this the best apple, made by the best process, in a way that will lead to the most value for future apples?"

I still have other options to entice the mining of my transactions, potentially more attractive than fees.

I'd be more than happy to engage the existing banking system, and with unlimited capital I wouldn't involve Bitcoin. But the cost of getting from zero to transaction 1 on BTC is 1000x less than getting from zero to 1 in the banking system. The State of Texas wants $20 million in reserve to even sniff at your banking application.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-10T12:26:35.905Z · LW(p) · GW(p)

Any suggestions on how to think about it at a deeper level? I'm new around here, just trying to get my head around some of these ideas.

I think writing down a Monte Carlo model helps to make the modeling decisions explicit. Ideally you do it in a form where the model is easy for other people to modify.

You could run a tournament where people can submit bots that act in the economy to maximize their returns.
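A bare-bones skeleton of that kind of tournament might look like the sketch below. The economics are deliberately trivial (decay simply disappears here instead of flowing back as pref, and recipients are chosen at random), and every rate and strategy is invented for illustration; the point is only the harness into which the real rules and the submitted bots would be dropped:

```python
# Skeleton of a bot tournament: each submitted bot decides how much to spend
# each day, a flat miner fee is charged per transaction, and idle balances pay
# a decay charge. All rates, fees, and strategies are assumptions.

import random

MINER_FEE = 0.04
ANNUAL_DECAY = 0.12
DAYS = 365

class HoarderBot:
    """Never spends; only pays decay."""
    def spend(self, balance, day):
        return 0.0

class SpenderBot:
    """Spends a small fraction of its balance every day."""
    def spend(self, balance, day):
        return 0.05 * balance

def run(bots, start_balance=1000.0, seed=0):
    random.seed(seed)
    balances = [start_balance] * len(bots)
    for day in range(DAYS):
        for i, bot in enumerate(bots):
            amount = min(bot.spend(balances[i], day), balances[i])
            if amount > MINER_FEE:
                balances[i] -= amount
                # recipient chosen at random; they receive the payment minus the fee
                j = random.randrange(len(bots))
                balances[j] += amount - MINER_FEE
            # daily decay on whatever is left sitting idle
            balances[i] -= balances[i] * ANNUAL_DECAY / 365
    return balances

print(run([HoarderBot(), SpenderBot(), SpenderBot()]))
```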

I'd be more than happy to engage the existing banking system, and with unlimited capital I wouldn't involve Bitcoin.

Bitcoin invests a lot of resources into not needing to trust any single entity. That's why its transactions are much more expensive than Ripple transactions.

If you want to trust a central authority to uphold the law anyway, then it's likely beneficial not to go via the bitcoin trust model and to have cheaper transactions instead.

Replies from: skilesare
comment by gjm · 2015-04-07T12:24:20.630Z · LW(p) · GW(p)

I'm not sure I've correctly understood your question, but it's hard to see how anything much like that could follow from Bayes' theorem on its own.

Replies from: skilesare, skilesare
comment by skilesare · 2015-04-07T14:56:46.864Z · LW(p) · GW(p)

Question updated. Is it any clearer?

Replies from: gjm
comment by gjm · 2015-04-07T22:41:41.939Z · LW(p) · GW(p)

Still doesn't seem like the thing that Bayes alone could possibly answer. It seems more like a question about differential equations or dynamical systems or something of the kind. All Bayes' theorem tells you is the relationship between certain conditional probabilities.

Replies from: skilesare
comment by skilesare · 2015-04-07T23:13:16.520Z · LW(p) · GW(p)

I guess the stupid question is: does it follow from Bayes that if you keep measuring the same probability over and over, you will converge on the 'actual' probability?

Replies from: gjm
comment by gjm · 2015-04-07T23:47:21.516Z · LW(p) · GW(p)

That's more like the Bernstein-von Mises theorem, I think. But that only applies if what you're doing is actually Bayesian updating, and it's not obvious to me that that's necessarily happening in the system you describe. (The actors might happen to be doing that, but you haven't said anything about how they make their decisions. Or there might be some more "automatic" bit of the system that's equivalent to Bayesian updating -- e.g., maybe some of the money flows might adjust themselves in ways that correspond to Bayesian updating -- but I don't see any reason to expect that from what you've said.)
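For what that convergence looks like in the simplest case, here is a minimal Beta-Bernoulli sketch, the textbook conjugate-updating example that Bernstein-von Mises generalises. It only shows that explicit Bayesian updating homes in on a fixed true rate; it says nothing about whether the economic system above actually performs updates like this:

```python
# Minimal illustration of Bayesian convergence: Beta-Bernoulli updating, where
# the posterior mean of a success probability drifts toward the true rate as
# observations pile up. The true rate and prior are assumptions.

import random

random.seed(1)
true_p = 0.3          # the "actual" success rate of one actor (assumed)
alpha, beta = 1, 1    # uniform Beta(1, 1) prior

for n in range(1, 10001):
    if random.random() < true_p:   # observe one success or failure
        alpha += 1
    else:
        beta += 1
    if n in (10, 100, 1000, 10000):
        print(n, round(alpha / (alpha + beta), 3))  # posterior mean
```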

Replies from: skilesare
comment by skilesare · 2015-04-08T02:52:54.485Z · LW(p) · GW(p)

This was really helpful and gives me some great stuff to look at.

Thank you.

My theory is that actors in an economy spend cash on things, and some of those things produce lasting value in the economy and some don't. Each actor's probability of making a valuable choice that leads to overall growth is unknown. If we reward those that make a valuable choice with fresh cash, they then have the opportunity to succeed or fail again. If we do this over and over, the 'right' probabilities will emerge and we will see who the 'best spenders' are by who has the biggest rewards flowing back.

We optimize for value creation and in the long run have a system with better and better information.

comment by skilesare · 2015-04-07T14:29:27.843Z · LW(p) · GW(p)

That is interesting... what do you mean by 'on its own'? Are there some other things that affect the application of Bayes to a system?

Let me think about reworking the question now that I'm not on an iPhone.

comment by [deleted] · 2015-04-03T14:44:43.383Z · LW(p) · GW(p)

Duhigg's The Power of Habit is great but very hard to use. The idea is to keep the trigger, keep the reward, but change the action that leads to the reward. But it is not trivial to find less harmful or more helpful actions that lead to the same rewards. Can we try to make a list together? E.g. e-cigs, non-alcoholic beer, and similar ideas.

The stupid part is how incredibly hard it is to come up with replacements that in hindsight seem extremely "duh". I mean, people are still buying the sugared version of Coke, not the Zero, right? Probably more ugh field than cognitive difficulty, still.

Replies from: ChristianKl, None, Dorikka
comment by ChristianKl · 2015-04-03T16:24:41.392Z · LW(p) · GW(p)

I mean people are still buying the sugared version of Coke not the Zero, right?

It's not a straightforward case:

In one study, published in 2008 in the journal "Obesity," researchers studied more than 5,000 participants for up to eight years and found that the waistlines of people who drank diet soft drinks increased 70 percent more than people who didn't drink diet soda.

Replies from: None
comment by [deleted] · 2015-04-09T09:18:46.671Z · LW(p) · GW(p)

Yes, but you know the "with fries and make it large, but diet coke, I am trying to lose weight tee hee hee" stereotype, right? :) Usually diet coke is drunk by people who are fighting their unhealthy habits, as it seems people who always had healthy ones are more content with water.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-09T10:51:19.828Z · LW(p) · GW(p)

The fact that the stereotype exists doesn't mean that the strategy works. It only shows that the marketing works.

comment by [deleted] · 2015-04-04T16:54:57.438Z · LW(p) · GW(p)

It should really depend on what is to be replaced; it's difficult to think of examples otherwise. Maybe make yourself a special cup of tea: like, it is five o'clock, I shall drink this, my very favourite cup of tea, with lemon and 1 1/5 lumps of sugar, on my balcony, and count the day as a win? :)

comment by Dorikka · 2015-04-04T07:46:55.726Z · LW(p) · GW(p)

I mean people are still buying the sugared version of Coke not the Zero, right?

Not sure that artificial sweeteners are ok for humans. Specifically, at least one of my family members is allergic to aspartame (sp?), so I tend to consider the stuff more dangerous than sugar, which I at least know I can metabolize with fairly predictable effects.
