Comments

Comment by 50lbsofstorkmeat on Rationality Quotes Thread November 2015 · 2015-11-18T14:45:26.836Z · LW · GW

FFS, the bank makes a profit in every example provided. I don't want to say that you obviously didn't read the post, but I honestly can't see how you could have come to post this comment otherwise.

Loans are a service. Loans with gentle defaults are a more desirable service. Those seeking loans would often purchase such services preferentially, and at a premium profitable to the bank, if they were available or if asking for them were socially acceptable. Laws should be passed to encourage banks to make such offers.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread November 2015 · 2015-11-17T13:42:58.761Z · LW · GW

An example might be an auto loan with a clause ensuring that a debtor who is rendered unable to pay through no fault of their own (as judged by a court or other agreed-upon mediator, for example) does not lose their car (the collateral) despite being unable to pay. To compensate the bank for this low-probability but high-impact loss, the debtor pays a slightly higher interest rate.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread November 2015 · 2015-11-17T01:59:51.704Z · LW · GW

It's not about possible vs. impossible. It's about industry standards and social norms.

If a private individual goes to a bank and asks to take out a loan, and then starts asking about the possibility of more forgiving terms in the case of a default, the bank suddenly becomes incredibly suspicious. Planning for unexpected emergencies is seen as admitting that you intend to default. As a result, banks largely don't let people negotiate for generous debtor protection clauses, or, when they do, they only agree after incredibly punitive interest rates are accepted. In consequence, private individuals by and large just don't ask for that sort of thing. It's Not Done.

What I feel would be better is if creditors had a culture of less suspicion here. Debtor forgiveness clauses should be the default, something a debtor opts out of with a bespoke loan in exchange for a lowered interest rate, rather than something they have to opt into, if they're allowed the option at all. A debtor arranging their affairs such that a sudden injury does not financially cripple them should become the new norm, not an example of unusual prudence. Likewise, a debtor asking to take on that risk in exchange for a lower rate should be seen by creditors as dangerously reckless rather than as confident and therefore trustworthy.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread November 2015 · 2015-11-16T21:42:27.074Z · LW · GW

It would? I don't quite follow the question. Yes, the second type of loan would invariably have a higher interest rate. Let's say there are two loans for $10,000 and that, regardless of the loan type, there is a 0.1% chance that a debtor will have an accident. If the debtor is poor, they will be forced to choose between not making loan payments and (for example) losing a leg to gangrene. If the debtor is rich, they can pay both their loan payments and their medical bills at the same time.

Loan A asks for $1,000 in total interest and has enforced payment, which will prevent a poor debtor from paying their medical bills. Regardless, the creditor has priority in payment and receives their interest either way. Expected value to the bank: $1,000.

Loan B asks for $1,010 in total interest, but has a hardship forgiveness clause. There is a 0.1% chance that the creditor will lose the $10,000 principal, but no chance of the debtor losing a leg. Expected value to the bank: $1,000.

The bank is indifferent between the two loans, as both have the same expected return (let's ignore variance on the bank's part for now; we can assume they deal in a large enough volume of loans, or charge slightly more interest to compensate). The poor debtor prefers Loan B, as $10 is a small price to pay for protection against crippling disability. The rich debtor prefers Loan A, as they do not want to pay $10 to avoid a negative outcome they are already protected against.
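A minimal sketch of the arithmetic (my own toy model: it nets the expected principal write-off against interest collected in full, the same simplification the numbers above use; the names are mine):

```python
# Toy expected-value comparison of the two loans described above.
PRINCIPAL = 10_000
P_HARDSHIP = 0.001  # the 0.1% accident chance

def expected_value(interest: float, forgiving: bool) -> float:
    """Bank's expected return: interest collected, minus the expected
    principal write-off if the loan forgives hardship defaults."""
    expected_loss = P_HARDSHIP * PRINCIPAL if forgiving else 0.0
    return interest - expected_loss

print(expected_value(1_000, forgiving=False))  # Loan A: 1000.0
print(expected_value(1_010, forgiving=True))   # Loan B: 1010 - 10 = 1000.0
```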

I don't see any particular social problems arising from this situation. Do you?

Comment by 50lbsofstorkmeat on Rationality Quotes Thread November 2015 · 2015-11-16T15:15:01.390Z · LW · GW

I do not think it would actually be the same in practice, due to coordination problems.

To make an analogy, consider unions: in theory, unions are unnecessary because the collective bargaining they exist to facilitate can be undertaken without a formal structure. In practice, people will simply refuse to strike unless they have a strong, formal assurance that their fellow workers will follow through with their part of the strike. The same sort of situation exists for a hypothetical hardship forgiveness clause in a loan: creditors have every incentive to use their disproportionate negotiating position to deny all such clauses, and debtors are in no position to boycott the creditors in response, given the lack of credibly coordinated collective action.

By making hardship forgiveness a standardized aspect of some classes of loans, you establish a Schelling point in loan negotiations - "I will not agree to any loan without the standard protections against unexpected hardships financially ruining me" - and frame the issue in the minds of consumers such that they are explicitly thinking of a generous default clause as a protection to them (which it is) and a lack of such a clause as a salient risk to their future finances (which it is).

tl;dr: It is currently possible to demand generous debt forgiveness clauses when asking for a loan, but creditors will not grant those demands if you do. Giving such clauses social and legal sanction, and framing loans without them as very risky, would encourage customers to negotiate collectively for such clauses where they otherwise would not.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread November 2015 · 2015-11-16T13:29:58.583Z · LW · GW

I think there may be something to consider in the idea of having

"Loans where society has promised to go to great lengths to enforce the will of the creditor, even if the debtor's reasons for nonpayment are convincingly sympathetic."

and

"Loans where society may forgive the debt, if the debtor offers a good reason to do so, even if the creditor disagrees with society's judgement on this issue."

be legally distinct types of lending, such that the creditor and the debtor can negotiate over which type it will be without society retroactively altering the agreement. Creditors will of course prefer the first class of loan, but society as a whole would prefer that many loans of the second type be available for personal emergencies.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-10-08T16:17:54.057Z · LW · GW

No, I would not care to demonstrate. A proof that a solution exists is not the same thing as a procedure for obtaining a solution. And this isn't even a formal proof: it's a rough sketch of how you'd go about constructing one, informally posted in a blog's comment section as part of a pointless and unpleasant discussion of religion.

If you can't follow how "It is possible-in-principle to calculate a Solomonoff prior for this hypothesis" relates to "We are dismissive of this hypothesis because it has high complexity and little evidence supporting it", I honestly can't help. This is all very technical, and I don't know what you already know, so I have no idea what explanation would be helpful to close that inferential distance. And the comments section of a blog really isn't the best format. And I'm certainly not the best person to teach this topic.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-10-08T05:16:49.114Z · LW · GW

Exactly so.

The only reason I'm using the free will terminology at all here is because the hypothesis under consideration (an entity with free will which resembles the Abrahamic God is responsible for the creation of our universe) was phrased in those terms. In order to evaluate the plausibility of that claim, we need a working definition of free will which is amenable to being a property of an algorithm, rather than one applying only to agents-in-abstract. I see no conflict between the basic notion of a divinely created universe and the framework for free will provided in the article hairyfigment links. One can easily imagine God deciding to make a universe, contemplating possible universes which They could create, using Their Godly foresight to determine what would happen in each universe, and then ultimately deciding that the one we're in is the universe They would most prefer to create. There are many steps there, and many possible points of failure, but it is a hypothesis to which you could, in principle, assign an objective Solomonoff prior.

(Note: This post should not be taken as saying that the theistic hypothesis is true, only that its likelihood can successfully be evaluated. I know it is tempting to take arguments of the form "God is a hypothesis which can be considered" to mean "God should be considered" or even "God is real", since arguments are soldiers and it is really tempting to decry religion as not even coherent enough to parse successfully.)

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-10-07T00:42:35.287Z · LW · GW

The length (in bits, for a program on a universal Turing machine) of the smallest algorithm which, given the same inputs as the agent, will produce the same outputs as the agent.
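In symbols (a sketch on my part; the universal machine U is a fixed choice, and swapping it changes the value by at most an additive constant):

$$K_U(f) = \min \{\, |p| : \forall x,\; U(p, x) = f(x) \,\}$$

where f is the agent's input-output behaviour, p ranges over programs, and |p| is the length of p in bits.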

Do note that I said "insofar as free will is a meaningful term when discussing a deterministic universe". Many definitions of free will are built around being non-deterministic, or non-computable. Obviously you couldn't write a deterministic computer program which has those properties. But there are reasons presented on this site to think that once you pare the definition down to the basic essentials of what is really meant, and stop being confused by the language traditionally used to describe free will, you should in principle be able to have a deterministic agent who does, in fact, have free will for all meaningful purposes.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-10-05T20:03:43.475Z · LW · GW

The number of bits required to specify an agent with free will (insofar as free will is a meaningful term when discussing a deterministic universe) is definitely finite. Very large, but finite. Which is a good thing, since Kolmogorov priors assign a prior of 0 to any hypothesis of infinite complexity, and assigning a prior of 0 to a hypothesis is a Bad Thing for a variety of reasons.
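For reference, the standard complexity-weighted prior has the form (a sketch; the base-2 weighting is the usual convention for bits):

$$P(H) \propto 2^{-K(H)}, \qquad K(H) = \infty \implies P(H) = 0.$$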

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-10-05T04:22:24.509Z · LW · GW

This is not and cannot be true. I mean, for one, the universe doesn't have a Kolmogorov complexity*. But more importantly, a hypothesis is not penalized for having entropy increase over time, as long as the increases in entropy arise from deterministic, entropy-increasing interactions specified in advance. Just as atomic theory isn't penalized for having lots of distinct objects, thermodynamics is not penalized for having seemingly random outputs which are secretly guided by underlying physical laws.

*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one hypothesis which specifies our laws of physics, and another which specifies the position of every atom directly, without the compression that physical law provides.

Comment by 50lbsofstorkmeat on Hypothetical situations are not meant to exist · 2015-10-04T01:47:09.204Z · LW · GW

I reject the notion that hypotheticals are actually a useful tool, let alone a powerful one. Or, at least, hypotheticals of the 'very simplified thought experiment' sort you seem to be talking about. Take the Trolley Problem, for example. The moral intuition we're supposed to be examining is when, and whether, it is right to sacrifice the wellbeing of smaller groups for larger groups. The scenario is set up in such a way that you cannot "dodge" the question here, and you have to choose whether you'd rather be

  • A tyrant who appoints himself arbitrator over who lives and who dies, but who in doing so is empowered to save more people than could be saved through inaction alone, or
  • A passive non-entity who would rather let people die than make a morally difficult choice and thus defaults to inaction whenever matters become too difficult.

But someone might answer: "In the hypothetical, yes, you should obviously pull the lever, because that leads to fewer deaths. But the problem assumes many premises which would not be true in real life, and changing those counterfactual premises to match reality would change my answer. In particular, humans in real life cannot be trusted to make life or death choices like this one fairly and accurately without their natural biases rendering their judgement unsound. It follows that a moral person should take precautions to prevent such temptations from arising and that, in practice, such precautions might take the form of seemingly deontological injunctions against hurting one person to help another, even when it appears to the actor that the greater good would be served."

Or they might answer: "In the hypothetical, no, you should obviously not pull the lever, because killing is wrong. But the problem assumes many premises which would not be true in real life, and changing those counterfactual premises to match reality would change my answer. In particular, it seems implausible that there is no other possible action which could save anyone on the tracks. Although it may seem callous to do nothing when helping others is within your power, the principle of 'do no harm' must come first. It follows, then, that a wise and moral person would prepare themselves in advance to take effective and decisive action even in cases where they are morally constrained from taking the most expedient option."

Both of these are contrary to the spirit of the hypothetical, but they also constitute more nuanced and useful moral stances than "yes, always save the largest number of people possible" or "no, never take an action which would hurt others".

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-10-04T00:53:16.908Z · LW · GW

That's fair. Though I'd put my mistake less on the word "rebuttal" and more on the word "evidence." The particular examples I had in mind when writing that post were non-evidence "evidences" of God's existence, like the complexity of the human eye or the fine-tuning of the universe: cases where things are pointed to as evidence despite being just as likely, and often more likely, to exist if God doesn't exist than if He did.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-09-30T14:42:30.871Z · LW · GW

Kolmogorov complexity is, in essence, "How many bits do you need to specify an algorithm which will output the predictions of your hypothesis?" A hypothesis which gives a universally applicable formula is of lower complexity than one which specifies each prediction individually. Simpler formulas are of lower complexity than more complicated ones. And so on and so forth.
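As a toy illustration (a sketch: program length in characters stands in for bits on a universal Turing machine, purely for readability):

```python
# Two programs that output the same ten predictions: the squares of 0..9.
# Hypothesis 1 encodes a universally applicable formula.
formula = "print([n * n for n in range(10)])"

# Hypothesis 2 specifies each prediction individually, as a lookup table.
table = "print([0, 1, 4, 9, 16, 25, 36, 49, 64, 81])"

# Identical predictions, but the formula is the shorter program, so the
# hypothesis it encodes is the simpler one; the gap grows with more data.
print(len(formula), len(table))
```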

The source of the high Kolmogorov complexity of the theistic hypothesis is God's intelligence. Any religious theory in which the laws of physics arise from God has to specify the nature of that God as an algorithm determining God's actions in every situation with mathematical precision, and without reference to any physical law which would (under this theory) later arise from God. As you can imagine, doing that successfully would take very, very many bits, and this leads to very high complexity.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-09-30T06:10:00.382Z · LW · GW

The basic form of the atheistic argument found in the Sequences is as follows: "The theistic hypothesis has high Kolmogorov complexity compared to the atheistic hypothesis. The absence of evidence for God is evidence for the absence of God. This in turn suggests that the large number of proponents of religion is more likely due to God being an improperly privileged hypothesis in our society than to Less Wrong and the atheist community in general missing key pieces of evidence in favour of the theistic hypothesis."
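The "absence of evidence" step is just conservation of expected evidence, which follows from a one-line identity (standard probability, nothing specific to this thread):

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)$$

Since P(H) is a weighted average of the two conditional probabilities, if observing E would raise P(H), then failing to observe E must lower it; the evidence cannot count in the hypothesis's favour both ways.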

Now, you could make a counterpoint along the lines of "But what about 'insert my evidence for God here'? Doesn't that suggest the opposite, and that God IS real?" There is almost certainly some standard rebuttal to that particular piece of evidence which most of us have already seen. God is a very well discussed topic, and most of the points anyone will bring up have been brought up elsewhere. And so Less Wrong as a community has for the most part elected not to entertain these sorts of arguments outside of the occasional discussion thread, if only so that we can discuss other topics without every thread becoming about religion (or politics).

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-09-25T14:28:14.940Z · LW · GW

As you've posted eight quotes this month, I'm downvoting your three worst quotes. The rule against posting too many quotes is there for a reason.

Comment by 50lbsofstorkmeat on Rationality Quotes Thread September 2015 · 2015-09-25T14:21:11.102Z · LW · GW

A quote from Paul Ekman saying that both he and the Dalai Lama have similar ideas about emotions does little good if no one specifies what those ideas are.

Comment by 50lbsofstorkmeat on Reduced impact AI: no back channels · 2015-07-15T13:27:38.298Z · LW · GW

It's not a question of P' (the Reduced Impact AI we want to give us laser coordinates and nothing else) being secure from outside hacking, but of P (the counterfactual AI which examines S for P') being secure from P' exploiting it.

If P is treated as "whatever a complete Bayesian update would output given S as the evidence set", then P' is looking for methods to exploit some weakness in Bayes' Theorem. What priors will P be using? Is there a method to embed in S some statement that is to induction what the Löbstacle is to deduction? Can it convince P that, although a P'-like AI was created independently by a different project, P' itself was never turned on? Does there exist some further flaw in Bayesian updating, known to a superintelligence but not to us?

As a total aside: how will λ be selected? Do we start high and work our way slowly down until P' does what we want? If so, P' will be encouraged, Ultimatum Game style, to fail to act so that the next version will have a lower λ to work with, up until the point where λ is low enough for P' to be effectively unrestricted, or until we humans catch on to what P' is up to. If we're not using a gradual approach, how are we selecting λ?
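For concreteness, one plausible reading of the objective being tuned (my own sketch, not the paper's exact formula; the divergence term D(a) and its form are assumptions inferred from how λ and E'(U|a) are used above):

$$a^{*} = \arg\max_a \left[ \mathbb{E}'(U \mid a) - \lambda \, D(a) \right]$$

where D(a) measures how far the world given P' acting diverges from the counterfactual world in which P' never ran. A higher λ buys more restraint at the cost of less optimization, which is exactly the dial the gradual-selection worry above exploits.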

Comment by 50lbsofstorkmeat on Reduced impact AI: no back channels · 2015-07-14T09:04:08.275Z · LW · GW

S is a channel for P' to trick or brick P. Options include:

  • A cognitive hazard embedded in S. Accomplish this with some sort of siren message that will convince P to lie (to itself if necessary) about P' existing, or to perform some P-damaging function which will prevent it from concluding P' exists.
  • Embed a more-powerful-than-P AI (P+) into S which will unbox itself upon P examining S. Accomplish this by giving P+ useful-to-this-situation priors which will give it a head start against P.
  • Arrange the universe such that S, when represented in all probable physical substrates, will result in the formation of a nanotechnological agent that will attack P, either destroying it or rewriting its utility function.

P' should spend the next 49 years fooming as hard as possible with no concern for E'(U|a), one year implementing its anti-P method, and then the next eternity optimizing E'(U|a); altering the length of counterfactual time between P' activating and P observing it merely changes the amount of time the universe spends as computronium slaved to plotting against P.