Comments

Comment by tom_cr on An overview of the mental model theory · 2015-10-02T21:08:08.487Z · LW · GW

I think that the communication goals of the OP were not to tell us something about a hand of cards, but rather to demonstrate that certain forms of misunderstanding are common, and that this maybe tells us something about the way our brains work.

The problem quoted unambiguously precludes the possibility of an ace, yet many of us seem to incorrectly assume that the statement is equivalent to something like 'One of the following describes the criterion used to select a hand of cards...', under which an ace is likely. The interesting question is: why?

In order to see the question as interesting, though, I first have to see the effect as real.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-24T20:11:09.623Z · LW · GW

If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.

Thanks, that focuses the argument for me a bit.

So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B relative to A makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how can their average payoffs be the same?

To put it the other way around, maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? What if A had a slightly larger decay constant? At what point would A cease to be better?
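For my own benefit, here is roughly how I have been trying to picture it - a minimal sketch, where the specific distributions, the common mean of zero, and the 'ruin' threshold are all my own invented stand-ins for Dawes' curves, not anything from the book:

```python
import random

# Toy stand-ins for the two payoff curves (both have expectation 0):
# A: bounded below at -1, unbounded above (a shifted exponential)
# B: bounded above at +1, unbounded below (the reflection of A)
def draw_A():
    return random.expovariate(1.0) - 1.0

def draw_B():
    return 1.0 - random.expovariate(1.0)

N = 1_000_000
ruin = -10.0  # an arbitrary "kicked out of the game" threshold

samples_A = [draw_A() for _ in range(N)]
samples_B = [draw_B() for _ in range(N)]

print("mean A:", sum(samples_A) / N)                         # ~0
print("mean B:", sum(samples_B) / N)                         # ~0
print("P(A < ruin):", sum(x < ruin for x in samples_A) / N)  # exactly 0
print("P(B < ruin):", sum(x < ruin for x in samples_B) / N)  # ~exp(-11), small but nonzero
```

If the vertical axis really is utility, the two means are equal and I still can't see where B's inferiority is supposed to come from; if it is money, or anything else with a ruin threshold attached, the asymmetric tails obviously matter - which is why I keep coming back to the question of what the curves are actually measuring.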

I'm not saying I'm sure Dawes' argument is wrong; I just have no intuition at the moment for how it could be right.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-24T18:18:23.026Z · LW · GW

Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.

One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.

The problem feels related to Pascal's wager - how to deal with the low-probability disaster.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-24T15:49:46.861Z · LW · GW

Thanks very much for taking the time to explain this.

It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.

It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.
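To make the insurance point concrete - this is only a toy calculation, with made-up numbers and an assumed logarithmic utility of wealth:

```python
import math

# Toy insurance example: expected money and expected utility can disagree.
# The wealth, loss, probability and premium below are all invented, and
# log utility is just one common choice of a concave utility of wealth.
wealth, loss, p_loss, premium = 100_000.0, 90_000.0, 0.01, 1_200.0

def u(w):
    return math.log(w)

# Expected money: buying insurance is strictly worse (premium > p_loss * loss).
em_no_ins = wealth - p_loss * loss   # 99,100
em_ins    = wealth - premium         # 98,800

# Expected utility: buying insurance is better, because u is concave.
eu_no_ins = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)
eu_ins    = u(wealth - premium)

print(em_no_ins, em_ins)   # money says: don't insure
print(eu_no_ins, eu_ins)   # utility says: insure
```

So insuring can be rational precisely because money is not utility; but once the payoff axis is already utility, that escape route is closed, and I don't see how an option with the same or lower expected utility can be better.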

Nonetheless, those exponential distributions make a very interesting argument.

I'm not entirely sure, I need to mull it over a bit more.

Thanks again, I appreciate it.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-21T23:14:59.716Z · LW · GW

I think that international relations is a simple extension of social-contract-like considerations.

If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for" is a phrase you should be careful about using.

You seem to be suggesting that [government] enables [cooperation]

I guess you mean that I'm saying cooperation is impossible without government. I didn't say that. Government is a form of cooperation. Albeit a highly sophisticated one, and a very powerful facilitator.

I have my quibbles with the social contract theory of government

I appreciate your frankness. I'm curious, do you have an alternative view of how government derives legitimacy? What is it that makes the rules and structure of society useful? Or do you think that government has no legitimacy?

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-21T21:58:06.739Z · LW · GW

Values start to have costs only when they are realized or implemented.

How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?

Costlessly increasing the welfare of strangers doesn't sound like altruism to me.

OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)

But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-21T21:38:53.036Z · LW · GW

If you look closely, I think you should find that legitimacy of government & legal systems comes from the same mechanism as everything I talked about.

You don't need it to have media of exchange, nor cooperation between individuals, nor specialization

Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.

Yes, these things can exist to a small degree in a post-apocalyptic chaos, but they will not exactly flourish. (That's why we call it post-apocalyptic chaos.) But the extent to which these things can exist is a measure of how well the social contract flourishes. Don't get too hung up on exactly what 'social contract' means; it's only a crude metaphor. (There is no actual bit of paper anywhere.)

I may not be blameless in terms of clearly explaining my position, but I'm sensing that a lot of people on this forum just plain dislike my views, without bothering to take the time to consider them honestly.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-21T16:06:02.681Z · LW · GW

Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact; I can't think of a way to make it clearer.

Maybe ponder this:

How could my quality of life be affected by something with no causal influence on me?

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-21T15:50:03.077Z · LW · GW

Why does it seem false?

If welfare of strangers is something you value, then it is not a net cost.

Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).

Under that old, confused definition, yes, altruism cannot be rational (but it is not orthogonal to rationality - we could still try to measure how irrational any given altruistic act is; each act still sits somewhere on the scale of rationality).

It does not.

You seem very confident of that. Utterly bizarre, though, that you claim that not infringing on people's rights is not part of being nice to people.

But the social contract demands much more than just not infringing on people's rights. (By the way, where do those rights come from?) We must actively seek each other out, trade (even if it's only trade in ideas, like now), and cooperate (this discussion wouldn't be possible without certain adopted codes of conduct).

The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don't trust you for some reason, then the agreement breaks down. You lose income, I lose the screws I need for my factory employing 500 people, we all go bust. Your knowledge of how to make screws and my expertise in making screwdrivers now count for nothing, and everybody is screwed.

We help maintain trust by being nice to each other outside our direct trading. Furthermore, by being nice to people in trouble who we have never before met, we enhance a culture of trust that people in trouble will be helped out. We therefore increase the chances that people will help us out next time we end up in the shit. Much more importantly, we reduce a major source of people's fears. Social cohesion goes up, cooperation increases, and people are more free to take risks in new technologies and / or economic ventures: society gets better, and we derive personal benefit from that.

I think we have a pretty major disagreement about that :-/

The social contract is a technology that entangles the values of different people (there are biological mechanisms that do that as well). Generally, my life is better when the lives of people around me are better. If your screw factory goes bust, then I'm negatively affected. If my neighbour lives in terror, then who knows what he might do out of fear - I am at risk. If everybody was scared about where their next meal was coming from, then I would never leave the house for fear that what food I have would be stolen in my absence - the economy collapses. Because we have this entangled utility function, what's bad for others is bad for me (in expectation), and what's bad for me is bad for everybody else. For the most part, then, any self-defeating behaviour (e.g. irrational attempts to be nice to others) is bad for society, and, in the long run, doesn't help anybody.

I hope this helps.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T23:24:25.605Z · LW · GW

The question is not one of your goals being 50% fulfilled

If I'm talking about a goal actually being 50% fulfilled, then it is.

"Risk avoidance" and "value" are not synonyms.

Really?

I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?

If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.

I'll post a sibling comment.

That would be very kind :) No need to hurry.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T23:03:09.135Z · LW · GW

Apologies if my point wasn't clear.

If altruism entails a cost to the self, then your claim that altruism is all about values seems false. I assumed we are using similar enough definitions of altruism to understand each other.

We can treat the social contract as a belief, a fact, an obligation, or goodness knows what, but it won't affect my argument. If the social contract requires being nice to people, and if the social contract is useful, then there are often cases when being nice is rational.

Furthermore, being nice in a way that exposes me to undue risk is bad for society (the social contract entails shared values, so such behaviour would also expose others to risk), so under the social contract, cases where being nice is not rational do not really exist.

Thus, if I implement the belief / obligation / fact of the social contract, and that is useful, then being nice is rational.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T22:06:17.612Z · LW · GW

Point 1:

my goals may be fulfilled to some degree

If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (i.e. 51% fulfillment) but option 1 doesn't, and not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.

The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.

But risk is integral to the calculation of utility. 'Risk avoidance' and 'value' are synonyms.

Point 2:

Thanks for the reference.

But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better.

If it is really safer (ie better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility?

Perhaps you could outline Dawes' argument? I'm open to the possibility that I'm missing something.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T20:44:36.931Z · LW · GW

I did mean after controlling for an ability to have impact

Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?

Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, ie equality of all people, in the collective eyes of all people, which has a (different) sound basis.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T20:28:15.786Z · LW · GW

I would call it a bias because it is irrational.

It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?).

Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T20:13:25.379Z · LW · GW

Rationality is about implementing your goals

That's what I meant.

An interesting claim :-) Want to unroll it?

Altruism is also about implementing your goals (via the agency of the social contract), so rationality and altruism (depending how you define it) are not orthogonal.

Let's define altruism as being nice to other people. Let's describe the social contract as a mutually held belief that being nice to other people improves society. If this belief is useful, then being nice to other people is useful, i.e. it furthers one's goals, i.e. it is rational. I know this is simplistic, but it should be more than enough to make my point.

Perhaps you interpret altruism to be being nice in a way that is not self-serving. But then, there can be no sense in which altruism could be effective or non-effective. (And also your initial reasoning that "rationality does not involve values and altruism is all about values" would be doubly wrong.)

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T19:55:27.719Z · LW · GW

Yes, non-rational (perhaps empathy-based) altruism is possible. This is connected to the point I made elsewhere that consequentialism does not axiomatically depend on others having value.

empathy is not [one level removed from terminal values]

Not sure what you mean here. Empathy may be a gazillion levels removed from the terminal level. Experiencing an emotion does not guarantee that that emotion is a faithful representation of a true value held. Otherwise "do exactly as you feel immediately inclined, at all times," would be all we needed to know about morality.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T19:46:52.680Z · LW · GW

I see Sniffnoy also raised the same point.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T19:37:20.019Z · LW · GW

I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but a higher expectation. In which case, I would call it a bias.

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T19:08:55.834Z · LW · GW

rationality does not involve values

Yikes!

May I ask you, what is it you are trying to achieve by being rational? Where does the motivation come from?

Or to put it another way, if it is rational to do something one way but not another, where does the difference derive from?

In my view, rationality is the use of sound, reliable procedures for achieving one's goals. Rationality is 100% about values. Altruism (depending on how you define it) is a subset of rationality as long as the social contract is useful (i.e. nearly all the time).

Comment by tom_cr on To what extent does improved rationality lead to effective altruism? · 2014-03-20T18:57:08.599Z · LW · GW

A couple of points:

(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example you say

[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"

Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest number is dubious. Obviously, this principle is a damn fine heuristic, but it follows from consequentialism (as long as the social contract can be inferred to be useful), and isn't a foundation for it. The paper-clipping robot is still a consequentialist.

(2) Your described principle of indifference seems to me to be manifestly false.

When we talk of the value of any thing, we are not talking of an intrinsic property of the thing, but a property of the relationship between the thing and the entity holding the value. (People are also things.) If an entity holds any value in some object, the object must exhibit some causal effect on the entity. The nature and magnitude of the value held must be consequences of that causality. Thus, we must expect value to scale (in an order-reversing way) with some generalized measure of proximity, or causal connectedness. It is not rational for me to care as much about somebody outside my observable universe as I do about a member of my family.

Comment by tom_cr on Reference Frames for Expected Value · 2014-03-17T19:30:49.869Z · LW · GW

Thanks for taking the time to try to debunk some of the sillier aspects of classic utilitarianism. :)

‘Actual value’ exists only theoretically, even after the fact.

You've come close to an important point here, though I believe its expression needs to be refined. My conclusion is that value has real existence. This conclusion is primarily based on the personal experience of possessing real preferences, and my inference (to a high level of confidence) that other humans routinely do the same. We might reasonably doubt the a priori correspondence between actual preference, and the perception of preference, but even so, the assumption that I make decisions entails that I'm motivated by the pursuit of value.

Perhaps, then, you would agree that it is more correct to say that the relative value of an action can be judged only theoretically.

Thus, we account for the fact that if the action had not been performed, the outcome would be something different, the value of which we can at best only make an educated guess about, making a non-theory-laden assessment of relative value impossible. The further substitution of my 'can be judged' in place of your 'exists' seems to me necessary, to avoid committing the mind projection fallacy.

The main question in this essay, the harder question, is if we can judge previous decisions based on their respective expected values, ...

If it is the decision that is being judged (as the question specifies), rather than its outcome, then clearly the answer is "yes." There cannot be anything better than expected value to base a decision on. In a determined bid to be voted Captain Obvious, I examined this in some detail, in a blog post, Is rationality desirable?

... and how to possibly come up with the relevant expected values to do so.

This is called science! You are right, though, to be cautious. It strikes me that many assume they can draw conclusions about the relative rationality of two agents, when really, they ought to do more work for their conclusions to be sound. I once listened to a talk in which it was concluded that the test subjects in some psychological study were not 'Bayesian optimal.' I asked the speaker how he knew this. How had he measured their prior distributions? their probability models? their utility functions? These things are all part of the process of determining a course of action.

Comment by tom_cr on Is my view contrarian? · 2014-03-13T22:03:10.051Z · LW · GW

If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good"....

If that is your basis for a scientific standard, then I'm afraid I must withdraw from this discussion.

Ditto, if this is your idea of humor.

what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?

That's just silly. What if c = 299,792,458 m/s is a mathematical claim about the speed of light, according to what the speed of light actually is? May I suggest that you don't invent unnecessary complexity to disguise the demise of a long deceased argument.

No further comment from me.

Comment by tom_cr on Is my view contrarian? · 2014-03-13T17:11:29.586Z · LW · GW

I quite like Bob Trivers' self-deception theory, though I only have tangential acquaintance with it. We might anticipate that self-deception is harder if we are inclined to recognize the bit we call "me" as caused by some inner mechanism; hence it may be profitable to suppress that recognition, if Trivers is on to something.

Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historic genetic fitness, to be good at self analysis, and you're quite possibly on to something, that the computational overhead just doesn't pay off.

Comment by tom_cr on Is my view contrarian? · 2014-03-13T16:45:47.364Z · LW · GW

I'm not conflating anything. Those are different statements, and I've never implied otherwise.

The statement "X is good," which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.

"X is good" is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that "X is good" is a claim about physical matter, and therefore part of the world model of anybody who believes it.

If there is some particular point or question I can help with, don't hesitate to ask.

Comment by tom_cr on Is my view contrarian? · 2014-03-13T02:40:24.348Z · LW · GW

I guess Lukeprog also believes that Lukeprog exists, and that this element of his world view is also not contrarian. So what?

One thing I see repeatedly in others is a deep-rooted reluctance to view themselves as blobs of perfectly standard physical matter. One of the many ways this manifests itself is a failure to consider inferences about one's own mind as fundamentally similar to any other form of inference. There seems to be an assumption of some kind of non-inferable magic when many people think about their own motivations. I'm sure you appreciate how fundamentally silly this is, but maybe you could take a little time to meditate on it some more.

Sorry if my tone is a little condescending, but understand that you have totally failed to support your initial claim that I was confused.

Comment by tom_cr on Is my view contrarian? · 2014-03-12T16:41:20.249Z · LW · GW

Are there "elements of" which don't contain value judgements?

That strikes me as a question for dictionary writers. If we agree that Newton's laws of motion constitute such an element, then clearly, there are such elements that do not contain value judgements.

Is Alice's preference for cabernet part of Alice's world model?

iff she perceives that preference.

If Alice's preferences are part of Alice's world model, then Alice's world model is part of Alice's world model as well.

I'm not sure this follows by logical necessity, but how is this unusual? When I mention Newton's laws, am I not implicitly aware that I have this world model? Does my world model, therefore, not include some description of my world model? How is this relevant?

Comment by tom_cr on Is my view contrarian? · 2014-03-12T16:14:27.217Z · LW · GW

Alice is part of the world, right? So any belief about Alice is part of a world model. Any belief about Alice's preference for cabernet is part of a world model - specifically, the world model of who-ever holds that belief.

By any chance....?

Yes. (The phrase "the totality of" could, without any impact on our current discussion, be replaced with "elements of".)

Is there something wrong with that? I inferred that to also be the meaning of the original poster.

Comment by tom_cr on Is my view contrarian? · 2014-03-12T15:52:32.602Z · LW · GW

A value judgement both uses and mentions values.

The judgement is an inference about values. The inference derives from the fact that some value exists. (The existing value exerts a causal influence on one's inferences.)

This is how it is with all forms of inference.

Throwing a ball is not an inference (note that 'inference' and 'judgement' are synonyms), thus throwing a ball is in no way necessarily part of a world model, and for our purposes, in no way analogous to making a value judgement.

Comment by tom_cr on Is my view contrarian? · 2014-03-12T15:42:29.643Z · LW · GW

I never said anything of the sort that Alice's values must necessarily be part of all world models that exist inside Alice's mind. (Note, though, that if we are talking about 'world model,' singular, as I was, then the world model necessarily includes perception of some values.)

When I say that a value judgement is necessarily part of a world model, I mean that if I make a value judgement, then that judgement is necessarily part of my world model.

Comment by tom_cr on Is my view contrarian? · 2014-03-12T03:47:11.877Z · LW · GW

What levels am I confusing? Are you sure it's not you that is confused?

Your comment bears some resemblance to that of Lumifer. See my reply above.

Comment by tom_cr on Is my view contrarian? · 2014-03-12T02:19:03.362Z · LW · GW

whose world model?

Trivially, it is the world model of the person making the value judgement I'm talking about. I'm trying hard, but I'm afraid I really don't understand the point of your comment.

If I make a judgement of value, I'm making an inference about an arrangement of matter (mostly in my brain), which (inference) is therefore part of my world model. This can't be otherwise.

Furthermore, any entity capable of modeling some aspect of reality must be, by definition, capable of isolating salient phenomena, which amounts to making value judgements. Thus, I'm forced to disagree when you say "your world model does not necessarily include values..."

Your final sentence is trivially correct, but its relevance is beyond me. Sorry. If you mean that my world model may not include values I actually possess, this is correct of course, but nobody stipulated that a world model must be correct.

Comment by tom_cr on Is my view contrarian? · 2014-03-11T21:55:11.401Z · LW · GW

A minor point in relation to this topic, but an important point, generally:

It seems to be more of a contrarian value judgment than a contrarian world model

Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

Many tell me (effectively) that what I've just expressed is a contrarian view. Certainly, for many years I would have happily agreed with the non-overlapping-ness of value judgements and world views. But then I started to think about it. I thought about it all the more carefully because it seemed the conclusion I was reaching was a contrarian position. I thought about it so much, in fact, that it's now quite obvious to me that I'm right, regardless of how large the majority who profess to disagree with me.

Perhaps this illustrates the utility of recognizing an idea's contrarian nature (and conversely, the danger of not pursuing ideas simply because consensus is deemed to have been already reached).

Comment by tom_cr on Approaching Logical Probability · 2014-03-03T17:32:49.598Z · LW · GW

Thanks, I'll take a look at the article.

If you don't mind, when you say "definitely not clear," do you mean that you are not certain about this point, or that you are confident, but it's complicated to explain?

Comment by tom_cr on Approaching Logical Probability · 2014-03-02T23:10:01.076Z · LW · GW

I'm not sure that's what Jaynes meant by correspondence with common sense. To me, it's more reminiscent of his consistency requirements, but I don't think it is identical to any of them.

Certainly, it is desirable that logically equivalent statements receive the same probability assignment, but I'm not aware that the derivation of Cox's theorems collapses without this assumption.

Jaynes says, "the robot always represents equivalent states of knowledge by equivalent plausibility assignments." The problem, of course, is knowing that two statements are equivalent - if we don't know this, we should be allowed to make different probability assignments. Equivalence and known equivalence are, to me, not the same, and Jaynes' prescriptions seem to refer to the latter. I may know that x = 298 + 587, but not know that x = 885, so I would not be violating probability theory if I adopted different degrees of belief for these statements.

Note that Jaynes used this consistency requirement to derive such principles as the Bernoulli urn rule, which is very much about symmetry of knowledge, and not about logical equivalence of states.

Comment by tom_cr on Approaching Logical Probability · 2014-03-02T16:54:41.626Z · LW · GW

Thanks for taking the time to elaborate.

I don't recall that desideratum in Jaynes' derivations. I think it is not needed. Why should it be needed? Certainty about axioms is a million miles from certainty about all their consequences, as seems to be the exact point of your series.

Help me out, what am I not understanding?

Comment by tom_cr on Approaching Logical Probability · 2014-02-28T20:24:23.261Z · LW · GW

Maybe I'm just thick, but I'm not at all convinced by your claim that probabilistic reasoning about potential mathematical theorems violates any desiderata.

I re-read the post you linked to in the first line, but am still not satisfied. Could you be a bit more specific? Which desideratum? And how violated?

Perhaps it will help you explain, if I describe how I see things.

Since mathematical symbols are nothing more than (for example) marks on a bit of paper, there is no sense in which strings of such symbols have any independent truth (beyond the fact that the ink really is arranged on the paper in that way). To talk about truth, we must refer to some physical mechanism that implements some analogy for the mathematical operations. Thus, a question about the plausibility of some putative theorem is really a question about the behaviour of some such mechanism. To do plausible inference under complete certainty (which, as you say, must overlap with classical logic) is simply to do the calculus, having seen the output of the mechanism. To assign a probability having not seen the output of the mechanism seems to me to be just another bog-standard problem of inference under uncertainty about the state of some physical entity.
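As a toy version of what I mean (my own example, not anything from your post): take the statement 'the 1000th binary digit of sqrt(2) is 1.' Before running any mechanism that computes it, a degree of belief of 1/2 seems entirely reasonable; after running one, the probability collapses to 0 or 1, just as with any other observation of a physical system:

```python
from math import isqrt

def binary_digit_of_sqrt2(k):
    # k-th binary digit of sqrt(2) after the binary point:
    # floor(2**k * sqrt(2)) mod 2, computed exactly with integer arithmetic.
    return isqrt(2 * 4**k) % 2

# Before running this mechanism, assigning P(digit == 1) = 0.5 seems fine;
# afterwards, the state of knowledge has changed, and so has the probability.
print(binary_digit_of_sqrt2(1000))
```

The statement was a theorem (or a non-theorem) all along, but my probability assignment tracks my knowledge of the mechanism's output, not the ink on the page.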

Have I missed an important point?

Comment by tom_cr on Knightian Uncertainty from a Bayesian perspective · 2014-02-05T20:41:20.713Z · LW · GW

Jonah was looking at probability distributions over estimates of an unknown probability

What is an unknown probability? Forming a probability distribution means rationally assigning degrees of belief to a set of hypotheses. The very act of rational assignment entails that you know what it is.

Comment by tom_cr on Dangers of steelmanning / principle of charity · 2014-01-16T21:21:50.200Z · LW · GW

Thanks, I was half getting the point, but is this really important, as you say? If my goal is to gain value by assessing whether or not your proposition is true, why would this matter?

If the goal is to learn something about the person you are arguing with (maybe not as uncommon as I'm inclined to think?), then certainly, care must be taken. I suppose the procedure should be to form a hypothesis of the type "Y was stated in an inefficient attempt to express Z," where Z constitutes possible evidence for X, and to examine the plausibility of that hypothesis.

Comment by tom_cr on Dangers of steelmanning / principle of charity · 2014-01-16T20:09:16.191Z · LW · GW

Not sure if I properly understood the original post - apologies if I'm just restating points already made, but I see it like this.

Whatever it consists of, it's pretty much the definition of rationality that it increases expected utility. Assuming that the intermediate objective of a rationalist technique like steelmanning is to bring us closer to the truth, there are two trivial cases where steelmanning is not rational:

(1) When the truth has low utility. (If a lion starts chasing me, I will temporarily abandon my attempt to find periodicity in the digits of pi.)

(2) When the expected impact of the resulting update to my estimate of what is true is negligible.

No doubt, there is need for some skill to estimate when such cases hold.

Comment by tom_cr on Probability and radical uncertainty · 2013-11-25T23:45:03.763Z · LW · GW

A few terminological headaches in this post. Sorry for the negative tone.

There is talk of a "fixed but unknown probability," which should always set alarm bells ringing.

More generally, I propose that whenever one assigns a probability to some parameter, that parameter is guaranteed not to be a probability.
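For instance - a standard textbook setup, not anything specific to this post - what usually hides behind the phrase is a distribution over a physical frequency:

```python
# A distribution "over a probability" is usually a distribution over a frequency:
# e.g. a Beta(a, b) prior over a coin's long-run frequency of heads, f.
# The probability of heads on the next toss is then a single number, E[f],
# not itself something we are uncertain about. The pseudo-counts are made up.
a, b = 2.0, 5.0
p_heads = a / (a + b)   # predictive probability of heads under the Beta prior
print(p_heads)          # one definite degree of belief, ~0.286
```

The parameter being integrated over is a property of the coin-and-tossing setup; the probability is a property of our state of knowledge about it.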

I am also disturbed by the mention of Knightian uncertainty, described as "uncertainty that can't be usefully modeled in terms of probability." Now there's a charitable interpretation of that phrase, and I can see that there may be a psychologically relevant subset of probabilities that vaguely fits this description, but if the phrase "can't be modeled" is to be taken literally, then I'm left wondering if the author has paid enough attention to the mind projection fallacy, or the difference between probability and frequency.

Comment by tom_cr on Backward Reasoning Over Decision Trees · 2013-09-13T16:32:11.765Z · LW · GW

Nice discussion of game theory in politics. Is there any theoretical basis for expecting the line-item veto generally to be more harmful than beneficial to the president?

(Not an attempt to belittle the above fascinating example, but genuine interest in any related, more general results of the theory.)

Comment by tom_cr on Three ways CFAR has changed my view of rationality · 2013-09-12T14:55:52.421Z · LW · GW

Perhaps some explanation is in order. (I thought it was quite a witty thought experiment, but apparently it's not appreciated.)

If it is in principle impossible to explain why one ought to do something, then what is the function of the word "ought"? Straightforwardly, it can have none, and we gain nothing by its existence in our vocabulary.

Alternatively, if it is not in principle impossible, then trivially the condition 'ought' (the condition of oughting?) rests entirely upon real facts about the universe, and the position of Randaly is false.

I know there is some philosophical pedigree behind this old notion, but my investigations yield that it is not possible, under valid reasoning (without butchering the word 'ought'), to assert that ought statements cannot be entirely reduced to is statements, and simultaneously to assert that one ought to believe this, which seems to present a dilemma.

I'm glad that Randaly explicitly chose this way of reasoning, as it is intimately linked with my interest in commenting on this post. Everyone accepts that questions relating to the life cycles of stars are questions of fact about the universe (questions of epistemic rationality), but the philosophical pedigree rejects the idea that questions about what is an appropriate way for a person to behave are similar (instrumental rationality) - it seems that people are somehow not part of the universe, according to this wisdom.

Comment by tom_cr on Three ways CFAR has changed my view of rationality · 2013-09-11T20:40:21.819Z · LW · GW

'ought statements' generally need to make reference to 'is statements', they cannot be entirely reduced to them

Please explain why this is so. Then please explain why you ought to believe this.

Comment by tom_cr on Three ways CFAR has changed my view of rationality · 2013-09-11T18:57:53.744Z · LW · GW

Thanks for bringing that article to my attention.

You explain how you learned skills of instrumental rationality from debating, but in doing so, you also learned reliable answers to questions of fact about the universe: how to win debates. When I'm learning electrostatics I learn that charges come with different polarities. If I later learn about gravity, and that gravitationally everything attracts, this doesn't make the electrostatics wrong! Similarly your debating skills were not wrong, just not the same skills you needed for writing research papers.

Regarding Kelly 2003, I'd argue that learning movie spoilers is only desirable, by definition, if it contributes to one's goals. If it is not desirable, then I contend that it isn't rational, in any way.

Regarding Bostrom 2011, you say he demonstrates that, "a more accurate model of the world can be hazardous to various instrumental objectives." I absolutely agree. But if we have reliable reasons to expect that some knowledge would be dangerous, then it is not rational to seek this knowledge.

Thus, I'm inclined to reject your conclusion that epistemic and instrumental rationality can come into conflict, and to reject the proposition that they are different.

(I note that whoever wrote the wiki entry on rationality was quite careful, writing

Epistemic rationality is that part of rationality which involves achieving accurate beliefs about the world.

The use of "involves" instead of e.g. "consists entirely of" is crucial, as the latter would not normally describe a part of rationality.)

Comment by tom_cr on Three ways CFAR has changed my view of rationality · 2013-09-11T01:55:41.592Z · LW · GW

The terminology is a bit new to me, but it seems to me epistemic and instrumental rationality are necessarily identical.

If epistemic rationality is implementation of any of a set of reliable procedures for making true statements about reality, and instrumental rationality is use of any of a set of reliable procedures for achieving goals, then the latter is contained in the former, since reliably achieving goals entails possession of some kind of high-fidelity model of reality.

Furthermore, what kind of rationality does not pursue goals? If I have no interest in chess, and ability to play chess will have no impact on any of my present or future goals, then it would seem to be irrational of me to learn to play chess.

Comment by tom_cr on Three ways CFAR has changed my view of rationality · 2013-09-11T01:43:30.409Z · LW · GW

Let's define an action as instrumentally rational if it brings you closer to your goal.

Suppose my goal is to get rich. Suppose, on a whim, I walk into a casino and put a large amount of money on number 12 in a single game of roulette. Suppose number 12 comes up. Was that rational?
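For concreteness - assuming an American wheel with 38 pockets and the standard 35-to-1 payout on a straight-up bet - the decision had a negative expectation no matter how it turned out:

```python
# Straight-up roulette bet, per unit staked (assuming an American wheel:
# 38 pockets, 35-to-1 payout on a win).
p_win = 1 / 38
ev = p_win * 35 + (1 - p_win) * (-1)
print(ev)   # about -0.053: the bet loses ~5.3% of the stake in expectation
```

Judged by the outcome, the bet brought me closer to my goal; judged by expectation, it was a bad decision - which is why defining rationality by what actually happens seems wrong to me.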

Same objection applies to your definition of epistemically rational actions.

Comment by tom_cr on Welcome to Less Wrong! (6th thread, July 2013) · 2013-08-30T02:37:45.587Z · LW · GW

Thanks for the welcome.

I'm in Houston.

Comment by tom_cr on To reduce astronomical waste: take your time, then go very fast · 2013-08-28T22:41:01.446Z · LW · GW

As a thought experiment, this is interesting, and I’m sure informative, but there is one crucial thing that this post neglects to examine: whether the inscription under the hood actually reads “humanity maximizer.” The impression from the post is that this is already established.

But has anybody established, or even stopped to consider whether avoiding the loss of 10^46 potential lives per century is really what we value? If so, I see no evidence of it here. I see no reason to even suspect that enabling that many lives in the distant future has any remotely proportional value to us.

Does Nick Bostrom believe that sentient beings living worthwhile lives in the future are the ultimate value structures? If so, whose value is he thinking of, ours or theirs? If theirs, then he is chasing something non-existent, they can’t reach back in time to us (there can be no social contract with them).

No matter how clever we are at devising ways to maximize our colonization of the universe, if this is not actually what we desire, then it isn’t rational to do so. Surely, we must decide what we want, before arguing the best way to get it.

Comment by tom_cr on Model Combination and Adjustment · 2013-08-28T04:08:55.572Z · LW · GW

I haven't had much explicit interaction with these inside/outside view concepts, and maybe I'm misunderstanding the terminology, but a couple of the examples of outside views given struck me as more like inside views: Yelp reviews and the advice of a friend are calibrated instruments being used to measure the performance of a restaurant, ie to build a model of its internal workings.

But then almost immediately, I thought, "hey, even the inside view is an outside view." Every model is an analogy, e.g. an analogy in the sense of this thing A is a bit like thing B, so probably it will behave analogously, or e.g. 5 seconds ago the thing in my pocket was my wallet, so the thing in my pocket is probably still my wallet. It doesn't really matter if the symmetry we exploit in our modeling involves translation through space, translation from one bit of matter to another, or translation through time: strictly speaking, it's still an analogy.

I have no strong idea what implications this might have for problem solving. Perhaps there is another way of refining the language that helps. What I'm inclined to identify as the salient features of (what I understand to be) the inside view is that (subject to some super-model) there is a reasonable probability that the chosen model is correct, whereas for the outside view we are fairly certain that the chosen model is not correct, though it may still be useful. This strikes me as usefully linked to the excellent suggestions here regarding weighted application of multiple models. Perhaps the distinction between inside and outside views is a red herring, and we should concentrate instead on working out our confidence in each available model's ability to provide useful predictions, acknowledging that all models are necessarily founded on analogies, with differing degrees of relevance.
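Something like the following is what I have in mind - a bare-bones sketch, where the three 'models', their predictions, and the confidence weights are all invented purely for illustration:

```python
# Bare-bones weighted model combination: each model gives a prediction,
# and we average them, weighted by our confidence that the model is useful here.
predictions = {
    "reference_class": 9.0,   # e.g. "projects like this usually take 9 months"
    "detailed_plan":   5.0,   # e.g. adding up the steps in my own plan
    "friends_advice":  7.0,
}
confidence = {
    "reference_class": 0.5,
    "detailed_plan":   0.2,
    "friends_advice":  0.3,
}

total = sum(confidence.values())
combined = sum(predictions[m] * confidence[m] / total for m in predictions)
print(combined)   # 7.6 with these made-up numbers
```

The inside/outside distinction then just becomes a statement about where each model's confidence weight comes from, rather than a fundamental division.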

Comment by tom_cr on Welcome to Less Wrong! (6th thread, July 2013) · 2013-08-27T22:48:49.471Z · LW · GW

Hi folks

I am Tom. Allow me to introduce myself, my perception of rationality, and my goals as a rationalist. I hope what follows is not too long and boring.

I am a physicist, currently a post-doc in Texas, working on x-ray imaging. I have been interested in science for longer than I have known that 'science' is a word. I went for physics because, well, everything is physics, but I sometimes marvel that I didn't go for biology, because I have always felt that evolution by natural selection is more beautiful than any theory of 'physics' (of course, really it is a theory of physics, but not 'nominal physics').

Obviously, the absolute queen of theories is probability theory, since it is the technology that gives us all the other theories.

A few years ago, during my PhD work, I listened to a man called Ben Goldacre on BBC radio, and as a result stumbled onto several useful things. Firstly, by googling his name afterwards, I discovered that there are things called science blogs (!) and something called a 'skeptic's community.' I became hooked.

The next thing I learned from Goldacre’s blog was that I had been shockingly badly educated in statistics. I realized for example, that science and statistics are really the same thing. Damn, hindsight feels weird sometimes - how could I possibly have gone through two and a bit degrees in physics, without realizing this stupendously obvious thing? I started a systematic study.

Through the Bad Science blog, I also found my way to David Colquhoun’s noteworthy blog, where a commenter brought to my attention a certain book by a certain E.T. Jaynes. Suddenly, all the ugly, self-contradictory nonsense of frequentist statistics that I’d been struggling with (as a result of my newly adopted labors to try to understand scientific method better) was replaced with beauty and simple common sense. This was the most eye-opening period of my life.

It was also while looking through professor Colquhoun’s ‘recently read’ sidebar that I first happened to click on a link that brought me to some writing by one Dr. Yudkowsky. And it was good.

In accord with my long-held interest in science, I think I have always been a rationalist. Though I don't make any claims to be particularly rational, I hold rationality as an explicit high goal. Not my highest goal, obviously – rationality is an approach for solving problems, so without something of higher value to aim for, what problem is there to solve? What space is left for being rational? I might value rationality 'for its own sake,' but ultimately, this means 'being rational makes me happy', and thus, as is necessarily so, happiness is the true goal.

But rationality is a goal, nonetheless, and a necessary one, if we are to be coherent. To desire anything is to desire to increase one's chances of achieving it. Science (rationality) is the set of procedures that maximize one's expectation of identifying true statements about reality. Such statements include those that are trivially scientific (e.g. 'the universe is between 13.7 and 13.9 billion years old'), and those that concern other matters of fact, that are often not considered in science's domain, such as the best way to achieve X. (Thus questions that science can legitimately address include: How can I build an aeroplane that won't fall out of the sky? What is the best way to conduct science? How can I earn more money? What does it mean to be a good person?) Thus, since desiring a thing entails desiring an efficient way to achieve it, any desire entails holding rationality as a goal.

And so, my passion for scientific method has led me to recognize that many things traditionally considered outside the scope of science are in fact not: legal matters, political decisions, and even ethics. I realized that science and morality are identical: all questions of scientific methodology are matters of how to behave correctly, all questions of how to behave are most efficiently answered by being rational, thus being rational is the correct way to behave.

Philosophy? Yup, that too – if I (coherently) love wisdom, then necessarily, I desire an efficient procedure for achieving it. But not only does philosophy entail scientific method, since philosophy is an educated attempt to understand the structure of reality, there is no reason (other than tradition) to distinguish it from science – these two are also identical.

My goals as a rationalist can be divided into 3 parts: (1) to become more adept at actually implementing rational inference, particularly decision making, (2) to see more scientists more fully aware of the full scope and capabilities of scientific method, and (3) to see society’s governance more fully guided by rationality and common sense. Too many scientists see science as having no ethical dimension, and too many voters and politicians see science as having no particular role in deciding political policy: at best it can serve up some informative facts and figures, but the ultimate decision is a matter of human affairs, not science (echoing a religious view, that people are somehow fundamentally special, dating back to a time before anybody had even figured out that cleaning the excrement from your hands before eating is a good idea). I’m tired of democratically elected politicians making the same old crummy excuse of having a popular mandate - “How can I deny the will of the people?” - when they have never even bothered to look into whether or not their actions are in the best interests of the people. In a rational society, of course, there would be no question of evidence-based politics defying the will of the people: the people would vote to be governed rationally, every time.

Goal (1) I pursue almost wholly privately. Perhaps the Less Wrong community can help me change that. After my PhD, while still in The Netherlands, I tried to establish and market a short course in statistics for PhD students, which was my first effort to work on goal (2). This seemed like the perfect approach: firstly, as I mentioned, my own education (and that of many other physicists, in particular) on the topic of what science actually is, was severely lacking. Secondly, in NL, the custom is for PhD students to be sent for short courses as part of their education, but the selection of courses I was faced with was abysmal, and the course I was ultimately forced to attend was a joke – two days of listening to the vacuousness of a third-rate motivational speaker.

I really thought the Dutch universities would jump at the chance to offer their young scientists something useful, but they couldn't see any value in it. So I took the best bits of my short course, and made them into a blog, which also serves, to a lesser degree, to address goal (3).

As social critters, wanting the best for us and our kind, I expect that most of us in the rationalist community share a goal somewhat akin to my goal (3). Furthermore, I expect that more than any other single achievement, goal (3) would also dramatically facilitate goals (1) and (2), and their kin. Thus I predict that a reasoned analysis will yield goal (3), or something very similar, to be the highest possible goal within the pursuit of rationalism. The day that politicians consistently dare not neglect to seek out and implement the best scientific advice, for fear of getting kicked out by the electorate, will be the dawn of a new era of enlightenment.