Comments

Comment by grobstein on Linkpost: Arena’s New Opening Hand Rule Has Huge Implications For How We Play the Game · 2018-11-02T18:35:58.467Z · LW · GW

I read on r/MagicArena that, at least based on public information from Wizards, we don't *know* that "You draw two hands, and it selects the hand with the amount of lands closest to the average for your deck."

What we know is closer to: "You draw two hands, and there is some (unknown, but possibly not absolute) bias towards selecting the hand with the amount of lands closest to the average for your deck."

I take it that, if the bias is less than absolute, the consequences for deck-building are in the same direction but less extreme.
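
For anyone who wants a feel for how much the softness of the bias matters, here is a rough simulation of the rule as I understand it. The function names and the `bias` parameter are mine, and the mechanism is only a guess at what the client might be doing, not Wizards' actual code:

```python
import random

def draw_hand(deck, hand_size=7):
    """Draw a hand uniformly at random from the deck (a list of card labels)."""
    return random.sample(deck, hand_size)

def bo1_opening_hand(deck, bias=1.0, hand_size=7):
    """Draw two candidate hands; with probability `bias`, keep the hand whose
    land count is closest to the deck's expected land count, otherwise keep
    the other one. bias=1.0 is the absolute rule described in the linkpost;
    bias < 1.0 is a softer, partial preference."""
    expected_lands = hand_size * sum(c == "land" for c in deck) / len(deck)
    closer, farther = sorted(
        (draw_hand(deck, hand_size), draw_hand(deck, hand_size)),
        key=lambda h: abs(sum(c == "land" for c in h) - expected_lands))
    return closer if random.random() < bias else farther

# Quick sanity check with a 60-card deck running 24 lands.
deck = ["land"] * 24 + ["spell"] * 36
trials = 20_000
for bias in (1.0, 0.7):
    avg = sum(sum(c == "land" for c in bo1_opening_hand(deck, bias))
              for _ in range(trials)) / trials
    print(f"bias={bias}: average lands in opening hand = {avg:.2f}")
```

With `bias` well below 1.0, the land counts spread back out toward the ordinary hypergeometric distribution, which is the sense in which the deck-building consequences point the same way but are less extreme.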

Comment by grobstein on Superintelligence 10: Instrumentally convergent goals · 2014-11-18T19:49:07.697Z · LW · GW

But I don't think "utility function" in the context of this post has to mean a numerical utility explicitly computed in the code.

It could just mean that the agent behaves as if its utilities were given by a particular numerical function, regardless of whether this is written down anywhere.

Comment by grobstein on Superintelligence 10: Instrumentally convergent goals · 2014-11-18T19:46:06.318Z · LW · GW

In humans, goal drift may work as a hedging mechanism.

Comment by grobstein on Superintelligence 10: Instrumentally convergent goals · 2014-11-18T19:44:43.310Z · LW · GW

One possible explanation for the plasticity of human goals is that the goals that change aren't really final goals.

So me-now faces the question,

Should I assign any value to final goals that I don't have now, but that me-future will have because of goal drift?

If goals are interpreted widely enough, the answer should be, No. By hypothesis, those goals of me-future make no contribution to the goals of me-now, so they have no value to me. Accordingly, I should try pretty hard to prevent goal drift and / or reduce investment in the well-being of me-future.

Humans seem to answer, Yes, though. They simultaneously allow goal drift and care about self-preservation, even though the future self may not have goals in common with the present self.

This behavior can be rationalized if we assume that it's mostly instrumental goals that drift, with final goals remaining fixed. So maybe humans have the final goal of maximizing their inclusive fitness, and consciously accessible goals are just noisy instruments for this final goal. In that case, it may be rational to embrace goal drift because 1) future instrumental goals will be better suited to implementing the final goal, under changed future circumstances, and 2) allowing goals to change produces multiple independent instruments for the final goal, which may reduce statistical noise.

Comment by grobstein on Superintelligence 10: Instrumentally convergent goals · 2014-11-18T04:09:56.189Z · LW · GW

I am not that confident in the convergence properties of self-preservation as instrumental goal.

It seems that at least some goals should be pursued ballistically -- i.e., by setting an appropriate course in motion so that it doesn't need active guidance.

For example, living organisms vary widely in their commitments to self-preservation. One measure of this variety is the variety of lifespans and lifecycles. Organisms generally share the goal of reproducing, and they pursue this goal by a range of means, some of which require active guidance (like teaching your children) and some of which don't (like releasing thousands of eggs into the ocean).

If goals are allowed to range very widely, it's hard to believe that all final goals will counsel the adoption of the same CIVs as subgoals. The space of all final goals seems very large. I'm not even very sure what a goal is. But it seems at least plausible that this choice of CIVs is contaminated by our own (parochial) goals, and given the full range of weird possible goals these convergences only form small attractors.

A different convergence argument might start from competition among goals. A superintelligence might not "take off" unless it starts with sufficiently ambitious goals. Call a goal ambitious if its CIVs include coming to control significant resources. In that case, even if only a relatively small region in goal-space has the CIV of controlling significant resources, intelligences with those goals will quickly be overrepresented. Cf. this intriguing BBS paper I haven't read yet.

Comment by grobstein on Causal Universes · 2012-11-29T17:28:10.394Z · LW · GW

Hard to see why you can't make a version of this same argument, at an additional remove, in the time travel case. For example, if you are a "determinist" and / or "n-dimensionalist" about the "meta-time" concept in Eliezer's story, the future people who are lopped off the timeline still exist in the meta-timeless eternity of the "meta-timeline," just as in your comment the dead still exist in the eternity of the past.

In the (seemingly degenerate) hypothetical where you go back in time and change the future, I'm not sure why we should prefer to say that we "destroy" the "old" future, rather than simply that we disconnect it from our local universe. That might be a horrible thing to do, but then again it might not be. There's lots of at-least-conceivable stuff that is disconnected from our local universe.

Comment by grobstein on Causal Universes · 2012-11-29T16:59:30.158Z · LW · GW

Any inference about "what sort of thingies can be real" seems to me premature. If we are talking about causality and space-time locality, it seems to me that the more parsimonious inference regards what sort of thingies a conscious experience can be embedded in, or what sort of thingies a conscious experience can be of.

The suggested inference seems to privilege minds too much, as if to say that only the states of affairs that allow a particular class of computation can possibly be real. (This view may reduce to empiricism, which people like, but stated this way I think it's pretty hard to support! What's so special about conscious experience?)

EDIT: Hmm, here is a rather similar comment. Hard to process this whole discussion.

EDIT EDIT: maybe even this comment is about the same issue, although its argument is being applied to a slightly different inference than the one suggested in the main article.

Comment by grobstein on Less Wrong NYC: Case Study of a Successful Rationalist Chapter · 2011-03-24T20:48:09.089Z · LW · GW

Which of these is a major stressor on romantic relationships?

Comment by grobstein on Optimal Employment · 2011-02-01T22:23:11.879Z · LW · GW

> (Wikipedia's article on tax incidence claims that employees pay almost all of payroll taxes, but cites a single paper that claims a 70% labor / 30% owner split for corporate income tax burden in the US, and I have no idea how or whether that translates to payroll tax burden or whether the paper's conclusions are generally accepted.)

There's no consensus on the incidence of the corporate income tax in the fully general case. It's split among too many parties.

Comment by grobstein on Optimal Employment · 2011-01-31T21:29:36.517Z · LW · GW

> The USA is not the best place to earn money. My own experience suggests that at least Japan, New Zealand, and Australia can all be better. This may be shocking, but young professionals with advanced degrees can earn more discretionary income as a receptionist or a bartender in the Australian outback than as, say, a software engineer in the USA.

As a side question, when did a receptionist or bartender become a "professional"? Is "professional" just used as a class marker, standing for something like "person with a non-vocational 4-year college degree"?

Or is the idea that one is a professional because one is in some sense a software engineer (e.g.), even while employed as a receptionist or bartender?

Comment by grobstein on Optimal Employment · 2011-01-31T19:51:37.332Z · LW · GW

Note that a lot of the financial benefit described here comes from living somewhere remote -- in particular the housing and food costs. That's the reason for the strenuous warning not to live in "Sydney, Melbourne or any major Australian city." From a larger perspective, it partly accounts for choosing Australia over America (low population density --> low housing costs, etc.).

For a full analysis, the cost differentials of living in the Australian outback vs. an American city (or whatever) have to be decomposed into price level, consumption, and other factors. For example, I pay a very high cost for living in New York. But I recover part of the cost in various benefits. Broadly: 1) New York may be the only place in the world where my employment situation is possible, 2) New York is a social coordination point where it is especially easy to meet the kind of people I would like to meet.

This is probably the case for many people who decide to live in New York.

Comment by grobstein on Optimal Employment · 2011-01-31T19:22:06.079Z · LW · GW

There used to be a special "expatriation tax" that applied only to taxpayers who renounced their (tax) citizenship for tax avoidance purposes. However, under current law, I believe you are treated the same regardless of your reason for renouncing your (tax) citizenship. Here's an IRS page on the subject:

http://www.irs.gov/businesses/small/international/article/0,,id=97245,00.html

This is not an area of my expertise, though.

Comment by grobstein on Attention Lurkers: Please say hi · 2010-04-19T01:04:05.625Z · LW · GW

Hi. I am a very occasional participant, mostly because of competing time demands, but I appreciate the work done here and check it out when I can.

Comment by grobstein on It's not like anything to be a bat · 2010-03-30T16:20:08.318Z · LW · GW

If there is an infinite number of conscious minds, how do the anthropic probability arguments work out?

In a big universe, there are infinitely many beings like us.

Comment by grobstein on Coffee: When it helps, when it hurts · 2010-03-10T17:33:37.777Z · LW · GW

Caffeine, of course, is rather addictive.

So one might (and I do) find it difficult to optimize finely according to what tasks one is attempting. The addictive nature of the drug probably explains the "always or never" consumption pattern.

Comment by grobstein on Conversation Halters · 2010-02-22T00:32:42.893Z · LW · GW

In the wild, people use these gambits mostly for social, rather than argumentative, reasons. If you are arguing with someone and believe their arguments are pathological, and engagement is not working, you need to be able to stop the debate. Hence, one of the above -- this is most clear with "Let's agree to disagree."

In practice, it can be almost impossible to get out of a degrading argument without being somewhat intellectually dishonest. And people generally are willing to be a little dishonest if it will get them out of an annoying and unproductive situation.

If you have frequently been on the receiving end of "conversation halters," consider the hypothesis that you are doing something wrong. If you often provoke the reaction that people would rather not engage with you, the social part of your argumentative technique is badly broken.

Comment by grobstein on The AI in a box boxes you · 2010-02-02T20:32:24.256Z · LW · GW

It seems obvious that if the AI has the capacity to torture trillions of people inside the box, it would have the capacity to torture *illions outside the box.

Comment by grobstein on The AI in a box boxes you · 2010-02-02T19:46:01.639Z · LW · GW

If that's true, what consequence does it have for your decision?

Comment by grobstein on What I Tell You Three Times Is True · 2009-05-03T03:32:56.980Z · LW · GW

The difficulty for me is that this technique is at war with having an accurate self-concept, and may conflict with good epistemic hygiene generally. For the program to work, one must seemingly learn to suppress one's critical faculties for selected cases of wishful thinking. This runs against trying to be just the right amount critical when faced with propositions in general. How can someone who is just the right amount critical affirm things that are probably not true?

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T23:37:27.750Z · LW · GW

It's $45 from Amazon. At that price, I'm going to scheme to steal it back first.

OR MAYBE IT'S BECAUSE I'M CRAAAZY AND DON'T ACT FOR REASONS!

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T23:26:56.726Z · LW · GW

Rationality is made of win.

Duhhh!

(Cf.)

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T23:26:07.716Z · LW · GW

> Eliezer's argument, if I understand it, is that any decision-making algorithm that results in two-boxing is by definition irrational due to giving a predictably bad outcome.

So he's assuming the conclusion that you get a bad outcome? Golly.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T22:17:09.627Z · LW · GW

This premise is not accepted by the 1-box contingent. Occasionally they claim there's a reason.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T21:10:59.771Z · LW · GW

> Please ... Newcomb is a toy non-mathematizable problem and not a valid argument for anything at all.

Why?

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T20:49:03.007Z · LW · GW

Actually the problem is an ambiguity in "right" -- you can take the "right" course of action (instrumental rationality, or ethics), or you can have "right" belief (epistemic rationality).

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T20:45:48.644Z · LW · GW

Here's a functional difference: Omega says that Box B is empty if you try to win what's inside it.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T20:39:51.324Z · LW · GW

Your argument is equivalent to, "But what if your utility function rates keeping promises higher than a million orgasms, what then?"

The hypo is meant to be a very simple model, because simple models are useful. It includes two goods: getting home, and having $100. Any other speculative values that a real person might or might not have are distractions.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T20:09:55.505Z · LW · GW

Right. The question of course is, "better" for what purpose? Which model is better depends on what you're trying to figure out.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T20:07:48.642Z · LW · GW

I do think these problems are mostly useful for purposes of understanding and (more so) defining rationality ("rationality"), which is perhaps a somewhat dubious use. But look how much time we're spending on it.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T20:05:42.599Z · LW · GW

I very much recommend Reasons and Persons, by the way. A friend stole my copy and I miss it all the time.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T19:55:42.898Z · LW · GW

What is it, pray tell, that Omega cannot do?

Can he not scan your brain and determine what strategy you are following? That would be odd, because this is no stronger than the original Newcomb problem and does not seem to contain any logical impossibilities.

Can he not compute the strategy, S, with the property "that at each moment, acting as S tells you to act -- given (1) your beliefs about the universe at that point and (2) your intention of following S at all times -- maximizes your net utility [over all time]?" That would be very odd, since you seem to believe a regular person can compute S. If you can do it, why not Omega? (NB, no, it doesn't help to define an approximation of S and use that. If it's rational, Omega will punish you for it. If it's not, why are you doing it?)

Can he not compare your strategy to S, given that he knows the value of each? That seems odd, because a pushdown automaton could make the comparison. Do you require Omega to be weaker than a pushdown automaton?

No?

Then is it possible, maybe, that the problem is in the definition of S?

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T19:29:55.966Z · LW · GW

Yes, this seems unimpeachable. The missing piece is, rational at what margin? Once you are home, it is not rational at the margin to pay the $100 you promised.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T19:05:48.571Z · LW · GW

It's a test case for rationality as pure self-interest (really it's like an altruistic version of the game of Chicken).

Suppose I'm purely selfish and stranded on a road at night. A motorist pulls over and offers to take me home for $100, which is a good deal for me. I only have money at home. I will be able to get home, then, IFF I can credibly promise to pay $100 when I get home.

But when I get home, the marginal benefit to paying $100 is zero (under assumption of pure selfishness). Therefore if I behave rationally at the margin when I get home, I cannot keep my promise.

I am better off overall if I can commit in advance to keeping my promise. In other words, I am better off overall if I have a disposition which sometimes causes me to behave irrationally at the margin. Under the self-interest notion of rationality, then, it is rational, at the margin of choosing your disposition, to choose a disposition which is not rational under the self-interest notion of rationality. (This is what Parfit describes as an "indirectly self-defeating" result; note that being indirectly self-defeating is not a knockdown argument against a position.)
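
A toy version of the payoff comparison, with made-up numbers, just to make the "indirectly self-defeating" structure explicit (the disposition labels and utility values are mine, purely for illustration):

```python
# Made-up utilities for the stranded-motorist story above.
VALUE_OF_GETTING_HOME = 1000   # utils from the ride home
COST_OF_PAYING = 100           # utils lost by handing over the $100

def lifetime_utility(disposition):
    """Total utility for a purely selfish agent with the given disposition.

    The motorist gives the ride only if she predicts the promise will be kept,
    and a promise-keeper then pays even though paying is worthless at the margin."""
    if disposition == "promise-keeper":
        return VALUE_OF_GETTING_HOME - COST_OF_PAYING  # rides home, then pays
    return 0                                           # no credible promise, no ride

for d in ("promise-keeper", "marginal-maximizer"):
    print(f"{d}: {lifetime_utility(d)}")
# promise-keeper: 900
# marginal-maximizer: 0
```

The marginal-maximizer is rational at every decision point and worse off overall, which is all the example is meant to show.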

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T18:40:18.070Z · LW · GW

Yes, you are changing the hypo. Your Omega dummy says that it is the same game as Newcomb's problem, but it's not. As VN notes, it may be equivalent to the version of Newcomb's problem that assumes time travel, but this is not the classical (or an interesting) statement of the problem.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T18:33:21.102Z · LW · GW

No. The point is that you actually want to survive more than you want to win, so if you are rational about Chicken you will sometimes lose (consult your model for details). Given your preferences, there will always be some distance \epsilon before the cliff where it is rational for you to give up.

Therefore, under these assumptions, the strategy "win or die trying" seemingly requires you to be irrational. However, if you can credibly commit to this strategy -- be the kind of person who will win or die trying -- you will beat a rational player every time.

This is a case where it is rational to have an irrational disposition, a disposition other than doing what is rational at every margin.
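
Here is the same point as a toy payoff table, with illustrative numbers of my own choosing:

```python
# Chicken, from the row player's point of view: PAYOFF[(mine, theirs)].
PAYOFF = {
    ("swerve",   "swerve"):    0,
    ("swerve",   "straight"): -1,    # back down and lose
    ("straight", "swerve"):    1,    # win
    ("straight", "straight"): -100,  # crash
}

def best_response(their_strategy):
    """What a player who is rational at the margin plays against a known opponent."""
    return max(("swerve", "straight"), key=lambda s: PAYOFF[(s, their_strategy)])

# Against someone credibly committed to "straight" (win or die trying),
# the rational player swerves, and the committed player collects the win.
reply = best_response("straight")
print(reply)                          # swerve
print(PAYOFF[("straight", reply)])    # 1, the committed player's payoff
```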

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T18:14:43.635Z · LW · GW

Why don't you accept his distinction between acting rationally at a given moment and having the disposition which it is rational to have, integrated over all time?

EDIT: er, Parfit's, that is.

Comment by grobstein on Can Humanism Match Religion's Output? · 2009-04-03T17:57:53.798Z · LW · GW

Obviously we can construct an agent who does this. I just don't see a reasonably parsimonious model that does it without including a preference for getting AIDS, or something similarly crazy. Perhaps I'm just stuck.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T17:52:13.062Z · LW · GW

(likewise the fairness language of the parent post)

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T17:34:31.234Z · LW · GW

It's impossible to add substance to "non-pathological universe." I suspect circularity: a non-pathological universe is one that rewards rationality; rationality is the disposition that lets you win in a non-pathological universe.

You need to attempt to define terms to avoid these traps.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T16:43:11.867Z · LW · GW

This is a classic point and clearer than the related argument I'm making above. In addition to being part of the accumulated game theory learning, it's one of the types of arguments that shows up frequently in Derek Parfit's discussion of what-is-rationality, in Ch. 1 of Reasons and Persons.

I feel like there are difficulties here that EY is not attempting to tackle.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T16:22:57.053Z · LW · GW

Quoting myself:

> (though I don't see how you identify any distinction between "properties of the agent" and "decisions . . . predicted to be made by the agent" or why you care about it).

I'll go further and say this distinction doesn't matter unless you assume that Newcomb's problem is a time paradox or some other kind of backwards causation.

This is all tangential, though, I think.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T16:21:11.815Z · LW · GW

Yes, all well and good (though I don't see how you identify any distinction between "properties of the agent" and "decisions . . . predicted to be made by the agent" or why you care about it). My point is that a concept of rationality-as-winning can't have a definite extension across, say, the domain of agents, because of the existence of Russell's-Paradox problems like the one I identified.

This is perfectly robust to the point that weird and seemingly arbitrary properties are rewarded by the game known as the universe. Your proposed redefinition may actually disagree with EY's theory of Newcomb's problem. After all, your decision can't empty box B, since the contents of box B are determinate by the time you make your decision.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T15:57:10.621Z · LW · GW

> What you give is far harder than a Newcomb-like problem. In Newcomb-like problems, Omega rewards your decisions, he isn't looking at how you reach them.

You misunderstand. In my variant, Omega is also not looking at how you reach your decision. Rather, he is looking at you beforehand -- "scanning your brain", if you will -- and evaluating the kind of person you are (i.e., how you "would" behave). This, along with the choice you make, determines your later reward.

In the classical problem, (unless you just assume backwards causation,) what Omega is doing is assessing the kind of person you are before you've physically indicated your choice. You're rewarded IFF you're the kind of person who would choose only box B.

My variant is exactly symmetrical: he assesses whether you are the kind of person who is rational, and responds as I outlined.

Comment by grobstein on Rationality is Systematized Winning · 2009-04-03T14:46:50.769Z · LW · GW

I don't think I buy this for Newcomb-like problems. Consider Omega who says, "There will be $1M in Box B IFF you are irrational."

Rationality as winning is probably subject to a whole family of Russell's-Paradox-type problems like that. I suppose I'm not sure there's a better notion of rationality.

Comment by grobstein on Can Humanism Match Religion's Output? · 2009-04-03T05:25:41.948Z · LW · GW

"Passing out condoms increases the amount of sex but makes each sex act less dangerous. So theoretically it's indeterminant whether it increases or decreases the spread of AIDS."

Not quite -- on a rational choice model, passing out condoms may decrease or not impact the spread of AIDS (in principle), but it can't increase it. A rational actor who doesn't actively want AIDS might increase their sexual activity enough to compensate for the added safety of the condom, but they would not go further than that.

(This is different from the seatbelt case because car crashes result in costs, say to pedestrians who are struck, that are not internalized by the driver.)
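
To make the compensation bound explicit (the symbols are mine): write p0 and p1 for the per-act transmission risk without and with condoms, and n0 and n1 for the corresponding amounts of sexual activity. "Compensating for the added safety but not going further" means n1 ≤ n0 · p0/p1, so

$$ p_1 n_1 \;\le\; p_1 \cdot n_0 \frac{p_0}{p_1} \;=\; p_0 n_0 , $$

i.e., under that behavioral assumption the total expected risk with condoms can never exceed the total expected risk without them.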

Comment by grobstein on Where are we? · 2009-04-03T05:04:34.876Z · LW · GW

Cambridge, MA. Rarely venture beyond Boston metro area.

However, I'll be in the Pioneer Valley on Apr. 17-19, if anyone is interested in a meetup that Sunday (the 19th), say in NoHo or Amherst.

Comment by grobstein on LessWrong anti-kibitzer (hides comment authors and vote counts) · 2009-03-10T18:09:52.165Z · LW · GW

Simply annoying from a usability point of view. It requires you to know in advance which posts you'll want to vote on and which authors you'll want to identify; if you care about the value of your karmic vote, you'll wind up interfering with your enjoyment of the posts just to preserve that value; etc.

Comment by grobstein on Formalization is a rationality technique · 2009-03-07T01:39:06.275Z · LW · GW

Formalizations that take big chunks of arguments as black boxes are not that useful. Formalizations that instead map all of an argument's moving parts are very hard.

The reason that specialists learn formalizations only for domain-specific arguments is that formalizing truly general arguments[FN1] is an extremely difficult problem -- such a formalism would be difficult to design and difficult to use. This is why mathematicians work largely in natural language, even though their arguments could (usually or always) be described in formal logic. Specialized formal languages are possible and useful only because they describe radically stripped-down models of the world.

Overall, this means that -- while moves like the example in the post probably are helpful -- we shouldn't expect to go much further in this direction.


[FN1] To be precise, formalizing truly general arguments in a way that simplifies and clarifies is hard. There is a trivial formalization for every argument, since the class of arguments is equinumerous with the natural numbers; the decision process can be a symbolic representation of a bunch of experts' brains or something like that.

Comment by grobstein on That You'd Tell All Your Friends · 2009-03-06T02:19:38.374Z · LW · GW

Totally agree -- helps if you can convince them to read A Fire Upon the Deep, too. I'm not being facetious; the explicit and implicit background vocabulary (seems to) make it easier to understand the essays.

(EDIT: to clarify, it is not that I think Fire in particular must be elevated as a classic of rationality, but that it's part of a smart sci-fi tradition that helps lay the ground for learning important things. There's an Eliezer webpage about this somewhere.)

Comment by grobstein on That You'd Tell All Your Friends · 2009-03-06T02:16:39.900Z · LW · GW

Clarity and transparency. One should be able to open the book to a page, read an argument, and see that it is right.

(Obviously this trades off against other values -- and is in some measure a deception -- but it's the kind of thing that impresses my friends.)