Comments

Comment by robzahra on Atheism = Untheism + Antitheism · 2009-07-02T00:23:58.197Z · LW · GW

The current best answer we know of seems to be: write each consistent hypothesis in a formal language, weight longer explanations inverse-exponentially, and renormalize so that your total probability sums to 1. Look up AIXI and the universal prior.
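
As a minimal sketch of that weighting scheme (the hypotheses and bit-lengths below are invented for illustration, not drawn from any real formal language):

```python
# Weight each hypothesis by 2^(-description length), then renormalize.
hypotheses = {"h1": 3, "h2": 5, "h3": 5, "h4": 9}  # name -> length in bits (made up)

raw = {h: 2.0 ** -length for h, length in hypotheses.items()}
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}  # probabilities now sum to 1

for h, p in sorted(prior.items()):
    print(h, round(p, 4))
# A hypothesis one bit longer gets half the unnormalized weight of its shorter rival.
```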

Comment by robzahra on Readiness Heuristics · 2009-06-16T16:42:15.636Z · LW · GW

Shutting up and multiplying, the answer is clearly to save Eliezer... and to do so over a lot more people than just three. The question is more interesting if you ask people what n (probably greater than 3) is their cutoff point.

Comment by robzahra on A Request for Open Problems · 2009-05-09T13:40:48.832Z · LW · GW

Due to chaotic / non-linear effects, you're not going to get anywhere near the compression you need for 33 bits to be enough... I'm very confident the answer is much, much higher.

Comment by robzahra on Excuse me, would you like to take a survey? · 2009-04-27T13:04:00.286Z · LW · GW

You're right. Speaking more precisely, by "ask yourself what you would do", I mean "engage in the act of reflecting, wherein you realize the symmetry between you and your opponent, which reduces the decision problem to (C,C) and (D,D), so that you choose (C,C)", as you've outlined above. Note, though, that even when the reduction is not complete (for example, because you're fighting a similar but inexact clone), there can still be added incentive to cooperate.

Comment by robzahra on Excuse me, would you like to take a survey? · 2009-04-27T12:26:55.224Z · LW · GW

Agreed that in general one will have some uncertainty over whether one's opponent is the type of algorithm that one-boxes / cooperates / that one wants to cooperate with, etc. It does look like you need to plug these uncertainties into your expected utility calculation, such that you decide to cooperate or defect based on your degree of uncertainty about your opponent.

However, in some cases at least, you don't need to be Omega-superior to predict whether another agent one-boxes. For example, if you're facing a clone of yourself, you can just ask yourself what you would do, and you know the answer. There may be some class of algorithms non-identical to you but still close enough that this self-reflection is increased evidence that your opponent will cooperate if you do.

Comment by robzahra on Excuse me, would you like to take a survey? · 2009-04-27T03:34:14.088Z · LW · GW

Agreed with Tarleton, the prisoner's dilemma questions do look under-specified... e.g., Eliezer has said something like "cooperate if he thinks his opponent one-boxes on Newcomb-like problems". Maybe you could have some write-in box here and figure out how to map the votes to simple categories later, depending on the variety of survey responses you get.

Comment by robzahra on Excuse me, would you like to take a survey? · 2009-04-27T03:27:38.106Z · LW · GW

On the belief-in-God question, rule out simulation scenarios explicitly... I assume you intend "supernatural" to rule out a simulation creator as a "god"?

Comment by robzahra on Excuse me, would you like to take a survey? · 2009-04-27T03:26:09.549Z · LW · GW

On marital status, distinguish "single and looking for a relationship" from "single and looking for people to casually romantically interact with".

Comment by robzahra on This Didn't Have To Happen · 2009-04-24T23:30:02.372Z · LW · GW

Seems worth mentioning: I think a thorough treatment of what "you" want needs to address extrapolated volition and all the associated issues that raises.
To my knowledge, some of those issues remain unsolved, such as whether different simulations of oneself in different environments necessarily converge (it seems to me very unlikely, and this looks provable in a simplified model of the situation), and if not, how to "best" harmonize their differing opinions; similarly, whether a single simulated instance of oneself might itself fail to converge, or fail to provably converge, on one utility function as simulated time goes to infinity (quite likely, and moreover provable, in a simplified model), etc.
If conclusive work has been done of which I'm unaware, it would be great if someone wants to link to it.
It seems unlikely to me that we can satisfactorily answer these questions without at least a detailed model of our own brains linked to reductionist explanations of what it means to "want" something, etc.

Comment by robzahra on Winning is Hard · 2009-04-17T09:52:31.746Z · LW · GW

Whpearson: I definitely agree with the point you're making about knives etc., though I think one interpretation of the NFL theorems, as applying not just to search but also to optimization, makes your observation an instance of one type of NFL result. Admittedly, there are some fine-print assumptions, which I think go under the term "almost no free lunch" when discussed.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-17T09:31:22.446Z · LW · GW

Tim: Good, your distinction sounds correct to me.

Comment by robzahra on Actions and Words: Akrasia and the Fruit of Self-Knowledge · 2009-04-17T09:28:33.852Z · LW · GW

Annoyance, I don't disagree. The runaway loop leading to intelligence seems plausible, and it appears to support the idea that partially accurate modeling confers enough advantage to be incrementally selected for.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-17T09:23:43.872Z · LW · GW

Yes, the Golden Gate Bridge is a special case of deduction in the sense meant here. I have no problem with anything in your comment; I think we agree.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-16T01:04:40.404Z · LW · GW

I think we're probably using some words differently, and that's making you think my claim that deductive reasoning is a special case of Bayes is stronger than I mean it to be.

All I mean, approximately, is:

Bayes' theorem: p(B|A) = p(A|B)*p(B) / p(A)

Deduction : Consider a deductive system to be a set of axioms and inference rules. Each inference rule says: "with such and such things proven already, you can then conclude such and such". And deduction in general then consists of recursively turning the crank of the inference rules on the axioms and already generated results over and over to conclude everything you can.

Think of each inference rule "i" as i(A) = B, where A is some set of already established statements and B corresponds to what statements "i" lets you conclude, if you already have A.

Then, by deduction we're just trying to say that if we have generated A, and we have an inference rule i(A) = B, then we can generate or conclude B.

The connection between deduction and Bayes is to take the generated "proofs" of the deductive system as those things to which you assign probability 1 using Bayes.

So, the inference rule corresponds to the fact that p(B | A) = 1. The fact that A has been already generated corresponds to p(A) = 1. Also, since A has already been generated independently of B, p(A | B) = 1, since A didn't need B to be generated. And we want to know what p(B) is.

Well, plugging into Bayes:
p(B|A) = p(A|B)*p(B) / p(A), i.e. 1 = 1*p(B) / 1, i.e. p(B) = 1.

In other words, B can be generated, which is what we wanted to show.

So basically, I think of deductive reasoning as just reasoning with no uncertainty, and I see that as popping out of Bayes in the limiting case. If a certain formal interpretation of this leads me into Gödelian problems, then I would just need to weaken my claim somewhat, because some useful analogy is clearly there in how the uncertain reasoning of Bayes reduces to certain conclusions in various limits of the inputs (p=0, p=1, etc.).
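
As a toy numeric check of the argument above (just Bayes' theorem solved for p(B), with the certainties plugged in; the function name is mine):

```python
def solve_for_p_B(p_B_given_A, p_A_given_B, p_A):
    # Bayes: p(B|A) = p(A|B) * p(B) / p(A)  =>  p(B) = p(B|A) * p(A) / p(A|B)
    return p_B_given_A * p_A / p_A_given_B

# p(B|A) = 1 (the inference rule), p(A) = 1 (A already derived),
# p(A|B) = 1 (A was derived without needing B)
print(solve_for_p_B(p_B_given_A=1.0, p_A_given_B=1.0, p_A=1.0))  # -> 1.0
```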

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-15T23:34:52.985Z · LW · GW

Ciphergoth, I agree with your points: that if your prior over world-states were not induction-biased to start with, you would not be able to reliably use induction, and that this is a type of circularity. Also of course, the universe might just be such that the Occam prior doesn't make you win; there is no free lunch, after all.

But I still think induction could meaningfully justify itself, at least in a partial sense. One possible, though speculative, pathway: Suppose Tegmark is right and all possible math structures exist, and that some of these contain conscious sub-structures, such as you. Suppose further that Bostrom is right and observers can be counted to constrain empirical predictions. Then it might be that there are more beings in your reference class that are part of simple mathematical structures as opposed to complex mathematical structures, possibly as a result of some mathematical fact about your structure and how that logically inter-relates to all possible structures. This might actually make something like induction true about the universe, without it needing to be a direct assumption. I personally don't know if this will turn out to be true, nor whether it is provable even if true, but this would seem to me to be a deep, though still partially circular, justification for induction, if it is the case.

We're not fully out of the woods even if all of this is true, because one still might want to ask Tegmark "Why does literally everything exist rather than something else?", to which he might want to point to an Occam-like argument that "everything exists" is algorithmically very simple. But these, while circularities, do not appear trivial to my mind; i.e., they are still deep and arguably meaningful connections which seem to lend credence to the whole edifice. Eliezer discusses in great detail why some circular loops like these might be OK, or even necessary, in "Where Recursive Justification Hits Bottom".

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-15T22:34:02.033Z · LW · GW

I agree with Jimmy's examples. Tim, the Solomonoff model may have some other fine-print assumptions {see some analysis by Shane Legg here}, but "the earth having the same laws as space" or "laws not varying with time" are definitely not needed for the optimality proofs of the universal prior (though of course, to your point, uniformity does make our induction in practice easier, and time and space translation invariance of physical law do appear to be true, AFAIK). Basically, assuming the universe is computable is enough to get the optimality guarantees. This doesn't mean you might not still be wrong if Mars in fact violates the rules you've learned on Earth, but it still provides a strong justification for using induction even without a guarantee that the laws are the same. Once you observed Mars to have different laws, you would assign the largest weight to the simplest joint hypothesis for your next decision.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-15T21:59:08.790Z · LW · GW

Tim: To resolve your disagreement: induction is not purely about deduction, but it nevertheless can be completely modelled by a deductive system.

More specifically, I agree with your claim about induction (see point 4 above). However, in defense of Eliezer's claim that induction is a special case of deduction, I think you can model it in a deductive system even though induction might require additional assumptions. For one thing, deduction in practice seems to me to require empirical assumptions as well (i.e., the "axioms" and "inference rules" are chosen based on how right they seem), so the fact that induction needs some axioms should not itself prevent deductive-style proofs using an appropriately formalized version of it. So, once you decide on various axioms, such as the various desiderata I list above for a Solomonoff-like system, you CAN describe via a mathematical deduction system how the process of induction would proceed. Induction can thus be formalized, and proofs can be made about the best thing for an agent to do; the AIXI model is basically an example of this.

Comment by robzahra on Actions and Words: Akrasia and the Fruit of Self-Knowledge · 2009-04-15T17:03:56.309Z · LW · GW

I agree with the spirit of this, though of course we have a long way to go in cognitive neuroscience before we know ourselves anywhere near as well as we know the majority of our current human artifacts. However, it does seem like relatively more accurate models will help us comparatively more, most of the time. Presumably the fact that human intelligence was able to evolve at all is some evidence in favor of this.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-15T11:55:29.097Z · LW · GW

It looks to me like those uniformity of nature principles would be nice but that induction could still be a smart thing to do despite non-uniformity. We'd need to specify in what sense uniformity was broken to distinguish when induction still holds.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-13T22:12:20.906Z · LW · GW

> Are you saying that you would modify the first definition of rational to include these other ways of knowing (Occam's Razor and Inductive Bias), and that they can make conclusions about metaphysical things?

Yes, I don't think you can get far at all without an induction principle. We could make a meta-model of ourselves and our situation and prove we need induction in that model, if it helps people, but I think most people already have the intuition that nothing observational can be proven "absolutely", that there are an infinite number of ways to draw curved lines connecting two points, etc. Basically, one needs induction to move beyond skeptical arguments and do anything here. We're using induction implicitly in all or most of our applied reasoning, I think.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-13T22:07:16.660Z · LW · GW

Yes, exactly.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-13T21:01:50.515Z · LW · GW

Why accept an inductive principle:

  1. Finite agents have to accept an "inductive-ish" principle, because they can't even process the infinitely many consistent theories whose length exceeds the number of computation steps available to them, and therefore they can't even directly consider most of the long theories. Zooming out and viewing from the macro level, this is extremely inductive-ish, though it doesn't decide between two fairly short theories, like Christianity versus string theory.

  2. Probabilities over all your hypotheses have to sum to 1, and getting an extra bit of info allows you to rule out approximately half of the remaining consistent theories; therefore, your probability of a theory one bit longer being true ought to drop by that ratio. If your language is binary, this has the nice property that you can assign a 1-bit hypothesis a probability of 1/2, a 2-bit hypothesis a probability of 1/4, ... an n-bit hypothesis a probability of 1/(2^n), and you notice that 1/2 + 1/4 + 1/8 + ... = 1. So the scheme fits pretty naturally.

  3. Under various assumptions, an agent does only a constant factor worse using this induction assumption versus any other method, making this seem not only less than arbitrary, but arguably, "universal".

  4. Ultimately, we could be wrong and our universe may not actually obey the Occam Prior. It appears we don't and can't even in principle have a complete response to religionists who are using solipsistic arguments. For example, there could be a demon making these bullet points seem reasonable to your brain, while they are in fact entirely untrue. However, this does not appear to be a good reason not to use Occam's razor.

  5. Related to (2): you can't assign equal probability greater than 0 to each of the infinite number of theories consistent with your data and still have your sums converge to 1 (because for any rational number R > 0, the sum of infinitely many R's diverges). So, you have to discount some hypotheses relative to others, and induction looks to be the simplest way to do this (one could say of the previous sentence, "meta-Occam's razor supports Occam's razor"). The burden of proof is on the religionist to propose a plausible alternative mapping, since the Occam mapping appears to satisfy the fairly stringent desiderata.

  6. Further to (5), notice that to get the probability sum to converge to 1, and also to assign each of the infinite consistent hypotheses a probability greater than 0, most hypotheses need to have smaller probability than any fixed rational number. In fact, you need more than that: you actually need the probabilities to drop pretty fast, since 1/2 + 1/3 + 1/4 + ... does not converge (see the numerical sketch after this list). On the other hand, you COULD have certain instances where you switch two theories around in their probability assignments (for example, you could arbitrarily say Christianity was more likely than string theory, even though Christianity is a longer theory), but for most of the theories, with increasing length you MUST drop your probability towards 0 relatively fast to maintain the desiderata at all. To switch these probabilities only for particular theories you care about, while you also need and want to use the theory on other problems (including normal "common sense" intuitions, which are very well explained by this framework), and you ALSO need to use it generally on this problem except for a few counter-examples you explicitly hard-code, seems incredibly contrived. You're better off just going with Occam's razor, unless some better alternative can be proposed.
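
A quick numerical illustration of the convergence claims in (2), (5), and (6); the cutoffs below are arbitrary:

```python
# Geometric weights 1/2 + 1/4 + 1/8 + ... converge to 1;
# slower-decaying weights 1/2 + 1/3 + 1/4 + ... grow without bound.
geometric = sum(2.0 ** -(n + 1) for n in range(50))       # 50 terms is plenty
harmonic_tail = sum(1.0 / (n + 2) for n in range(10**6))  # a million terms

print(round(geometric, 12))     # -> 1.0 (to floating-point precision)
print(round(harmonic_tail, 1))  # -> ~13.4, and still climbing as terms are added
```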

Rob Zahra

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-13T20:46:43.488Z · LW · GW

This can be viewed the other way around: deductive reasoning as a special case of Bayes.

Comment by robzahra on GroupThink, Theism ... and the Wiki · 2009-04-13T20:37:52.381Z · LW · GW

Seconding timtyler and guysrinivasan: I think, but can't prove, that you need an induction principle to reach the anti-religion conclusion. See especially Occam's Razor and Inductive Bias. If someone wants to bullet-point the reasons to accept an induction principle, that would be useful. Maybe I'll take a stab later. It ties into Solomonoff induction, among other things.

EDIT: I've put some bullet points below which state the case for induction to the best of my knowledge.

Comment by robzahra on Persuasiveness vs Soundness · 2009-04-13T14:41:47.629Z · LW · GW

Yes, what to call the chunk is a separate issue... I at least partially agree with you, but I'd want to hear what others have to say. The recent debate over the tone of the Twelve Virtues seems relevant.

Comment by robzahra on Persuasiveness vs Soundness · 2009-04-13T14:23:21.518Z · LW · GW

This is the Dark Side root link. In my opinion it's a useful chunked concept, though maybe people should be hyperlinking here when they use the term, to be more accessible to people who haven't read every post. At the very least, the FAQ builders should add this, if it's not there already.

Comment by robzahra on Marketing rationalism · 2009-04-12T23:46:00.002Z · LW · GW

Some examples of what I think you're looking for:

  1. Vassar's proposed shift from saying "this is the best thing you can do" to "this is a cool thing you can do" because people's psychologies respond better to this
  2. Operant conditioning in general
  3. Generally, create a model of the other person, then use standard rationality to explore how to most efficiently change them. Obviously, the Less Wrong and Overcoming Bias knowledge base is very relevant for this.

Comment by robzahra on It's okay to be (at least a little) irrational · 2009-04-12T22:30:56.870Z · LW · GW

I mostly agree with your practical conclusion; however, I don't see purchasing fuzzies and utilons separately as an instance of irrationality per se. As a rationalist, you should model the inside of your brain accurately and admit that some things you would like to do might actually be beyond your control to carry out. Purchasing fuzzies would then be rational for agents with certain types of brains. "Oh well, nobody's perfect" is not the right reason to purchase fuzzies; rather, upon reflection, this appears to be the best way for you to maximize utilons long term. Maybe this is only a language difference (you tell me), but I think it might be more than that.

Comment by robzahra on Real-Life Anthropic Weirdness · 2009-04-07T15:12:07.887Z · LW · GW

Agreed

Comment by robzahra on Real-Life Anthropic Weirdness · 2009-04-07T02:19:16.655Z · LW · GW

If a gun were put to my head and I had to decide right now, I'd agree with your irritation. However, he did make an interesting point about public disrespect as a means of deterrence which deserves more thought. If that method looks promising after further inspection, we'd probably want to reconsider its application here, though it's still unclear to me to what extent it applies in this case.

Comment by robzahra on Real-Life Anthropic Weirdness · 2009-04-06T23:43:31.684Z · LW · GW

OK, I soften my critique given your reply, which made a point I hadn't fully considered.
It sounds like the public disrespect is intentional, and it does have a purpose. For it to be a good thing to do, you need to believe, among other things:

  1. Publicly doing that is more likely to make him stop relative to privately doing it. (Seems plausible).
  2. You're not losing something greater than the wasted time by other people observing your doing it. (Unclear to me)

It would be better, I think, if you could just privately charge someone for the time wasted; but it does seem unlikely Phil would agree to that. I think your suggestion of linking to a fairly respectful but forceful reply works pretty well for the time being.

Comment by robzahra on Real-Life Anthropic Weirdness · 2009-04-06T23:31:17.959Z · LW · GW

Phil, I think you're interpreting his claim too literally (relative to his intent). He is only trying to help people who have a psychological inability to discount small probabilities appropriately. Certainly if the lottery award grows high enough, standard decision theory implies you play. This is one of the Pascal's mugging variants (similarly, whether to perform hypothetical exotic physics experiments with a small probability of yielding infinite, or just extremely large, utility and a large probability of destroying everything), which is not fully resolved for any of us, I think.
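
A toy expected-value check of the "high enough award" claim; the ticket price and odds below are made-up, roughly lottery-like numbers, and the sketch ignores taxes, split pots, and diminishing marginal utility of money:

```python
ticket_price = 1.0
p_win = 1.0 / 175_000_000  # assumed odds of hitting the jackpot

for jackpot in (10_000_000, 100_000_000, 1_000_000_000):
    expected_value = p_win * jackpot - ticket_price
    print(f"jackpot={jackpot:>13,}  EV={expected_value:+.3f}")
# EV turns positive once jackpot > ticket_price / p_win (175,000,000 here),
# so a pure expected-value maximizer would buy the ticket past that point.
```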

Comment by robzahra on Real-Life Anthropic Weirdness · 2009-04-06T17:15:35.225Z · LW · GW

Eli tends to say, stylistically, "You will not ___" for what others, when they're thinking formally, express as "You very probably will not ___". This is only a language confusion between speakers. There are other related ones here; I'll link to them later. Telling someone to "win" versus "try to win" is a very similar issue.

Comment by robzahra on Real-Life Anthropic Weirdness · 2009-04-06T17:08:48.942Z · LW · GW

While you appear to be right about Phil's incorrect interpretation, I don't think he meant any malice by it... however, you appear to me to have meant malice in return. So, I think your comment borders on unnecessary disrespect, and if it were me who had made the comment, I would edit it to make the same point while sounding less hateful. If people disagree with me, please downvote this comment. (Though admittedly, if you edit your comment now, we won't get good data, so you probably should leave it as is.)

I admit that I'm not factoring in your entire history with Phil, so you may have further justification of which I'm unaware, but I would expect my view to be shared even more by casual readers who don't know either of you well. Maybe in that case a comment like yours is fine, but only if delivered privately.

Comment by robzahra on Where are we? · 2009-04-05T16:46:37.863Z · LW · GW

NYC area: Rob Zahra, AlexU, and Michael Vassar sometimes...

Comment by robzahra on Aumann voting; or, How to vote when you're ignorant · 2009-04-05T14:18:32.549Z · LW · GW

Phil: clever heuristic, canceling idiots... though note that it actually falls directly out of a Bayesian expected value calculation in certain scenarios:

  1. Assume you have no info about the voting issues except who the idiots are and how they vote. Now either your prior is that reversed stupidity is intelligence in this domain, or it's not. If it is, then you have clear Bayesian grounds to vote against the idiots. If it's not, then reversed stupidity either is definite stupidity or has zero correlation with the truth. In the first of these cases, reason itself does not work (e.g., a situation in which God confounds the wisdom of the wise, i.e., you're screwed precisely for being rational). In the zero-correlation case, the idiots are noise, and provided you can count the idiots to be sure multiple of you don't cancel one idiot, you reduce noise, which is the best you can do.

The doubtful point in this assessment is how you identify "idiots" in a voting situation about which you ostensibly know nothing else. In your examples, the info you used to identify the idiots seemed to require some domain knowledge which itself should figure into how you vote. Assuming idiots are "cross-domain incompetent" may be true for worlds like ours, but that needs to be fleshed out a lot more for soundness, I think.

Comment by robzahra on Open Thread: April 2009 · 2009-04-04T14:09:13.636Z · LW · GW

Just read your last 5 comments and they looked useful to me, including most with 1 karma point. I would keep posting whenever you have information to add, and take actual critiques in replies to your comments much more seriously than lack of karma. Hope this helps. Rob Zahra

Comment by robzahra on Winning is Hard · 2009-04-04T02:22:57.893Z · LW · GW

Whpearson: I think I do see some powerful points in your post that aren't getting fully appreciated by the comments so far. It looks to me like you're constructing a situation in which rationality won't help. I think such situations necessarily exist in the realm of platonic possibility. In other words, it appears you provably cannot always win across all possible math structures; that is, I think your observation can be considered one instance of a no-free-lunch theorem.

My advice to you is that No Free Lunch is a fact and thus you must deal with it. You can't win in all worlds, but maybe you can win in the world you're in (assuming it's not specially designed to thwart your efforts, in which case you're screwed). So just because rationality has limits does not mean you shouldn't still try to be rational. (Though also note I haven't proven that one should be rational by any of the above.)

Eli addressed the dilemma you're mentioning in Passing the Recursive Buck and elsewhere on Overcoming Bias.

Comment by robzahra on Rationality is Systematized Winning · 2009-04-03T22:55:43.699Z · LW · GW

I'm quite confident there is only a language difference between Eliezer's description and the point a number of you have just made. Winning versus trying to win are clearly two different things, and it's also clear that "genuinely trying to win" is the best one can do, based on the definition those in this thread are using. But Eli's point on OB was that telling oneself "I'm genuinely trying to win" often results in less than genuinely trying. It results in "trying to try", which means being satisfied by a display of effort rather than utility-maximizing. So instead, he argues, why not say to oneself the imperative "Win!", where the "try" part is baked into the implicit imperative. I agree Eli's language usage here may be slightly non-standard for most of us (me included) and therefore perhaps misleading to the uninitiated, but I'm doubtful we need to stress about it too much if the facts are as I've stated. Does anyone disagree? Perhaps one could argue Eli should have to say, "Rational agents should win_eli" and link to an explanation like this thread, if we are genuinely concerned about people getting confused.

Comment by robzahra on Where are we? · 2009-04-02T23:59:20.586Z · LW · GW

This post is a good idea, but wouldn't it be easier for everyone to join the Less Wrong Facebook group? I'm not positive, but I think the geographical sorting can then be viewed automatically. You could then invite the subgroups to their own groups, and easily send group messages.

Comment by robzahra on The Benefits of Rationality? · 2009-03-31T16:06:34.092Z · LW · GW

OB has changed people's practical lives in some major ways. Not all of these are mine personally:

"I donated more money to anti aging, risk reduction, etc"

"I signed up for cryonics."

"I wear a seatbelt in a taxi even when no one else does."

"I stopped going to church but started hanging out socially with aspiring rationalists."

"I decided rationality works and started writing down my goals and pathways to them."

"I decided it's important for me to think carefully about what my ultimate values are."

Comment by robzahra on Most Rationalists Are Elsewhere · 2009-03-30T00:06:08.525Z · LW · GW

Various people on our blogs have talked about how useful a whuffie concept would be (see especially Vassar on reputation markets). I agree that Less Wrong's karma scores encourage an inward focus; however, the general concept seems so useful that we ought to consider finding a way to expand karma scores beyond just this site, as opposed to shelving the idea. Whether that is best implemented through Facebook or some other means is unclear to me. Can anyone link to any analysis on this?

Rob Zahra

Comment by robzahra on Church vs. Taskforce · 2009-03-28T23:21:36.377Z · LW · GW

Michael: "The closest thing that I have found to a secular church really is probably a gym."

Perhaps in the short run we could just use the gym directly, or analogs. Aristotle's Peripatetic school, and other notable thinkers who walked, suggest that having people walk while talking, thinking, and socializing is worth some experimentation. This could be done by walking outside or on parallel exercise machines in a gym (it would be informative to see which worked better, to tease out what it is about walking that improves thinking, assuming the hypothesized causality is true). Michael, I realize you are effectively already doing this.

-Rob Zahra

Comment by robzahra on Cached Selves · 2009-03-23T00:28:08.158Z · LW · GW

Agree with and like the post. Two related avenues for application:

  1. Using this effect to accelerate one's own behavior modification by making commitments in the direction of the type of person one wants to become. (e.g. donating even small amounts to SIAI to view oneself as rationally altruistic, speaking in favor of weight loss as a way to achieve weight loss goals, etc.). Obviously this would need to be used cautiously to avoid cementing sub-optimal goals.

  2. Memetics: Applying these techniques on others may help them adopt your goals without your needing to explicitly push them too hard. Again, caution and foresight advisable.