Posts

Mate selection for the men here 2009-06-03T23:05:25.181Z

Comments

Comment by rhollerith on Let's reimplement EURISKO! · 2009-06-14T00:40:01.101Z · LW · GW

Let us briefly review the discussion up to now, since many readers use the comments page, which does not provide much context. rwallace has been arguing that AI researchers are too concerned (or will become too concerned) about the existential risk from reimplementing EURISKO and things like that.

You have mentioned two or three times, rwallace, that without more advanced technology, humans will eventually go extinct. (I quote one of those mentions below.) You mention that to create and to manage that future advanced technology, civilization will need better tools to manage complexity. Well, I see one possible objection to your argument right there, in that better science and better technology might well decrease the complexity of the cultural information humans are required to keep on top of. Consider that once Newton gave our civilization a correct theory of dynamics, almost all of the books written before Newton on dynamics could safely be thrown away (the exceptions being books by Descartes and Galileo that help people understand Newton and put him in historical context), which of course constitutes a net reduction in the complexity of the cultural information that our civilization has to keep on top of. (If it does not seem like a reduction, that is because the possession of Newtonian dynamical theory made our civilization more ambitious about what goals to try for.)

do you believe humanity can survive permanently as we are now, confined to this planet? If you do, then I will point you to the geological evidence to the contrary. If not, then it follows that without more advanced technology, we are dead.

But please explain to me what your argument has to do with EURISKO and things like that: is it your position that the complexity of future human culture can be managed only with better AGI software?

And do you maintain that that software cannot be developed fast enough by AGI researchers such as Eliezer who are being very careful about existential risks?

In general, the things you argue are dangerous are slow dangers. You yourself refer to "geological evidence" which suggests that they are dangerous on geological timescales.

In contrast, certain areas of AI research seem to me genuinely fast dangers: things with a high probability of wiping out our civilization in the next 30, 50 or 100 years. It seems unwise to increase fast dangers to decrease slow dangers. But I suppose you disagree that AGI research, if not done very carefully, is a fast danger. (I'm still studying your arguments on that.)

Comment by rhollerith on Let's reimplement EURISKO! · 2009-06-13T14:30:31.078Z · LW · GW

rwallace has been arguing the position that AI researchers are too concerned (or will become too concerned) about the existential risk from UFAI. He writes that

we need software tools smart enough to help us deal with complexity.

rwallace: can we deal with complexity sufficiently well without new software that engages in strongly-recursive self-improvement?

Without new AGI software?

One part of the risk that rwallace says outweighs the risk of UFAI is that

we remain confined to one little planet . . . with everyone in weapon range of everyone else

The only response rwallace suggests to that risk is

we need more advanced technology, for which we need software tools smart enough to help us deal with complexity

rwallace: please give your reasoning for how more advanced technology decreases the existential risk posed by weapons more than it increases it.

Another part of the risk that rwallace says outweighs the risk of UFAI is that

we remain confined to one little planet running off a dwindling resource base

Please explain how dwindling resources present a significant existential risk. I can come up with several arguments, but I'd like to see the one or two you consider the strongest arguments.

Comment by rhollerith on Typical Mind and Politics · 2009-06-13T14:01:10.194Z · LW · GW

I agree that introspection certainly can be a valid tool.

Comment by rhollerith on Typical Mind and Politics · 2009-06-13T12:35:33.015Z · LW · GW

I have a strong pain signal from lost money and from lost time. To the extent that I can introspect on the workings of my insula, I think that this is one impulse for me, rather than two as Yvain describes - one for time and one for money.

The most parsimonious explanation of what you observe is that it is human nature to be overconfident of the results of introspection.

Comment by rhollerith on Mate selection for the men here · 2009-06-05T22:23:42.202Z · LW · GW

When I wrote that "it is never in the financial self-interest of any [self-help] practitioner to do the hard long work to collect evidence that would sway a non-gullible client," I referred to long hard work many orders of magnitude longer and harder than posting a link to a web page. Consequently, your pointing out that you post links to web pages even when it is not in your financial self-interest to do so does not refute my point. I do not maintain that you should do the long hard work to collect evidence that would sway a non-gullible client: you probably cannot afford to spend the necessary time, attention and money. But I do wish you would stop submitting to this site weak evidence that would sway only a gullible client or a client very desperate for help.

And with that I have exceeded the time I have budgeted for participation on this site for the day, so my response to your other points will have to wait for another day. If I may make a practical suggestion to those readers wanting to follow this thread: subscribe to the feed for my user page till you see my response to pjeby's other points, then unsubscribe.

Comment by rhollerith on Mate selection for the men here · 2009-06-05T20:01:18.732Z · LW · GW

Previously in this thread I opined as follows on the state of the art in self help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.

PJ Eby took exception as follows:

you ignored the part where I just gave somebody a pointer to somebody else's work that they could download for free

Lots of people offer pointers to somebody else's writings. Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader's while. IMHO almost all the writings on the net about producing lasting useful psychological change are not worth the reader's while.

In the future, I will write "lasting change" when I mean "lasting useful psychological change".

you indirectly accused me of being more interested in financial incentives than results

The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being. Let us call the probability I just referred to "probability D". (The D stands for deception.)

You have written (in a response to Eliezer) that you usually charge clients a couple of hundred dollars an hour.

The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.

The fact that you use hypnotic techniques on clients and write a lot about hypnosis raises probability D significantly because hypnotic techniques rely on the natural human machinery for negotiating who is dominant and who is submissive or the natural human machinery for deciding who will be the leader of the hunting party. Putting the client into a submissive or compliant state of mind probably helps a practitioner quite a bit to persuade the client to believe falsely that lasting change has been produced. You have presented no evidence or argument -- nor am I aware of any evidence or argument -- that putting the client into a submissive or compliant state helps a practitioner produce lasting change. Consequently, your reliance on and interest in hypnotic techniques significantly raises probability D.

Parenthetically, I do not claim that I know for sure that you are producing false beliefs rather than producing lasting change. It is just that you have not raised the probability I assign to your being able to produce lasting change high enough to justify my choosing to chase a pointer you gave into the literature or high enough for me to stop wishing that you would stop writing about how to produce lasting change in another human being on this site.

Parenthetically, I do not claim that your deception, if indeed that is what it is, is conscious or intentional. Most self-help and mental-health practitioners deceive because they are self-deceived on the same point.

You believe and are fond of repeating that a major reason for the failure of some of the techniques you use is a refusal by the client to believe that the technique can work. Exhorting the client to refrain from scepticism or pessimism is like hypnosis in that it strongly tends to put the client in a submissive or compliant state of mind, which again significantly raises probability D.

To the best of my knowledge (maybe you can correct me here) you have never described on this site an instance where you used a reliable means to verify that you had produced a lasting change. When you believe for example that you have produced a lasting improvement in a male client's ability to pick up women in bars, have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman's phone number or getting the woman to follow the client out to his car)?

In your extensive writings on this site, I can recall no instance where you describe your verifying your impression that you have created a lasting change in a client using reliable means. Rather, you have described only unreliable means, namely, your perceptions of the mental and the social environment and reports from clients about their perceptions of the mental and the social environment. That drastically raises probability D. Of course, you can bring probability D right back down again, and more, by describing instances where you have used reliable means to verify your impression that you have created a lasting change.
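
To make the updating explicit, here is a minimal sketch of the Bayesian step being described; the prior and likelihoods are invented for illustration, not numbers given anywhere in this thread.

```python
# Invented numbers, purely to illustrate the direction of the update.
prior_d = 0.7            # P(D): practitioner is self-deceived or deceiving
p_obs_given_d = 0.8      # P(only unreliable verification observed | D)
p_obs_given_not_d = 0.4  # P(only unreliable verification observed | not D)

# Bayes' rule: the observation is more likely under D, so probability D rises.
posterior_d = (p_obs_given_d * prior_d) / (
    p_obs_given_d * prior_d + p_obs_given_not_d * (1 - prior_d)
)
print(round(posterior_d, 3))  # 0.824, up from the prior of 0.7
```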

For readers who want to read more, here are two of Eliezer's sceptical responses to PJ Eby: 001, 002

If it makes you feel any better, I am not seeing you any more harshly than I see any other self-help, life-coach or mental-health practitioner, including those with PhDs in psychology and MDs in psychiatry and those with prestigious academic appointments. In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those that constantly remind themselves of how little they know.

Actually there is one way in which I resent you more than I resent other self-help, life-coach or mental-health practitioners: the other ones do not bring their false beliefs or rather their most-probably-false not-sufficiently-verified beliefs to my favorite place to read about the mental environment and the social environment. I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.

Comment by rhollerith on Mate selection for the men here · 2009-06-05T17:55:01.617Z · LW · GW

Previously in this thread: PJ Eby asserts that the inability to refrain from conveying contempt is a common and severe interpersonal handicap. Nazgulnarsil replies, "This is my problem. . . . I can't hide the fact that I feel contempt for the vast majority of the people around me (including desirable partners)."

I probably have the problem too. Although it is rare that I am aware of feeling contempt for my interlocutor, there is a lot of circumstantial evidence that messages (mostly nonverbal) conveying contempt are present in my face-to-face communication with non-friends (even if I would like the non-friend to become a friend).

I expect that PJ Eby will assure me that he has seen himself and his clients learn how to transcend this problem. Maybe he can even produce written testimonials from clients assuring me that PJ Eby has cured them of this problem. But I fear that PJ Eby has nothing that a strong Bayesian with long experience with self-help practitioners would consider sufficient evidence that he can help me transcend this problem. Such is the state of the art in self help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.

Comment by rhollerith on My concerns about the term 'rationalist' · 2009-06-05T09:36:11.319Z · LW · GW

I changed the title of my post from "Mate selection for the male rationalist" to "Mate selection for the men here".

Comment by rhollerith on Mate selection for the men here · 2009-06-04T22:51:17.405Z · LW · GW

We differ in that respect, perhaps because I have had more time slowly to shape my emotional responses to women.

Comment by rhollerith on Probability distributions and writing style · 2009-06-04T21:36:52.857Z · LW · GW

BTW it would be great to have all my writings subjected to examination by the community to determine whether the writings use probability distributions, utility functions and the language of causality correctly and sensibly.

Comment by rhollerith on Mate selection for the men here · 2009-06-04T20:34:18.072Z · LW · GW

HughRistik writes, "I recommend women who are high in Openness to Experience."

My two most personally-useful long-term relationships have been with women high in Openness to Experience. The Wikipedia article says that this trait is normally distributed, so I will add that both women were definitely in the top quartile in this trait and probably at least a standard deviation above the mean.
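
For readers who want the numbers behind "top quartile" versus "a standard deviation above the mean" for a normally distributed trait, here is a quick sketch (using scipy as one possible tool):

```python
from scipy.stats import norm

top_quartile_z = norm.ppf(0.75)      # cutoff for the top quartile, about 0.674 SD
frac_above_1sd = 1 - norm.cdf(1.0)   # fraction at least one SD above the mean, about 0.159

# "At least a standard deviation above the mean" (top ~16%) is therefore a
# stronger claim than merely "top quartile" (top 25%).
print(top_quartile_z, frac_above_1sd)
```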

HughRistik, since we seem to see things similarly, maybe we should talk.

Contact rhollerith

Comment by rhollerith on Mate selection for the men here · 2009-06-04T20:11:31.982Z · LW · GW

Yes, but there is a sense of the word "rationalist" that makes HughRistik's quote (and my post) make sense. Something like "strongly motivated to learn science and the art of rationality" or "the kind of person you become if for the last 20 years you have been strongly motivated to . . ."

Comment by rhollerith on Mate selection for the men here · 2009-06-04T19:39:17.417Z · LW · GW

This post assumes that the reader wants a long-term relationship.

Post edited to make the assumption explicit.

Comment by rhollerith on Probability distributions and writing style · 2009-06-04T19:11:53.651Z · LW · GW

dclayh, I have replied to you privately.

Specifically, the likely first Google hit for "dclayh" is a Livejournal user of that name, so I used Livejournal to send a private message to that user.

Contact rhollerith

Comment by rhollerith on Mate selection for the men here · 2009-06-04T01:20:36.794Z · LW · GW

The following comments are evidence that female rationality is important to at least some male rationalists. Note that the first comment was upvoted by 7 readers.

I know I would love to have my next girlfriend be a rationalist (if only to avoid my most recent failure mode)

http://lesswrong.com/lw/ap/of_gender_and_rationality/7me by MBlume

But she loves magical thinking, she is somewhat averse to expected-utility calculations, my atheism, etc. . . . We love each other but are scared that our differences may be too great.

http://lesswrong.com/lw/zj/open_thread_june_2009/rxy

Comment by rhollerith on Open Thread: June 2009 · 2009-06-04T00:40:37.122Z · LW · GW

I am pretty sure that most strong male rationalists are better off learning how the typical woman thinks than holding out for a long-term relationship with a strong female rationalist. Since this point is probably of general interest, I put it in a top-level post.

Converting her to your worldview sounds like a bad idea in general. An additional consideration that applies in your particular situation is that converting a helping professional from deontologism to consequentialism will more likely than not make her less effective at work (because helping professionals need all the help they can get to care enough about their patients and clients, and worldview is definitely one source of significant help in that regard).

Nobody has responded to the following:

she is, by her own admission, subject to strong swings of emotion and at greater than average risk of longer-lasting depression

I, too, will refrain from commenting because you probably mean "strong swings of mood" and I do not have romantic experience with a moody woman. I do have romantic experience with a fiery woman, i.e., a woman easily aroused to strong negative emotions, but I doubt that is what you mean: in what I am calling a "fiery" woman, the emotion always dissipates quickly -- usually in a few minutes.

You say,

She excels at her job, which is a helping profession, and one which I believe improves social welfare far more than most.

I would consider that a very positive sign in a prospective sexual partner -- maybe an extremely positive sign (the reason for my uncertainty being that I have never been with a woman whose expected global utility was as high as you describe) -- a sign that would make me pursue the woman much more keenly. The fact that you use language such as "would have net-benefits for her and for the world long-term" (emphasis mine) suggests to me that you are like me in the relevant characteristics and consequently should take it to be a very positive sign, too.

The most I can say about the global expected utility (i.e., expected effect on the world in the long term) of any of my girlfriends up to now is that (1) she has many close friendships of long duration, and she is very caring and helpful to those friends or that (2) she is a resourceful and clearly productive member of the labor force and does not harm anyone unless you consider the occasional cheating of the government a harm. If I were with a woman whose expected global utility was much higher than any of my girlfriends up to now, there is a good chance that I could become much more unconditionally loving to her than I have been to any of my girlfriends up to now. By "unconditionally loving" I mean being helpful and caring to her without any regard for how much she has done for me or is expected to do for me.

So, that is why I would consider what you wrote a very positive sign: lack of expected global utility is my best current guess as to what has been holding me back from being more unconditionally loving to my girlfriend up to now. (Why I even want to become more unconditionally loving to my girlfriend is a long story.)

And yeah, I know that "expected global utility of the girlfriend" is an odd and cold phrasing, but if that oddness or coldness is enough to prevent you from reading this comment, then we are probably too different for the advice in this comment to be of any use to you.

Comment by rhollerith on Image vs. Impact: Can public commitment be counterproductive for achievement? · 2009-05-30T11:45:09.607Z · LW · GW

Status seekers probably greatly outnumber true altruists.

But you should tend to keep the status seekers out of positions of great responsibility IMHO even if doing so greatly reduces the total number of volunteers working on existential risks.

My tentative belief that status seekers will not do as good a job BTW stems from (1) first-hand observation and second-hand observation of long-term personal performance as a function of personal motivation in domains such as science-learning, programming, management and politics and (2) a result from social psychology that intrinsic reinforcers provide more reliable motivation than extrinsic reinforcers (for more about which, google "Punished by Rewards").

The last thing the future light cone needs is for existential-risk activism to become the next big thing in how to show prospective friends and prospective lovers how cool you are.

Comment by rhollerith on This Failing Earth · 2009-05-29T02:16:38.634Z · LW · GW

I am in tentative agreement with Moldbug's main points. But like patrissimo says, some of his claims are overly sweeping. Unlike patrissimo, I have no significant personal stake in Moldbug's being right aside from the stake we all have in the health of the state and the society in which we live.

Comment by rhollerith on Image vs. Impact: Can public commitment be counterproductive for achievement? · 2009-05-29T00:54:34.923Z · LW · GW

Helping to rescue marine mammals is a more effective way for a straight guy to signal high status to prospective sex partners than addressing existential risks is. I always considered that a feature, not a bug, because I always thought that people doing something to signal status do not do as good a job as people motivated by altruism, a desire to serve something greater than oneself or a sense of duty -- or even people motivated by a salary.

Comment by rhollerith on Willpower Hax #487: Execute by Default · 2009-05-13T17:16:45.941Z · LW · GW

"I get up most easily when I've slept enough. . . Does anyone else have the same experience?"

I am going to go out on a limb and say that most of us have that experience.

Comment by rhollerith on Open Thread: May 2009 · 2009-05-11T21:18:01.040Z · LW · GW

JGWeissman writes, "I don't see what you gain by this strategy that justifies the decrease in correlation between a comments displayed karma score and the value the community assigns it that occurs when you down vote a comment not because it is a problem, but because the author had written other comments that are a problem."

Vladimir Nesov writes, "If you are downvoting indiscriminately, not separating the better comments from the worse ones, without even bothering to understand them, you are abusing the system."

Anna writes, "This has the following advantages over blanket user-downvoting: . . . It does not impair quality-indicators on the user's other comments"

The objection is valid. I retract my proposal and will say so in an addendum to my original comment.

The problem with my proposal is the part where the voter goes to a commenter's lesswrong.com/user/ page and votes down 20 or 30 or so comments in a row. That dilutes or cancels out useful information, namely, votes from those who used the system the way it was intended.

If there were a way for a voter to reduce the karma of a person without reducing the point-score of any substantive comment, then my proposal might still have value, but without that, my proposal will have a destructive effect on the community, so of course I withdraw my proposal.

Comment by rhollerith on Open Thread: May 2009 · 2009-05-11T20:21:38.286Z · LW · GW

A normally good contributor's having a bad day is not going to be enough to trigger any downvoting of any of his comments under the policy I contemplate. The policy I contemplate makes use of a general skill that I hypothesize most participants on this site have: the ability to reserve judgement on someone till one has seen at least a dozen communications from that person and then to make a determination as to whether the person is worth continuing to pay attention to.

The people who have the most to contribute to a site like this are very busy. As Eliezer has written recently on this site, all that is needed for this site to die is for these busy people to get discouraged because they see that the contributions of the worthwhile people are difficult to find among the contributions of the people who are not worth reading -- and I stress that the people who are not worth reading often have a lot of free time which they use to generate many contributions.

Well, the voting is supposed to be the main way that the worthwhile contributions "float to the top" or float to where people are more likely to see them than to see the mediocre contributions. But that only works if the people who can distinguish a worthwhile contribution from a mediocre contribution bother to vote. So let us consider whether they do. For example, has Patri Friedman or Shane Legg bothered to vote? They both have made a few comments here. But they are both very busy people. I'll send them both emails, referencing this conversation and asking them if they remember actually voting on comments here, and report back to y'all. (Eliezer is not a good person to ask in this regard because he has a big investment in the idea that a social web site based on voting will win, so of course he has been voting on the contributions here.)

The highest-scoring comment I know of is Shane Legg's description of an anti-procrastination technique, which currently has 16 points. But there are thousands of readers of this site. Now it is possible that a lot more readers of Shane's comment would have voted it up if it did not already have a high score, but I humbly suggest that it is more likely that only one or two or three percent of the readers of a comment would have bothered to vote on the comment regardless of its score.

Whether this site lives or dies seems to depend on the frequency with which the people who can tell a worthwhile comment from a non-worthwhile comment bother to vote. But like I said, these people tend to be very busy.

Hence my suggestion of adopting a policy of voting on commenters rather than comments -- because that is going to save some of the busy person's time.

There is a strong ethic in American society (and probably in other societies) that it is contributions and not individuals that should be judged. Well, I humbly suggest that since being able to contribute comments and posts here is not a basic human need, like housing or education or the opportunity to compete on an equal footing with other workers for income, the application of that admirable ethic to the decision of who gets to comment and post here is not worth the risk of this site's going downhill to the point where the people who could have carried the site decide it is not worth the time out of their busy lives.

EDIT. If no other participants on this site declare their intention to use commenter-based voting, then I probably will not use commenter-based voting either because of what the economists call network effects. The only reason I suggested it in the first place is that conchis's comment is not the first time someone here has indicated that voters other than me are already using commenter-based voting.

EDIT. I have backed down from the whole idea of downvoting many comments in one go. I do not delete this comment only because someone already replied to it.

Comment by rhollerith on Open Thread: May 2009 · 2009-05-11T19:00:18.887Z · LW · GW

Now, if you have the habit of reading through someone's comments all at one time and judge each comment for its own value

No, that's not what I have been contemplating.

"A commenter's karma means nothing," is a bit of an overstatement because you need 20 karma to post. Also, most commenters are probably aware of changes in their karma. And if I reduce a person's karma by 20 or 30 points, I would send him a private message to explain.

What I propose reduces the informativeness of a comment's point score but more-or-less maintains the informativeness of a commenter's karma. If enough voters come to do what I contemplate doing or if enough well-regarded participants announce their intention to do what I contemplate doing, then the maintainers of the site will adjust to the reduction in the informativeness of a comment's point score by focusing more of their site-improvement efforts on a commenter's karma. Note that those site-improvement efforts will tend to make effective use of the information created by the voters who follow the original policy of voting on individual comments (as well as the information created by voters who vote the way I contemplate voting).

Comment by rhollerith on Open Thread: May 2009 · 2009-05-11T17:57:10.531Z · LW · GW

conchis, I have been reading your comments for at least 12 months on Overcoming Bias and have accumulated no negative feeling or opinion about you, so please do not think that what I am going to say is directed at you.

I have been thinking of adopting this strategy of occasionally giving a participant 20 or 30 or so downvotes all at once rather than frequently giving a comment a single downvote because I judge moderation of comment-writers (used, e.g., on SL4 back before 2005 and again in recent months, when a List Sniper has been active, during which times SL4 has been IMHO very high quality) to work better than moderation of comments (used, e.g., on Slashdot, Reddit and Hacker News, which are IMHO of lower quality).

So, I would like people to consider the possibility that downvoting of 20 or 30 of the comments of one comment-maker in one go should not be regarded as an abuse or an improper use of this web site unless of course it is done for a bad reason.

I hereby withdraw this comment because the responses to this comment have made me realize that it is destructive to downvote a comment without regard to the worthwhileness or quality of that particular comment.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-09T21:46:18.590Z · LW · GW

A system of valuing things is a definition. I have defined a system and said, "Oh, by the way, this system has my loyalty."

It is possible that the system is ill-defined, that is, that my definition contradicts itself, does not apply to the reality we find ourselves in, or differs in some significant way from what I think it means. But your appeal to general relativity does not show the ill-definedness of my system because it is possible to pick the time dimension out of spacetime: the time dimension is treated quite specially in general relativity.

Eliezer's response to my definition appeals not to general relativity but rather to Julian Barbour's timeless physics and Eliezer's refinements and additions to it, but his response does not establish the ill-definedness of my system any more than your argument does. If anyone wants the URLs of Eliezer's comments (on Overcoming Bias) that respond to my definition, write me and say a few words about why it is important to you that I make this minor effort.

If Eliezer has a non-flimsy argument that my definition contradicts itself, does not apply to the reality we find ourselves in, or differs significantly from what I think it means, he has not shared it with me.

When I am being careful, I use Judea Pearl's language of causality in my definition rather than the concept of time. The reason I used the concept of time in yesterday's description is succinctness: "I am indifferent to impermanent effects" is shorter than "I care only about terminal effects where a terminal effect is defined as an effect that is not itself a cause" plus sufficient explanation of Judea Pearl's framework to avoid the most common ways in which those words would be misunderstood.
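
As a rough illustration of that Pearl-style phrasing (a toy example, not anything taken from Pearl), a "terminal effect" can be read as a sink node of a causal graph, i.e., a node with no outgoing edges:

```python
# Toy causal graph: each key lists the effects it causes.
causes = {
    "policy": ["behavior"],
    "behavior": ["outcome_a", "outcome_b"],
    "outcome_a": [],  # causes nothing further
    "outcome_b": [],
}

# Terminal effects = effects that are not themselves causes of anything.
all_effects = {e for effects in causes.values() for e in effects}
terminal = sorted(n for n in all_effects if not causes.get(n))
print(terminal)  # ['outcome_a', 'outcome_b']
```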

So if I had to, I could use Judea Pearl's language of causality to remove the reliance of my definition on the concept of time. But again, nothing you or Eliezer has written requires me to retreat from my use of the concept of time.

So there is my response to the parts of your comment that can be interpreted as implying that my system is ill-defined.

But what you were probably after when you asked, "Would you go into why you only care about permanent effects?" is why I am loyal to this system I have defined -- or more to the point why you should give it any of your loyalty. Well, I used to try to persuade people to become loyal to the system, but that had negative effects, including the effect of causing me to tend to hijack conversations on Overcoming Bias, so now I try only to explain and inform. I no longer try to promote or persuade.

My main advice to you, dclayh, is to chalk this up to the fact that the internet gives a voice to people whose values are very different from yours. For example, you will probably find the values implied by the Voluntary Human Extinction Movement or by anti-natalism just as unconventional as my values. Peace, dclayh.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-08T00:41:46.458Z · LW · GW

Imagine that you were somehow shown a magically 100% sound, 100% persuasive proof that you could not have permanent effect on reality, and that the entire multiverse would eventually end.

I agree with you, Anna, that in that case the concept of my aims does not cease to be predictively useful. (Consequently, I take back my "then I have no preferences".) It is just that I have not devoted any serious brain time to what my aims might be if I knew for sure I cannot have a permanent effect. (Nor does it bother me that I am bad at predicting what I might do if I knew for sure I cannot have a permanent effect.)

Most of the people who say they are loyal to goal system zero seem to have only a superficial commitment to goal system zero. In contrast, Garcia clearly had a very strong deep commitment to goal system zero. Another way of saying what I said above: like Garcia's, my commitment to goal system zero is strong and deep. But that is probably not helping you.

One of the ways I have approached CEV is to think of the superintelligence as implementing what would have happened if the superintelligence had not come into being -- with certain modifications. An example of a modification you and I will agree is desirable: if Joe suffers brain damage the day before the superintelligence comes into being, the superintelligence arranges things the way that Joe would have arranged them if he had not suffered the brain damage. The intelligence might learn that by, e.g., reading what Joe posted on the internet before his injury. In summary, one line of investigation that seems worthwhile to me is to get away from this slippery concept of preference or volition and think instead of what the superintelligence predicts would have happened if the superintelligence does not act. Note that, e.g., the human sense of right and wrong is predicted by any competent agent to have huge effects on what will happen.

My adoption of goal system zero in 1992 helped me to resolve an emotional problem of mine. I severely doubt it would help your professional goals and concerns for me to describe that, though.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-07T21:57:19.192Z · LW · GW

Anna, you are incorrect in guessing that my statement of preference is less than extremely useful for an outside observer to predict my actual behavior.

In other words, the part of me that is loyal to the intellectual framework is very good at getting the rest of me to serve the framework.

The rest of this comment consists of more than most readers probably want to know about my unusual way of valuing things.

I am indifferent to impermanent effects. Internal experiences, mine and yours, certainly qualify as impermanent effects. Note though that internal experiences correlate with things I assign high instrumental value to.

OK, so I care only about permanent effects. I still have not said which permanent effects I prefer. Well, I value the ability to predict and control reality. Whose ability to predict and control? I am indifferent about that: what I want to maximize is reality's ability to predict and control reality: if maximizing my own ability is the best way to achieve that, then that is what I do. If maximizing my friend's ability or my hostile annoying neighbor's ability is the best way, then I do that. When do I want it? Well, my discount rate is zero.

That is the most informative 130 words I can write for improving the ability of someone who does not know me to predict the global effects of my actual behavior.

Since I am in a tiny, tiny minority in wanting this, I might choose to ally myself with people with significantly different preferences. And it is probably impossible in the long term to be allies or colleagues or coworkers with a group of people who all roughly share the same preferences without in a real sense adopting those preferences as my own.

But the preferences I just outlined are the criteria I'd use to decide who to ally with. The single criterion that is most informative in predicting who I might ally with BTW is the prospective ally's intrinsic values' discount rate's being low.
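
For concreteness (a standard formulation, not one spelled out in the comment): a zero discount rate means the discount factor in a discounted sum of value over time equals 1, so no future moment counts for less than the present.

$$V \;=\; \sum_{t=0}^{\infty} \left(\frac{1}{1+r}\right)^{t} u_t \;=\; \sum_{t=0}^{\infty} u_t \qquad \text{when } r = 0.$$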

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-07T21:56:06.600Z · LW · GW

I am worried, Kennaway, that our conversation about my way of valuing things will distract you from what I wrote below about the risk of post-traumatic stress disorder from a surgical procedure. Your scenario is less than ideal for exploring what intrinsic value people assign to internal experience: it is better to present people with a choice between being killed painlessly and being killed after 24 hours of intense pain, and then to ask what benefit to their best friend or to humanity would induce them to choose the intense pain.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-07T20:56:35.001Z · LW · GW

I am not completely indifferent to being tortured, so in your hypothetical, Kennaway, I will try to get Ming to let me go because in your hypothetical I know I cannot have a permanent effect on reality.

But when faced with a choice between having a positive permanent effect on reality and avoiding being tortured I'll always choose having the permanent effect if I can.

Almost everybody gives in under torture. Almost everyone will eventually tell an interrogator skilled in torture everything they know, e.g., the passphrase to the rebel mainframe. Since I have no reason to believe I am any different in that regard, there are limits to my ability to choose the way I said. But for most practical purposes, I can and will choose the way I said. In particular, I think I can calmly choose being tortured over losing my ability to have a permanent effect on reality: it is just that once the torture actually starts, I will probably lose my resolve.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-07T14:41:44.724Z · LW · GW

The following conclusions come from a book on post-traumatic stress disorder (PTSD) called Waking the Tiger by Peter Levine, who treats PTSD for a living. I have a copy of this book, which I hereby offer to loan to Richard Kennaway if I do not have to pay to get it to him and to get it back from him.

Surgical procedures are in the opinion of Peter Levine a huge cause of PTSD.

According to Levine, PTSD is caused by subtle damage to the brain stem. Since in contrast episodic memory seems to have very little to do with the brain stem, the fact that one has no episodic memories of a surgical procedure does not mean that one was not traumatized by the procedure.

Since it is impossible in our society for doctors and nurses and such to ignore the fact that someone has died, you can to some extent rely on them not to kill you unnecessarily. But for anything as subtle as PTSD, with as much false information floating about as there is about PTSD, you can pretty much count on it that whenever they cause a case of PTSD, they will remain serenely unaware of that fact, and consequently they will not take even the simplest and most straightforward measure to avoid traumatizing a patient. This sentiment (that medical professionals regularly do harms they are unaware of) is not in Levine's book AFAICR but is pretty common among rationalists who have extensive experience with the health-care system.

Most cases of traumatization caused by surgical procedures probably occur despite the use of general or local anesthesia.

In conclusion, if I had to undergo a surgical procedure, I'd gather more information of the type I have been sharing here, but if that were not possible, I would treat the possibility of being traumatized by a surgical procedure requiring the use of general anesthetic as having a greater expected negative effect on my health, intelligence and creativity than losing a fingernail would have. (It is more likely than not to turn out less bad than losing a fingernail, but the worst possible consequences are significantly worse than the worst possible consequences of losing the fingernail. In other words, I would tend to choose the loss of a fingernail because the uncertainty, and consequently the probability of getting a really bad outcome, is much less.)
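
A tiny numeric sketch (invented numbers, only to show how an option can usually turn out less bad yet be worse in expectation because of a rare severe outcome):

```python
# Invented badness scores on an arbitrary scale: (probability, value) pairs.
fingernail = [(1.0, -1.0)]            # a small, certain harm
surgery = [(0.9, 0.0), (0.1, -20.0)]  # usually harmless, rarely very bad (e.g. PTSD)

def expected(dist):
    return sum(p * v for p, v in dist)

print(expected(fingernail))  # -1.0
print(expected(surgery))     # -2.0: worse in expectation despite being harmless 90% of the time
```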

Contact Richard Hollerith.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-07T02:58:12.774Z · LW · GW

I am not completely surprised to learn that your not getting the point was intentional, Newport, because your comments are usually good.

Do you consider it a "leap of imagination that few are capable of" to ask people here to indicate how much they value internal experience compared to how much they value external reality?

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-07T00:26:45.105Z · LW · GW

Kennaway's reason for asking the questions is probably to get at how much people prefer to avoid negative internal experiences relative to negative effects on external reality, which parenthetically is the main theme of my blog on the ethics of superintelligence. If so, then he wants you to assume that you can trust Ming 100% to do what he says -- and he also wants you to assume that Ming's evil geniuses can somehow compensate you for the fact that you could have done something else with the 24 hours during which you were experiencing the unimaginably intense pain, e.g., by using a (probably impossible in reality) time machine to roll back the clock by 24 hours.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-07T00:01:08.081Z · LW · GW

I am in essential agreement with MBlume. It is more likely than not that the space-time continuum we find ourselves in will support life and intelligence for only a finite length of time. But even if that is the case, there might be another compartment of reality beyond our space-time continuum that can support life or intelligence indefinitely. If I affect that other compartment (even if I merely influence someone who influences someone who communicates with the other compartment) then my struggling comes to more than nothing.

If on the other hand, there really is no way for me or my friends to have a permanent effect on reality, then I have no preference for what happens.

Comment by rhollerith on Off Topic Thread: May 2009 · 2009-05-06T21:46:47.838Z · LW · GW

I choose (b) without hesitation. There is not some counter or accumulator somewhere that is incremented any time someone has a positive experience and decremented every time someone has a negative experience.

EDIT. To answer Kennaway's second question, there is no way to attenuate (a) to make me prefer it to (b). I'd choose (b) even if the alternative was a dust speck in my eye or a small scratch on my skin because the dust speck and the scratch have a nonzero probability of negatively affecting my vision or my health.

Comment by rhollerith on Open Thread: May 2009 · 2009-05-01T19:49:01.629Z · LW · GW

Phil, things like cables and phone lines going to houses are "natural monopolies" in that it costs so much to install them that competitors probably can never get started. In fact, if the technology to deliver video over phone lines had been available or anticipated when cable TV was building out in the 70s, the owner of the phone lines (pre-breakup AT&T) could probably have stopped the new cable TV companies from ever getting off the ground (by using the fact that AT&T had already paid for its pipe to the home to lowball the new companies). In other words, the probable reason we have two data pipes going into most homes in the U.S. rather than just one is that the first data pipe (the phone line) was not, at the time of the introduction of the second pipe, technically able to carry the new kind of data (video).

It is desirable that these duopolists (the owners of the phone lines and the cable-TV cables going to the home) are not able to use their natural duopoly as a wedge to enter markets for data services like search engines, online travel agencies, online stores, etc, in the way that Microsoft used their ownership of DOS to lever their way into dominance of markets like word processors and web browsers.

One way to do that is to draw a line at the level of "IP connectivity" and impose a regulation that says that the duopolists are in the business of selling this IP connectivity (and, if they like, levels below IP connectivity, like raw copper) but cannot enter the market (or favor partners who are in the market) of selling services that ride on top of IP connectivity and depend on IP connectivity to deliver value to the residential consumer.

This proposal has the advantage that, up to now on the internet, companies that provide IP connectivity have mostly stayed out of most of the markets that depend on IP connectivity to deliver value to the residential consumer.

It is possible to enshrine such a separation into law and regulations without letting one cable-internet user on a local network (or whatever they call them) shared by a whole block of houses hog up most of the bandwidth of the local network. I.e., there is nothing incompatible here with contracts that impose a monthly cap on bytes received.

And even if spam filtering is made an exception to the separation, so that both connectivity providers (cable-internet and DSL providers) and Google can offer spam filtering, that does not mean that spammers get a free license to spam. What we want is to prevent Verizon or Comcast from making it impossible or more difficult for Joe Consumer to go to Expedia than to go to Travelocity (or the Comcast Travel Store) -- or more difficult for him to go to Windows Live Search than to Google Search -- and we can do that while still allowing Verizon and Comcast to cut off recalcitrant spammers (or requiring Joe Consumer to get his email from an email provider that does not happen to be a duopolist and that will cut off recalcitrant spammers).

Bob Frankston has been eloquent on this issue for at least 10 years now.

Comment by rhollerith on Fighting Akrasia: Incentivising Action · 2009-04-30T07:35:26.507Z · LW · GW

If I were you, I would not cancel your projects till you have tried having your business partners in the room with you when you are working. (Maybe you have.)

Comment by rhollerith on Fighting Akrasia: Incentivising Action · 2009-04-29T23:04:23.071Z · LW · GW

Unlike roland and gworley, my experience is that my current romantic partner helps me substantially in my fight against procrastination.

Specifically, my diet is better than it would be if she did not express her opinions on my diet and if I were not motivated to avoid disappointing her. (Both of us have similar chronic health problems, including food allergies.)

Also, she regularly prods me to start a medical treatment that I have been putting off for the last couple of years. Although I have not yet started the treatment, it is pretty clear that I will start it sooner than I would have without her influence. In fact, I might have never gotten around to it without her influence.

For a time, there was an open wifi network available at her apartment, and I would bring a laptop with me during visits to her place to take advantage of it. By the time the open wifi network went away, she had gotten into the habit of monitoring what I was doing on the laptop to make sure I was not wasting time. I found this monitoring quite helpful, and wish that we lived together so she could monitor more of my internet usage or that she was confident enough with computers to use vnc or something to monitor my internet usage remotely when I am at my apartment and she is at hers.

She grew up in New York City, and I get the impression from what she says about her childhood friends that women of her generation who grew up there are more likely to prod their men like this than American women in general are.

Comment by rhollerith on Generalizing From One Example · 2009-04-29T19:31:22.814Z · LW · GW

Molloy did not mention verifying the numbers (by, e.g., calling them) so he probably did not verify them.

Comment by rhollerith on Generalizing From One Example · 2009-04-29T00:06:05.646Z · LW · GW

John T. Molloy once paid actors to go into bars and try to get women's phone numbers. One group of actors he asked to act confident. A second group of actors he asked to act arrogant. The actors asked to act arrogant were more successful. (Described in Molloy's 1975 book Dress for Success.)

Of course, as Alicorn says, the population of women who go to bars and talk to strange men might not be representative of all single women.

Comment by rhollerith on Where's Your Sense of Mystery? · 2009-04-26T19:39:49.592Z · LW · GW

I have copies of The Structure of Magic, Volumes I and II (Hardcover, 1975) to give away. If you want them, please contact me privately. Preference given to those who will either travel to my home in San Rafael, CA, to pick them up or who will attend the next OB/LW meetup in the Bay Area (because then I do not have to pay shipping costs).

The fact that I own the volumes should not be taken as an endorsement of them. In fact, I tend to suspect that Eliezer and those about as smart, knowledgeable and committed to understanding intelligence are better off not wasting their time on NLP and that they should stick to ev psych and hard cognitive science and neuroscience instead.

Comment by rhollerith on Where's Your Sense of Mystery? · 2009-04-26T17:49:59.611Z · LW · GW

Agree. And pjeby's comments are long which makes it a little tedious for me to scroll past them.

Comment by rhollerith on Programmatic Prediction markets · 2009-04-25T16:44:35.850Z · LW · GW

Intriguing idea, whpearson.

The biggest hurdle to the adoption of such a system is probably the fact that most current traders probably do not have enough programming skill to trade in such a system without incurring significant costs by starting lots of bots and probably do not have enough programming skill to extract more than a small fraction of the information revealed by such a system. One way to get over that hurdle is to target your new market at programmers and allied occupations (like project managers). Programmers and allied occupations could use an effective prediction market to help them, e.g., predict completion dates of their programming projects. I have written more on how prediction markets might serve programmers and allied occupations.

It would be much better of course if there was some math behind your assertion that a programmatic market is more informative than current market designs.

Comment by rhollerith on The ideas you're not ready to post · 2009-04-22T12:40:21.467Z · LW · GW

Heck yeah, I want to see it. I suggest adopting Eliezer's modus operandi of using a lot of words. And every time you see something in your draft post that might need explanation, post on that topic first.

Comment by rhollerith on Welcome to Less Wrong! · 2009-04-17T22:11:09.550Z · LW · GW

Bongo asks me what it is, then, that I desire nowadays.

And my answer is, pretty much the same things everyone else desires! There are certain things you have to have to remain healthy and to protect your intelligence and your creativity, and getting those things takes up most of my time. Also, even in the cases where my motivational structure is different from the typical, I often present a typical facade to the outside world because typical is comfortable and familiar to people whereas atypical is suspicious or just too much trouble for people to learn.

Bongo, the human mind is very complex, so the temptation is very great to oversimplify, which is what I did above. But to answer your question, there is a ruthless hard part of me that views my happiness and the shards of my desire as means to an end. Kind of like money is also a means to an end for me. And just as I have to spend some money every day, I have to experience some pleasure every day in order to keep on functioning.

A means to what end? I hear you asking. Well, you can read about that. The model I present on the linked page is a simplification of a complex psychological reality, and it makes me look more different from the average person than I really am. Out of respect for Eliezer's wishes, do not discuss this "goal system zero" here. Instead, discuss it on my blog or by private email.

Now to bring the discussion back to mysticism. My main interest in mysticism is that it gives the individual flexibility that can be used to rearrange or "rationalize" the individual's motivational structure. A few have used that flexibility to rearrange emotional valences so that everything is a means to one all-embracing end, resulting in a sense of morality similar to mine. But most use it in other ways. One of the most notorious ways to use mysticism is to use it to develop the interpersonal skills necessary to win a person's trust (because the person can sense that you are not relating to him in the same anxious or greedy way that most people relate to him) and then, once you have his trust, to teach him to overcome unnecessary suffering. This is what most gurus do. If you want a typical example, search Youtube for Gangaji, a typical mystic skilled at helping ordinary people reduce their suffering.

I take you back to the fact that a full mystical experience is 1,000,000 times more pleasurable than anything a person would ordinarily experience. That blots out or makes irrelevant everything else that is happening to the person! So the person is able to sit under a tree without moving for weeks and months while his body slowly rots away. People do that in India: a case was in the news a few years ago.

Of course he should get up from sitting under the tree and go home and finish college. Or fetch wood, carry water. Or whatever it is he needs to do to maintain his health, prosperity, intelligence and creativity. But the experience of sitting under the tree can put the petty annoyances and the petty grievances of life in perspective so that they do not have as much influence on the person's thinking and behavior as they used to. Which is quite useful.

Comment by rhollerith on Welcome to Less Wrong! · 2009-04-16T22:13:14.171Z · LW · GW

Nesov points out that Eliezer picks and chooses rather than identifying with every shard of his desire.

Fair enough, but the point remains that it is not too misleading to say that I identify with fewer of the shards of human desire than Eliezer does -- which affects what we recommend to other people.

Comment by rhollerith on Bayesians vs. Barbarians · 2009-04-16T21:38:05.112Z · LW · GW

I think people exist who will make the personal sacrifice of going to jail for a long time to prevent the nuke from going off. But I do not think people exist who will also sacrifice a friend. But under American law that is what a person would have to do to consult with a friend on the decision of whether to torture: American law punishes people who have foreknowledge of certain crimes but do not convey their foreknowledge to the authorities. So the person is faced with making what may well be the most important decision of their lives without help from any friend or conspiring somehow to keep the authorities from learning about the friend's foreknowledge of the crime. Although I believe that lying is sometimes justified, this particular lie must be planned out simultaneously with the deliberations over the important decision -- potentially undermining those deliberations if the person is unused to high-stakes lies -- and the person probably is unused to high-stakes lies if he is the kind of person seriously considering such a large personal sacrifice.

Any suggestions for the person?

Comment by rhollerith on Welcome to Less Wrong! · 2009-04-16T18:40:01.172Z · LW · GW

Most mystics reject science and rationality (and I think I have a pretty good causal model of why that is) but there have been scientific rational mystics, e.g., physicist David Bohm. I know of no reason why a person who starts out committed to science and rationality should lose that commitment through mystical training and mystical experience if he has competent advice.

My main interest in mystical experience is that it is a hole in the human motivational system -- one of the few ways for a person to become independent from what Eliezer calls the thousand shards of desire. Most of the people in this community (notably Eliezer) assign intrinsic value to the thousand shards of desire, but I am indifferent to them except for their instrumental value. (In my experience the main instrumental value of keeping a connection to them is that it makes one more effective at interpersonal communication.)

Transcending the thousand shards of desire while we are still flesh-and-blood humans strikes me as potentially saner and better than "implementing them in silicon" and relying on cycles within cycles to make everything come out all right. And the public discourse on subjects like cryonics would IMHO be much crisper if more of the participants would overcome certain natural human biases about personal identity and the continuation of "the self".

I am not a mystic or aspiring mystic (I became indifferent to the thousand shards of my own desire a different way) but have a personal relationship of long standing with a man who underwent the full mystical experience: ecstasy 1,000,000 times greater than any other thing he ever experienced, uncommonly good control over his emotional responses, interpersonal ability to attract trusting followers without even trying. And yes, I am sure that he is not lying to me: I had a business relationship with him for about 7 years before he even mentioned (casually, tangentially) his mystical experience, and he is among the most honest people I have ever met.

Marin County, California, where I live, has an unusually high concentration of mystics, and I have in-depth personal knowledge of more than one of them.

Mystical experience is risky. (I hope I am not the first person to tell you that, Stefan!) It can create or intensify certain undesirable personality traits, like dogmatism, passivity or a messiah complex, and even with the best advice available, there is no guarantee that one will not lose one's commitment to rationality. But it has the potential to be extremely valuable, according to my way of valuing things.

If you really do want to transcend the natural human goal system, Stefan, I encourage you to contact me.

Comment by rhollerith on Bayesians vs. Barbarians · 2009-04-15T02:43:35.669Z · LW · GW

I agree with every sentence in this post. (And I read it twice to make sure.)

Comment by rhollerith on Collective Apathy and the Internet · 2009-04-14T20:53:16.360Z · LW · GW

There are many benefits to surrounding yourself with extremely bright rationalists and scientific generalists. But I wonder if Eliezer has been too successful in sparing himself from the tedium and the trouble of interacting with and observing the common run of scientifically-illiterate irrational not-particularly-bright humanity. If he had been forced to spend a few years in an ordinary American high school or in an ordinary workplace -- or even if he had had lengthy dealings with a few of the many community activists of the San Francisco Bay Area where he lives -- or even if he just had 20 more years of life experience -- I wonder if he would still think it is a good idea to "make it possible / easier for groups of strangers to coalesce into an effective task force over the Internet" using only the skills for working in groups that come from the ancestral environment.

The way it is now, to devise an effective plan to change society, a person needs more rationality skill and more true information about their society than most people have. But I humbly submit that that is not a bug, but rather a feature! So is the fact that the instincts and emotions and biases that come from the ancestral environment are not enough to do it. (I sometimes go even further and say that it is important to be able to use rationality and deliberation to veto or override the instincts and emotions and biases that come from the ancestral environment.)

And I do not think it is particularly useful to frame what I just said as elitism. It is just an acknowledgement of the following reality: for almost any plan you can come up with for empowering the masses, I can come up with a plan that preferentially empowers the people more likely to use the new power for good -- for some definition of "good" that you and I can both agree on. Science and technology and other means of empowering people have become too potent for scientists and technologists and others not to think through what people are likely to do with the new power.

EDIT: for the sake of perspective and balance, I note that the mere fact that a person has read this post and consequently probably has a strong interest in the subject matter of this web site might be enough evidence of rationality to ameliorate my concerns about empowering them with a new technology for collaboration provided that the collaboration has a goal or mission statement less ambiguous than the current mission statement of a certain institute that must not be named, but if Eliezer's purpose is to empower only the people interested enough in rationality and knowledge to keep coming back to this web site, he should say so instead of speaking of empowering people in general.

There are certain resonances between Robin's comment and this one, which is why I put it here.

Comment by rhollerith on The uniquely awful example of theism · 2009-04-11T17:27:59.735Z · LW · GW

Sure does. Thanks.