Posts

Is it okay to take toilet-pills? / Rationality vs. the disgust factor 2011-07-25T09:08:32.014Z
Overcoming Suffering & Buddhism 2011-05-31T04:35:00.004Z
A puzzle on the ASVAB 2011-05-30T04:01:00.757Z
Grigori Perelman refused prize because he knows "how to control the universe" 2011-05-13T23:39:45.771Z
How should I help us achieve immortality? 2011-05-10T02:00:16.735Z

Comments

Comment by Hul-Gil on Applause Lights · 2012-05-23T07:07:32.757Z · LW · GW

but I can't think of any where it was just better, the way that actual technologies often are

I find that a little irritating - for people supposedly open to new ideas, science fiction authors sure seem fearful and/or disapproving of future technology.

Comment by Hul-Gil on Thoughts on the Singularity Institute (SI) · 2012-05-11T07:24:24.572Z · LW · GW

Thanks. I read the whole debate, or as much of it as is there; I've prepared a short summary to post tomorrow if anyone is interested in knowing what really went on (according to Hul-Gil, anyway) without having to hack their way through that thread-jungle themselves.

(Summary of summary: Loosemore really does know what he's talking about - mostly - but he also appears somewhat dishonest, or at least extremely imprecise in his communication.)

Comment by Hul-Gil on Thoughts on the Singularity Institute (SI) · 2012-05-11T03:44:40.532Z · LW · GW

That doesn't help you if you need a car to take you someplace in the next hour or so, though. I think jed's point is that sometimes it is useful for an AI to take action rather than merely provide information.

Comment by Hul-Gil on Thoughts on the Singularity Institute (SI) · 2012-05-11T03:29:11.079Z · LW · GW

The answer is probably that you overestimate that community's dedication to rationality because you share its biases.

That's probably no small part of it. However, even if my opinion of the community is tinted rose, note that I refer specifically to observation. That is, I've sampled a good number of posts and comments here on LessWrong, and I see people behaving rationally in arguments - appreciation of polite and lucid dissent, no insults or ad hominem attacks, etc. It's harder to tell what's going on with karma, but again, I've not seen any one particular individual harassed with negative karma merely for disagreeing.

The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?

Can you elaborate, please? I'm not sure what enormous conceit you refer to.

My take is that neither side in this argument distinguished itself. Loosemore called for an "outside adjudicator" to solve a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off) in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these "sins" deserved a ban

I think that's an excellent analysis. I certainly feel like Yudkowsky overreacted, and as you say, in the circumstances no wonder it still chafes; but as I say above, Richard's arguments failed to impress, and calling for outside help ("adjudication" for an argument that should be based only on facts and logic?) is indeed beyond obnoxious.

Comment by Hul-Gil on Thoughts on the Singularity Institute (SI) · 2012-05-11T01:00:49.639Z · LW · GW

Can you provide some examples of these "abusive personal attacks"? I would also be interested in this ruthless suppression you mention. I have never seen this sort of behavior on LessWrong, and would be shocked to find it among those who support the Singularity Institute in general.

I've read a few of your previous comments, and while I felt that they were not strong arguments, I didn't downvote them because they were intelligent and well-written, and competent constructive criticism is something we don't get nearly enough of. Indeed, it is usually welcomed. The number of downvotes given to the comments, therefore, does seem odd to me. (Any LW regular who is familiar with the situation is also welcome to comment on this.)

I have seen something like this before, and it turned out the comments were being downvoted because the person making them had gone over, and over, and over the same issues, unable or unwilling either to defend them competently or to change his own mind. That's no evidence that the same thing is happening here, of course, but I give the example because in my experience, this community is almost never vindictive or malicious, and is laudably willing to consider any cogent argument. I've never seen an actual insult levied here by any regular, for instance, and well-constructed dissenting opinions are actively encouraged.

So in summary, I am very curious about this situation; why would a community that has been - to me, almost shockingly - consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?

One final thing I will note is that you do seem to be upset about past events, and it seems like it colors your view (and prose, a bit!). From checking both here and on SL4, for instance, your later claims regarding what's going on ("dissent is ruthlessly suppressed") seem exaggerated. But I don't know the whole story, obviously - thus this question.

Comment by Hul-Gil on Circular Altruism · 2012-05-01T18:22:43.804Z · LW · GW

Well, he didn't actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn't on his dust-mote scale, so he isn't just using "torture scale" as a synonym for "disutility scale"; rather, he is emphasizing that there is more than just a single "(dis)utility scale" involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of "how the mind experiences and deals with [them]", such that no amount of dust motes can add up to the experience of torture... even if they (the motes) have a nonzero amount of disutility.

I believe I am making much the same distinction with my separation of disutility into trivial and non-trivial categories, where no amount of trivial disutility across multiple people can sum to the experience of non-trivial disutility. There is a fundamental gap in the scale (or different scales altogether, à la Jones), a difference in how different amounts of disutility work for humans. For a more concrete example of how this might work, suppose I steal one cent each from one billion different people, and Eliezer steals $100,000 from one person. The total amount of money I have stolen is greater than the amount that Eliezer has stolen; yet my victims will probably never even notice their loss, whereas the loss of $100,000 for one individual is significant. A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything; Eliezer's victim, on the other hand, has lost the ability to purchase many, many things.
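
(To make the two aggregation rules concrete, here is a minimal sketch in Python; the numbers and the triviality threshold are illustrative assumptions, not claims about real utilities.)

```python
# A toy contrast between the two aggregation rules under discussion.
# All constants are illustrative assumptions, not real utilities.

N_VICTIMS = 1_000_000_000      # one cent stolen from each of these people
CENT_LOSS = 1                  # loss per victim, in cents
BIG_LOSS = 100_000 * 100       # $100,000 stolen from one person, in cents

# Rule 1: naive linear aggregation -- just sum the losses.
print(N_VICTIMS * CENT_LOSS > BIG_LOSS)   # True: $10M of cents > $100k

# Rule 2: two-scale aggregation -- losses below a triviality threshold
# live on a separate scale and never sum into the serious one.
THRESHOLD = 100  # one dollar, in cents; an arbitrary assumption

def two_scale(losses):
    trivial = sum(x for x in losses if x < THRESHOLD)
    serious = sum(x for x in losses if x >= THRESHOLD)
    return trivial, serious   # two incommensurable totals

print(two_scale([CENT_LOSS] * 1000))   # (1000, 0): a small sample; all trivial
print(two_scale([BIG_LOSS]))           # (0, 10000000): serious only
```

Under the second rule, no number of one-cent thefts ever registers on the serious scale - which is exactly the contention above.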

I believe utility for humans works in the same manner. Another thought experiment I found helpful is to imagine a certain amount of disutility, x, being experienced by one person. Let's suppose x is "being brutally tortured for a week straight". Call this situation A. Now divide this disutility among people until we have y people all experiencing (1/y)*x disutility - say, a dust speck in the eye each. Call this situation B. If we can add up disutility like Eliezer supposes in the main article, the total amount of disutility in either situation is the same. But now, ask yourself: which situation would you choose to bring about, if you were forced to pick one?

Would you just flip a coin?

I believe few, if any, would choose situation A. This brings me to a final point I've been wanting to make about this article, but have never gotten around to doing. Mr. Yudkowsky often defines rationality as winning - a reasonable definition, I think. But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual.

I don't think this is winning; no one is happier with this situation. Like Eliezer says in reference to Newcomb's problem, if rationality seems to be telling us to go with the choice that results in losing, perhaps we need to take another look at what we're calling rationality.


*Well, assuming a population like our own, not every single individual would agree to experience a dust speck in the eye to save the to-be-tortured individual; but I think it is clear that the vast majority would.

Comment by Hul-Gil on [deleted post] 2012-05-01T02:03:32.863Z

I think you're a good writer, in that you form sentences well, and you understand how the language works, and your prose is not stilted or boring. The problem I personally had, mostly with the previous two entries in this series, was that the "meat" - the interesting bits telling me what you had concluded, and why, and how to apply it, and how (specifically) you have applied it - seemed very spread out among a lot of filler or elaboration. I couldn't tell what you were eventually going to arrive at, and whether it'd be of use or interest to me. Too much generality, perhaps: compare "this made my life better" with "by doing X I caught myself thinking Y and changed this to result in the accomplishment of Z."

I tell you this only in case you are interested in constructive criticism from yet another perspective; some undoubtedly consider the things I have mentioned virtues in an author. In any case, I have upvoted this article; it doesn't deserve a negative score, I think - long-winded, maybe; poorly done or actively irrational, certainly not. The ideas are interesting, the methodology is reasonable, and the effort is appreciated.

Comment by Hul-Gil on [deleted post] 2012-05-01T01:42:58.223Z

That's nicely done! Clear, concise, and immediately applicable. I think Frank himself is an intelligent person with good and interesting ideas, but the "meat" of these posts seems spread out among a lot of filler/elaboration - possibly why they're hard to skim. I wasn't even sure, for quite a while, what the whole series was really about, beyond "general self-improvement."

This latest article is much more "functional" than the previous two, though, so I think we're moving in the right direction.

One thing your comment brings to mind - Frank notes something about unconscious mental processes being trainable, and the suggestion is that one can train them to be rational, or at least more accurate. (If I remember correctly.) Is this idea included in your comment? Perhaps under "folk psychology"?

It seems like an interesting concept, though I was unable to find any instruction on how to actually accomplish it. (But I haven't looked too hard yet.)

Comment by Hul-Gil on [deleted post] 2012-05-01T01:28:13.711Z

Upvoted both this and its parent, because the quoted bit of Strunk and White seems like good advice, and because the linked criticism of Strunk and White is lucid and informative as well as entertaining. I learned about two new but related things, one right after the other; my conclusions about Strunk and White swung rapidly from one position to the opposite in quick succession. Quite an experience! ("Oh look, there are these two folks who are recognized authorities on English, and they're presenting good writing advice. Strunk and White... must remember. Wait; here's a response... Oh - turns out not much of their advice is that good after all! Passive voice IS acceptable! Language Log... must remember.")

Comment by Hul-Gil on The Wonder of Evolution · 2012-05-01T01:12:18.337Z · LW · GW

If you pick the chance worldview, you are heavily reliant on evolution to validate your worldview.

No, not at all. Evolution is one aspect of one field of one discipline. One can argue that existence came about by chance (and I'm not comfortable with that term) without referring to evolution at all; there are many other reasons to reject the idea of a designer.

See Desrtopa's reply, below, regarding chance and design and whether a designer helps here. S/he said it better than I could!

Comment by Hul-Gil on The Wonder of Evolution · 2012-05-01T00:59:35.798Z · LW · GW

I addressed this here, but I missed a few things. For one, I addressed the extremity of the hypotheticals in the linked post, but I didn't also point out that these things seem extreme because we're used to seeing things work out as if evolution were true. These things wouldn't seem extreme if we had been seeing them all along; it's precisely because evolution fits what we do find so well that evolution-falsifying examples seem so extreme. Fossil rabbits in the Precambrian would probably not seem so extreme to a creationist; it's what they'd expect to find (since all species supposedly lived alongside one another, AFAIK).

For two:

We are limited to these two conclusions and nothing else. Therefore any hit on a theory that advocates one, is a support for the other.

I don't think that follows. A hit on a chance-favoring theory could be a "hit" in such a way as to support a different chance-favoring theory, rather than any favoring design.

I think this pushes scientists (even sub-consciously) to view evolution almost as a belief system rather than a science.

Can you point out some ways that scientists view evolution as a belief system rather than science?

Comment by Hul-Gil on The Wonder of Evolution · 2012-05-01T00:32:37.626Z · LW · GW

That depends on how you define 'system'. Is 'system' the entire biological existence of earth? In that case, yes, evolution would be a mathematical certainty eventually. But is system a specific species? In that case evolution would only occur within those species.

He goes on to tell you exactly what systems: any with random heritable changes that can selectively help or hinder reproduction. This would mean both all life on earth that fits within that definition, and any particular species also under that umbrella.

It seems to me like you're trying to make a distinction between "microevolution" and "macroevolution" here, but I may be misreading you. If you are, however, notice that thomblake's process makes no distinction between them; to suppose one but not the other could occur, you'd need a specific mechanism or reason.

Also, time is another factor. Your explanation logically does not necessitate that evolution has already happened, only that it will eventually happen.

No, it necessitates that it is happening and has happened in any such system. The process, that is. You're correct if you're just saying that the process may not have resulted in any differentiation at any given time.

Comment by Hul-Gil on Recognizing memetic infections and forging resistance memes · 2012-04-30T04:52:02.443Z · LW · GW

I feel like you're trying to say we should care about "memetic life" as well as... other life. But the parallel you draw seems flawed: an individual of any race and sex is still recognizably conscious, and an individual. Do we care about non-sentient life, memetic or otherwise? Should we care?

Comment by Hul-Gil on The Wonder of Evolution · 2012-04-28T16:49:34.832Z · LW · GW

I would like to see the scientific community come up with more specific parameters as to what would be considered: A. minor damage to the theory, B. major hit on the theory, and C. evidence that would make the theory most likely untenable. We do this for almost every other science, except evolution.

I think we do this for evolution as much as for any other part of science. In any case, judging the severity of a "hit" is possible if you understand the relevant concepts. An understanding of the concepts lets one see what separates minor issues from fossil rabbits in the Precambrian; what's a detail, and what's central to the theory - some things would necessitate a modification, and some would cast the entire theory into question. Think of what it took to overturn any other well-established theory in history, or what it would take to overturn relativistic physics.

More generally, if you have a whole bunch of evidence that points to one conclusion, it should take something fairly extreme to substantially sway you away from belief in that conclusion and make you re-evaluate all the accumulated evidence. (And there's a lot of evidence for evolution.)
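
(As a toy illustration of how lopsided that swing has to be, here is a minimal Bayesian sketch in Python; the 10:1 likelihood ratio per line of evidence and the count of twenty independent lines are made-up numbers for illustration only.)

```python
# A toy Bayesian update, illustrating why a well-supported theory
# needs extreme evidence to be overturned. All numbers are made up.
from math import log10

prior_odds = 1.0  # start agnostic: 1:1 odds for the theory

# Suppose each independent line of evidence favors the theory
# 10:1 over its alternatives (a likelihood ratio of 10).
n_lines_of_evidence = 20
posterior_odds = prior_odds * 10 ** n_lines_of_evidence  # 10^20 : 1

# To drag the odds back down to even, a single new observation would
# need a likelihood ratio of 10^-20 against the theory -- something
# on the order of fossil rabbits in the Precambrian.
required_lr = 1 / posterior_odds
print(f"posterior odds: 10^{log10(posterior_odds):.0f} : 1")
print(f"likelihood ratio needed to restore 1:1: 10^{log10(required_lr):.0f}")
```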

Comment by Hul-Gil on Thanksgiving Prayer · 2012-04-26T17:07:05.689Z · LW · GW

I'm aware this is from 2008, but I just can't let this stand in case one day an undecided visitor wanders past and reads GenericThinker's comment. (I also can't resist pointing out that his handle is rather appropriate.)

1.) Belief in God doesn't necessarily drive people to behave in a more moral way. Consider Muslim fundamentalist terrorists, for example.

2.) The question of God's existence is not unanswerable. The evidence for or against God is no more open to interpretation than any other evidence. If God affects the material universe, we can observe the effect(s); if God doesn't affect the material universe, the question is moot. I believe Mr. Yudkowsky has also written about the fallacious "non-overlapping magisteria" idea.

3.) God's existence may or may not make "the issues of evolution" (what are these?) easier to explain, but it brings up many, many more questions... like how an omnipotent, omniscient being might come about - a much more surprising phenomenon than mere humans, surely.

4.) No one is "bashing God."

We're bashing theists.

Comment by Hul-Gil on Serious Stories · 2012-04-25T18:17:49.550Z · LW · GW

I enjoyed Down and Out in the Magic Kingdom quite a bit! I'm glad Kevin7 posted this link.

However, the insanity portrayed as being beneficial and desirable in The Metamorphosis of Prime Intellect is too egregious to ignore - even if the rest of the story had made good on its promise of providing an interesting look at a posthuman world. (It doesn't. We don't even get to see anything of it.) At first, I thought "oh, great; more cached-thought SF"... but it was worse than that. I forced myself to finish it just so I could be sure the following is accurate.

Worse than the already-irritating "death gives meaning to life!" reasoning replete in the work, we find either actual insanity or just a blithe disregard for self-contradiction:

  • Technology is bad, because one day the universe will die. (What's the connection? No fucking clue.)
  • We should live like cavemen, because technology (and knowledge itself - no reading!) will lead to murder (but certain arbitrary tools are okay); but death is fine when it's a bear or disease that kills you.
  • Reality isn't "really real" if it's created or controlled by an AI, even if it's indistinguishable from... uh... other reality.
  • And, of course, we save the most obvious conclusion for last (sorta-spoiler warning): despite item #2, it's okay to murder billions of happy immortals because you're unhappy that life is safe and everyone is free at last.

Merits as a story? Well, at first, it's even a little exciting, as we are treated to a glimpse of a post-Singularity world (the only glimpse we get, as it turns out), and then some backstory on how the AI was created. That's cool; but after that, it's not worth reading, in this reader's humble opinion. It's very formulaic, the characters (all ~three of them) have no personality (unless you count angst), and any technical or imaginative details that might be interesting are... well, either not there at all, or waved away with the magic Correlation Effect Plot Device. (It's first used to explain one thing, then turns out to do, quite literally, everything.)

I would like to contrast this to John Wright's The Golden Age trilogy. That work is replete with interesting ideas and details about how a Far Future society might look and work; no magic one-size-fits-all Plotonium (to coin a term; I'm sure TVTropes already has one, though) here. In Metamorphosis, we aren't really given any glimpse at society, but what we do see is essentially Now Except With Magic Powers. In The Golden Age, it is immediately obvious we aren't in Kansas any more. Metamorphosis explores one idea - AI - and that, poorly; The Golden Age includes nanotech, simulation, self-modification, the problem of willpower (see: Werewolf Contracts), posthumans, post-posthumans, post-/trans-human art, and more. Check it out if you have transhumanist leanings... or just enjoy science fiction, come to that.

Comment by Hul-Gil on Absolute denial for atheists · 2012-04-24T01:38:53.671Z · LW · GW

Ergo, whatever environmental influences shape personality come from outside the home, not inside.

How far apart were the different homes - in the same neighborhood? School district? I also wonder how different the parenting styles considered were; at the same economic level in the same town, for example, divisions in "style" might be minor compared to people elsewhere, of different means.

It doesn't seem plausible, but you assert the books have mountains of evidence and I am not curious enough to check myself, so I ultimately withhold judgment.

Comment by Hul-Gil on Politics is the Mind-Killer · 2012-04-24T01:25:21.929Z · LW · GW

The beauty of politics is that there is just enough uncertainty to make every position appear plausible to some portion of the public, even in those rare cases where there is definitive "proof" (however defined) that one particular position is correct. [emphasis added]

Well, that doesn't sound very beautiful.

Comment by Hul-Gil on Undiscriminating Skepticism · 2012-04-21T05:09:39.737Z · LW · GW

I'm thinking of moral reasoning as the kind of reasoning you're morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.

Can't that apply to hypotheticals? If you come to the wrong conclusion you're a horrible person, sort of thing.

I would probably call "moral reasoning" something along the lines of "reasoning about morals". Even using your above definition, I think reasoning about morals using hypotheticals can result in a judgment, about what sort of action would be appropriate in the situation.

Comment by Hul-Gil on The Fallacy of Gray · 2012-04-20T19:39:15.025Z · LW · GW

That's true. They could be wrong in different ways (or "different directions", in our example), which could be important for some purposes. But as you say, that depends on said purposes; I'm still uncertain as to the fallacy that dspeyer refers to. If our only purpose is determining some belief's level of correctness, absent other considerations (like in which way it's incorrect), isn't the one dimension of the "shades of grey" model sufficient?

Although -- come to think of it, I could be misunderstanding his criticism. I took it to mean he had an issue with the original post, but he could just be providing an example of how the shades-of-grey model could be used fallaciously, rather than saying it is fallacious, as I initially interpreted.

Comment by Hul-Gil on The Fallacy of Gray · 2012-04-20T18:30:24.504Z · LW · GW

I'm trying to imagine the other dimension we could add to this. If we have "more right" and "less right" along one axis, what's orthogonal to it?

I initially felt this comment was silly (the post isn't saying every space can be reasonably modeled as one-dimensional, is it?), but my brain is telling me we actually could come up with a more precise way to represent the article's concept with a Cartesian plane... but I'm not actually able to think of one. False intuition based on my experience with the "Political Compass" graph, perhaps.

Comment by Hul-Gil on The Fallacy of Gray · 2012-04-20T18:24:41.605Z · LW · GW

It reminded me of that as well. Here is the full article; I'm glad it's online, because the errors he (and Yudkowsky, above) clears up are astonishingly prevalent. I've had cause to link to it many times.

Comment by Hul-Gil on How can we get more and better LW contrarians? · 2012-04-19T18:33:14.344Z · LW · GW

What do you think about Kabbalah?

40 is sometimes used, in the Torah, to indicate a general large quantity - according to Google. It also has associations with purification and/or wisdom, according to my interpretation of the various places it appears in the Bible as a whole. (There are a lot of them.)

Comment by Hul-Gil on Be Happier · 2012-04-19T07:52:03.765Z · LW · GW

I would have liked some thoughts on/insight into the data posted as well; but all the same, summaries like this, which gather a lot of related but widely-dispersed information together, are very useful (especially as a quick reference or overview, or, as Jonathan says below, as a starting point for further research), and I definitely wouldn't mind seeing more of them.

Comment by Hul-Gil on Rationality Quotes April 2012 · 2012-04-19T07:20:26.650Z · LW · GW

This suggests measuring posts for comment EV.

Now that is an interesting concept. I like where this subthread is going.

Interesting comparisons to other systems involving currency come to mind.

EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties... for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts look like they could plausibly be worth the karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--

...okay, perhaps some sleep is in order first.

Comment by Hul-Gil on Intelligence Explosion vs. Co-operative Explosion · 2012-04-19T06:11:20.935Z · LW · GW

(Since you two seem to be mostly using the mentioned IQ scores as a way to indicate relative intelligence, rather than speaking of anything directly related to IQ and IQ tests, this is somewhat tangential; however, Mr. Newsome does mention some actual scores below, and I think it's always good to be mindful when throwing IQ scores around. So when speaking of IQ specifically, I find it helpful to keep in mind the following.

There are many different tests, which value scores differently. In some tests, scores higher than about 150 are impossible or meaningless; and in all tests, the higher the numbers go the less reliable [more fuzzy] they are. One reason for this, IIRC, is that smaller and smaller differences in performance will impact the result more, on the extreme ends of the curve; so the difference in score between two people with genius IQs could be a bad day that resulted in a poorer performance on a single question. [There is another reason, the same reason that high enough scores can be meaningless; I believe this is due to the scarcity of data/people on those extreme ends, making it difficult or impossible to normalize the test for them, but I'm not certain I have the explanation right. I'm sure someone else here knows more.])
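
(For a minimal sketch of that scarcity, assuming the conventional normal model of IQ - mean 100, SD 15, an assumption of the model rather than of any particular test - consider how few people exist at each extreme:)

```python
# A rough illustration of the normalization problem at the tail: under
# the conventional normal model of IQ (mean 100, SD 15), almost nobody
# is out there to norm the extreme scores against.
from math import erfc, sqrt

def fraction_above(iq, mean=100, sd=15):
    z = (iq - mean) / sd
    return 0.5 * erfc(z / sqrt(2))  # survival function of the normal

for score in (130, 145, 160):
    p = fraction_above(score)
    print(f"IQ > {score}: about 1 in {round(1 / p):,}")
```

Those tail fractions (roughly 1 in 44, 1 in 741, and 1 in 31,574 on the toy model's assumptions) are why there are too few test-takers at the extremes to norm a test reliably there.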

Comment by Hul-Gil on Intelligence Explosion vs. Co-operative Explosion · 2012-04-19T05:34:10.508Z · LW · GW

The heuristic I generally use is "use parentheses as needed, but rewrite if you find that you're needing to use square brackets." Why? Thinking about it, I believe this is because I see parentheses all the time in professional texts, but almost never parentheticals inside parentheticals.

But as I verbalize this heuristic, I suddenly feel like it might lend the writing a certain charm or desirable style to defy convention and double-bag some asides. Hmm.

Comment by Hul-Gil on Intelligence Explosion vs. Co-operative Explosion · 2012-04-19T05:30:54.470Z · LW · GW

No, that time passed when you merely had a single parenthetical inside a parenthetical. But when you have a further parenthetical inside the former two, is it then time to break out the curly brackets?

Comment by Hul-Gil on Intelligence Explosion vs. Co-operative Explosion · 2012-04-19T05:25:58.209Z · LW · GW

I have found entirely the opposite; it's very strongly correlated with spelling ability - or so it seems from my necessarily few observations, of course. I know some excellent mathematicians who write very stilted prose, and a few make more grammatical errors than I'd have expected, but they can all at least spell well.

Comment by Hul-Gil on No Safe Defense, Not Even Science · 2012-04-18T19:21:41.960Z · LW · GW

Not only this, but you can be obviously wrong. We look at people trusting in spontaneous generation, or a spirit theory of disease, and mock them - rightfully. They took "reasonable" explanations of ideas, tested them as best they could, and ended up with unreasonable confidence in utterly illogical ideas.

I don't believe most of the old "obviously wrong" beliefs, like a spirit theory of disease, were ever actually systematically tested. Experimentation doesn't prevent you from coming to silly conclusions, but it can throw out a lot of them.

(A nitpick: Either these things are only obviously wrong in retrospect, or they did not start with reasonable explanations. That is, either we cannot rightfully mock them, or the ideas were ridiculous from the beginning.)

As for the rest, I don't disagree with your assertions - only the (implied) view we should take of them. It is certainly true that science can be slow, and true that you can't ever really know if your explanation is the right one. But I think that emphasis on knowing "the real truth", the really right explanation, is missing the point a little; or, in fact, the idea of the One True Explanation itself is unproductive at best and incoherent at worst. After all, even if we eventually have such an understanding of the universe that we can predict the future in its entirety to the finest level of detail theoretically possible, our understanding could still be totally wrong as to what is "actually" happening. Think of Descartes' Evil Genius, for example. We could be very, very confident we had it right... but not totally sure.

But - once you are at this point, does it matter? The power of science and rationality lies in their predictive ability. Whether our understanding is the real deal or simply an "[apparently] perfect model" becomes immaterial. So I think yes, science can lead you to the right conclusion, if by "right" we mean "applicable to the observed world" and not The Undoubtable Truth. No such thing exists, after all.

The slowness is a disappointment, though. But it's accelerating!

Comment by Hul-Gil on Be Happier · 2012-04-17T22:19:26.537Z · LW · GW

I was thinking of that; maybe some people equate leisure time with being directionless, and thus need externally-imposed goals?

Comment by Hul-Gil on Be Happier · 2012-04-17T22:18:02.355Z · LW · GW

I was hoping someone would bring that up. You've already given the same answer I would, though: it's not necessarily an either/or scenario like Nozick's "experience machine" concept, so it's possible to have both heroin and pictures, in theory.

Comment by Hul-Gil on Be Happier · 2012-04-17T16:49:33.632Z · LW · GW

See my post below; I think this is due to a.) a misunderstanding of the nature of happiness (a thought that chemically-induced happiness is different from "regular" happiness... which is also chemical), b.) a feeling that opium is incredibly dangerous (as it can be), and c.) a misunderstanding of how opium makes you feel - people can say "I know opium makes you happy" without actually feeling/knowing that it does so. That is, their mental picture of how they'd feel if they smoked opium doesn't correspond to the reality, which is - for most people - that it makes them feel much, much better than they would have imagined.

Comment by Hul-Gil on Be Happier · 2012-04-17T16:46:24.208Z · LW · GW

Most people I know believe that heroin (and similar mechanisms) get short-term happiness followed either by long-term unhappiness, or death.

That's the long and short of it, I think. There is no reason not to use heroin to obtain maximum utility (for one's self), if one a.) finds it pleasurable, b.) can afford it, and c.) is able to obtain pure and measured doses. (Or simply uses pharmaceuticals.) The perceived danger of heroin comes from its price and illegality (uncertain dosage + potentially dangerous impurities), which often results in penury, and overdose or illness, for the user.

People also want "real" happiness, by which I presume they mean happiness resulting from actions like painting a picture, and not happiness induced by chemicals... which is silly, since the two feelings are produced by the same neurochemistry and are functionally identical (i.e., all happiness is ultimately chemical). (The perceived difference may still bother someone enough that they choose a different route, though, especially if they don't realize they can just paint a picture... on heroin.)

Comment by Hul-Gil on Be Happier · 2012-04-17T16:38:51.993Z · LW · GW

I have experienced the same thing. I have apparently endless capacity for leisure, possibly because I have an endless number of interests and hobbies to pursue then drop then pick back up. I've never understood people who don't want this kind of life; do they really exist? Can people get bored with leisure?

Comment by Hul-Gil on Be Happier · 2012-04-17T16:29:52.522Z · LW · GW

Great post! I'm going to use as much of it as I can.

I think it might be difficult to apply some of these, since I notice a good deal of my unhappiness is not affected by changes in thought or outward motions, and it can be hard to translate knowing you should try something into actually applying it. (But both of these can be mitigated: smiling, for instance, really does make me feel a bit happier even if I'm forcing the smile, and I'm sure there are plenty of articles about akrasia, here on LessWrong.)

Prefer experiential purchases; avoid materialistic goals.

Indeed. You can also skip the middleman and go straight to purchasing the direct experience of euphoria. The caveat of varying your experiences applies here too, though. Especially here.

Comment by Hul-Gil on A rationalist's guide to psychoactive drugs · 2012-04-12T05:33:18.652Z · LW · GW

I don't think so - acetic anhydride is really the only other reagent involved in the step we're considering, and an excess wouldn't be harmful in any way... except, possibly, making the product a bit uncomfortable to ingest, if too much acetic acid was left over. (An excess of acetic anhydride is commonly used so as to make sure all the morphine reacts; any excess will become acetic acid - i.e., vinegar - as well.) It's common for a little to be left over, giving heroin its characteristic (vinegar-y) smell, but I don't think it's dangerous.

So I'd say that there's no danger here... but lack of quality control in general is definitely a big problem indeed.

Comment by Hul-Gil on Do you have High-Functioning Asperger's Syndrome? · 2012-04-11T00:17:18.415Z · LW · GW

This is a great way to put it.

Comment by Hul-Gil on Extreme Rationality: It's Not That Great · 2012-04-10T05:51:06.477Z · LW · GW

I think one reason might be that the vast majority of the decisions we make are not going to make a significant difference to our overall success by themselves; or rather, not as significant a difference as chance or other factors (e.g., native talent) could. Take lessdazed's example above about not buying into a snake-oil health product: you've benefited from your rationality, but it's still small potatoes compared to the benefit you could get from being in the right place at the right time and becoming a pop star... or getting lucky with the stock market... or starting a software company at just the right time. Such people, who need varying degrees of something besides luck and a small amount of rationality to capitalize on it, are much more visible; even if their decisions were less than optimal, the other factors make up for it. Nothing's stopping them from poisoning themselves with a scam miracle-elixir, though.

This ties in with the point lessdazed was making, that the rational person most likely loses less rather than wins big - that is, makes a large number of small decisions well, rather than a single important one extremely well. That's not to be despised; I wonder what the results would be if we asked about overall well-being and happiness rather than fame and fortune.

Comment by Hul-Gil on SotW: Check Consequentialism · 2012-04-06T02:15:14.003Z · LW · GW

Was not my counterfactual scenario. It was someone else describing a counterfactual where ninjas are travelling by sea to a ninja-convention. My only contribution there was to (implicitly) assert that the counterfactualising operation that preserves the most probability mass to produce that scenario would not result in ninjas travelling on unarmed ships.

I edited that; I think the daimyos did have their own navies. I'm not actually certain about that, though, and I don't feel like looking it up. Maybe someone who knows more Japanese history can contribute. In either case, I don't think it's possible to say which is more probable, since whether they book passage on a merchant ship, or are sent with a naval ship by a master who controls both, depends entirely on the circumstance we concoct. Historically, they could have done both, if daimyos did have navies.

Comment by Hul-Gil on SotW: Check Consequentialism · 2012-04-06T01:17:41.773Z · LW · GW

Worse in what manner? In individual combat? A pirate crew vs. an association of ninjas?

From the comments I've read so far, I think the hypothetical situations you've used to determine that ninjas would win are grossly weighted in favor of ninjas. For example, you've already said it can't be any sea-based conflict (unless the ninjas are specially-trained sailor-ninjas on a navy ship, instead of passenger ninjas booking passage on a merchant ship - as most would do if required to travel by sea, since sea travel is incidental to their main function: assassination), "because why would ninjas be at sea?" Yet it can be "drunk pirates in whorehouses, unaware they are being targeted, vs ninjas that know exactly where they are and can get to them in time."

Your ninjas also have unrealistic government support (they were usually employed by noble families or daimyos, not imperial forces, and considered quite expendable - see hairyfigment's comment below), "better" ships than the pirates (AFAIK almost impossible, considering the naval technology the sides would be using), the ability to obtain whatever training is required (no time limit? can't the pirates buy training too?), awareness of the conflict that the pirates lack (or would they be getting drunk on land? well - maybe), etc.

A distinction should be made between a strict determination of fighting prowess - pirates vs ninjas in equal numbers in open combat - and the sort of situation you seem to be thinking of, wherein we try to be as realistic as possible, and all factors (such as whether or not ninjas would be at sea, and a sailor's propensity to get drunk) are considered. The latter is a lot more difficult to figure out, since so much would depend on circumstance (as in your drunk pirate example). This should also include allowance for the favored methods of both sides - pirates fighting at sea, ninjas not charging forward in open combat but assassinating and infiltrating - though you only seem to make the latter allowance.

For the former situation, I believe pirates could possibly win. They have better guns, and contrary to popular assumption, pirates could be very skilled in swordsmanship and general brawling. A lot of them were ex-navy, and in any case you wouldn't survive long as a pirate without obtaining some competence. Ninjas might be trained in espionage and assassination, but that doesn't include open combat, and they'd likely have less experience with it than pirates. They were trained in swordmanship as well, though, and quite possibly more thoroughly (but in some cases inadequately!), and bows could be as good or better than guns at many points in our possible time-range.

For the latter, here are a few factors to consider. One, ninjas didn't often attack in groups; sometimes they operated in small teams, but not any as large as a pirate crew. They were not used to wipe out large groups of people, but individual targets. Already we must depart from realism if we want to grant anything like equal numbers; it wouldn't be interesting to think about "one ninja vs a crew of pirates", but it weights the situation in favor of the ninjas if we go beyond "a small group of ninjas vs a crew of pirates". Two, would the pirates be aware they're being targeted by assassins? That would seem to depend on why exactly they're being targeted - a bounty they might be aware of; a covert vendetta for personal reasons, probably not. Trying to think of a realistic reason for the conflict might be a bit difficult. Three, I don't think ninjas could ever requisition ships, but if they could, they would still be at a disadvantage in naval combat considering the superiority of European vessels up until very recently. (It's not like pirates would be exactly inexperienced at naval combat, note.) Four, the pirates might be based at an unknown location, or nowhere at all, leaving the ninjas to attempt to catch them either in the act of raiding a coastal village, or making landfall to obtain supplies. Five, the ninjas might also be based in a location unknown to the pirates, or operating clandestinely; so while I initially considered that the pirates might raid their Ninja HQ, that might not be possible.

So... we might have a crew of pirates making landfall in various locations and attempting to locate and kill a small (<12?) team of ninjas, and said small team of ninjas attempting to catch them in the act and kill them right back. I suppose there is also the possibility of ninjas acquiring a vessel to pursue the pirates, although I can't see how that would end any way but badly for them. They could hardly crew the entire vessel themselves, even if they had sufficient numbers, as they're not sailors. You could give them some year(s) to obtain sailing skill, but then, you could also give the pirates some year(s) to obtain espionage skill. You could give them an unhistorical amount of support and grant them a naval vessel with crew, but now it's not strictly pirates vs. ninjas.

A variety of situations could develop from this: ninjas creep aboard anchored pirate vessel and attempt to assassinate the crew, pirates raid village where ninjas are staying, pirates and ninjas engage in naval combat, land-based combat... I think, as Nornagest says above, it is clear that you have to stretch to come up with a situation in which they're actually engaging in conflict, and if you do, who wins depends entirely on the circumstance you have concocted.

To say that anyone who doesn't think real pirates are ridiculously "worse" than ninjas "is not thinking well at all" seems quite absurd to me, and even smacks of "ninja fanboyism". It's by no means so clear-cut that any pirate-supporter is obviously mentally deficient. And I like ninjas much more, personally. Pirates were awful people who deserve to be vilified, not romanticized. I don't even know why I've put this much effort into supporting them, come to think of it, except my general urge to correct what I see as error. I've been exposed to too much weeaboo-ism, perhaps.

Comment by Hul-Gil on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T19:43:06.572Z · LW · GW

This is interesting to me, since we seem to be in about the same position academically (though you're a bit ahead of me). What was responsible for such a huge increase in productivity, or can that not be summarized? I need to research more myself, but I do not think I will be able to afford or attend the minicamp, so anything you'd be able to share would be appreciated.

Comment by Hul-Gil on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T19:19:19.534Z · LW · GW

Right now I feel if I found some good papers providing evidence for or against meditation I would shift appropriately.

Are you familiar with the study (studies) about meditation and brain health? I've seen one or two crop up, but I've not read the actual studies themselves - just summaries. IIRC, it appears to reduce the effects of aging.

The other reason I consider meditation possibly worth pursuing is that it appears to be an effective "mindhack" in at least one respect: it can be used to reduce or eliminate unpleasant physical and mental sensations. For example, I believe it's been shown to be effective in reducing stress and anxiety, and - more impressively - chronic pain, or even sensations like "chilly". How useful this is is more debatable: while I'm waiting in line, shivering, I probably won't be able to meditate effectively, or have the time to.

Comment by Hul-Gil on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T19:09:50.087Z · LW · GW

I think mysticism is inherently irrational, and thus seriously participating in "mysticism itself" is counter-productive if you wish to become more rational. But I say "seriously participating" because, as you say, perhaps mystical aliefs can be used to produce useful mental states - as long as it is recognized that that's what you're doing, and you don't ascribe any special significance to the mystical aspects (i.e., you recognize that the same effect can probably be achieved without any such relics; it's just a matter of preference).

Like those neopagans you mention, I am both an atheist and a Wodanist. I use Wodan as a symbol of various ideals, and the devotions, rituals, symbols, etc. involved to remind myself of these. My actual beliefs are entirely atheistic and materialistic, but I enjoy the trappings and history behind Germanic paganism of this sort; thus, the main reason behind my Wodanism is simply enjoyment. Useful? Yes, as a reminder or way to encourage yourself (e.g., "though I am tempted to waste my money, I will be self-disciplined like my patron god") - but that's entirely apart from any mystical aspects.

Comment by Hul-Gil on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T18:46:35.297Z · LW · GW

Thanks for this; it's detailed and doesn't shy from pointing out the Bad and the Ugly (though it seems like there isn't much of those!). One thing that made me curious, however:

the marginal return on playing Dominion online was negative past about the first 10% of my time spent

How did you determine this?

Edit: Oh, I see you explain this below.

Comment by Hul-Gil on Rationality Quotes March 2012 · 2012-03-30T18:14:05.802Z · LW · GW

That's a good quote! +1.

Unfortunately, for every rational action, there appears to be an equal and opposite irrational one: did you see bhousel's response?

Rationality is emotionless and mechanical. It's about making a reasonable decision based on whatever information is available to you. However, rational decisions do not involve morals, culture, or feelings. This is exactly what companies like Google and Goldman Sachs are being criticized for. [...] If I look down into my wallet and see no money there, and I'm hungry for lunch, and I decide to steal some money from a little old lady, that may be a perfectly rational decision to make. An outside observer may say I'm being evil, but they don't have a complete information picture about how hungry I am, or how long the line at the ATM is, or that everyone else is eating lunch so I have a duty to my shareholders to do the same.

Sigh.

Comment by Hul-Gil on Rationality Quotes March 2012 · 2012-03-30T18:08:09.784Z · LW · GW

This probably helps explain some of the more blatantly maladaptive aspects of religious law we know about

Can you expand on this a little? I'm interested to see what in particular you're thinking of.

Comment by Hul-Gil on Rationality Quotes March 2012 · 2012-03-30T18:01:36.980Z · LW · GW

Since I have just read that "the intelligentsia" is usually now used to refer to artists etc. and doesn't often include scientists, this isn't as bad as I first thought; but still, it seems pretty silly to me - trying to appear deep by turning our expectations on their heads. A common trick, and sometimes it can be used to make a good point... but what's the point being made here? Ordinary people are more rational than those engaged in intellectual pursuits? I doubt that, though rationality is in short supply in either category; but in any case, we know the "ordinary man" is extremely foolish in his beliefs.

Folk wisdom and common sense are a favored refuge of those who like to mock those foolish, Godless int'lectual types, and that's what this reminds me of; you know, the entirely too-common trope of the supposedly intelligent scientist or other educated person being shown up by the homespun wisdom and plain sense of Joe Ordinary. (Not to accuse Orwell of being anti-intellectual in general - I just don't like this particular quote.)

Comment by Hul-Gil on Epilogue: Atonement (8/8) · 2012-03-30T05:05:26.275Z · LW · GW

I think that point would make more sense than the point he is apparently actually making... which is that we must keep negative aspects of ourselves (such as pain) to remain "human" (as defined by current specimens, I suppose), which is apparently something important. Either that or, as you say, Yudkowsky believes that suffering is required to appreciate happiness.

I too would have been happy to take the SH deal; or, if not happy, at least happier than with any of the alternatives.

Comment by Hul-Gil on Epilogue: Atonement (8/8) · 2012-03-30T04:59:07.593Z · LW · GW

Agreed. I was very surprised that Mr. Yudkowsky went with the very ending I, myself, thought would be the "traditional" and irrational ending - where suffering and death are allowed to go on, and even caused, because... um... because humans are special, and pain is good because it's part of our identity!

Yes, and the appendix is useful because it's part of our body.