Posts

Less Wrong on Twitter 2012-06-22T15:51:08.521Z
I Stand by the Sequences 2012-05-15T10:21:26.469Z
[link] TEDxYale - Keith Chen - The Impact of Language on Economic Behavior 2012-04-07T17:20:27.624Z
The Best Comments Ever 2012-03-18T13:02:09.744Z
[Pile of links] Miscommunication 2012-02-21T22:02:31.792Z
On Saying the Obvious 2012-02-02T05:13:43.030Z
[Transcript] Richard Feynman on Why Questions 2012-01-08T19:01:11.825Z
[Transcript] Tyler Cowen on Stories 2011-12-17T05:42:39.630Z

Comments

Comment by Grognor on Rationality Quotes October 2013 · 2013-10-26T06:01:25.546Z · LW · GW

I suggest a new rule: the source of the quote should be at least three months old. It's too easy to get excited about the latest blog post that made the rounds on Facebook.

Comment by Grognor on Rationality Quotes February 2013 · 2013-02-03T21:59:37.060Z · LW · GW

It is because a mirror has no commitment to any image that it can clearly and accurately reflect any image before it. The mind of a warrior is like a mirror in that it has no commitment to any outcome and is free to let form and purpose result on the spot, according to the situation.

—Yagyū Munenori, The Life-Giving Sword

Comment by Grognor on Rationality Quotes September 2012 · 2012-09-12T05:37:03.923Z · LW · GW

You may find it felicitous to link directly to the tweet.

Comment by Grognor on Rationality Quotes September 2012 · 2012-09-04T01:26:39.363Z · LW · GW

This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. It went something like this, if I recall:

Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them.

Comment by Grognor on Open Thread, August 16-31, 2012 · 2012-08-20T16:29:04.822Z · LW · GW

I have become 30% confident that my comments here are a net harm, which is too much to bear, so I am discontinuing my comments here unless someone cares to convince me otherwise.

Edit: Good-bye.

Comment by Grognor on Open Thread, August 1-15, 2012 · 2012-08-08T16:34:46.933Z · LW · GW

Which is not the same thing as expecting a project to take much less time than it actually will.

Edit: I reveal my ignorance. Mea culpa.

Comment by Grognor on Friendly AI and the limits of computational epistemology · 2012-08-08T14:19:42.565Z · LW · GW

Parts of this I think are brilliant; other parts I think are absolute nonsense. I'm not sure how I want to vote on this.

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology.

You are anthropomorphizing the universe.

Comment by Grognor on Open Thread, August 1-15, 2012 · 2012-08-08T06:14:26.089Z · LW · GW

That isn't the planning fallacy.

Comment by Grognor on Natural Laws Are Descriptions, not Rules · 2012-08-08T04:46:51.694Z · LW · GW

This is a better explanation than I could have given for my intuition that physicalism (i.e. "the universe is made out of physics") is a category error.

Comment by Grognor on Self-skepticism: the first principle of rationality · 2012-08-06T22:38:07.230Z · LW · GW

Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.

-Reply to Holden on Tool AI

Comment by Grognor on The Doubling Box · 2012-08-06T15:14:23.349Z · LW · GW

Nonsense. The problem posed here has always been around, and the solution is simply to avoid repeating the same state twice, since repetition results in a draw.

Comment by Grognor on The Doubling Box · 2012-08-06T15:01:00.074Z · LW · GW

I like this. I was going to say something like,

"Suppose , what does that say about your solutions designed for real life?" and screw you I hate when people do this and think it is clever. Utility monster is another example of this sort of nonsense.

but you said the same thing, and less rudely, so upvoted.

Comment by Grognor on The Problem Of Apostasy · 2012-08-05T23:11:18.573Z · LW · GW

Perhaps it was imprudent, but I assumed that someone trying to promote rationality would herself be rational enough to overcome this parochialism bias.

Comment by Grognor on Russian plan for immortality [link] · 2012-08-03T22:41:38.520Z · LW · GW

"Any and all cost" would subsume low probabilities if it were true (which, of course, it is not).

Comment by Grognor on Russian plan for immortality [link] · 2012-08-01T23:00:20.298Z · LW · GW

The poll results are intriguing. 35% would want cybernetic immortality at any and all cost! And yet I don't see 35% of people who can afford it signed up for cryonics.

Comment by Grognor on Open Thread, March 16-31, 2012 · 2012-08-01T22:33:22.451Z · LW · GW

What?

Comment by Grognor on Open Thread, August 1-15, 2012 · 2012-08-01T22:32:27.753Z · LW · GW

A failure or so, in itself, would not matter, if it did not incur a loss of self-esteem and of self-confidence. But just as nothing succeeds like success, so nothing fails like failure. Most people who are ruined are ruined by attempting too much. Therefore, in setting out on the immense enterprise of living fully and comfortably within the narrow limits of twenty-four hours a day, let us avoid at any cost the risk of an early failure. I will not agree that, in this business at any rate, a glorious failure is better than a petty success. I am all for the petty success. A glorious failure leads to nothing; a petty success may lead to a success that is not petty.

-Arnold Bennett, How to Live on 24 Hours a Day

The notion that "If you fail at this you will fail at this forever" is very dangerous to depressed people,

A dangerous truth is still true. Let's not recommend that people try things if a failure will cause a failure cascade!

TDT doesn't say anything useful [...] about entities that change over time

The notion of "change over time" is deeply irrelevant to TDT, hence its name.

Comment by Grognor on August 2012 Media Thread · 2012-08-01T22:26:58.563Z · LW · GW

I personally found the research in Influence rather lacking and thought Cialdini speculated too much. But chapter 3 of the book is dead on.

Comment by Grognor on Open Thread, August 1-15, 2012 · 2012-08-01T15:59:54.419Z · LW · GW

Do people think superrationality, TDT, and UDT are supposed to be usable by humans?

I had always assumed that these things were created as sort of abstract ideals, things you could program an AI to use (I find it no coincidence that all three of these concepts come from AI researchers/theorists to some degree) or something you could compare humans to, but not something that humans can actually use in real life.

But having read the original superrationality essays, I realize that Hofstadter makes no mention of using this in an AI framework and instead thinks about humans using it. And in HPMoR, Eliezer has two eleven-year-old humans using a bare-bones version of TDT to cooperate (I forget the chapter this occurs in), and in the TDT paper, Eliezer still makes no mention of AIs but instead talks about "causal decision theorists" and "evidential decision theorists" as though they were just people walking around with opinions about decision theory, not platonic formalized abstractions of decision theories. (I don't think he uses the phrase "timeless decision theorists".)

I think part of the resistance people have to these decision theories might come from how impossible they are to actually implement in humans. To get superrationality to work in humans, you'd probably have to broadcast it directly into the minds of everyone on the planet, and even then it's uncertain how many defectors would remain. You almost certainly could not get TDT or UDT to work in humans, because the majority of them cannot even understand these theories. I certainly had trouble, and I am not exactly one of the dumbest members of the species, and frankly I'm not even sure I understand them now.

The original question remains. It is not rhetorical. Do people think TDT/UDT/superrationality are supposed to be usable by humans?

(I am aware of this; it is no surprise that a very smart and motivated person can use TDT to cooperate with himself, but I doubt these theories can really be used in practice to get people to cooperate with other people, especially those not of the same tribe.)
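
For what it's worth, the core move of superrationality fits in a toy Python sketch (the payoffs below are the usual illustrative ones, nothing from Hofstadter's essays). It also shows exactly where humans fall down: the mirror-opponent assumption in the middle is the thing people cannot verify about each other.

```python
# Toy sketch of Hofstadter's superrationality in a one-shot symmetric
# Prisoner's Dilemma (illustrative payoffs). The entire trick is the
# symmetry assumption: if the opponent runs the very same decision
# procedure, only the diagonal outcomes (C,C) and (D,D) are reachable.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def superrational_choice():
    # My choice mirrors theirs, so compare only the symmetric outcomes.
    return max(("C", "D"), key=lambda a: PAYOFF[(a, a)])

def causal_choice(their_action):
    # Ordinary best response, holding the opponent's action fixed.
    return max(("C", "D"), key=lambda a: PAYOFF[(a, their_action)])

print(superrational_choice())  # 'C': 3 on the diagonal beats 1
print(causal_choice("C"))      # 'D': defection dominates when choices are independent
```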

Comment by Grognor on [Link] Machiavelli in historical context · 2012-07-31T22:13:46.071Z · LW · GW

An expert on political ruthlessness, not an expert at political ruthlessness!

Comment by Grognor on Open Thread, March 16-31, 2012 · 2012-07-31T06:44:21.878Z · LW · GW

My current guess is that having the knows-the-solution property puts them in a different reference class. But if even a tiny fraction deletes this knowledge...

Comment by Grognor on Mind Projection Fallacy · 2012-07-31T06:00:55.374Z · LW · GW

The part you highlight about shminux's comment is correct, but this part:

this would define "looks attractive to a certain subset of humans"

is wrong; attractiveness is a psychological reaction to things, not a property of the things themselves. Theoretically you could alter the things and still produce the attractiveness response; not to mention the empirical observation that for any given thing, you can find humans attracted to it. Since that part of the comment is wrong but the rest of it is correct, I can't vote on it; the forces cancel out. But anyway, I find that to be a better explanation for its prior downvotation than a cadre of anti-shminux voters.

Mind you I downvoted JohnEPaton's comment because he got all of this wrong.

Comment by Grognor on Open Thread, July 16-31, 2012 · 2012-07-30T02:07:01.025Z · LW · GW

For a while, I assumed that I would never understand UDT. I kept getting confused trying to understand why an agent wouldn't want or need to act on all available information and stuff. I also assumed that this intuition must simply be wrong because Vladimir Goddamned Nesov and Wei Motherfucking Dai created it or whatever and they are both straight upgrades from Grognor.

Yesterday, I saw an exchange involving Mitchell Porter, Vladimir Nesov, and Dmytry_messaging. The last of these insisted that one-boxing in transparent Newcomb's problem (when the box is empty) was irrational, and I downvoted him because of course I knew he was wrong. Today at work (it is a mindless job), I thought for a while about the reply I would have given private_messaging if I did not consider it morally wrong to reply to trolls. I started thinking things like how he either doesn't understand reflective consistency or doesn't understand why it's important, and how if you two-box then Omega predicted correctly, and I also thought,

"Well sure the box is empty, but you can't condition on that fact or else-"

It hit me like a lightning bolt. That's why it's called updateless! That's why you need to... oh man, I get it, I actually get it now!

I think this happened because of how much time I've spent thinking about this stuff and also thanks to just recently having finished reading the TDT paper (which I was surprised to find contained almost solely things I already knew).
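
The epiphany fits in a toy model. Assuming a perfect predictor, the usual payoffs, and the variant where Omega fills the big box iff it predicts unconditional one-boxing (all assumptions of this sketch, not anything from the exchange above), scoring whole policies instead of updating on the box's contents looks like this:

```python
# Toy transparent Newcomb's problem. The updateless move is to score whole
# policies, not actions conditioned on what the agent has already seen.
BIG, SMALL = 1_000_000, 1_000

def payoff(policy):
    # Omega inspects the policy itself: the big box is full iff the policy
    # one-boxes even upon seeing the box empty (one way to make the
    # transparent variant well-posed).
    box_full = policy(True) == "one" and policy(False) == "one"
    choice = policy(box_full)  # the agent acts after seeing the contents
    big = BIG if box_full else 0
    return big if choice == "one" else big + SMALL

always_one_box = lambda box_full: "one"
two_box_if_empty = lambda box_full: "one" if box_full else "two"

print(payoff(always_one_box))    # 1000000: the 'crazy' policy wins ex ante
print(payoff(two_box_if_empty))  # 1000: conditioning on the empty box costs $999,000
```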

Comment by Grognor on Is Politics the Mindkiller? An Inconclusive Test · 2012-07-29T02:35:34.625Z · LW · GW

http://kefkaponders.wordpress.com/2012/05/15/logical-conclusions-to-christianitys-existence-claims/

Comment by Grognor on Verbal Overshadowing and The Art of Rationality · 2012-07-25T05:25:31.100Z · LW · GW

Both of the studies linked at the top of this post, on which the entire post is based, have been discredited. Even if they had held up, I think it was a stretch to go from them to postulating a generalized verbal overshadowing bias.

With the benefit of hindsight I can say that this post was probably a mistake, which leaves me a bit dumbfounded at its karma score of 61 and endorsement by Newsome. When I scrolled down to the bottom I saw that I had already downvoted it, which made me even more confused.

Comment by Grognor on Welcome to Less Wrong! (July 2012) · 2012-07-25T04:58:26.629Z · LW · GW

I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything.

I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god and that that hypothesis wouldn't even be noticeable for a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism.

Also, please don't call what we do here, "rationalism". Call it "rationality".

Comment by Grognor on How to Run a Successful Less Wrong Meetup · 2012-07-25T03:02:47.381Z · LW · GW

On reflection, I agree with you and will be downvoting all of Clippy's comments and those of all other abusive sockpuppets I'm aware of.

Comment by Grognor on Game Theory As A Dark Art · 2012-07-24T22:12:35.765Z · LW · GW

I really wish you had put a disclaimer on these posts, something like:

One of the assumptions The Art of Strategy makes is that rational agents use causal decision theory. This is not actually true, but I'll be using their incorrect use of "rationality" in order to make you uncomfortable.

Anyway,

Nick successfully meta-games the game by transforming it from the Prisoner's Dilemma (where defection is rational) [...]

this is the problem with writing out your whole sequence before submitting even the first post. You make the later posts insufficiently responsive to feedback and make up poor excuses for not changing them.

Edit: Why yes, wedrifid, there was. Fixed.

Comment by Grognor on A Marriage Ceremony for Aspiring Rationalists · 2012-07-24T03:29:04.890Z · LW · GW

I initially had the parent upvoted, but I retracted my vote on learning that the grandparent comment speaks from experience; since I have the same experience, it's difficult not to believe it.

Comment by Grognor on Stupid Questions Open Thread Round 3 · 2012-07-24T00:57:53.062Z · LW · GW

Could someone please explain to me exactly, precisely, what a utility function is? I have seen it called a perfectly well-defined mathematical object as well as not-vague, but as far as I can tell, no one has ever explained what one is, ever.

The words "positive affine transformation" have been used, but they fly over my head. So the For Dummies version, please.

Comment by Grognor on What Is Signaling, Really? · 2012-07-23T07:00:36.984Z · LW · GW
  • The first half of the first sentence of your comment is incomprehensible. "If you throw enough money into the hole that you spend less" is somewhere between gibberish and severely sleep-deprived mumblings.

  • A -2 score means almost nothing. Two people downvoted your comment, so what?

  • Calling the community "pathetic" in response to downvotes is hypocrisy.

  • In all likelihood, the downvoters did not even recognize your "Keynesian slant" and were downvoting because signaling. Oftentimes people will downvote a -1 comment just by the principle of social proof. These are humans, you know.

  • Don't make up theories about the whole population of this website based on two downvotes of a difficult-to-understand comment. Also, you are abusing the term "groupthink", which has a technical meaning that you are not using.

what am I supposed to think?

Anything other than an orthodox-libertarian conspiracy to silence the Keynesians by downvoting small amounts.

Comment by Grognor on Intellectual insularity and productivity · 2012-07-23T06:50:16.435Z · LW · GW

Better yet,

I don't lie.

-Eliezer Yudkowsky

Comment by Grognor on WorldviewNaturalism.com: A "landing page" for scientific naturalism · 2012-07-23T02:06:28.203Z · LW · GW

I believe this term is used solely to countersignal and has no more technical meaning than "guy I don't like who defends females".

Comment by Grognor on Welcome to Less Wrong! (2012) · 2012-07-23T01:41:08.968Z · LW · GW

Hello, friend, and welcome to Less Wrong.

I do think you should start a discussion post, as this seems clearly important to you.

My advice to you at the moment is to brush up on Less Wrong's own atheism sequence. If you find that insufficient, then I suggest reading some of Paul Almond's (and I quote):

great atheology

If you find that insufficient, then it is time for the big guy, Richard Dawkins:

If you are somehow still unsatisfied after all this, lukeprog's new website should direct you to some other resources, of which the internet has plenty, I assure you.

Edit: It seems I interpreted "defend myself" differently from all the other responders. I was thinking you would just say nothing and inwardly remember the well-reasoned arguments for atheism, but that's what I would do, not what a normal person would do. I hope this comment wasn't useless anyway.

Comment by Grognor on What Is Optimal Philanthropy? · 2012-07-22T21:58:43.981Z · LW · GW

It occurred to me that although I agree with Statement #5 - "People are morally responsible for the opportunity costs of their actions," I do not think it is a claim actually being made by the optimal philanthropy zeitgeist. I think the actual claim is "Your actions have opportunity costs and you should probably think about that," which should be uncontroversial.

Comment by Grognor on POSITION: Design and Write Rationality Curriculum · 2012-07-21T20:55:54.093Z · LW · GW

The "rationality" link in the "Bonuses" section has become broken.

Comment by Grognor on Less Wrong on Twitter · 2012-07-20T05:00:25.290Z · LW · GW

After the addition I just made, my list contains 58 items, and yours contains 47. If you feel like updating. (Also, I removed one, because William Eden deleted one of his accounts.)

Comment by Grognor on Singularity Institute - mainstream media exposure in Australia · 2012-07-20T03:01:46.103Z · LW · GW

Comments section makes for interesting reading.

No it doesn't; it is the standard cloud of thought-failure.

Comment by Grognor on The Problem Of Apostasy · 2012-07-19T22:33:03.287Z · LW · GW

Do you see that as a good thing?

Comment by Grognor on The Problem Of Apostasy · 2012-07-19T22:29:06.134Z · LW · GW

So, let's take this hypothetical (harrumph) youth. They see irrationality around them, obvious and immense, they see the waste and the pain it causes. They'd like to do something about it. How would you advise them to go about it?

Donate to CFAR. There's no good reason to demand a local increase in rationality.

[...]should we try to distance ourselves from atheism and anti-religiousness as such? Is this baggage too inconvenient, or is it too much a part of what we stand for?

We don't stand for atheism; we stand by atheism, prepared to walk away at any time should the proper evidence come about. (Of course, it won't.) In any case, I think we should talk about atheism less, because it is preaching to the choir and because the psychological principle of social proof makes people update on "a bunch of rationalists have all decided there's no god!", which is double-counting evidence.
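
A toy calculation shows how bad the double-counting gets; the numbers here are made up purely for illustration:

```python
# N people all update on one shared argument with likelihood ratio LR for
# hypothesis H, then announce their conclusions. Treating the announcements
# as independent applies LR once per person, but only one datum exists.
prior_odds = 1 / 9          # P(H) = 0.1
LR, N = 4, 10

naive_odds = prior_odds * LR ** N   # each believer counted as fresh evidence
correct_odds = prior_odds * LR      # the shared argument counts once

to_prob = lambda odds: odds / (1 + odds)
print(f"naive:   {to_prob(naive_odds):.5f}")   # ~0.99999
print(f"correct: {to_prob(correct_odds):.3f}") # ~0.308
```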

Comment by Grognor on Eliezer apparently wrong about higgs boson · 2012-07-18T02:43:39.004Z · LW · GW

The title of this post tempted me to make another article called "Eliezer apparently right about just about everything else" but I already tried that and it was a bad idea.

Comment by Grognor on Morality open thread · 2012-07-16T16:13:50.394Z · LW · GW

My sense is that this assertion can be empirically falsified for all levels of abstraction below "Do what is right."

Indeed, this is one of many reasons why I am starting to think "go meta" is really, really good advice.

Edit: Clarification, what I mean is that I think virtue ethics, deontology, utilitarianism, and the less popular ethical theories agree way more than their proponents think they do. At this point this is still a guess.

Comment by Grognor on Morality open thread · 2012-07-16T14:51:01.071Z · LW · GW

[...]and related-to-rationality enough to deserve its own thread.

I've gotten to thinking that morality and rationality are very, very isomorphic. The former seems to require the latter, and in my experience the latter gives rise to the former, so they may not even be completely distinguishable. There are lots of commonalities between the two: both are very difficult for humans due to our haphazard makeup; both have imaginary Ideal versions (respectively: God, and the agent who has only true beliefs, optimal decisions, and infinite computing power), which seem to be correlated (though it is hard to say for sure); and the folk versions of both are always wrong. By which I mean: when someone has an axe to grind, he will say it is moral to X, or rational to X, where really X is just what he wants, whether he is in a position of power or not. Related to that, I've got a pet theory that if you take the high values of each literally, they are entirely uncontroversial, and arguments and tribalism only begin when people start making claims about what each implies, but once again I can't be sure at this juncture.

What say ye, Less Wrong?

Comment by Grognor on Irrationality Game II · 2012-07-14T03:09:33.423Z · LW · GW

Irrationality game comment

The correct way to handle Pascal's Mugging and other utilitarian mathematical difficulties is to use a bounded utility function. I'm very metauncertain about this; my actual probability could be anywhere from 10% to 90%. But I guess that my probability is 70% or so.
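
To make "bounded" concrete, here is one illustrative functional form; the exponential cap is an example, not the unique choice:

```latex
% Utility saturates at U_max as the stakes x grow; s sets the scale:
u(x) = U_{\max}\bigl(1 - e^{-x/s}\bigr), \qquad u(x) < U_{\max} \text{ for all } x
% A mugger offering arbitrarily huge stakes with probability p contributes
% at most
p \cdot u(x) < p \cdot U_{\max}
% to expected utility, which vanishes as p does: once utility is bounded,
% no promised payoff can outweigh a sufficiently tiny probability.
```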

Comment by Grognor on What Is Signaling, Really? · 2012-07-13T22:21:29.887Z · LW · GW

Could have hyperlinked it to the article.

Comment by Grognor on WorldviewNaturalism.com: A "landing page" for scientific naturalism · 2012-07-13T12:06:13.317Z · LW · GW

On the People page, the picture next to Richard Carrier's name is the same as the picture next to Richard Boyd's.

and that's because science.

That made me smile, [edit] especially because I had just recently read the Language Log article Because NOUN.

The rest of the video made me kind of uncomfortable, though, because it felt like (and I guess sort of was) an advertisement, and you keep saying "worldview naturalism" where anyone else would have said "the naturalistic worldview" or just "naturalism".

(And this is just a personal thing, but I would have put Hofstadter's GEB and Drescher's Good & Real in the self and free will readings section.)

Overall, cool website.

Comment by Grognor on Useful maxims · 2012-07-13T02:06:57.634Z · LW · GW

Working Mantras

Comment by Grognor on Learn Power Searching with Google · 2012-07-13T01:49:36.373Z · LW · GW

Signing up was a waste of time. This course is for people who don't know anything at all.

Comment by Grognor on Rationality Games & Apps Brainstorming · 2012-07-12T22:54:11.906Z · LW · GW

Much to everyone else's chagrin.

Comment by Grognor on What Is Optimal Philanthropy? · 2012-07-12T11:58:17.378Z · LW · GW

Nope, I'm talking about the subjective "nows" of the humans in question, not their futures. Although a person who isn't particularly rational, has never heard of rationality, and wouldn't feel particularly motivated to become more rational if you mentioned it to him does have a pretty irrational-looking future, and in that case there's no choice to make, no will, only a default path.