I suggest a new rule: the source of the quote should be at least three months old. It's too easy to get excited about the latest blog post that made the rounds on Facebook.
It is because a mirror has no commitment to any image that it can clearly and accurately reflect any image before it. The mind of a warrior is like a mirror in that it has no commitment to any outcome and is free to let form and purpose result on the spot, according to the situation.
—Yagyū Munenori, The Life-Giving Sword
You may find it felicitous to link directly to the tweet.
This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. Went something like this, if I recall:
Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them.
I have become 30% confident that my comments here are a net harm, which is too much to bear and so I am discontinuing my comments here unless someone cares to convince me otherwise.
Edit: Good-bye.
Which is not the same thing as expecting a project to take much less time than it actually will.
Edit: I reveal my ignorance. Mea culpa.
Parts of this I think are brilliant, other parts I think are absolute nonsense. Not sure how I want to vote on this.
there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
This strikes me as probably true but unproven.
My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology.
You are anthropomorphizing the universe.
That isn't the planning fallacy.
This is a better explanation than I could have given for my intuition that physicalism (i.e. "the universe is made out of physics") is a category error.
Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.
Nonsense. The problem posed has always been around, and the solution is just to avoid repeating the same state twice, because that results in a draw.
I like this. I was going to say something like,
"Suppose , what does that say about your solutions designed for real life?" and screw you I hate when people do this and think it is clever. Utility monster is another example of this sort of nonsense.
but you said the same thing, and less rudely, so upvoted.
Perhaps it was imprudent, but I assumed that someone trying to promote rationality would herself be rational enough to overcome this parochialism bias.
"Any and all cost" would subsume low probabilities if it were true (which, of course, it is not).
The poll results are intriguing. 35% would want cybernetic immortality at any and all cost! And yet I don't see 35% of people who can afford it signed up for cryonics.
A failure or so, in itself, would not matter, if it did not incur a loss of self-esteem and of self-confidence. But just as nothing succeeds like success, so nothing fails like failure. Most people who are ruined are ruined by attempting too much. Therefore, in setting out on the immense enterprise of living fully and comfortably within the narrow limits of twenty-four hours a day, let us avoid at any cost the risk of an early failure. I will not agree that, in this business at any rate, a glorious failure is better than a petty success. I am all for the petty success. A glorious failure leads to nothing; a petty success may lead to a success that is not petty.
—Arnold Bennett, How to Live on 24 Hours a Day
The notion that "If you fail at this you will fail at this forever" is very dangerous to depressed people,
A dangerous truth is still true. Let's not recommend people try at things if a failure will cause a failure cascade!
TDT doesn't say anything useful [...] about entities that change over time
The notion of "change over time" is deeply irrelevant to TDT, hence its name.
I personally found the research in Influence rather lacking and thought Cialdini speculated too much. But chapter 3 of the book is dead on.
Do people think superrationality, TDT, and UDT are supposed to be useable by humans?
I had always assumed that these things were created as sort of abstract ideals, things you could program an AI to use (I find it no coincidence that all three of these concepts come from AI researchers/theorists to some degree) or something you could compare humans to, but not something that humans can actually use in real life.
But having read the original superrationality essays, I realize that Hofstadter makes no mention of using this in an AI framework and instead thinks about humans using it. And in HPMoR, Eliezer has two eleven-year-old humans using a bare-bones version of TDT to cooperate (I forget the chapter this occurs in), and in the TDT paper, Eliezer still makes no mention of AIs but instead talks about "causal decision theorists" and "evidential decision theorists" as though they were just people walking around with opinions about decision theory, not the platonic formalized abstraction of decision theories. (I don't think he uses the phrase "timeless decision theorists".)
I think part of the rejection people have to these decision theories might be from how impossible they are to actually implement in humans. To get superrationality to work in humans, you'd probably have to broadcast it directly into the minds of everyone on the planet, and even then it's uncertain how many defectors would remain. You almost certainly could not possibly get TDT or UDT to work in humans because the majority of them cannot even understand them. I certainly had trouble, and I am not exactly one of the dumbest members of the species, and frankly I'm not even sure I understand them now.
The original question remains. It is not rhetorical. Do people think TDT/UDT/superrationality are supposed to be useable by humans?
(I am aware of this; it is no surprise that a very smart and motivated person can use TDT to cooperate with himself, but I doubt they can really be used in practice to get people to cooperate with other people, especially those not of the same tribe.)
An expert on political ruthlessness, not an expert at political ruthlessness!
My current guess is that having the knows-the-solution property puts them in a different reference class. But if even a tiny fraction deletes this knowledge...
The part you highlight about shminux's comment is correct, but this part:
this would define "looks attractive to a certain subset of humans"
is wrong; attractiveness is a psychological reaction to things, not a property of the things themselves. Theoretically you could alter the things and still produce the attractiveness response; not to mention the empirical observation that for any given thing, you can find humans attracted to it. Since that part of the comment is wrong but the rest of it is correct, I can't vote on it; the forces cancel out. But anyway I find that to be a better explanation for its prior downvotation than a cadre of anti-shminux voters.
Mind you, I downvoted JohnEPaton's comment because he got all of this wrong.
For a while, I assumed that I would never understand UDT. I kept getting confused trying to understand why an agent wouldn't want or need to act on all available information and stuff. I also assumed that this intuition must simply be wrong because Vladimir Goddamned Nesov and Wei Motherfucking Dai created it or whatever and they are both straight upgrades from Grognor.
Yesterday, I saw an exchange involving Mitchell Porter, Vladimir Nesov, and Dmytry_messaging. The latter of these insisted that one-boxing in transparent Newcomb's (when the box is empty) was irrational, and I downvoted him because of course I knew he was wrong. Today at work (it is a mindless job), I thought for a while about the reply I would have given private_messaging if I did not consider it morally wrong to reply to trolls, and I started thinking things like how he either doesn't understand reflective consistency or doesn't understand why it's important and how if you two-box then Omega predicted correctly and I also thought,
"Well sure the box is empty, but you can't condition on that fact or else-"
It hit me like a lightning bolt. That's why it's called updateless! That's why you need to- oh man I get it I actually get it now!
I think this happened because of how much time I've spent thinking about this stuff and also thanks to just recently having finished reading the TDT paper (which I was surprised to find contained almost solely things I already knew).
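If it helps anyone else, here is the toy version of what finally clicked for me. This is only a sketch under my own assumptions, not the setup from the actual exchange: the dollar amounts, the `payoff` helper, and the filling rule (a perfect predictor fills the big box iff it predicts you one-box even on seeing it empty) are all made up for illustration.

```python
# Toy sketch of one transparent-Newcomb variant (my own simplification):
# the small box always holds $1,000, both boxes are visible, and a perfect
# predictor fills the big box with $1,000,000 iff it predicts the agent
# one-boxes even when the big box looks empty.

def payoff(policy):
    """policy maps what the agent sees ('full' or 'empty') to 'one-box' or 'two-box'."""
    big_box_full = policy('empty') == 'one-box'  # the predictor's filling rule
    seen = 'full' if big_box_full else 'empty'
    choice = policy(seen)
    big = 1_000_000 if big_box_full else 0
    small = 1_000
    return big + (small if choice == 'two-box' else 0)

updateless = lambda seen: 'one-box'                                 # ignores what it sees
updater = lambda seen: 'two-box' if seen == 'empty' else 'one-box'  # conditions on the empty box

print(payoff(updateless))  # 1000000
print(payoff(updater))     # 1000
```

In this toy model, the agent that conditions on the empty box is exactly the agent that ends up staring at an empty box.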
http://kefkaponders.wordpress.com/2012/05/15/logical-conclusions-to-christianitys-existence-claims/
Both of the studies linked to at the top of this post, on which the entire post is based, have been discredited. Even if they had held up, I think it was a stretch to go from those to postulating a generalized verbal overshadowing bias.
With the benefit of hindsight I can say that this post was probably a mistake, which leaves me a bit dumbfounded at its karma score of 61 and endorsement by Newsome. When I scrolled down to the bottom I saw that I had already downvoted it, which made me even more confused.
I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything.
I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god and that that hypothesis wouldn't even be noticeable for a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism.
Also, please don't call what we do here, "rationalism". Call it "rationality".
On reflection, I agree with you and will be downvoting all of Clippy's comments and those of all other abusive sockpuppets I'm aware of.
I really wish you had put a disclaimer on these posts, something like:
One of the assumptions The Art of Strategy makes is that rational agents use causal decision theory. This is not actually true, but I'll be using their incorrect use of "rationality" in order to make you uncomfortable.
Anyway,
Nick successfully meta-games the game by transforming it from the Prisoner's Dilemma (where defection is rational) [...]
this is the problem with writing out your whole sequence before submitting even the first post. You make the later posts insufficiently responsive to feedback and make up poor excuses for not changing them.
Edit: Why yes, wedrifid, there was. Fixed.
I initially had the parent upvoted, but I retracted it on learning that the grandparent comment is speaking from experience, and since I have the same experience, it's difficult not to believe.
Could someone please explain to me exactly, precisely, what a utility function is? I have seen it called a perfectly well-defined mathematical object as well as not-vague, but as far as I can tell, no one has ever explained what one is, ever.
The words "positive affine transformation" have been used, but they fly over my head. So the For Dummies version, please.
The first half of the first sentence of your comment is incomprehensible. "If you throw enough money into the hole that you spend less" is somewhere between gibberish and severely sleep-deprived mumblings.
A -2 score means almost nothing. Two people downvoted your comment, so what?
Calling the community "pathetic" in response to downvotes is hypocrisy.
In all likelihood, the downvoters did not even recognize your "Keynesian slant" and were downvoting because signaling. Oftentimes people will downvote a -1 comment just by the principle of social proof. These are humans, you know.
Don't make up theories about the whole population of this website based on two downvotes of a difficult-to-understand comment. Also, you are abusing the term "groupthink", which has a technical meaning that you are not using.
what am I supposed to think?
Anything other than an orthodox-libertarian conspiracy to silence the Keynesians by downvoting small amounts.
Better yet,
I don't lie.
I believe this term is used solely to countersignal and has no more technical meaning than "guy I don't like who defends females".
Hello, friend, and welcome to Less Wrong.
I do think you should start a discussion post, as this seems clearly important to you.
My advice to you at the moment is to brush up on Less Wrong's own atheism sequence. If you find that insufficient, then I suggest reading some of Paul Almond's (and I quote):
If you find that insufficient, then it is time for the big guy, Richard Dawkins:
If you are somehow still unsatisfied after all this, lukeprog's new website should direct you to some other resources, of which the internet has plenty, I assure you.
Edit: It seems I interpreted "defend myself" differently from all the other responders. I was thinking you would just say nothing and inwardly remember the well-reasoned arguments for atheism, but that's what I would do, not what a normal person would do. I hope this comment wasn't useless anyway.
It occurred to me that although I agree with Statement #5 - "People are morally responsible for the opportunity costs of their actions," I do not think it is a claim actually being made by the optimal philanthropy zeitgeist. I think the actual claim is "Your actions have opportunity costs and you should probably think about that," which should be uncontroversial.
The "rationality" link in the "Bonuses" section has become broken.
After the addition I just made, my list contains 58 items, and yours contains 47. If you feel like updating. (Also, I removed one, because William Eden deleted one of his accounts.)
Comments section makes for interesting reading.
No it doesn't; it is the standard cloud of thought-failure.
Do you see that as a good thing?
So, let's take this hypothetical (harrumph) youth. They see irrationality around them, obvious and immense, they see the waste and the pain it causes. They'd like to do something about it. How would you advise them to go about it?
Donate to CFAR. There's no good reason to demand a local increase in rationality.
[...]should we try to distance ourselves from atheism and anti-religiousness as such? Is this baggage too inconvenient, or is it too much a part of what we stand for?
We don't stand for atheism; we stand by atheism, prepared to walk away at any time should the proper evidence come about. (Of course, it won't.) In any case I think we should talk about atheism less because it is preaching to the choir and because the psychological principle of social proof makes people update on "a bunch of rationalists have all decided there's no god!" which is double-counting evidence.
The title of this post tempted me to make another article called "Eliezer apparently right about just about everything else" but I already tried that and it was a bad idea.
My sense is that this assertion can be empirically falsified for all levels of abstraction below "Do what is right."
Indeed, this is one of many reasons why I am starting to think "go meta" is really, really good advice.
Edit: To clarify, what I mean is that I think virtue ethics, deontology, utilitarianism, and the less popular ethical theories agree way more than their proponents think they do. At this point this is still a guess.
[...]and related-to-rationality enough to deserve its own thread.
I've gotten to thinking that morality and rationality are very, very isomorphic. The former seems to require the latter, and in my experience the latter gives rise to the former, so they may not even be completely distinguishable. There are lots of commonalities between the two: both are very difficult for humans due to our haphazard makeup; both have imaginary Ideal versions (respectively, God, and the agent with only true beliefs, optimal decisions, and infinite computing power), which seem to be correlated, though it is hard to say for sure; and the folk versions of both are always wrong. By which I mean that when someone has an axe to grind, he will say it is moral to X, or rational to X, where really X is just what he wants, whether he is in a position of power or not. Related to that, I've got a pet theory that if you take the high values of each literally, they are entirely uncontroversial, and arguments and tribalism only begin when people start making claims about what each implies, but once again I can't be sure at this juncture.
What say ye, Less Wrong?
Irrationality game comment
The correct way to handle Pascal's Mugging and other utilitarian mathematical difficulties is to use a bounded utility function. I'm very metauncertain about this; my actual probability could be anywhere from 10% to 90%. But I guess that my probability is 70% or so.
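To spell out why I think a bound would do the work, here is a rough sketch rather than a worked-out theory; the symbols are mine (a utility ceiling B, a tiny probability p that the mugger delivers, a small sure utility cost c of paying, and U_0 for the status quo):

```latex
% Rough sketch under the assumptions above: utility is bounded above by B,
% the mugger delivers with probability p, and paying costs c > 0 for sure.
\mathbb{E}[U \mid \text{pay}] \;\le\; U_0 - c + pB,
\qquad
\mathbb{E}[U \mid \text{refuse}] \;=\; U_0 .
% Paying loses whenever p < c/B, no matter how extravagant the promised payoff,
% because the upside is capped at B instead of growing with the mugger's claims.
```

The point is that the promised reward drops out of the comparison entirely once the bound is in place, so the mugger cannot win just by naming bigger numbers.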
Could have hyperlinked it to the article.
On the People page, the picture next to Richard Carrier's name is the same as the picture next to Richard Boyd's.
and that's because science.
That made me smile, [edit] especially because I had just recently read the Language Log article Because NOUN.
The rest of the video made me kind of uncomfortable, though, because it felt like (and I guess sort of was) an advertisement, and you keep saying "worldview naturalism" where anyone else would have said "the naturalistic worldview" or just "naturalism".
(And this is just a personal thing, but I would have put Hofstadter's GEB and Drescher's Good & Real in the self and free will readings section.)
Overall, cool website.
Signing up was a waste of time. This course is for people who don't know anything at all.
Much to everyone else's chagrin.
Nope, I'm talking about the subjective "nows" of the humans in question, not their futures. Although a person who isn't particularly rational, has never heard of rationality, and wouldn't feel particularly motivated to become more rational if you mentioned it to him, has a pretty irrational-looking future, and in that case there's no choice to make, no will, only a default path.