Posts
Comments
Oh, sorry, neither did I. I'm not trying to accuse Raemon of deliberate brainwashing. But getting together every year to sing songs about, say, existential risk will make people more likely to disregard evidence showing that X-risk is lower than previously thought. Same for every other subject.
Who said anything about mindhacking?
Raemon did. It's a ritual, deliberately styled after religious rituals, some of the most powerful mindhacks known.
As of now, there is no evidence that the average LessWronger is more rational than the average smart, educated person (see the LW poll). Therefore, a lot of LWers thinking something is not any stronger evidence for its truth than any other similarly-sized group of smart, educated people thinking it. Therefore, until we get way better at this, I think we should be humble in our certainty estimates, and not do mindhacky things to cement the beliefs we currently hold.
The line that people tend to quote there is "מנהג ישראל דין הוא" (the custom of Israel is law), but most people have never looked up its formal definition. Its actual halachic bearing is much too narrow to justify (for example) making kids sing Shabbat meal songs.
Correct me if I'm wrong, but it looks like you're talking about anti-deathism (weak or strong) as if it was a defining value of the LessWrong community. This bothers me.
If you're successful, these rituals will become part of the community identity, and I personally would rather LW tried to be about rationality and just that as much as it can. Everything else that correlates with membership - transhumanism, nerdiness, thinking Eliezer is awesome - I would urge you not to include in the rituals. It's inevitable that they'd turn up, but I wouldn't give them extra weight by including them in codified documents.
As an analogy, one of the things that bugged me about Orthodox Judaism was that it claims to be about keeping the Commandments, but there's a huge pile of stuff that's done just for tradition's sake, that isn't commanded anywhere (no, not even in the Oral Lore or by rabbinical decree).
So everyone in the human-superiority crowd gloating about how they're superior to mere machines and formal systems, because they can see that Godel's Statement is true just by their sacred and mysterious mathematical intuition... "...Is actually committing a horrendous logical fallacy [...] though there's a less stupid version of the same argument which invokes second-order logic."
So... not everyone. In Godel, Escher, Bach, Hofstadter presents the second-order explanation of Godel's Incompleteness Theorem, and then goes on to discuss the "human-superiority" crowd. Granted, he doesn't give it much weight - but for reasons that have nothing to do with first- versus second-order logic.
Don't bash a camp just because some of their arguments are bad. Bash them because their strongest argument is bad, or shut up.
(To avoid misunderstanding: I think said camp is in fact wrong.)
I perceive most of signalling as a waste of resources and think that cultivating a community which tried to minimize unnecessary signalling would be good.
Correcting spelling errors doesn't waste many resources. But yeah, the amount of pointless signalling that goes on in the nerd community is kind of worrying.
Why do I do it myself? Force of habit, probably. I was the dumbest person in my peer group throughout high school, so I had to consciously cultivate an image that made me worth their attention, which I craved.
It's kind of saddening that this kind of problem draws my attention much quicker than serious logical problems.
To be fair, they're a hell of a lot easier to notice. Although there's probably a signalling issue involved as well - particular kinds of pedantry are good ways of signalling "nerdiness", and I think most LWers try to cultivate that kind of image.
The founders of Castify are big fans of Less Wrong so they're rolling out their beta with some of our content.
Twitch.
But seriously, this is great. I'm trying to get into the habit of using podcasts and recorded lectures to make better use of my time, especially while travelling.
Stealing?
I took "spiritual" to mean in this context that you don't believe in ontologically basic mental entities, but still embrace feelings of wonder, majesty, euphoria, etc. typically associated with religions when contemplating/experiencing the natural world.
Notice that other people answering my question had different interpretations. I left it blank.
Do you not have a preference for low/high redistribution of wealth because you haven't studied enough economics, or because you have studied economics and haven't found a satisfying answer?
Because I haven't studied economics beyond the Wikipedia level, and systems with large numbers of humans involved are really, really complicated. Why so many democratic citizens feel qualified to intuit their way to an opinion is beyond me.
Two questions, as I take the survey:
- What does "spiritual" mean, in the context of "Atheist [but | and not] spiritual"?
- I genuinely have no idea whether I'd prefer low or high redistribution of wealth. What do I tick for my political opinion?
Depends what you mean by "familiar". I'd imagine anyone reading the essay can do algebra, but that they're still likely to be more comfortable when presented with specific numbers. People are weird like that - we can learn general principles from examples more easily than from having the general principles explained to us explicitly.
Exceptions abound, obviously.
Remove from your life everything you forget; what is left is you.
Can we just agree that English doesn't have a working definition for "self", and that different definitions are helpful in different contexts? I don't think there's anything profound in proposing definitions for words that fuzzy.
I think it does. Can't believe I missed that.
Actually, this fits well with my personal experience. I've frequently found it easier to verbalize sophisticated arguments for the other team, since my own opinions just seem self-evident.
I suspect sheep would be less susceptible to this sort of thing than humans.
The study asked people to rate their position on a 9-point scale. People who took more extreme positions, while more likely to detect the reversal, also gave the strongest arguments in favour of the opposite opinion when they failed to detect the reversal.
Also, the poll had two kinds of questions. Some of them were general moral principles, but some of them were specific statements.
"Easy to communicate to other humans", "easy to understand", or "having few parts".
Am I the only one who thinks we should stop using the word "simple" for Occam's Razor / Solomonoff's Whatever? In 99% of use-cases by actual humans, it doesn't mean Solomonoff induction, so it's confusing.
Don't think you can fuck with people a lot more powerful than you are and get away with it.
I'm no expert, but that seems to be the moral of a lot of Greek myths.
Verbatim from the comic:
It is not God who kills the children. Not fate that butchers them or destiny that feeds them to the dogs. It's us.
Only us.
I personally think that Watchmen is a fantastic study* on all the different ways people react to that realisation.
("Study" in the artistic sense rather than the scientific.)
Now someone just has to write a book entitled "The Rationality of Sisyphus", give it a really pretentious-sounding philosophical blurb, and then fill it with Grand Theft Robot.
Rot13'd for minor spoiling potential: Ur'f n jnet / fxvapunatre.
The chance of human augmentation reaching that level within my lifespan (or even within my someone's-looking-after-my-frozen-brain-span) is, by my estimate, vanishingly low. But if you're so sure, could I borrow money from you and pay you back some ludicrously high amount in a million years' time?
More seriously: Seeing as my current brain finds regret unpleasant, that's something that reduces to my current terminal values anyway. I do consider transhuman-me close enough to current-me that I want it to be happy. But where their terminal values actually differ, I'm not so sure - even if I knew I were going to undergo augmentation.
And you only have one thing to give in return: your life.
Also effort, expertise, and insider information on one of the most powerful Houses around. And magic powers.
Open question: Do you care about what (your current brain predicts) your transhuman self would want?
My brain technically-not-a-lies to me far more than it actually lies to me.
-- Aristosophy (again)
... which one wish, carefully phrased, could also provide.
"Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"
"It's a metaphor for the human struggle."
"I don't see how that changes my point."
- SMBC Comics #2719
Read as:
the auction gains even more money from people who have seen it before [and are nevertheless willing to play again] than it does from naive bidders
I agree (in general) with Xenophon's advice: Calm down, do whatever you're comfortable with spiritually, and in the worst case scenario call it "God" to keep the peace with whoever you want to keep the peace with.
With that said, if you still want advice, I deconverted myself a year ago and have since successfully corrupted others, and I've been wanting to codify the fallacies I saw anyway. Before I start: bear in mind that you might be wrong. I find it very unlikely that any form of Abrahamic theism is true, but if you care about the truth you have to keep an open mind.
Here are some common fallacious arguments and argumentative techniques I've seen used by religion (and other ideologies, of course). They include exercises which I think you'd benefit from practising; if you get stuck on any of 'em, send me a PM and I'll be glad to help out.
- Abuse and Neglect of Definitions
Whenever anyone tries to convince you of the truth or falsehood of some claim, make sure to ask them exactly what that means - and repeat the question until it's totally clear. You'd be amazed how many of the central theological tenets of Abrahamism are literally meaningless, since almost no-one can define them, and among those who can no two will give the same definition.
For example: God created the Universe. Pretty important part of the theology, right? So what does it mean, exactly?
A smart theist will say: God caused the Universe to exist.
Okay, great. What does "cause" mean?
Seriously? You know what "cause" means; it's a word you use all the time.
(This is a classic part of this fallacy. In our own minds we have definitions that work in everyday life, but not for talking about something as abstract as God. In this specific case, the distinction is as follows:)
When I say "X caused Y" (where X and Y are events) I mean: within the laws of nature as I know them Y wouldn't have happened if X hadn't. But God created the Universe outside (or "before") any laws of nature, so what does "cause" mean?
... and I've got no idea what an Abrahamist theist would answer, since I've yet to hear one who could. Although of course I'd love to.
For homework: Play the same game, in your head (I assume your old religious self is still knocking around up there) or with a smart religious friend, on some of the other basic tenets of Abrahamism: God is all-powerful, God is all-knowing, God is (all-)good, God is formless. Similarly with any statement of the form "God loves X", "God wants X", or even "God did X" or "God said X" (how can the Cause of Everything be said to have "said" any statement more than any other?)
- Intellectual Package Deals
Most religious doctrines are composed of a huge number of logically independent statements. In Abrahamic theism, we have the various qualities of God mentioned above, as well as a bunch of moral axioms, beliefs regarding the afterlife, and so on. "Proofs" of the doctrines as a whole will often treat the whole collection as a unit, so they only need to bother proving a small fraction.
For instance: A proof of Judaism one of my teachers was fond of was based on proving the Revelation at Mt Sinai - God made a thundering announcement to six hundred thousand families, announcing Its existence and several commandments (there's a dispute as to how many).
Okay, let's say I accept the proof that the Revelation happened. This points to a very powerful speaker, but does it indicate that the speaker is all-powerful? That it is good? That it is telling the truth when it claims to be the being that brought us out of Egypt? That I am morally obligated to do what it wants?
For homework: Write down as many of the axioms of Christianity as you can think of. Once you have a list, look at the behaviour of practising Christians you know, and try to see if it actually follows from the axioms you've got. Add axioms and repeat. (I did this with a religious friend of mine about Orthodox Judaism, and we got to at least fifteen before we got bored.)
Query your memory, Google, your books, and whichever humans you feel comfortable asking, for proofs of Christianity. Check off which of the axioms on your list they actually address - before you even bother to check the proofs for coherence.
- X is not satisfactorily explained by modern science... therefore God/soul/etc.
(Including the specific cases where X=the existence of the universe, complex life, or consciousness.)
Aside from almost always falling under #2 (and sometimes #1 as well), arguments of this form are mathematically fallacious. To understand why, though, you have to do the maths. You can find it on this site as “Bayes's Rule” and it's well worth reading the full-length articles about it, but the short version is as follows:
We have two competing models, A and B, and an observation E. Then E will constitute evidence for A over B if and only if A predicts E with higher probability than B predicts E – that is, if I were to query an A-believer and a B-believer before I ran the experiment, the former would be more likely to get it right than the latter.
This is easiest to see in cases where the models predict outcomes with very high or low probability. For example: If I ask a believer in Newtonian mechanics whether a rock will keep moving after I throw it (in a vacuum), he'll say “yes” (probability 1). If I ask an Aristotelian physicist, he'll say “no” (probability 0). And lo, the rock did keep moving. Therefore, the Newtonian assigned a higher probability to (what we now know is) the correct outcome than the Aristotelian, so this experiment is evidence for Newtonianism over Aristotelianism.
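The rock example above can be turned into arithmetic. Here is a minimal sketch of the update; the priors and the near-certain likelihoods are illustrative numbers I've chosen, not anything from the original discussion:

```python
# Bayes's rule for two competing models A and B, where B = not-A.

def posterior(prior_a, p_e_given_a, p_e_given_b):
    """P(A | E): probability of model A after observing evidence E."""
    prior_b = 1 - prior_a
    evidence = prior_a * p_e_given_a + prior_b * p_e_given_b
    return prior_a * p_e_given_a / evidence

# Start undecided between Newtonian (A) and Aristotelian (B) mechanics.
# Newton predicts the thrown rock keeps moving (probability ~1);
# Aristotle predicts it stops (~0; 0.01 here to avoid assigning certainty).
p = posterior(prior_a=0.5, p_e_given_a=0.99, p_e_given_b=0.01)
print(round(p, 2))  # → 0.99
```

Because the Newtonian assigned the observed outcome a much higher probability, almost all the probability mass shifts onto Newtonianism after one throw.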
Got that? Then let's take a specifically religious example: as far as I know, modern science does not have a good explanation for the origin of life. We have a vague idea, but our best explanation is based on some pretty astounding coincidences. Religion, on the other hand, has: God created life. There's your explanation.
But translating into maths we get: if atheist science were true, the probability of life arising would be low, since it would take some unlikely coincidences. If theist science (normal laws of physics + God) were true, the probability of life arising would be...
Wait a second. What's the probability of God deciding to create life? We might say we have no idea, since God is inscrutable, in which case the argument obviously can't continue. But the clever apologist might say: God is good, which is to say It wants happiness. Therefore, it must create minds. So the probability of it creating life is actually quite high.
Except that God, being all-powerful, is perfectly capable of making happiness without life – a bunch of super-happy abstract beings like Itself, for example. So what's the probability of It "bothering" to create life? It has no reason not to, having infinite time and energy, but It has an infinite number of courses of action – what's the probability of It picking the specific one we observed happening?
I'm tempted to say that 1/(infinity) = 0, but that's not mathematically sound, so we'll leave it at “I don't know”. Regardless, the point is that arguments of this form fail once you actually look for numbers.
This answer is already long enough to qualify as a post in itself, so I'll leave off here (although there's lots more to talk about). Feel free to ask if I wasn't clear, or once you've finished all the exercises.
OTOOH, the people who do figure it out effectively get more power over choosing the result than people who don't. In most democracies, this would be considered a negative. Not that real-life elections are totally fair either, of course.
How'd they react? Did it work?
The point I should have made clear was that data-entry clerks don't exist outside of corporations, because in isolation they're useless. More generally, mass production has been made possible by the production-line paradigm: break down the undertaking into tiny discrete jobs and assign a bunch of people to doing each one over and over again.
Once you get that kind of framework, exceptionally good workers aren't very helpful, because the people to either side of them in the production line aren't necessarily going to keep up. You just need to shut up and do your job, the same as everyone else.
At the high levels - the people wielding their collective underlings as a tool, rather than the people who are part of that tool - this obviously no longer works.
Important note: all of the above, including my original comment, is 100% pseudo-intellectual wank, since I've never been part of a corporation, never taken a business management course or seminar, and never conducted or read a study on the efficacy of various business practices.
One of the most important social structures of modern society is the corporation - a framework for large groups of people to band together and get absolutely huge projects done. In this framework, the structure itself is more important than individual excellence at most levels. To a lesser extent, the same applies to academia and even "society as a whole".
In that context, I think preferring negative selection to positive makes sense: a genius data-entry clerk is less helpful than an insubordinate data-entry clerk is disruptive.
And remember that we have side routes so real geniuses (of some kinds) can still make it: set up their own company, start their own political party, start publishing their work online, design games in their basement, and so on.
Out of genuine curiosity, how do you know that? I thought you never went to university.
Are these your own estimates, or have you found some objective, accurate test for ranking "Conceptual originality"?
What Bill Maher said was that if a person claims that ~Bite is significant evidence for God, they must admit that Bite is significant evidence for ~God. I'm saying I don't think that's accurate.
The sentiment that one should update on the evidence is obviously great, but I think we should keep an eye on the maths.
Sure. But if I handle snakes to prove they won't bite me because God is real, and they don't bite me -- you do the math.
More seriously, though: the sentiment expressed in the quote is flawed, IMHO. Evidence isn't always symmetrical. Any particular transitional fossil is reasonable evidence for evolution; not finding a particular transitional fossil isn't strong evidence against it. A person perjuring themselves once is strong evidence against their honesty; a person once declining to perjure themselves is not strong evidence in favour of their honesty; et cetera.
I think this might have something to do with the prior, actually: The stronger your prior probability, the less evidence it should take to drastically reduce it.
Edit: Nope, that last conclusion is wrong. Never mind.
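The asymmetry in the fossil example is easy to quantify if you write the strength of evidence as a log likelihood ratio. A minimal sketch with made-up illustrative numbers (nothing here comes from real paleontology):

```python
import math

def bits(p_e_given_h, p_e_given_not_h):
    """Strength of evidence E for hypothesis H, in bits (log2 likelihood ratio)."""
    return math.log2(p_e_given_h / p_e_given_not_h)

# Suppose finding a given transitional fossil is 50x likelier if evolution
# is true, but most organisms never fossilize or get found either way:
p_found_h, p_found_not_h = 0.05, 0.001

print(round(bits(p_found_h, p_found_not_h), 2))  # → 5.64 (strong evidence for H)
print(bits(1 - p_found_h, 1 - p_found_not_h))    # slightly negative: weak evidence against H
```

Finding the fossil is worth several bits in favour; failing to find it is worth only a tiny fraction of a bit against. The two updates are nowhere near symmetrical.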
Scientific theories are judged by the coherence they lend to our natural experience and the simplicity with which they do so.
Eloquent!
The grand principle of the heavens balances on the razor's edge of truth.
physical contact? karaoke? the outdoors? What does that have to do with rationality?
It's not really any of my business, since I'm not a New Yorker, but I'd be inclined to ask the same question. I understand that you're trying to build a community... I just have no idea why.
I've never heard the word "simple" used in game-theoretic context either. It just seemed that word was better suited to describe a [do x] strategy than a [do x with probability p and y with probability (1-p)] strategy.
If the word "remember" is bothering you, I've found people tend to be more receptive to explanations if you pretend you're reminding them of something they knew already. And the definition of a Nash equilibrium was in the main post.
No simple Nash equilibrium. Both players adopting the mixed (coin-flipping) strategy is the Nash equilibrium in this case. Remember: a Nash equilibrium isn't a specific choice-per-player, but a specific strategy-per-player.
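The claim is easy to check numerically. A minimal sketch, assuming a matching-pennies-style payoff matrix (the actual game from the post isn't reproduced here, so the payoffs are an assumption):

```python
# Matching pennies: row player wins +1 if the choices match, -1 otherwise.
payoff_row = {("H", "H"): 1, ("H", "T"): -1,
              ("T", "H"): -1, ("T", "T"): 1}

def expected_payoff(row_move, col_mix):
    """Row player's expected payoff for a pure move against a mixed column strategy."""
    return sum(p * payoff_row[(row_move, col)] for col, p in col_mix.items())

coin_flip = {"H": 0.5, "T": 0.5}

# Against a coin-flipping opponent, both pure moves pay the same,
# so no pure deviation beats flipping a coin yourself: the pair of
# mixed strategies is a Nash equilibrium even though no pure one exists.
print(expected_payoff("H", coin_flip), expected_payoff("T", coin_flip))  # → 0.0 0.0
```

This is exactly the sense in which the equilibrium is a strategy-per-player rather than a choice-per-player: the equilibrium object is the 50/50 randomization itself.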
Eliezer's explanation hinges on the MWI being correct, which I understand is currently the minority opinion. Are we to understand that you're with the minority on this one?
That's really interesting. Thanks for the education.
I've never read Marx, but I don't think Plato's Republic would match most modern definitions of "democracy"; it was made up of predefined castes ruled by an elite minority.
And stay there, except for occasional digressions.
In other words, assuming I understand the claim: as time approaches infinity, so the probability of a randomly selected country being democratic approaches 1.
In this context, it would mean that those countries that aren't currently democratic will almost certainly adopt democracy at some point in the future.
Quick poll: Who here has actually met someone who thinks democracy arises inevitably from human nature?
3) I allot a reasonable-seeming amount of time to think before deciding to drastically change something important. The logic is that the argument isn't evidence in itself - the evidence is the fact that the argument exists, and that you're not aware of any flaws in it. If you haven't thought about it for a while, the probability of having found flaws is low whether or not those flaws exist - so not having found them yet is only weak evidence against your current position.
So basically, "Before you've had time to consider them".