Comments
Maybe I am amoral, but I don't value myself the same as a random person, even in a theoretical sense. What I do is recognize that in some sense I am no more valuable to humanity than any other person. But I am far more valuable to me: if I die, that brings my utility to 0, and while it can be negative in some circumstances (i.e. a life not worth living), some random person's death clearly cannot do that. People are constantly dying in huge numbers, and the cost of each death to me is non-zero but must be relatively small, or I would easily be in negative territory, and I am not.
That's interesting, but how much money is needed to solve "most of the world's current problems"?
To forestall an objection: I think investing with a goal of improving the world, as opposed to maximizing income, is basically the same as giving, so it falls under the category of how to spend, not how much money to allocate. If you were investing rather than giving and had income from it, you'd simply allocate it back into the same category.
That's a very useful point. I do have employer match and it is likely to be an inflection point for effectiveness of any money I give.
I apologize for being unclear in my description. At the moment, after all my bills I have money left over. This implicitly goes toward retirement. So it wouldn't be slighting my family to give some more to charity. I also have enough saved to semi-retire today (e.g. if I chose to move to a cheap area I could live like a lower-middle class person on my savings alone), and my regular 401K contributions (assuming I don't retire) would mean that I'll have plenty of income if I retire at 65 or so.
I was hoping that answering "How did you decide how much of your income to give to charity?" is obviously one way of answering my original question, and so some people would answer that. But you may be right that it's too ambiguous.
I don't mean that I have one that's superior to anyone else's, but there are tools to deal with this problem, various numbers that indicate risk, waste level, impact, etc. I can also decide what areas to give in based on personal preferences/biases.
This thread is interesting, but off-topic. There is lots of useful discussion on the most effective ways to give, but that wasn't my question.
I see what you mean now, I think. I don't have a good model of dealing with a situation where someone can influence the actual updating process either. I was always thinking of a setup where the sorcerer affects something other than this.
By the way, I remember reading a book which had a game-theoretical analysis of games where one side had god-like powers (omniscience, etc), but I don't remember what it was called. Does anyone reading this by any chance know which book I mean?
For this experiment, I don't want to get involved in the social aspect of this. Suppose they aren't aware of each other, or it's very impolite to talk about sorcerers, or whatever. I am curious about their individual minds, and about an outside observer that can observe both (i.e. me).
How about this: Bob has a sort of "sorcerous experience", which is kind of like an epiphany. I don't want to go off to Zombie-land with this, but let's say it could be caused by his brain doing its mysterious thing, or by a sorcerer. Does that still count as "moving things around in the world"?
I am not certain that it's the same A. Suppose I say to you: here's a book that proves that P=NP. You go and read it, and it's full of math, and you can't fully process it. Later, you come back and read it again, and this time you are actually able to fully comprehend it. Even later you come back again, and not only comprehend it but are able to prove some new facts, using no external sources, just your mind. Those are not all the same "A". So, you may have some evidence for/against a sorcerer, but you are not able to accurately estimate the probability. After some reflection, you derive new facts, and then update again. Upon further reflection, you derive more facts, and update. Why should this process stop?
It's not that different from saying "I believe it will rain tomorrow, and the fact that I believe that is evidence that it will rain tomorrow, so I'll increase my degree of belief. But wait, that makes the evidence even stronger!".
This is completely different. My belief about the rain tomorrow is in no way evidence for actual rain tomorrow, as you point out - it's already factored in. Tomorrow's rain is in no way able to affect my beliefs, whereas a sorcerer can, even without mind tampering. He can, for instance, manufacture evidence so as to mislead me, and if he is sufficiently clever, I'll be misled. But I am also aware that my belief state about sorcerers is not as reliable because of possible tampering.
Here, by me, I mean a person living in Faerie, not "me" as in the original post.
That's a very interesting analysis. I think you are taking the point of view that sorcerers are rational, or that they are optimizing solely for proving or disproving their existence. That wasn't my assumption. Sorcerers are mysterious, so people can't expect their cooperation in an experiment designed for this purpose. Even under your assumption you can never distinguish between Bright and Dark existing: they could behave identically, to convince you that Bright exists. Dark would sort the deck whenever you query for Bright, for instance.
The way I was thinking about it is that you have other beliefs about sorcerers and your evidence for their existence is primarily established based on other grounds (e.g. see my comment about kittens in another thread). Then Bob and Daisy take into account the fact that Bright and Dark have these additional peculiar preferences for people's belief in them.
I don't think I completely follow everything you say, but let's take a concrete case. Suppose I believe that Dark is extremely powerful and clever and wishes to convince me he doesn't exist. I think you can conclude from this that if I believe he exists, he can't possibly exist (because he'd find a way to convince me otherwise), so I conclude he can't exist (or at least the probability is very low). Now I've convinced myself he doesn't exist. But maybe that's how he operates! So I have new evidence that he does in fact exist. I think there's some sort of paradox in this situation. You can't say that this evidence is screened off, since I haven't considered the result of my reasoning until I have arrived at it. It seems to me that your belief oscillates between 2 numbers, or else your updates get smaller and you converge to some number in between.
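To make the "oscillate or converge" picture concrete, here is a toy iteration (my own simplification with made-up numbers, not a proper Bayesian model): each round you move some fraction of the way toward the opposite conclusion, since a high belief in Dark is itself a reason to doubt him and a low belief is itself a reason to suspect him. A decreasing update rule like this either locks into a two-value cycle or damps down to a fixed point.

```python
# Toy self-referential update for the Dark example (made-up dynamics, illustration only).
def iterate(belief, step, rounds=12):
    history = [round(belief, 3)]
    for _ in range(rounds):
        belief = belief + step * ((1.0 - belief) - belief)  # move `step` of the way toward 1 - belief
        history.append(round(belief, 3))
    return history

print(iterate(0.9, step=1.0))  # full reversal each round: oscillates between 0.9 and 0.1
print(iterate(0.9, step=0.6))  # partial updates: the swings shrink and converge to 0.5
```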
I am not assuming they are Bayesians necessarily, but I think it's fine to consider that case too. Let's suppose that Bob finds that whenever he calls upon Bright for help (in his head, so nobody can observe this), he gets an unexpectedly high success rate in whatever he tries. Let's further suppose that it's believed that Dark hates kittens (and this matters more to him than hiding his existence), and that Daisy, Faerie's chief veterinarian, is aware of a number of mysterious deaths of kittens that she can't rationally explain. She is afraid to discuss this with anyone, so it's private. For numeric probabilities you can take, say, 0.7 for each.
Thanks. I am of course assuming they lack common knowledge. I understand what you are saying, but I am interested in a qualitative answer (for #2): does the fact they have updated their knowledge according to this meta-reasoning process affect my own update of the evidence, or not?
If you don't have a bachelor's degree, that makes it rather unlikely that you could get a PhD. I agree with folks that you shouldn't bother - if you are right, you'll get your honorary degrees and Nobel prizes, and if not, then not. (I know I am replying to a five-year-old comment).
I also think you are too quick to dismiss the point of getting these degrees, since you in fact have no experience in what that involves.
That's the standard scientific point of view, certainly. But would an Orthodox Bayesian agree?:) Isn't there a very strong prior?
if cognitive biases/sociology provide a substantial portion of or even all of the explanation for creationists talking about irreducible organs, then their actual counterarguments are screened off by your prior knowledge of what causes them to deploy those counterarguments; you should be less inclined to consider their arguments than a random string generator that happened to output a sentence that reads as a counterargument against natural selection.
I've just discovered Argument Screens Off Authority by EY, so it seems I've got an authority on my side too:) You can't dismiss an argument just because it's presented by untrustworthy people.
It only goes to show how we are all susceptible to the power of stories, rather than able to examine them dispassionately, as a rationalist presumably should.
As the person who asked the question, I'd like to say that I don't particularly care about what creationists believe either.
Well then, you should want to be powerful yourself - so certainly go and exploit society:)
The powerful are really not paying for it, and if they are, it's complete peanuts to them. If you are screwing anyone over by so-called leeching, it's the middle class:) You are not bad to "them"; they don't care about you one way or another.
I am rich and powerful (compared to you, at least), and I hereby command you to do it:)
I really don't see what your usefulness/uselessness to powerful people has to do with you being bad. I can't even imagine what premises you are relying on for such a statement.
I think it had better be true that both of these are falsifiable (and they both are). I agree that the former is overwhelmingly likely and no one I'd care to talk to disputes it. In any event, I am only talking about the latter. The claim that it completely explains the variety of life on Earth is the very thing I am accepting on faith, and that's what I don't like.
Essentially none. I have a lot of evidence of science being right (at least as far as I can reasonably tell) in some other subject areas such as parts of physics, chemistry, cognitive science, etc.
I've read some FAQs on both, but that doesn't count as verification. I suppose I can look at the map of South America and Africa and see that the coastlines roughly match, which is some evidence for plate tectonics. Also, as I mentioned in reply to other comments, it seems correct that with genetics being right (which I strongly believe), natural selection would certainly work to cause some species to change. I think even creationists nowadays are forced to agree with this.
I think you are interpreting my comments with too much emphasis on the specific examples I give. Sure, the Earth being 1 million years old is unlikely, but there could be some equally embarrassing artifact or piece of contradictory evidence. I can't give a realistic example because I haven't studied the problem - that's my whole point. You seem to be saying that the Theory of Evolution is unfalsifiable, at least in practice. That would be a bad thing, not a good thing. Besides, surely, if someone ran cryptological analysis software on the DNA of E. coli and got back "(C) Microsoft Corp.", that would rather undermine the theory?:)
In actuality, for me it comes down to trust: I expect that if there were important contradictory evidence, someone would report it. Creationists think that biologists are all in on a conspiracy to hide the truth and would not change their minds if they saw such evidence - that is rather unlikely from my point of view. That is to say, like you, I am not spending a lot of time evaluating the underlying facts, because I think one side is reliable and the other is not. But it feels wrong to me to ignore evidence because of who presents it. I understand your argument that you expect some evidence to be presented by them and that this makes it unnecessary to examine it, but I think you are wrong. You do have to examine it in case it turns out that their evidence in fact overwhelms your prior. They could be right in a specific case even if it's unlikely. Even a stopped clock is right twice a day.
I agree that some evolution would certainly follow from your premises (1) and (2). But imagine that we also had independent evidence that the Earth is 1 million years old. In that case, I'd be forced to say that the Theory of Evolution can't account for the variety of life we observe, given mutation rates, etc. This is the sort of thing I am worried about when I say I haven't looked at the evidence. As far as I know there isn't any contradictory evidence of this sort, but there may be specific challenges that aren't well explained. Creationists like to cite irreducible organs - organs that supposedly could not have evolved from anything that conferred an evolutionary advantage - and claim that these exist and are contrary to the theory. I know about this objection, but it would be a lot of work to truly evaluate it in depth.
As far as having an alternative: this isn't necessary. I'd be reluctant to go with "God did it", so I'd be fine with "the theory explains 95% of the evidence, and about the other 5% we don't know yet, and we have no better theory".
That's very useful, actually. I think I have a tendency to just accept the latest medical theory/practice as being the best guess that the most qualified people made with the current state of evidence. Which may be really suboptimal if they don't have a lot of evidence for it, and perhaps it should be independently examined if it concerns you personally. I am not sure what degree of belief to assign such things, though, because I have no experience with them.
Do you, or anyone, have an idea of how trustworthy such things generally are, in the modern age? Are there statistics about how often mainstream approaches are later proven to be harmful (and how often merely suboptimal)?
If you'd deferred to the leading authorities over the past 100 years, you would have been an introspectionist, then a behaviourist, then a cognitive scientist and now you'd probably be a cognitive neuroscientist.
I think you are right, but is it so bad? If I had been living at the time of the introspectionists, would there have been a better alternative for me? I suspect that unless I personally worked out some other theory (unlikely), I'd have had to take either that one or something equally bad. Maybe it's slightly different around the boundaries of these paradigm shifts, where I could possibly have adopted the new ideas before the mainstream did, but most of the time that wouldn't happen. I am far from confident that I'd do a better job personally than the general consensus, even if that consensus tends to be very conservative.
What you are talking about is a lay sense of evolution. Sure, things change, and the better-adapted thing should survive with higher frequency; this much is obvious even to creationists. It is also obvious to me (as it was to Aristotle) that things which are in motion tend to come to rest. Turns out, it's not really true. Just because a theory is intuitive doesn't mean that's how the world really works. You only need to think about heliocentrism, let alone something like quantum physics.
One problem that Darwin had was the lack of a mechanism for evolution (i.e. genetics). If I had been alive at the time he wrote his books, I would have liked his theory, but I would have been forced to acknowledge that it did not truly explain how the world works. I am told that this is all solved now, but I have been taking that largely on faith.
It also may be that the theory is wrong, but there is no better theory to replace it, a la physics in 1900. If that were the case, I'd like to know that too.
I've been thinking about this sort of thing as well. There are lots of books published by creationists, and I am sure they are quite compelling (I haven't actually read those either); otherwise they wouldn't write them. Essentially, reading someone's summary is again putting yourself into the hands of whoever wrote it. If they have an agenda, you'll likely end up believing it. So, really, you need to read both sides, compare their arguments, etc. Lots of work.
I don't think I can have "knowledge" in Science. It's done by humans, therefore it makes errors. For any given proposition, if I examine the evidence and find it compelling, sure. But my whole point is whether I can rely on it without specifically examining it.
That's a good point. I suppose it has no practical implications for me, except that I'd like to have an accurate model of how the Universe works. Although if I were a young-earth creationist, it would have mattered a lot.
But let's take global warming. That one does matter in a practical sense.
I am not sure I got that. Is "the question I am asking now" referring to a theory whose truthfulness I am evaluating, and "the ones asked in the past" to the ones whose truthfulness I have verified? It's confusing because chronologically it's the other way around: most of these theories are old and have been accepted by me on faith since my school days, and I could only verify a few of them as I grew older.
Thanks, that was interesting, although it didn't specifically address my question.
I think the whole experience is also interesting on a meta-level. Since programming is essentially the same as logical reasoning, it goes to show that humans are very nearly incapable of creating long chains of reasoning without making mistakes, often extremely subtle ones. Sometimes finding them provides insight (especially in multi-threaded code or with memory manipulation), although most often it's just you failing to pay attention.
I know this is not your main topic, but are you familiar with Good-Turing estimation? It's a way of assigning non-arbitrary probability to unobserved events.
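For anyone unfamiliar with it, here is a minimal sketch of the basic Good-Turing estimate (the simple, unsmoothed version; practical implementations also smooth the frequency-of-frequency counts): the probability mass reserved for never-before-seen events is estimated from the number of events seen exactly once.

```python
from collections import Counter

def good_turing_unseen_mass(observations):
    """Estimate P(the next observation is a species never seen before) as N1 / N."""
    counts = Counter(observations)                  # how often each species appeared
    n1 = sum(1 for c in counts.values() if c == 1)  # number of species seen exactly once
    return n1 / len(observations)

sample = ["cat"] * 5 + ["dog"] * 3 + ["owl", "fox"]  # two singletons out of 10 observations
print(good_turing_unseen_mass(sample))               # 0.2
```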
He probably is an INTP, although it's too early to tell. I am too. That doesn't really answer the question:)
Since we are on the subject of quotes, here's one from C.S. Lewis, of whom I am not generally a fan, but this is something that struck me when I read it for the first time:
“Oh, Piebald, Piebald,” she said, still laughing. “How often the people of your race speak!”
“I’m sorry,” said Ransom, a little put out. “What are you sorry for?”
“I am sorry if you think I talk too much.”
“Too much? How can I tell what would be too much for you to talk?”
“In our world when they say a man talks much they mean they wish him to be silent.”
“If that is what they mean, why do they not say it?”
“What made you laugh?” asked Ransom, finding her question too hard.
That specific thing is not a human universal. But the general behavior is, as far as I know. There are always little lies one is supposed to tell, e.g. "no, that woman is not as beautiful as you", "he looks just like his dad", "nice to meet you", "please come again" (but I'll never invite you). In Russian, in particular, the very act of greeting is often a lie, since the word literally means "be healthy", and there is effectively no way to "greet" an enemy without wishing him well.
I am in fact not planning to interfere for now.
I don't disagree necessarily, but this is way too subtle for a kid, so it's not a practical answer.
Besides, as a semi-professional linguist, I must say you are confusing semantics (e.g. your boxes example) with pragmatics, which is what we are talking about here: using words to mean something other than what the dictionary plus propositional logic say they mean. Pragmatics is often very confusing because it relies on cultural context, and both kids and foreigners often screw up when dealing with it.
Well, it's one thing not to give details and another to misreport. Even now, as an adult, I say "I am OK" when I mean "things suck", and "I am great" when things are OK. I just shift them by a degree in the positive direction. Now, if he is unhappy, should he say "I am fine"? If he is not fine, he is lying.
I am not sure I completely follow, but I think the point is that you will in fact update the probability upward if a new argument is more convincing than you expected. Since the AI can estimate what you expect from it better than you can estimate how convincing it will make the arguments, it will be able to make all its arguments more convincing than you expect.
I am not convinced that 1984-style persuasion really works. I don't think that one can really be persuaded to genuinely believe something by fear or torture. In the end you can get someone to respond as if they believe it, but probably not to actually do so. It might, however, convince them to undergo something like what my experiment actually describes.
There is some degree to which you should expect to be swayed by empty arguments, and yes, you should subtract that out if you anticipate it.
Right. I think my argument hinges on the fact that the AI knows how much you intend to subtract before you read the book, and can make the book more convincing than that amount.
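(For reference, the "subtract it out" idea is usually formalized as conservation of expected evidence; stating it here is my addition, not something from the thread: P(Z) = sum over possible books e of P(e) * P(Z | e), i.e. your current belief already equals your expected post-book belief. So if you were certain in advance that the book would leave you with a high posterior, your prior would have to be high already; the disagreement above is about whether a bounded reader can actually carry out that subtraction against a much smarter author.)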
So the person in the thought experiment doesn't expect to agree with a book's conclusion before reading it.
No, he expects that if he reads the book, his posterior belief in the proposition will likely be high. But his current prior belief in the truth of the proposition is low.
Also, as I made clear in my update, the AI is not perfect, merely very good. I only need it to be good enough for the whole episode to go through, i.e. good enough that you can't argue that a rational person would never believe Z after reading the book, making my story implausible.
I understand the principle, yes. But it means that if your friend is a liar, no argument he gives needs to be examined on its own merits. But what if he is a liar and he saw a UFO? What if P(he is a liar) and P(there's a UFO) are not independent? I think if they are independent, your argument works. If they are not, it doesn't. If UFOs appear mostly to liars, you can't ignore his evidence. Do you agree? In my case, they are not independent: it's easier to argue for a true proposition, even for a very intelligent AI. Here I assume that P must always be strictly less than 1.
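To make the independence question concrete, here is a tiny enumeration sketch (all numbers invented) that just applies Bayes over the joint (liar, UFO) states for whatever prior and report model you assume, so you can check directly how much the friend's report should move P(UFO):

```python
# Hypothetical numbers throughout; the point is only the mechanics of the update.
def bayes_update(joint, likelihood):
    """Update a joint prior {state: p} by a report likelihood {state: P(report | state)}."""
    unnorm = {s: joint[s] * likelihood[s] for s in joint}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def p_ufo(dist):
    return sum(p for (liar, ufo), p in dist.items() if ufo)

# A joint prior in which UFOs appear mostly to liars (so the two are not independent).
prior = {(True, True): 0.09, (True, False): 0.41,
         (False, True): 0.01, (False, False): 0.49}

# Report model: a liar claims a sighting about half the time regardless of the truth,
# but is assumed slightly more likely to claim one when a UFO really appeared.
report = {(True, True): 0.6, (True, False): 0.5,
          (False, True): 0.9, (False, False): 0.01}

print("P(UFO) before the report:", p_ufo(prior))                                   # 0.10
print("P(UFO) after the report: ", round(p_ufo(bayes_update(prior, report)), 3))   # ~0.231
```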
We are running into meta issues that are really hard to wrap your head around. You believe that the book is likely to convince you, but it's not absolutely guaranteed to. Whether it does surely depends on the actual arguments used. You'd expect, a priori, that if it argues for an X which is more likely, its arguments would also be more convincing. But until you actually see the arguments, you don't know whether they will convince you; it depends on what they actually are. In your formulation, what happens if you read the book and the arguments do not convince you? Also, what if the arguments fail to convince you only because you expected the book to be extremely convincing - is this different from the case where the arguments, taken without this meta-knowledge, fail to convince you?