What about all the angst people had over things like irrational numbers, infinitesimals, non-smooth functions, infinite cardinalities, and non-Euclidean geometries?
I think what you're saying about needing some way to change our minds is a good point though. And I certainly wouldn't say that every single object-level belief I hold is more secure than every meta belief. I'll even grant you that for certain decisions, like how to set public health policy, some sort of QALY-based shut up and calculate approach is the right way to go.
But I don't think that's the way to change our minds about something like how we deal with homosexuality, either on a descriptive or a normative level. Nobody read Bentham and said, "You know what, guys, I don't think being gay actually costs any utils! I guess it's fine." And if they did, it would have been bad moral epistemology.

If you put yourself in the mind of an average Victorian, "don't be gay" sits very securely in your web of belief. It's bolstered by what you think about virtue, religion, deontology, and even health. And what you think about those things is more or less consistent with and confirmed by what you think about everything else. It's like moral-epistemic PageRank. The "don't be gay" node has strongly weighted edges from the strongest cluster of nodes in your belief system, and they all point at each other. Compared to those nodes, meta-level stuff like utilitarianism is in a distant and unimportant backwater region of the graph. If anything, an arrow from utilitarianism to "being gay is OK" looks to you like a reason not to take utilitarianism too seriously.

In order for you to change your mind about homosexuality, you need to change your mind about everything. You need to move all that moral PageRank to totally different regions of the graph. And picking a meta theory to rule them all and assigning it a massive weight seems like a crazy, reckless way to do that. If you're doing that, you're basically saying you prioritize meta-ethical consistency over all the object-level things you actually care about. It seems to me the only sane way to update is to slowly alter the object-level stuff as you learn new facts or discover inconsistencies in what you value, and try to maintain as much reflective consistency as you can while you do it.
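For concreteness, here's a toy numerical sketch of that PageRank picture. The graph, node names, and edge weights below are entirely made up for illustration; the only point is that a densely interlinked cluster of object-level beliefs soaks up nearly all the rank, while a peripheral meta-level node barely registers.

```python
# Toy "moral-epistemic PageRank" over a hypothetical belief graph (invented for illustration).
nodes = ["virtue", "religion", "deontology", "health", "dont_be_gay", "utilitarianism"]
links = {
    "virtue":         ["religion", "deontology", "health", "dont_be_gay"],
    "religion":       ["virtue", "deontology", "dont_be_gay"],
    "deontology":     ["virtue", "religion", "dont_be_gay"],
    "health":         ["virtue", "dont_be_gay"],
    "dont_be_gay":    ["virtue", "religion", "deontology"],
    "utilitarianism": ["dont_be_gay"],  # a lone arrow in from the periphery
}

damping = 0.85
rank = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(50):  # plain power iteration
    new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
    for src, targets in links.items():
        for dst in targets:
            new_rank[dst] += damping * rank[src] / len(targets)
    rank = new_rank

for name, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {score:.3f}")
# The interlinked cluster dominates; "utilitarianism", with no incoming edges,
# ends up dead last. Updating it barely moves the rest of the graph.
```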
PS: I guess I kind of made it sound like I believe the Whig theory of moral history, where modern Western values are the clear and true scion of Victorian values, and if we could just tell them what we know and walk them through the arguments, we could convince the Victorians that we were right, even by their own standards. I'm undecided on that, and I'll admit it might be the case that we just fundamentally disagree on values, and that "moral progress" is a random walk. Or not. Or it's a mix. I have no idea.
I think you've pretty much stated the exact opposite of my own moral-epistemological worldview.
I don't like the analogy with physics. Physical theories get tested against external reality in a way that makes them fundamentally different from ethical theories.
If you want to analogize between ethics and science, I want to compare it to the foundations of mathematics. So utilitarianism isn't relativity, it's ZFC. Even though ZFC proves PA is a consistent and true theory of the natural numbers, it's a huge mistake for a human to base their trust in PA on that!
There is almost no argument or evidence that can convince me to put more trust in ZFC than I do in PA. I don't think I'm wrong.
I trust low-energy moral conclusions more than I will ever trust abstract metaethical foundational theories. I think it is a mistake to look for low-complexity foundations and reason from them. I think the best we can do is seek reflective equilibrium.
Now, that being said, I don't think it's wrong to study abstract metaethical theories, to ask what their consequences are, and even to believe them a little bit. The analogy with math still holds here. We study the heck out of ZFC. We even believe it more than a little at this point. But we don't believe it more than we believe the intermediate value theorem.
PS: I also don't think "shut up and calculate" is something you can actually do under utilitarianism, because there are good utilitarian arguments for obeying deontological rules and being virtuous, and pretty much every ethical debate that anyone has ever had can be rephrased as a debate about what terms should go in the utility function and what the most effective way to maximize it is.
I haven't. I'll see if I can show up for the next one.
This was also the part of Dalliard's critique I found most convincing. Shalizi's argument seems to be a refutation of a straw man.
One thing Dalliard mentions is that the 'g' factors derived from different studies are 'statistically indistinguishable'. What's the technical content of this statement?
Thanks for the link.
Not that I feel particularly qualified to judge, but I'd say Dalliard has a way better argument. I wonder if Shalizi has written a response.
Wow, that's a neat service.
It looks like we may have enough people interested in Probability Theory, though I doubt we all live in the same city. I live near DC.
Depending on how many people are interested/where they live, it might make sense to meet over video chat instead.
I'm 32.
So you are assuming that it will want to prove the soundness of any successors? Even though it can't even prove the soundness of itself? But it can believe in its own soundness in a Bayesian sense without being able to prove it. There is not (as far as I know) any Gödelian obstacle to that. I guess that was your point in the first place.
OK, forget about F for a second. Isn't the huge difficulty finding the right deductions to make, not formalizing them and verifying them?
This is all nifty and interesting, as mathematics, but I feel like you are probably barking up the wrong tree when it comes to applying this stuff to AI. I say this for a couple of reasons:
First, ZFC itself is already comically overpowered. Have you read about reverse mathematics? Stephen Simpson edited a good book on the topic. Anyway, my point is that there's a whole spectrum of systems a lot weaker than ZFC that are sufficient for a large fraction of theorems, and probably for all the reasoning you would ever need to do physics or make real-world, actionable decisions. The idea that physics could depend on reasoning of a higher consistency strength than ZFC just feels really wrong to me. Like the idea that P could really equal NP. Of course my gut feeling isn't evidence, but I'm interested in the question of why we disagree. Why do you think these considerations are likely to be important?
Second, isn't the whole topic of formal reasoning a bikeshed? Isn't the real danger that you will formalize the goal function wrong, not that the deductions will be invalid?
I don't think you've chosen your examples particularly well.
Abortion certainly can be a 'central' case of murder. Imagine aborting a fetus ten minutes before it would have been born. It can also be totally 'noncentral': the morning-after pill. Abortions are a grey area of central murder, depending on how far the fetus's neural development has progressed.
Affirmative action really IS a central case of racism. It's bad for the same reason segregation was bad: it's not fair to judge people based on their race. The only difference is that it's not nearly AS bad. Segregation was brutal and oppressive, while affirmative action doesn't really affect most people enough for them to notice.
What do you think you're adding to the discussion by trotting out this sort of pedantic literalism?
Unless someone explicitly says they know something with absolute 100% mathematical certainty, why don't you just use your common sense and figure that when they say they "know" something, they mean they assign it a very high probability and believe they have epistemologically sound reasons for doing so?
"Trust your intuitions, but don't waste too much time arguing for them"
This is an excellent point. Intuition plays an absolutely crucial role in human thought, but there's no point in debating an opinion when (by definition, even) you're incapable of verbalizing your reasons for holding it. Let me suggest another maxim:
Intuitions tell you where to look, not what you'll find.
Wait, so are you agreeing with me or disagreeing?
What makes you think Hitler didn't deliberately think about how to yell at crowds?
You're confusing "reason" with inappropriate confidence in models and formalism.
I vote for the meta-thread convention, or for any other mechanism that keeps meta off the front page.
I think the main problem with mormon2's submission was not where it was posted, but that it was pointless and uninformed.
I suggest you run an experiment. Go try to eat at a restaurant and explicitly state your intention not to tip. I predict the waiter will tell you to fuck off, and if the manager gets called out, he'll tell you to fuck off too.
I basically agree with you, though I'm not sure the legal distinction between "theft" and "breach of contract" is meaningful in this context. As far as I know there's no law that says you have to tip at all. So from a technical legal perspective, failing to tip is neither theft nor breach of contract nor any other offense.
It may not be legal theft, but it's still moral theft. You sat down and ate with the mutual understanding that you would tip. The only reason the waiter is bringing you food is the expectation that you will tip. If you announced your intention not to tip, he would not serve you; he would tell you to fuck off. The tip is a payment for a service, not a gift. The fact that the agreement to pay is implicit and the fact that the precise amount is left partially unspecified are merely technicalities; they do not change the basic fact that the tip is a payment, not a gift.
You don't tip in order to be altruistic, you tip because you informally agreed to tip by eating in a restaurant in the first place. If you don't tip (assuming the service was acceptable), you aren't being virtuous, you're being a thief.
Perhaps you should say the correct moral move is to tip exactly 15%.
I believe EY has already explained that he's trying to make more rationalists, so they can go and solve FAI.
If I think I know a more efficient way to make a widget, I still need to convince somebody to put up the capital for my new widget factory.
But if results depend on my ability to convince rich people, that's not a prediction market!
What!? Why not?
I guess it depends on how you define bullet-biting. Let me be more specific: voted up for accepting an ugly truth instead of rationalizing or making excuses.
Voted up for bullet-biting.
Arbitrage, in the broadest sense, means picking up free money - money that is free because of other people's preferences
Except that finding exploitable inconsistencies in other people's preferences that haven't yet been destroyed by some other arbitrageur actually requires a fair bit of work and/or risk.
Do you vote?
Well, no.
Status is an informal, social concept. The legal system doesn't have much to do with "awarding" it.
In my experience, children are cruel, immoral, egotistical, and utterly selfish. The last thing they need is to have their inflated sense of self worth and entitlement stroked by the sort of parenting you seem to be advocating. Children ought to have fundamentally lower status, not just because they're children per se, but because they're stupid and useless. They should indeed be grateful that anyone would take the trouble to feed and care for someone as stupid and useless as they, and repay the favor by becoming stronger.
Another example: Cox's theorem.
"The truly fast way to produce a human-relative ideal moral agent is to create an AI with the interim goal of inferring the "human utility function" (but with a few safeguards built in, so it doesn't, e.g., kill off humanity while it solves that sub-problem),"
That is three-laws-of-robotics-ism, and it won't work. There's no such thing as a safe superintelligence that doesn't already share our values.
It's perfectly possible for one twin to get fat while the other doesn't. If it doesn't happen often, it's because features like willpower are more controlled by genes than we think, not because staying thin doesn't depend on willpower.
I figured it out! Roger Penrose is right about the nature of the brain!
just kidding.
Yes, I think it will change the decision. You need a very large number of minuscule steps to go from specks to torture, and at each stage you need to decimate the number of people affected in order to justify inflicting the extra suffering on the few. It's probably fair to assume the universe can't support more than, say, 2^250 people, which doesn't seem nearly enough.
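To put rough numbers on that (my own back-of-envelope framing, not anything from the original exchange): if each step of the escalation cuts the affected population by a factor of ten, a universe of 2^250 people only allows about 75 steps before you're down to one person.

```python
# Back-of-envelope check: how many tenfold "decimation" steps fit
# inside a universe of at most 2**250 people?
import math

max_people = 2 ** 250      # the comment's generous upper bound on population
factor_per_step = 10       # assume each escalation step shrinks the group tenfold
steps = math.log(max_people, factor_per_step)
print(round(steps))        # ~75 steps -- far fewer than "a very large number"
```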
These thought experiments all seem to require vastly more resources than the physical universe contains. Does that mean they don't matter?
Seems to me that ESR is basically right, except I'm not sure Dennett would even disagree. Maybe he'll reply in a comment?
Yup. I get all that. I still want to go for the specks.
Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks v. torture hypothetical doesn't actually matter. I don't know. But I'm just not convinced.
Actually, I think you're right. The escalation argument has caught me in a contradiction. I wonder why I didn't see it last time around.
I still prefer the specks, though. My prior in favor of the specks is strong enough that I have to conclude there's something wrong with the escalation argument that I'm not presently clever enough to find. It's a bit like reading a proof that 2+2=5. You know you've just read a proof, and you checked each step, but you still, justifiably, don't believe it. It's far more likely that the proof fooled you in some subtle way than that arithmetic is actually inconsistent.
The right answer is |U(3^^^3 + 1 dust specks) - U(3^^^3 dust specks)| < |U(1 dust speck) - U(0 dust specks)|, and U(any number of dust specks) < U(torture), where U measures disutility.
There is no additivity axiom for utility.
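As a concrete sketch (my own toy example, not anything from the thread): a bounded disutility function over dust specks satisfies both inequalities above, at least for the small numbers of specks we can actually compute with.

```python
# Toy bounded disutility function for dust specks, invented for illustration.
def speck_disutility(n):
    # diminishing marginal disutility, bounded above by 1
    return 1 - 0.5 ** n

TORTURE_DISUTILITY = 100.0  # arbitrary constant; it just needs to exceed the bound

# Each additional speck matters less than the one before it...
assert (speck_disutility(2) - speck_disutility(1)) < (speck_disutility(1) - speck_disutility(0))
# ...and no number of specks ever adds up to the disutility of torture.
assert all(speck_disutility(n) < TORTURE_DISUTILITY for n in range(10_000))
```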
I don't think it's an exact quote of anything on OB or LW. If it is, then my subconscious has a much better memory than I do. I was just attempting to relate the Bourdain quote to OB/LW terminology.
Yeah, but then it wouldn't be a quote anymore!
"I don't, I've come to believe, have to agree with you to like you, or respect you."
--Anthony Bourdain.
Never forget that your opponents are not evil mutants. They are the heroes of their own stories, and if you can't fathom why they do what they do, or why they believe what they believe, that's your failing not theirs.
If anyone guesses above 0, anyone guessing 0 will be beaten by someone with a guess between 0 and the average.
If the average is less than 3/4, then the zeros will still win.
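A quick check of that claim, assuming this is the standard "guess 2/3 of the average" game with whole-number guesses (the game itself isn't spelled out in this exchange):

```python
# Winners are whoever guessed closest to 2/3 of the average guess.
def winners(guesses):
    target = (2 / 3) * (sum(guesses) / len(guesses))
    best = min(abs(g - target) for g in guesses)
    return [g for g in guesses if abs(g - target) == best]

print(winners([0, 0, 0, 1]))  # average 0.25 < 3/4, so the zeros win
print(winners([0, 1, 1, 2]))  # average 1.0 >= 3/4, and now the 1s win instead
```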
you are confusing wanting "truth" with wanting the beliefs you consider to be true.
What a presumptuous, useless thing to say. Why don't you explain how you've deduced my confusion from that one sentence.
Apparently you think I've got a particular truth in mind and I'm accusing those who disagree with me of deprioritizing truth. Even if I was, why does that indicate confusion on my part? If I wanted to accuse them of being wrong because they were stupid, or of being wrong because they lacked the evidence, I would have said so. I'm accusing them of being wrong because it's more fun and convenient than being right. Seeing as how you don't know any specifics of what the argument is about, on what basis have you determined my confusion?
But actually I didn't have a particular controversy in mind. I'm claiming people deprioritize truth about smaller questions than "is there a god?" or "does socialism work?". I'm guessing they deprioritize truth even on things that are much closer to home, like "am I competent?", "do people like me?", or "is my company on the path to success?"
Come to think of it, that sounds quite testable. I wonder if anyone's done an experiment....
Thanks! I haven't seen that one before.
I'm working on a post on this topic, but I don't think I can adequately address what I don't like about how Jaynes presents the foundations of probability theory without presenting it myself the way I think it ought to be. And to do that I need to actually learn some things I don't know yet, so it's going to be a bit of a project.
Interestingly, those goals I described us in terms of -- wanting truth, wanting to avoid deluding ourselves -- are not really what separates "us" from "them".
I'm not sure if that's true. Everyone says they want the truth, but people often reveal through their actions that it's pretty low on the priority list. Perhaps we should say that we want truth more than most people do. Or that we don't believe we can get away with deceiving ourselves without paying a terrible price.