Posts

g, a Statistical Myth 2013-04-11T06:30:29.165Z
Anybody want to join a Math Club? 2013-04-05T04:36:56.570Z

Comments

Comment by smoofra on High energy ethics and general moral relativity · 2015-06-22T04:22:26.493Z · LW · GW

What about all the angst people had over things like irrational numbers, infinitesimals, non-smooth functions, infinite cardinalities, and non-Euclidean geometries?

I think what you're saying about needing some way to change our minds is a good point, though. And I certainly wouldn't say that every single object-level belief I hold is more secure than every meta belief. I'll even grant you that for certain decisions, like how to set public health policy, some sort of QALY-based shut-up-and-calculate approach is the right way to go.

But I don't think that's the way to change our minds about something like how we deal with homosexuality, either on a descriptive or a normative level. Nobody read Bentham and said, "You know what, guys, I don't think being gay actually costs any utils! I guess it's fine." And if they did, it would have been bad moral epistemology.

If you put yourself in the mind of an average Victorian, "don't be gay" sits very securely in your web of belief. It's bolstered by what you think about virtue, religion, deontology, and even health. And what you think about those things is more or less consistent with and confirmed by what you think about everything else. It's like moral-epistemic PageRank. The "don't be gay" node has strongly weighted edges from the strongest cluster of nodes in your belief system, and they all point at each other. Compared to those nodes, meta-level stuff like utilitarianism is in a distant and unimportant backwater region of the graph. If anything, an arrow from utilitarianism to "being gay is ok" looks to you like a reason not to take utilitarianism too seriously.

In order for you to change your mind about homosexuality, you need to change your mind about everything. You need to move all that moral PageRank to totally different regions of the graph. And picking a meta theory to rule them all and assigning it a massive weight seems like a crazy, reckless way to do that. If you're doing that, you're basically saying you prioritize meta-ethical consistency over all the object-level things that you actually care about. It seems to me the only sane way to update is to slowly alter the object-level stuff as you learn new facts, or discover inconsistencies in what you value, and try to maintain as much reflective consistency as you can while you do it.
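
To make the PageRank picture concrete, here is a minimal sketch of what I mean; the node names, edge weights, and graph shape are all invented for illustration, not a claim about actual Victorian psychology:

```python
# A minimal sketch of the "moral-epistemic PageRank" picture. The belief
# graph, node names, and edge weights are all hypothetical.
def pagerank(nodes, edges, damping=0.85, iters=50):
    """Plain power-iteration PageRank over a weighted directed graph.
    edges is a list of (src, dst, weight); src -> dst means "src supports dst"."""
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_weight = {n: 0.0 for n in nodes}
    for src, _, w in edges:
        out_weight[src] += w
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, dst, w in edges:
            new[dst] += damping * rank[src] * w / out_weight[src]
        rank = new
    return rank

nodes = ["virtue", "religion", "deontology", "health",
         "don't be gay", "utilitarianism", "being gay is ok"]
edges = [("virtue", "don't be gay", 3.0), ("religion", "don't be gay", 3.0),
         ("deontology", "don't be gay", 2.0), ("health", "don't be gay", 1.0),
         ("virtue", "religion", 2.0), ("religion", "virtue", 2.0),
         ("utilitarianism", "being gay is ok", 1.0)]

for belief, score in sorted(pagerank(nodes, edges).items(), key=lambda kv: -kv[1]):
    print(f"{belief:16s} {score:.3f}")
```

Run it and the "don't be gay" node ends up with the most rank, because the heavyweight cluster keeps feeding it mass, while "being gay is ok" only inherits whatever rank the lone "utilitarianism" node has to give.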

PS. I guess I kind of made it sound like I believe the Whig theory of moral history, where modern Western values are clearly the true scion of Victorian values, and if we could just tell them what we know and walk them through the arguments, we could convince the Victorians that we were right, even by their own standards. I'm undecided on that, and I'll admit it might be the case that we just fundamentally disagree on values, and that "moral progress" is a random walk. Or not. Or it's a mix. I have no idea.

Comment by smoofra on High energy ethics and general moral relativity · 2015-06-21T21:54:49.577Z · LW · GW

I think you've pretty much stated the exact opposite of my own moral-epistemological worldview.

I don't like the analogy with physics. Physical theories get tested against external reality in a way that makes them fundamentally different from ethical theories.

If you want to analogize between ethics and science, I'd compare ethics to the foundations of mathematics. So utilitarianism isn't relativity; it's ZFC. Even though ZFC proves that PA is a consistent and true theory of the natural numbers, it would be a huge mistake for a human to base their trust in PA on that!

There is almost no argument or evidence that can convince me to put more trust in ZFC than I do in PA. I don't think I'm wrong.

I trust low-energy moral conclusions more than I will ever trust abstract metaethical foundational theories. I think it is a mistake to look for low-complexity foundations and reason from them. I think the best we can do is seek reflective equilibrium.

Now, that being said, I don't think it's wrong to study abstract metaethical theories, to ask what their consequences are, and even to believe them a little bit. The analogy with math still holds here. We study the heck out of ZFC. We even believe it more than a little at this point. But we don't believe it more than we believe the intermediate value theorem.

PS: I also don't think "shut up and calculate" is something you can actually do under utilitarianism, because there are good utilitarian arguments for obeying deontological rules and being virtuous, and pretty much every ethical debate anyone has ever had can be rephrased as a debate about what terms should go in the utility function and about the most effective way to maximize it.

Comment by smoofra on Anybody want to join a Math Club? · 2013-04-16T21:11:10.922Z · LW · GW

I haven't. I'll see if I can show up for the next one.

Comment by smoofra on g, a Statistical Myth · 2013-04-11T14:28:40.416Z · LW · GW

This was also the part of Dalliard's critique I found most convincing. Shalizi's argument seems to be a refutation of a straw man.

Comment by smoofra on g, a Statistical Myth · 2013-04-11T14:25:37.105Z · LW · GW

One thing Dalliard mentions is that the 'g' factors derived from different studies are 'statistically indistinguishable'. What's the technical content of this statement?

Comment by smoofra on g, a Statistical Myth · 2013-04-11T06:45:25.472Z · LW · GW

Thanks for the link.

Not that I feel particularly qualified to judge, but I'd say Dalliard has a way better argument. I wonder if Shalizi has written a response.

Comment by smoofra on Anybody want to join a Math Club? · 2013-04-05T15:27:46.710Z · LW · GW

Wow, that's a neat service.

Comment by smoofra on Anybody want to join a Math Club? · 2013-04-05T15:24:46.644Z · LW · GW

It looks like we may have enough people interested in Probability Theory, though I doubt we all live in the same city. I live near DC.

Depending on how many people are interested/where they live, it might make sense to meet over video chat instead.

Comment by smoofra on Anybody want to join a Math Club? · 2013-04-05T15:20:05.391Z · LW · GW

I'm 32.

Comment by smoofra on Reflection in Probabilistic Logic · 2013-03-24T03:23:03.377Z · LW · GW

So you are assuming that it will want to prove the soundness of any successors? Even though it can't even prove the soundness of itself? But it can believe in its own soundness in a Bayesian sense without being able to prove it. There is not (as far as I know) any Gödelian obstacle to that. I guess that was your point in the first place.

Comment by smoofra on Reflection in Probabilistic Logic · 2013-03-23T21:36:02.965Z · LW · GW

OK, forget about F for a second. Isn't the huge difficulty finding the right deductions to make, not formalizing them and verifying them?

Comment by smoofra on Reflection in Probabilistic Logic · 2013-03-23T20:41:30.371Z · LW · GW

This is all nifty and interesting, as mathematics, but I feel like you are probably barking up the wrong tree when it comes to applying this stuff to AI. I say this for a couple of reasons:

First, ZFC itself is already comically overpowered. Have you read about reverse mathematics? Stephen Simpson edited a good book on the topic. Anyway, my point is that there's a whole spectrum of systems a lot weaker than ZFC that are sufficient for a large fraction of theorems, and probably all the reasoning you would ever need to do physics or make real-world, actionable decisions. The idea that physics could depend on reasoning of a higher consistency strength than ZFC just feels really wrong to me, like the idea that P could really equal NP. Of course my gut feeling isn't evidence, but I'm interested in the question of why we disagree. Why do you think these considerations are likely to be important?

Second, isn't the whole topic of formal reasoning a bikeshed? Isn't the real danger that you will formalize the goal function wrong, not that the deductions will be invalid?

Comment by smoofra on The noncentral fallacy - the worst argument in the world? · 2012-10-16T21:43:11.870Z · LW · GW

I don't think you've chosen your examples particularly well.

Abortion certainly can be a 'central' case of murder. Imagine aborting a fetus ten minutes before it would have been born. It can also be totally 'noncentral': the morning-after pill. In between, abortion is a grey area of central-murder, depending on how far the fetus's neural development has progressed.

Affirmative action really IS a central case of racism. It's bad for the same reason segregation was bad: because it's not fair to judge people based on their race. The only difference is that it's not nearly AS bad. Segregation was brutal and oppressive, while affirmative action doesn't really affect most people enough for them to notice.

Comment by smoofra on Theism, Wednesday, and Not Being Adopted · 2010-05-13T20:35:56.539Z · LW · GW

What do you think you're adding to the discussion by trotting out this sort of pedantic literalism?

Unless someone explicitly says they know something with absolute 100% mathematical certainty, why don't you just use your common sense and figure that when they say they "know" something, they mean they assign it a very high probability and believe they have epistemologically sound reasons for doing so?

Comment by smoofra on Tips and Tricks for Answering Hard Questions · 2010-01-18T22:40:45.477Z · LW · GW

"Trust your intuitions, but don't waste too much time arguing for them"

This is an excellent point. Intuition plays an absolutely crucial role in human thought, but there's no point in debating an opinion that (by definition, even) you're incapable of verbalizing your reasons for. Let me suggest another maxim:

Intuitions tell you where to look, not what you'll find.

Comment by smoofra on On the Power of Intelligence and Rationality · 2009-12-25T02:19:08.527Z · LW · GW

Wait, so are you agreeing with me or disagreeing?

Comment by smoofra on On the Power of Intelligence and Rationality · 2009-12-24T19:46:06.152Z · LW · GW

What makes you think Hitler didn't deliberately think about how to yell at crowds?

Comment by smoofra on If reason told you to jump off a cliff, would you do it? · 2009-12-22T19:03:10.202Z · LW · GW

You're confusing "reason" with inappropriate confidence in models and formalism.

Comment by smoofra on December 2009 Meta Thread · 2009-12-17T15:51:06.499Z · LW · GW

I vote for the meta-thread convention, or for any other mechanism that keeps meta off the front page.

Comment by smoofra on An account of what I believe to be inconsistent behavior on the part of our editor · 2009-12-17T03:31:18.003Z · LW · GW

I think the main problem with mormon2's submission was not where it was posted, but that it was pointless and uninformed.

Comment by smoofra on Rebasing Ethics · 2009-12-16T16:33:14.674Z · LW · GW

I suggest you run an experiment. Go try to eat at a restaurant and explicitly state your intention not to tip. I predict the waiter will tell you to fuck off, and if the manager gets called out, he'll tell you to fuck off too.

Comment by smoofra on Rebasing Ethics · 2009-12-16T16:28:51.831Z · LW · GW

I basically agree with you, though I'm not sure the legal distinction between "theft" and "breach of contract" is meaningful in this context. As far as I know there's no law that says you have to tip at all. So from a technical legal perspective, failing to tip is neither theft nor breach of contract nor any other offense.

Comment by smoofra on Rebasing Ethics · 2009-12-15T18:54:22.852Z · LW · GW

It may not be legal theft, but it's still moral theft. You sat down and ate with the mutual understanding that you would tip. The only reason the waiter is bringing you food is the expectation that you will tip. If you announced your intention not to tip, he would not serve you; he would tell you to fuck off. The tip is a payment for a service; it is not a gift. The fact that the agreement to pay is implicit, and the fact that the precise amount of the payment is left partially unspecified, are merely technicalities that do not change the basic fact that the tip is a payment, not a gift.

Comment by smoofra on Rebasing Ethics · 2009-12-15T16:48:10.577Z · LW · GW

You don't tip in order to be altruistic; you tip because you informally agreed to tip by eating in a restaurant in the first place. If you don't tip (assuming the service was acceptable), you aren't being virtuous, you're being a thief.

Perhaps you should say the correct moral move is to tip exactly 15%.

Comment by smoofra on A question of rationality · 2009-12-14T23:56:13.402Z · LW · GW

I believe EY has already explained that he's trying to make more rationalists, so they can go and solve FAI.

Comment by smoofra on Arbitrage of prediction markets · 2009-12-07T18:01:16.024Z · LW · GW

If I think I know a more efficient way to make a widget, I still need to convince somebody to put up the capital for my new widget factory.

Comment by smoofra on Arbitrage of prediction markets · 2009-12-04T23:39:46.410Z · LW · GW

"But if results depend on my ability to convince rich people, that's not prediction market!"

What!? Why not?

Comment by smoofra on Morality and International Humanitarian Law · 2009-12-04T21:07:01.038Z · LW · GW

I guess it depends on how you define bullet-biting. Let me be more specific: voted up for accepting an ugly truth instead of rationalizing or making excuses.

Comment by smoofra on Morality and International Humanitarian Law · 2009-12-03T20:19:53.070Z · LW · GW

Voted up for bullet-biting.

Comment by smoofra on Consequences of arbitrage: expected cash · 2009-11-13T16:23:41.848Z · LW · GW

"Arbitrage, in the broadest sense, means picking up free money - money that is free because of other people's preferences"

Except that finding exploitable inconsistencies in other people's preferences that haven't yet been destroyed by some other arbitrageur actually requires a fair bit of work and/or risk.
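
For what the "free money" looks like in the simplest case, here's a toy sketch; the prices are invented for illustration:

```python
# A toy illustration of arbitrage (all prices invented): if someone sells
# contracts on an event and its complement whose prices sum to less than
# the $1 payout, buying both sides locks in a riskless profit.
price_yes = 0.40   # pays $1.00 if the event happens
price_no = 0.55    # pays $1.00 if it doesn't
cost = price_yes + price_no
print(f"spend ${cost:.2f} now, collect $1.00 either way, profit ${1.0 - cost:.2f}")
```

Spotting mispricings like that before everyone else does, at scale, is exactly where the work and the risk come in.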

Comment by smoofra on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T05:58:59.282Z · LW · GW

Do you vote?

Comment by smoofra on Our House, My Rules · 2009-11-03T16:58:17.208Z · LW · GW

Well, no.

Status is an informal, social concept. The legal system doesn't have much to do with "awarding" it.

Comment by smoofra on Our House, My Rules · 2009-11-02T15:31:16.868Z · LW · GW

In my experience, children are cruel, immoral, egotistical, and utterly selfish. The last thing they need is to have their inflated sense of self worth and entitlement stroked by the sort of parenting you seem to be advocating. Children ought to have fundamentally lower status, not just because they're children per se, but because they're stupid and useless. They should indeed be grateful that anyone would take the trouble to feed and care for someone as stupid and useless as they, and repay the favor by becoming stronger.

Comment by smoofra on Arrow's Theorem is a Lie · 2009-10-25T17:44:08.094Z · LW · GW

Another example: Cox's theorem.

Comment by smoofra on How to get that Friendly Singularity: a minority view · 2009-10-12T14:33:18.536Z · LW · GW

"The truly fast way to produce a human-relative ideal moral agent is to create an AI with the interim goal of inferring the "human utility function" (but with a few safeguards built in, so it doesn't, e.g., kill off humanity while it solves that sub-problem),"

That is three-laws-of-robotics-ism, and it won't work. There's no such thing as a safe superintelligence that doesn't already share our values.

Comment by smoofra on Why Don’t We Apply What We Know About Twins to Everybody Else? · 2009-10-01T17:31:16.860Z · LW · GW

It's perfectly possible for one twin to get fat while the other doesn't. If it doesn't happen often, it's because features like willpower are more controlled by genes than we think, not because staying thin doesn't depend on willpower.

Comment by smoofra on The Anthropic Trilemma · 2009-09-28T21:23:44.980Z · LW · GW

I figured it out! Roger Penrose is right about the nature of the brain!

Just kidding.

Comment by smoofra on The Lifespan Dilemma · 2009-09-11T15:31:57.477Z · LW · GW

Yes, I think it will change the decision. You need a very large number of minuscule steps to go from specks to torture, and at each stage you need to decimate the number of people affected to justify inflicting the extra suffering on the few. It's probably fair to assume the universe can't support more than, say, 2^250 people, which doesn't seem nearly enough.
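
A back-of-the-envelope check of that; the factor-of-ten cut per step is my assumption for illustration, not something fixed by the argument:

```python
# How many decimation steps does 2^250 people buy? (The factor of ten per
# step is an assumed illustration; the 2^250 bound is from the comment.)
import math

population = 2**250
steps = math.log10(population)     # steps of dividing the population by 10
print(f"about {steps:.0f} steps")  # ~75, far fewer than a specks-to-torture chain needs
```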

Comment by smoofra on The Lifespan Dilemma · 2009-09-10T19:43:16.145Z · LW · GW

These thought experiments all seem to require vastly more resources than the physical universe contains. Does that mean they don't matter?

Comment by smoofra on ESR's New Take on Qualia · 2009-08-21T13:48:26.921Z · LW · GW

Seems to me that ESR is basically right, except I'm not sure Dennett would even disagree. Maybe he'll reply in a comment?

Comment by smoofra on Revisiting torture vs. dust specks · 2009-07-09T05:49:49.430Z · LW · GW

Yup. I get all that. I still want to go for the specks.

Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks vs. torture hypothetical doesn't actually matter. I don't know. But I'm just not convinced.

Comment by smoofra on Revisiting torture vs. dust specks · 2009-07-08T19:53:27.011Z · LW · GW

Actually, I think you're right. The escalation argument has caught me in a contradiction. I wonder why I didn't see it last time around.

I still prefer the specks, though. My prior in favor of the specks is strong enough that I have to conclude there's something wrong with the escalation argument that I'm not presently clever enough to find. It's a bit like reading a proof that 2+2 = 5. You know you've just read a proof, and you checked each step, but you still, justifiably, don't believe it. It's far more likely that the proof fooled you in some subtle way than that arithmetic is actually inconsistent.

Comment by smoofra on Revisiting torture vs. dust specks · 2009-07-08T14:52:23.497Z · LW · GW

The right answer is |U(3^^^3 + 1 dust specks) - U(3^^^3 dust specks)| < |U(1 dust speck) - U(0 dust specks)|, and, reading U as disutility, U(any number of dust specks) < U(torture).

There is no additivity axiom for utility.
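
One concrete shape U could take, purely to show those two conditions are jointly satisfiable; the formula is my invention, not anything from the thread:

```latex
% A hypothetical bounded disutility for n dust specks, with T the
% disutility of torture. This is an invented example.
\[
  U(n) = C \left( 1 - 2^{-n} \right), \qquad 0 < C < T .
\]
% Marginal harm shrinks: U(n+1) - U(n) = C \, 2^{-(n+1)}, so adding one
% speck to 3^^^3 of them hurts less than the very first speck, while the
% total harm of any number of specks stays below C, hence below T.
```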

Comment by smoofra on Rationality Quotes - June 2009 · 2009-06-16T03:17:24.597Z · LW · GW

I don't think it's an exact quote of anything on OB or LW. If it is, then my subconscious has a much better memory than I do. I was just attempting to relate the Bourdain quote to OB/LW terminology.

Comment by smoofra on Rationality Quotes - June 2009 · 2009-06-16T03:15:09.489Z · LW · GW

Yeah, but then it wouldn't be a quote anymore!

Comment by smoofra on Rationality Quotes - June 2009 · 2009-06-15T15:49:24.192Z · LW · GW

"I don't, I've come to believe, have to agree with you to like you, or respect you."

--Anthony Bourdain.

Never forget that your opponents are not evil mutants. They are the heroes of their own stories, and if you can't fathom why they do what they do, or why they believe what they believe, that's your failing, not theirs.

Comment by smoofra on The Aumann's agreement theorem game (guess 2/3 of the average) · 2009-06-09T19:45:10.204Z · LW · GW

"If anyone guesses above 0, anyone guessing 0 will be beaten by someone with a guess between 0 and the average."

If the average is less than 3/4, then the zeros will still win: the target (two-thirds of the average) is below 1/2, so 0 is closer to it than any guess of 1 or more.
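
A quick sanity check of that; the guess list is made up for illustration:

```python
# Toy check: with average < 3/4, the zero-guessers win the 2/3-of-average game.
def winners(guesses):
    """Return the guesses closest to two-thirds of the average guess."""
    target = (2 / 3) * sum(guesses) / len(guesses)
    best = min(abs(g - target) for g in guesses)
    return sorted(set(g for g in guesses if abs(g - target) == best))

guesses = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2]  # average = 0.3, which is < 3/4
print(winners(guesses))                   # [0] -- target is 0.2, so the zeros win
```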

Comment by smoofra on My concerns about the term 'rationalist' · 2009-06-05T16:43:00.736Z · LW · GW

"you are confusing wanting 'truth' with wanting the beliefs you consider to be true."

What a presumptuous, useless thing to say. Why don't you explain how you've deduced my confusion from that one sentence.

Apparently you think I've got a particular truth in mind and I'm accusing those who disagree with me of deprioritizing truth. Even if I were, why would that indicate confusion on my part? If I wanted to accuse them of being wrong because they were stupid, or of being wrong because they lacked the evidence, I would have said so. I'm accusing them of being wrong because it's more fun and convenient than being right. Seeing as you don't know any specifics of what the argument is about, on what basis have you determined my confusion?

But actually, I didn't have a particular controversy in mind. I'm claiming people deprioritize truth about smaller questions than "is there a god" or "does socialism work". I'm guessing they deprioritize truth even on things that are much closer to home, like "am I competent?", "do people like me?", or "is my company on the path to success?"

Come to think of it, that sounds quite testable. I wonder if anyone's done an experiment....

Comment by smoofra on Do Fandoms Need Awfulness? · 2009-06-04T20:24:43.871Z · LW · GW

Thanks! I haven't seen that one before.

I'm working on a post on this topic, but I don't think I can adequately address what I don't like about how Jaynes presents the foundations of probability theory without presenting it myself the way I think it ought to be. And to do that I need to actually learn some things I don't know yet, so it's going to be a bit of a project.

Comment by smoofra on My concerns about the term 'rationalist' · 2009-06-04T17:26:27.921Z · LW · GW

"Interestingly, those goals I described us in terms of -- wanting truth, wanting to avoid deluding ourselves -- are not really what separates 'us' from 'them'."

I'm not sure that's true. Everyone says they want the truth, but people often reveal through their actions that it's pretty low on the priority list. Perhaps we should say that we want truth more than most people do. Or that we don't believe we can get away with deceiving ourselves without paying a terrible price.