Posts

[link] SMBC on utilitarianism and vegetarianism. 2011-10-16T03:29:41.908Z

Comments

Comment by mkehrt on Review of Kurzweil, 'The Singularity is Near' · 2011-11-27T01:52:52.245Z · LW · GW

This isn't really true; clock speed is a good proxy for computing power. If your clock speed doubles, you get a 2x speedup in the amount of computation you can do without any algorithmic changes. If you instead increase chip complexity, e.g., with parallelism, you need to write new code to take advantage of it.
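To illustrate the point, here is a minimal sketch of my own (the workload and names are made up, not anything from the book under review): the serial version below gets faster automatically when the clock speed goes up, while using extra cores means restructuring the code around a worker pool.

```python
from multiprocessing import Pool

def work(x):
    # Stand-in for a CPU-bound kernel; purely illustrative.
    return sum(i * i for i in range(x % 1000))

data = list(range(10_000))

def serial():
    # Benefits "for free" from a higher clock speed: same code, shorter wall time.
    return [work(x) for x in data]

def parallel():
    # Exploiting more cores requires rewriting around a pool, choosing chunk
    # sizes, and avoiding shared mutable state; in other words, new code.
    with Pool() as pool:
        return pool.map(work, data, chunksize=100)

if __name__ == "__main__":
    assert serial() == parallel()
```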

Comment by mkehrt on Do the people behind the veil of ignorance vote for "specks"? · 2011-11-12T04:44:23.246Z · LW · GW

I'm not entirely convinced by the rest of your argument, but

"The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not fundamental truth, it is just a naive extrapolation of our intuitions that should help guide our decisions."

is, far and away, the most intelligent thing I have ever seen anyone write on this damn paradox.

Come on, people. The fact that naive preference utilitarianism gives us torture rather than dust specks is not some result we have to live with; it's an indication that the decision theory is horribly, horribly wrong.

It is beyond me how people can look at dust specks and torture and draw the conclusion they do. In my mind, the most obvious, immediate objection is that utility does not aggregate additively across people in any reasonable ethical system. This is true no matter how big the numbers are. Instead it aggregates by minimum, or maybe multiplicatively (especially if we normalize everyone's utility function to [0,1]).

Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It's the epitome of Spock "rationality".
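As a toy illustration of the aggregation point above (my own sketch, with made-up utility numbers normalized to [0,1], not a claim about how to actually measure utility): summation can be dominated by an enormous number of tiny harms, whereas min-aggregation tracks the worst-off individual.

```python
# Toy numbers, purely illustrative: utilities normalized to [0, 1],
# 1.0 = untouched, 0.0 = tortured, and a dust speck costs a tiny epsilon.
N = 3 ** 30          # very many speck victims
EPS = 1e-9           # disutility of one speck

# Total utility lost in each outcome under additive aggregation:
loss_torture_sum = 1.0        # one person drops from 1.0 to 0.0
loss_specks_sum = N * EPS     # ~2e5: summation says the specks are far worse

# Worst-off individual's utility under min aggregation:
worst_torture_min = 0.0           # the tortured person
worst_specks_min = 1.0 - EPS      # min aggregation says torture is far worse

print(loss_torture_sum, loss_specks_sum)    # 1.0 vs ~205891.1
print(worst_torture_min, worst_specks_min)  # 0.0 vs 0.999999999
```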

Comment by mkehrt on 2011 Less Wrong Census / Survey · 2011-11-04T07:30:40.539Z · LW · GW

Issues with the survey:

  1. As mentioned elsewhere, the politics question is Americentric.
  2. The race question seems to be missing some categories.
  3. If you are going to include transgender, you probably should call the other options cis. Otherwise you run the risk of implying that transgender people are not "really" their target gender, which is a mess.
  4. The question about academic field was poorly phrased. I'm not an academic, so I assumed you meant the academic field most relevant to my work. But you really should ask this question without referring to academia.
  5. The academic question and the question about field of work need more options.
  6. The expertise question needs CS as an answer :-)

EDIT: Overall, it's pretty good.

Comment by mkehrt on 2011 Less Wrong Census / Survey · 2011-11-04T07:16:00.993Z · LW · GW

27 years early, 60% certain. Oops.

Comment by mkehrt on [SEQ RERUN] Beware Stephen J. Gould · 2011-10-22T04:50:28.089Z · LW · GW

I'm fairly convinced that MWI is LW dogma because it supports the Bayesian notion that probabilities are mental entities rather than physical ones, and not on its own merits.

Comment by mkehrt on Tool ideology · 2011-09-10T02:39:41.207Z · LW · GW

This is phenomenal! Thanks!

Comment by mkehrt on Bayesian Conspiracy @ Burning Man 2011 · 2011-07-26T03:57:16.416Z · LW · GW

Every part of this comment is true for me, too.

Comment by mkehrt on [LINK] Wondermark comic talks about "carpe diem" vs. "think more" · 2011-06-07T06:04:54.678Z · LW · GW

I think I thought this was better when it was utterly inexplicable, actually.

Comment by mkehrt on Eight questions for computationalists · 2011-04-14T01:59:19.492Z · LW · GW

That reply entirely begs the question. Whether consciousness is a phenomenon "like math" or a phenomenon "like photosynthesis" is exactly what is being argued about. So it's not an answering argument; it's an assertion.

Comment by mkehrt on Human errors, human values · 2011-04-10T03:53:14.310Z · LW · GW

No, I used http://www.rot13.com .
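For what it's worth, the same transform is a one-liner locally; here is a small sketch using Python's built-in rot_13 codec, nothing specific to that site:

```python
import codecs

# ROT13 is its own inverse, so applying it twice returns the original text.
cipher = codecs.encode("Hello, spoilers!", "rot_13")
assert codecs.decode(cipher, "rot_13") == "Hello, spoilers!"
print(cipher)  # Uryyb, fcbvyref!
```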

Comment by mkehrt on Human errors, human values · 2011-04-10T03:49:00.659Z · LW · GW

It's possible that you are referring to the secondary plot line of Chasm City by Alastair Reynolds in which gur nagvureb wrggvfbaf unys gur uvoreangvba cbqf va uvf fgnefuvc, nyybjvat vg gb neevir orsber gur bguref va gur syrrg naq fb tnva zvyvgnel nqinagntr.

Comment by mkehrt on [LINK] Clothing as status signalling, logos and co-operation · 2011-04-02T21:51:27.990Z · LW · GW

I wonder how common it is for the opposite to be true. I think visible logos on clothing are phenomenally tacky and have a strong, immediate negative reaction to the people wearing them when I see them. This isn't really a reaction to certain brands, but to the idea of advertising them.

On the other hand, I might assume that these people are wealthier.

Comment by mkehrt on Rationality Quotes: March 2011 · 2011-03-05T08:12:08.753Z · LW · GW

I've always been a huge fan of this story.

Comment by mkehrt on Does Solomonoff always win? · 2011-02-25T05:08:05.908Z · LW · GW

I'm suspicious of this. My understanding is that true Solomonoff induction is incomputable (because it requires ordering by Kolmogorov complexity). Thus, you can't just create an algorithm to predict its next move.

edit: That is, just because it is "defined mathematically" doesn't mean we can predict its next move.
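For reference, one standard presentation of the quantity involved (the textbook definition, not anything specific to the post I'm replying to): the Solomonoff prior weights every program that reproduces the observed data, which already requires knowing which programs halt.

```latex
% Solomonoff prior of a finite string x, with U a universal prefix machine
% and \ell(p) the length of program p; the sum ranges over all programs
% whose output begins with x:
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
% Evaluating M exactly requires knowing which programs halt with an output
% extending x, which is the standard reason the induction is uncomputable.
```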

Comment by mkehrt on Is Morality a Valid Preference? · 2011-02-21T04:27:14.845Z · LW · GW

I've been thinking about this on and off for half a year or so, and I have come to the conclusion that I cannot agree with any proposed moral system that answers "torture" to dust specks and torture. If this means my morality is scope-insensitive, then so be it.

(I don't think it is; I just don't think utilitarianism with summation over all individuals as its aggregation function is correct. I am not sure what the correct aggregation function is, but maximizing the minimum individual utility is a lot closer to my intuitions than summation (where by "correct" I mean compatible with my moral intuitions). I'm planning on writing a post about this soon.)

Comment by mkehrt on Overcoming the negative signal of not attending college. · 2011-02-17T17:33:49.499Z · LW · GW

To get high up in a political campaign, I imagine one can volunteer quite a bit to work one's way in. I also know someone who tried to do this by working as a local canvasser, but those are apparently low-wage jobs with no clear path up the chain.

Comment by mkehrt on Overcoming the negative signal of not attending college. · 2011-02-17T07:31:23.542Z · LW · GW

I think I am at least a standard deviation out on this, but my college experience included a lot of very good theoretical and practical training, which served me extremely well as a grad student and continues to do so in my current job. While I can imagine having done it in less than four years, the idea of learning all that I did, and getting the practice applying it that I did, in less than two or three years is insane. While college does have a very high signaling value, it can also be very good at what it is nominally for: teaching students. Although individuals looking to get the best jobs may find that the signaling value is what matters for that goal, on a society-wide level it probably makes more sense to spend effort on increasing the quality of education.

That being said, don't underestimate the other reasons that people of most classes in the US go to college, namely, the social experience both for itself and for networking later in life.

Comment by mkehrt on Exercise and motivation · 2011-02-15T05:12:58.434Z · LW · GW

I've been pretty consistent about rock climbing and martial arts for multiple short periods in my life, and it is always glorious. Currently I am climbing (bouldering, which has a simplicity that top-roping does not) multiple times a week, and I have been weightlifting and getting cardio exercise as well for a few months. I am probably in the best cardio shape of my life (which is pretty mediocre!) and it is pretty great. I've got a group I go with, which is good for motivation.

Comment by mkehrt on Procedural Knowledge Gaps · 2011-02-13T06:05:22.715Z · LW · GW

Why?

Comment by mkehrt on Time Magazine has an article about the Singularity... · 2011-02-11T17:53:17.339Z · LW · GW

It is not clear that anyone outside of the computer world is aware of this. The chip makers' PR departments are trying their best to hide this fact. ("It's a Core Two Duo Pro X2400! Don't ask how fast it is!")

Comment by mkehrt on Rationality Quotes: February 2011 · 2011-02-05T07:15:12.821Z · LW · GW

I totally knew who said that. Does that make me a bad rationalist?

Comment by mkehrt on You're in Newcomb's Box · 2011-02-04T02:24:55.396Z · LW · GW

Let me try to make my objection clearer. You seem to be concerned with things that make your existence less likely. But that is never going to be a problem. You already know the probability of your own existence is 1; you can't update it based on new data.

Comment by mkehrt on You're in Newcomb's Box · 2011-02-03T07:00:21.841Z · LW · GW

But that's not true? I already exist. There's nothing acausal going on here. I can pick whatever I want, and it just makes Prometheus wrong.

(Similarly, if Omega presented me with the same problem, but said that he (omniscient) had only created me if I would one-box this problem, I would still two-box (assuming I am not hit by a meteor and nothing from outside the problem affects my brain). It would just make Omega wrong. If that contradicts the problem, well, then the problem was paradoxical to begin with.)

Comment by mkehrt on Link: Monetizing anti-akrasia mechanisms · 2011-01-28T04:45:06.700Z · LW · GW

I... I can't work up the willpower to make the obvious joke about this.

Comment by mkehrt on Who are these spammers? · 2011-01-21T03:08:16.770Z · LW · GW

OP implies that it is imitation high end jewelry.

Comment by mkehrt on Meta: A 5 karma requirement to post in discussion · 2011-01-20T07:15:25.171Z · LW · GW

I, too, would be more in favor of 1.

Comment by mkehrt on Theists are wrong; is theism? · 2011-01-20T02:50:19.067Z · LW · GW

"Phenomena with non-agenty origins include: any evolved trait or life form (as far as we have seen), any stellar/astronomical/geological body/formation/event..."

It is pretty likely you are correct, but this is probably the best example of question-begging I have ever seen.

Comment by mkehrt on A Bayesian Argument for the Resurrection of Jesus · 2011-01-08T18:41:58.827Z · LW · GW

You said pretty much exactly everything I would have said and more.

One question: I only read the first third or so and skimmed the rest. The bits I read seemed to present a false dichotomy for the dates of composition of the gospels. The authors discussed atheistic schools that believed the gospels were all composed post-100 and contrasted these with the pre-70 dates of Christian belief. Do they ever discuss the modern, mostly consensus scholarly range of 70-90?

Relatedly, do you know of any good arguments for post-70 composition dates, especially for Matthew and Luke, other than fulfilled prophecies of the destruction of Jerusalem? I've always found the arguments that these books were written after 70 because they could not have predicted the destruction of Jerusalem suspiciously question-begging about the possibility of miracles.

Comment by mkehrt on Is technological change accelerating? · 2010-12-23T03:26:43.596Z · LW · GW

Per area! :-P

Comment by mkehrt on Exponentiation goes wrong first · 2010-12-16T03:03:30.957Z · LW · GW

A side note of possible interest: under ZF, as well as ZFC, one can actually prove that induction works, via Tarski's fixed point theorem. Thus, if you think that induction seems a little weird as an axiom, but set theory is cool, you still get to use induction.
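A quick sketch of the usual set-theoretic route (my own summary of the standard construction; the Knaster-Tarski least-fixed-point view gives the same minimality property):

```latex
% In ZF, \omega is defined as the least inductive set:
\omega \;=\; \bigcap \{\, I : \varnothing \in I \ \wedge\ \forall x\,(x \in I \rightarrow x \cup \{x\} \in I) \,\}
% Induction is then just minimality: if A \subseteq \omega contains \varnothing
% and is closed under successor, A is itself inductive, hence \omega \subseteq A,
% so A = \omega.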

Comment by mkehrt on Harry Potter and the Methods of Rationality discussion thread, part 6 · 2010-11-30T07:20:57.375Z · LW · GW

rot13ed because I am convinced I am correct and so this counts as a spoiler ;-)

It's pretty obvious to me that Santa Claus is fvevhf oynpx, nffhzvat gung "v'z abg frevbhf" zrnaf ur vfa'g va nmxnona.

Has this been discussed? It seems to fit, especially given the way things worked out in canon.

Comment by mkehrt on Harry Potter and the Methods of Rationality discussion thread, part 5 · 2010-11-29T02:56:51.388Z · LW · GW

Hm, I guess you are right.

However, it still seems to me that Dumbledore is acting significantly more sane than he has in previous chapters. So far he has attempted to fill the role of Wise Old Wizard exactly.

Comment by mkehrt on Harry Potter and the Methods of Rationality discussion thread, part 5 · 2010-11-28T08:38:34.945Z · LW · GW

I've got to say, I think the wizardly mentor character pointing out the mistakes of Gandalf is about as far from Genre Savvy as one can get!

Comment by mkehrt on Harry Potter and the Methods of Rationality discussion thread, part 5 · 2010-11-27T22:45:47.265Z · LW · GW

Chapter 62

Dumbledore's comments on The Lord of the Rings and his keeping Harry at Hogwarts seem significantly more rational than usual. Any chance Dumbledore is secretly awesome?

Comment by mkehrt on Harry Potter and the Methods of Rationality discussion thread, part 5 · 2010-11-11T03:06:29.990Z · LW · GW

Thanks.

Oh, ok, awesome.

Comment by mkehrt on Harry Potter and the Methods of Rationality discussion thread, part 5 · 2010-11-11T02:42:17.538Z · LW · GW

Any chance we can get links to the latest thread in the original MoR post? I can never find this without expending a fair amount of mental effort wading through search results, and the first thread is the one that comes up when I search.

Comment by mkehrt on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) · 2010-11-01T02:57:28.989Z · LW · GW

I dislike watching videos, as they are synchronous (i.e., require a set amount of time to watch, which is generally more than it would take to read the same material) and not random access (i.e., I cannot easily skim them for a certain section).

Comment by mkehrt on Is there evolutionary selection for female orgasms? · 2010-10-13T22:55:04.542Z · LW · GW

Do you think that people only have sex because they might have an orgasm? Really?

Comment by mkehrt on Understanding vipassana meditation · 2010-10-08T08:37:46.987Z · LW · GW

The word translates to "insight," and the term "insight meditation" is sometimes used for this form of meditation.

Comment by mkehrt on Harry Potter and the Methods of Rationality discussion thread, part 3 · 2010-09-09T00:15:20.967Z · LW · GW

"Or ideally you would launch it into space, with a cloak against detection, and a randomly fluctuating acceleration factor that would take it out of the Solar System."

Is this a MoR explanation for the Pioneer anomaly? Because that would be awesome.

Also, I assumed Voldemort was talking about the classical elements, too, and was amused that Harry, a scientist, had come up with those at random.

Comment by mkehrt on September Less Wrong Meetup aka Eliezer's Bayesian Birthday Bash · 2010-09-08T23:23:46.244Z · LW · GW

Yeah it's definitely all about large powers of two of Planck times. Nothing else is actually worth celebrating.

Comment by mkehrt on Burning Man Meetup: Bayes Camp · 2010-08-25T21:11:55.089Z · LW · GW

It is not at all clear to me why this was upvoted three times.

Comment by mkehrt on Burning Man Meetup: Bayes Camp · 2010-08-25T18:18:19.525Z · LW · GW

Awesome. I'm mostly a lurker, but I'll stop by and meet you people.

Comment by mkehrt on Other Existential Risks · 2010-08-19T00:17:05.030Z · LW · GW

"But I see no reason for assigning high probability to notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog Scott Aaronson challenges Eliezer on this point and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI have provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale."

I think this is a key point. While I think unFriendly AI could be a problem in an eventual future, other issues seem much more compelling.

As someone who has been a computer science grad student for four years, I'm baffled by these claims about AI. While I do not do research in AI, I know plenty of people who do. No one is working on AGI in academia, and I think this is true in industry as well. To people who actually work on giving computers more human capabilities, AGI is an entirely science-fictional goal. It's not even clear that researchers in CS think an AGI is a desirable goal. So, while I think it probable that AGIs will eventually exist, it's something that is distant.

Therefore, if one is interested in reducing existential risk, there seem to be a lot more important things to work on. Resource depletion, nuclear proliferation, and natural disasters like asteroids and supervolcanoes seem like much more useful targets.

Comment by mkehrt on Other Existential Risks · 2010-08-19T00:05:17.436Z · LW · GW

Is it only expected to be a few million? This could easily be privately funded with a good advertising campaign. For example, a project which might have a similar audience, SETI, is entirely privately funded and has a budget of a few million a year.

Comment by mkehrt on Other Existential Risks · 2010-08-18T23:59:23.462Z · LW · GW

Not voted, because I think this is utterly fascinating and entirely off topic!

Comment by mkehrt on Should I believe what the SIAI claims? · 2010-08-12T20:51:52.051Z · LW · GW

I really agree with both a and b (although I do not care about c). I am glad to see other people around here who think both these things.

Comment by mkehrt on Open Thread, August 2010 · 2010-08-03T21:46:13.659Z · LW · GW

What kind of math do you know in which things can be "true, and that's the end of that"? In math, things should be provable from a known set of axioms, not chosen to be true because they feel right. Change the axioms, and you get different results.

Intuition is a good guide for finding a proof, and in picking axioms, but not much more than that. And intuitively true axioms can easily result in inconsistent systems.

The questions, "what axioms do I need to accept to prove Bayes' Theorem?", "Why should I believe these axioms reflect the physical universe"? and "What proof techniques do I need to prove the theorem?" are very relevant to deciding whether to accept Bayes' Theorem as a good model of the universe.

Comment by mkehrt on Rationality quotes: August 2010 · 2010-08-03T17:49:44.207Z · LW · GW

I was intrigued when I first read this when you last posted it, and I thought about it for a while. The problem, it seems to me, is that this is a good explanation for why qualia are ineffable, but it doesn't seem to come any closer to explaining what they are or how they arise.

So, I could imagine a world (it may even be this one!) where people's brains happen to be organized similarly enough that two people really could transfer qualia between them, but this still doesn't explain anything about them.

Comment by mkehrt on Rationality quotes: August 2010 · 2010-08-03T07:18:36.236Z · LW · GW

Completely out of curiosity, why do you cite him by his birth name rather than his pen name of Anton Szandor LaVey?