Posts

Meetup : Urbana-Champaign, Consciousness 2014-04-23T16:54:09.534Z
Meetup : Urbana-Champaign Scantily Attended Meetups Rerun 2014-04-10T05:06:17.144Z
Meetup : Urbana-Champaign: Discussion (Folk Wisdom) 2014-03-27T01:04:39.988Z
Meetup : Urbana-Champaign: Discussion 2014-03-12T18:33:44.433Z
Meetup : Urbana-Champaign: Games 2014-02-28T21:49:02.228Z
Meetup : Urbana-Champaign: Politics and the English Language 2014-02-15T02:30:28.436Z
Meetup : Urbana-Champaign: Fallacy Categorization 2014-02-01T20:14:52.154Z
Meetup : Urbana-Champaign: Nomic 2014-01-17T23:43:46.585Z
Rationality Quotes January 2014 2014-01-04T19:39:54.264Z
Meetup : Urbana-Champaign: Rituals 2013-12-20T15:28:26.843Z
Meetup : Urbana-Champaign: Fun and Games 2013-12-14T15:19:19.627Z
Meetup : Urbana-Champaign: Thinking Fast and Slow Discussion November 17 2013-11-13T20:47:20.587Z
Meetup : Urbana-Champaign: Thinking Fast and Slow Pages 70-140 Discussion Meetup 2013-11-04T10:39:19.077Z
Meetup : Urbana-Champaign: Thinking Fast and Slow Discussion 2013-10-27T22:26:59.864Z
Meetup : Urbana-Champaign: Dark Arts Meetup Sunday October 27, Illini Union Courtyard Cafe (Ground Floor), 2PM 2013-10-25T00:24:49.147Z
Meetup : Urbana-Champaign: Decision Theory 2013-10-17T22:34:07.587Z
Meetup : Urbana Champaign: Metaethics, Normative Ethics, Applied Ethics 2013-10-06T01:39:06.416Z
Open Thread, September 23-29, 2013 2013-09-24T01:25:54.498Z
Meetup : Urbana-Champaign, Illinois Games/Discussion 2013-09-13T03:33:08.874Z
Meetup : Urbana-Champaign, Illinois 2013-09-03T05:40:20.396Z
Meetup : Urbana-Champaign, Illinois 2013-08-26T04:47:20.167Z
Meetup : New Meetup: Urbana-Champaign, Illinois. 2013-08-17T03:39:53.619Z
Meetup Interest Probe: Urbana-Champaign, Illinois 2013-04-10T03:52:56.494Z
Circular Preferences Don't Lead To Getting Money Pumped 2012-09-11T03:42:41.314Z
The Doubling Box 2012-08-06T05:50:19.798Z

Comments

Comment by Mestroyer on Guarding Against the Postmodernist Failure Mode · 2015-05-12T18:09:28.414Z · LW · GW

A couple of questions: what portion of the workshop attendees self-selected from among people who were already interested in rationality, compared to the portion that randomly stumbled upon it for some reason?

Don't know, sorry.

Comment by Mestroyer on The Strangest Thing An AI Could Tell You · 2015-05-12T18:03:23.830Z · LW · GW

Hi. Checking back on this account on a whim after a long time of not using it. You're right. 2012!Mestroyer was a noob and I am still cleaning up his bad software.

Comment by Mestroyer on Deception detection machines · 2014-09-06T07:28:39.325Z · LW · GW

I would need a bunch of guarantees about the actual mechanics of how the AI was forced to answer before I stopped seeing vague classes of ways this could go wrong. And even then, I'd assume there were some I'd missed, and if the AI has a way to show me anything other than "yes" or "no", or I can't prevent myself from thinking about long sequences of bits instead of just single bits separately, I'd be afraid it could manipulate me.

An example of a vague class of ways this could go wrong is if the AI figures out what my CEV would want using CDT, and itself uses a more advanced decision theory to exploit the CEV computation into wanting to write something more favorable to the AI's utility function in the file.

Also, IIRC, Eliezer Yudkowsky said there are problems with CEV itself. (Maybe he just meant problems with the many-people version, but probably not.) It was only supposed to be a vague outline, and a "see, you don't have to spend all this time worrying about whether we share your ethical/political philosophy, because it's not going to be hardcoded into the AI anyway."

Comment by Mestroyer on Deception detection machines · 2014-09-06T04:22:20.740Z · LW · GW

That's the goal, yeah.

Comment by Mestroyer on Deception detection machines · 2014-09-06T03:09:47.137Z · LW · GW

It doesn't have to know what my CEV would be to know what I would want in those bits, which is a compressed seed of an FAI targeted (indirectly) at my CEV.

But there are problems like, "How much effort is it required to put in?" (Clearly I don't want it to spend far more compute power than it has trying to come up with the perfect combination of bits which will make my FAI unfold a little bit faster, but I also don't want it to spend no time optimizing. How do I get it to pick somewhere in between without it already wanting to pick the optimal amount of optimization for me?) And "What decision theory is my CEV using to decide those bits? (Hopefully not something exploitable, but how do I specify that?)"

Comment by Mestroyer on Deception detection machines · 2014-09-06T02:26:25.074Z · LW · GW

Given that I'm turning the stream of bits, 10KiB long, that I'm about to extract from you into an executable file through this exact process, which I will run on this particular computer (describe specifics of computer, which is not the computer the AI is currently running on) to create your replacement, would my CEV prefer that this next bit be a 1 or a 0? By CEV, would I rather that the bit after that be a 1 or a 0, given that I have permanently fixed the preceding bit as what I made it? By CEV, would I rather that the bit after that be a 1 or a 0, given that I have permanently fixed the preceding bit as what I made it? ...

(Note: I would not actually try this.)

Comment by Mestroyer on Six Plausible Meta-Ethical Alternatives · 2014-08-09T00:28:12.011Z · LW · GW

~5, huh? Am I to credit?

Comment by Mestroyer on Guarding Against the Postmodernist Failure Mode · 2014-07-08T08:47:05.905Z · LW · GW

This reminds me of this SMBC. There are fields (modern physics comes to mind too) in which no one outside them can understand what practitioners are doing anymore, yet which appear to have remained sane. There are more safeguards against the postmodernists' failure mode than this one. In fact, I think there is a lot more wrong with postmodernism than that they don't have to justify themselves to outsiders. Math and physics have mechanisms determining which ideas within them get accepted that imbue them with their sanity. In math, there are proofs. In physics, there are experiments.

If something like this safeguard is going to work for us, our mechanism that determines what ideas spread among us needs to reflect something good, so that producing the kind of idea that passes that filter makes our community worthwhile. This can be broken into two subgoals: making sure the kinds of questions we're asking are worthwhile, that we are searching for the right kind of thing, and making sure that our acceptance criterion is a good one. (There's also something that modern physics may or may not have for much longer, which is "Can progress be made toward the thing you're looking for?")

Comment by Mestroyer on Guarding Against the Postmodernist Failure Mode · 2014-07-08T08:41:39.398Z · LW · GW

CFAR seems to be trying to use (some of) our common beliefs to produce something useful to outsiders. And they get good ratings from workshop attendees.

Comment by Mestroyer on Open thread, 3-8 June 2014 · 2014-07-01T07:22:49.630Z · LW · GW

The last point is particularly important, since on one hand, with the current quasi-Ponzi mechanism of funding, the position of preserved patients is secured by the arrival of new members.

Downvoted because if I remember correctly, this is wrong; the cost of preservation of a particular person includes a lump of money big enough for the interest to pay for their maintenance. If I remember incorrectly and someone points it out, I will rescind my downvote.

Comment by Mestroyer on How do you take notes? · 2014-06-22T23:29:12.500Z · LW · GW

I use text files. (.txt, because I hate waiting for a rich text editor to open, and I hate autocomplete for normal writing) It's the only way to be able to keep track of them. I sometimes write paper notes when I don't have a computer nearby, but I usually don't keep those notes. Sometimes if I think of something I absolutely have to remember as I'm dozing off to sleep, I'll enter it in my cell phone because I use that as an alarm clock and it's always close to my bed. But my cell phone's keyboard makes writing notes really slow, so I avoid it under normal circumstances.

I have several kinds of notes that I make. One is for when I'm doing a hard intellectual task and want to free up short-term memory: I write things down as I think of them. I usually title this kind of file with the name of the task. For tasks too minor to remember just by a title like that, I just write something like "notes 2014-06-22".

I also write "where I left off " notes, whenever I leave a programming project or something for a day (or sometimes even before I leave for lunch), because usually I will be forming ideas about how to fix problems as I'm fixing other problems, so I can save my future self some work by not forgetting them.

Comment by Mestroyer on Questions to ask theist philosophers? I will soon be speaking with several · 2014-06-21T07:36:40.797Z · LW · GW

In response to your first paragraph,

Human morality is indeed the complex unfolding of a simple idea in a certain environment. It's not the one you're thinking of, though. And if we're talking about hypotheses for the fundamental nature of reality, rather than a sliver of it (because a sliver of something can be more complicated than the whole), you have to include the complexity of everything that contributes to how your simple thing will play out.

Note also that we can't explain reality with a god with a utility function of "maximize the number of copies of some genes", because the universe isn't just an infinite expanse of copies of some genes. Any omnipotent god you want to use to explain real life has to have a utility function that desires ALL the things we see in reality. Good luck adding the necessary stuff for that into "good" without making "good" much more complicated, and without just saying "good is whatever the laws of physics say will happen."

You can say of any complex thing, "Maybe it's really simple. Look at these other things that are really simple." But there are many (exponentially) more possible complex things than simple things. The prior for a complex thing being generable from a simple thing is very low by necessity. If I think about this like, "well, I can't name N things I am (N-1)/N confident of and be right N-1 times, and I have to watch out for overconfidence etc., so there's no way I can apply 99% confidence to 'morality is complicated'..." then I am implicitly privileging the hypothesis. You can't be virtuously modest about every complicated-looking utility function you wonder might be simple, or your probability distribution will sum to more than 1.
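
To put the "sum to more than 1" point in rough symbols (a sketch, treating the candidate fundamental utility functions as mutually exclusive accounts of reality):

$$\sum_i P(\text{reality is the unfolding of } U_i) \le 1,$$

so at most $1/\varepsilon$ of these candidates can each be given probability at least $\varepsilon$; for instance, at most 20 complicated-looking utility functions can each get a 5% chance of secretly being simple and true.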

By hypothesis, "God" means actus purus, moral perfection; there is no reason to double count.

I'm not double-counting. I'm counting the utility function which specifies the exact way things shall be (as it must, if we're going with omnipotence for this god hypothesis) once, and the utility-maximization machinery once, and comparing that to the non-god hypothesis, where we count only the utility function, without the utility maximizer.

Comment by Mestroyer on Some alternatives to “Friendly AI” · 2014-06-15T22:57:41.794Z · LW · GW

I agree. "AGI Safety"/"Safe AGI" seems like the best option. if people say, "Let me save you some time and tell you right now that's impossible" half of the work is done. The other half is just convincing them that we have to do it anyway because otherwise everyone is doomed. (This is of course, as long as they are using "impossible" in a loose sense. If they aren't, the problem can probably be fixed by saying "our definition of safety is a little bit more loose than the one you're probably thinking of, but not so much more loose that it becomes easy").

Comment by Mestroyer on What resources have increasing marginal utility? · 2014-06-14T06:42:50.604Z · LW · GW

Time spent doing any kind of work with a high skill cap.

Edit: Well, okay not any kind of work meeting that criterion, to preempt the obvious LessWrongian response. Any kind you can get paid for is closer to true.

Comment by Mestroyer on Can noise have power? · 2014-05-23T06:01:31.012Z · LW · GW

One of my old CS teachers defended treating the environment as adversarial and as knowing your source code, because of hackers. See median-of-3 killers. (I'd link something, but besides a paper, I can't find a nice explanation of what they are in a small amount of googling.)

I don't see why Yudkowsky makes superintelligence a requirement for this.

Also, it doesn't even have to be source code they have access to (which they would have anyway if it were open-source software). There are such things as disassemblers and decompilers.

[Edit: removed implication that Yudkowsky thought source code was necessary]

Comment by Mestroyer on I'm About as Good as Dead: the End of Xah Lee · 2014-05-19T04:56:55.536Z · LW · GW

A lot of stuff on LessWrong is relevant to picking which charity to donate to. Doing that correctly is of overwhelming importance. Far more important than working a little bit more every week.

Comment by Mestroyer on [LINK] Sentient Robots Not Possible According To Math Proof · 2014-05-14T20:13:56.007Z · LW · GW

This is the kind of thing where, when I take the outside view on my response, it looks bad. There is a scholarly paper refuting one of my strongly-held beliefs, a belief I arrived at through armchair reasoning. And without reading it, or even trying to understand their argument indirectly, I'm going to brush it off as wrong. Merely based on the kind of bad argument I expect it to be (bad philosophy doing all the work, wrapped in a little bit of correct math to prove some minor point once you've made the bad assumptions), because this is what I think it would take to make a mathematical argument against my strongly-held belief, and because other people who share my strongly-held belief are saying that that's the mistake they make.

Still not wasting my time on this though.

Comment by Mestroyer on Rationality Quotes May 2014 · 2014-05-05T08:52:19.654Z · LW · GW

Actually, if you do this with something besides a test, this sounds like a really good way to teach a third-grader probabilities.

Comment by Mestroyer on Rationality Quotes May 2014 · 2014-05-04T03:38:21.293Z · LW · GW

We're human beings with the blood of a million savage years on our hands. But we can stop it. We can admit that we're killers, but we're not going to kill today.

Captain James Tiberius Kirk dodging an appeal to nature and the "what the hell" effect, to optimize for consequences instead of virtue.

Comment by Mestroyer on Email tone and status: !s, friendliness, 'please', etc. · 2014-05-04T03:27:11.220Z · LW · GW

My impression is that they don't, because I haven't seen people who do this as low status. But they've all been people who are clearly high status anyway, due to their professional positions.

This is a bad template for reasoning about status in general, because of countersignaling.

Comment by Mestroyer on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-29T04:50:41.352Z · LW · GW

Omniscience and omnipotence are nice and simple, but "morally perfect" is a phrase that hides a lot of complexity. Complexity comparable to that of a human mind.

I would allow ideal rational agents, as long as their utility functions were simple (Edit: by "allow" I mean they don't get the very strong prohibition that a human::morally_perfect agent does), and their relationship to the world was simple (omniscience and omnipotence are a simple relationship to the world). Our world does not appear to be optimized according to a utility function simpler than the equations of physics. And an ideal rational agent with no limitations to its capability is a little bit more complicated than its own utility function. So "just the laws of physics" wins over "agent enforcing the laws of physics." (Edit: in fact, now that I think of it this way, "universe which follows the rules of moral perfection by itself" wins over "universe which follows the rules of moral perfection because there is an ideal rational agent that makes it do so.")

Comment by Mestroyer on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-29T03:59:42.607Z · LW · GW

A first approximation to what I want to draw a distinction between is parts of a hypothesis that are correlated with the rest of the parts, and parts that aren't, so that adding them decreases the probability of the hypothesis more. In the extreme case, if a part of a hypothesis is logically deduced from the other parts, then it's perfectly correlated and doesn't decrease the probability at all.

When we look at a hypothesis (to simplify, assume that all the parts can be put into groups such that everything within a group has probability 1 conditioned on the other things in the group, and that all groups are independent), usually we're going to pick something from each group, say, "These are the fundamentals of my hypothesis; everything else is derived from them," and see what we can predict when we put them together. For example, Maxwell's equations are a nice group of things that aren't really implied by each other, and together you can make all kinds of interesting predictions with them. You don't want to penalize electromagnetism for complexity because of all the different forms of the equations you could derive from them, only for the number of equations there are, and how complicated they are.
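
To restate the grouping idea a bit more formally (a rough sketch, under the simplifying independence assumption above): if hypothesis $H$ splits into independent groups $G_1, \dots, G_k$, and within each group the derived parts $d_i$ follow from the fundamentals $f_i$ with $P(d_i \mid f_i) = 1$, then

$$P(H) = \prod_{i=1}^{k} P(G_i) = \prod_{i=1}^{k} P(f_i),$$

so the prior (complexity) penalty is paid only for the fundamentals of each group, not for everything derivable from them.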

The choice within the groups is arbitrary. But pick a thing from each group, and if this is a hypothesis about all reality, then those things are the fundamental nature of reality if your hypothesis is true. Picking a different thing from each group is just naming the fundamental nature of reality differently.

This of course needs tweaking I don't know how to do for the general case. But...

If your theory is something like, "There are many universes, most of them not fine-tuned for life. Perhaps most that are fine-tuned for life don't have intelligent life. We have these equations and whatever that predict that. They also predict that some of that intelligent life is going to run simulations, and that the simulated people are going to be much more numerous than the 'real' ones, so we're probably the simulated ones, which means there are mind(s) who constructed our 'universe'." And you've worked out that that's what the equations and whatever predict. Then those equations are the fundamental nature of reality, not the simulation overlords, because simulation overlords follow from the equations, and you don't have to pay a conjunction penalty for every feature of the simulation overlords. Just for every feature of the equations and whatever.

You are allowed to get away with simulation overlords even if you don't know the exact equations that predict them, and even if you haven't done all the work of making all the predictions with hardcore math, because there are a bunch of plausible explanations of how you could derive simulation overlords from something simple like that; they are allowed to have a causal history. They are allowed to not always have existed. So you can use the "lots of different universes, sometimes they give rise to intelligent life, selection effect on which ones we can observe" magic wand to get the experiences of beings in simulations from universes with simple rules.

But Abrahamic and deistic gods are eternal. They have always been minds. Which makes that kind of complexity-reducing correlation impossible (or greatly reduces its strength) for hypotheses with them.

That's what I was trying to get at. If that's not what ontologically basic means, well, I don't think I have any more reason to learn what it means than other philosophical terms I don't know.

Comment by Mestroyer on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-28T15:43:30.417Z · LW · GW

Perhaps I'm misusing the phrase "ontologically basic," I admit my sole source for what it means is Eliezer Yudkowsky's summary of Richard Carrier's definition of the supernatural, "ontologically basic mental things, mental entities that cannot be reduced to nonmental entities." Minds are complicated, and I think Occam's razor should be applied to the fundamental nature of reality directly. If a mind is part of the fundamental nature of reality, then it can't be a result of simpler things like human minds appear to be, and there is no lessening of the complexity penalty.

Comment by Mestroyer on Request for concrete AI takeover mechanisms · 2014-04-28T02:57:32.362Z · LW · GW

It seemed pretty obvious to me that MIRI thinks defenses cannot be made, whether or not such a list exists, and wants easier ways to convince people that defenses cannot be made. Thus the part that said: "We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated. "

Comment by Mestroyer on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-27T16:15:55.095Z · LW · GW

I think theism (not to be confused with deism, simulationism, or anything similar) is a position only a crazy person could defend because:

  1. God is an ontologically basic mental entity. Huge Occam penalty.

  2. The original texts of the theisms these philosophers probably adhere to require extreme garage-dragoning to avoid making a demonstrably false claim. What's left after the garage-dragoning is either deism or an agent with an extremely complicated utility function, with no plausible explanation for why this utility function is as it is.

  3. I've already listened to some of their arguments, and they've been word games that attempt to get information about reality out without putting any information in, or fake explanations that push existing mystery into an equal or greater amount of mystery in God's utility function. (Example: "Why is the universe fine-tuned for life? Because God wanted to create life, so he tuned it up." well, why is God fine-tuned to be the kind of god who would want to create life?) If they had evidence anywhere close to the amount that would be required to convince someone without a rigged prior, I would have heard it.

I don't have any respect for deism either. It still has the ontologically basic mental entity problem, but at least it avoids the garage-dragoning. I don't think simulationism is crazy, but I don't assign >0.5 probability to it.

I pay attention to theists when they are talking about things besides theism. But I have stopped paying attention to theists about theism.

I don't take the argument from expert opinion here seriously because:

A. We have a good explanation of why they would be wrong.

B. Philosophy is not a discipline that reliably tracks the truth, or converges to anything, really. See this. On topics that have been debated for centuries, many don't even have an answer that 50% of philosophers can agree on. In spite of this, and in spite of the base rate for atheism among the general population, 72.8% of the philosophers surveyed were atheists. If you just look at philosophy of religion, there's a huge selection effect, because a religious person is much more likely to think it's worth studying.

the alternative theories still have major problems, problems which theism avoids.

I bet if you list the problems, I can show you that theism doesn't avoid them.

Edit: formatting.

Comment by Mestroyer on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-26T21:07:15.228Z · LW · GW

Theistic philosophers raised as atheists? Hmm, here is a question you could ask:

"Remember your past self, 3 years before you became a theist. And think, not of the reasons for being a theist you know now, but the one that originally convinced you. What was the reason, and if you could travel back in time and describe that reason, would that past self agree that that was a good reason to become a theist?"

Comment by Mestroyer on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-26T13:37:59.385Z · LW · GW

Mestroyer keeps saying this is a personality flaw of mine

An imaginary anorexic says: "I don't eat 5 supersize McDonalds meals a day. My doctor keeps saying this is a personality flaw of mine."

I don't pay attention to theistic philosophers (at least not anymore, and I haven't for a while). There's seeking evidence and arguments that could change your mind, and then there's wasting your time on crazy people as some kind of ritual because that's the kind of thing you think rationalists are supposed to do.

Comment by Mestroyer on Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link] · 2014-04-23T19:13:05.385Z · LW · GW

If a few decades is enough to make an FAI, we could build one and either have it deal with the aliens, or have it upload everyone, put them in static storage, and send a few von Neumann probes, faster than it would be economical for the aliens to send theirs (if they are interested in maximum spread instead of maximum speed), to galaxies which will soon be outside the aliens' cosmological horizon.

Comment by Mestroyer on [Requesting advice] Problems with optimizing my life as a high school student · 2014-04-15T05:22:13.379Z · LW · GW

Can't answer any of the bolded questions, but...

When you did game programming, how much did you enjoy it? For me, it became something that was both productive (relatively, because it taught me general programming skills) and fun (enough that I could do it all day for several days straight, driven by excitement rather than willpower). If you are like me and the difference in fun is big enough, it will probably outweigh the benefit of doing programming exercises designed to teach you specific things. Having a decent-sized codebase that I wrote myself to refactor when I learned new programming things was useful. Also, for practice with everyday basic AI, you can work on AI for game enemies.

If you want to be an FAI researcher, you probably want to start working through this. You need advanced math skill, not just normal AI programming skill. There's also earning to give. I don't know which would be better in your case.

About programming, read all of these, and note what MIRI says about functional programming in their course list. Though the kind of functional programming they're talking about, without side effects, is more restrictive than everything Lisp can do. I expect that learning a language that will teach you a lot of things and let you abstract more stuff out, and then, if you need to, learning pure functional programming (no side effects) later, is best or near-best.

Comment by Mestroyer on The Ten Commandments of Rationality · 2014-04-08T21:28:07.649Z · LW · GW

The downside to not reading what I write is that when you write your own long reply, it's an argument against a misunderstood version of my position.

I am done with you. Hasta nunca.

Comment by Mestroyer on Meetup : Chicago: Seeing with Fresh Eyes Review · 2014-04-04T20:47:24.616Z · LW · GW

3 AM? Y'all are dedicated.

Comment by Mestroyer on Meetup : Urbana-Champaign: Rationality and Cooking · 2014-04-04T06:57:55.974Z · LW · GW

Two regular attendees. Two people who sometimes show up. One person who's new and whose attendance rate hasn't been well-established.

Comment by Mestroyer on The Ten Commandments of Rationality · 2014-04-02T23:04:48.761Z · LW · GW

You, personally, probably don't care about all sentient beings. You probably care about other things. It takes a very rare, very special person to truly care about "all sentient beings," and I know of 0 that exist.

I care about other things, yes, but I do care quite a bit about all sentient beings as well (though not really on the level of "something to protect," I'll admit). And I cared about them before I had even heard of Eliezer Yudkowsky. In fact, when I first encountered EY's writing, I figured he did not care about all sentient beings, that he in fact cared about all sapient beings and was misusing the word the way science fiction usually does, rather than holding some weird theory of what consciousness is that I haven't heard of anyone else respectable holding, that the majority of neuroscientists disagree with, and that, unlike tons of other contrarian positions he holds, he doesn't argue for publicly (I think there might have been one Facebook post with an argument about it that he made, but I can't find it now).

Something I neglected in the phrase "all sentient beings" is that I care less about "bad" sentient beings, or sentient beings who deliberately do bad things, than about "good" sentient beings. But even for that classic example of evil, Adolf Hitler, if he were alive, I'd rather that he be somehow reformed than killed.

I find it very convenient that most of Less Wrong has the same "thing-to-protect" as EY/SingInst, for the following reasons: Safe strong AI is something that can only be worked on by very few people, leaving most of LW free to do mostly what they were doing before they adopted that thing-to-protect.

I may not be able to do FAI research, but I can do what I'm actually doing, which is donating a significant fraction of my income to people who can. (slightly more than 10% of adjusted gross income last tax year, and I'm still a student, so as they say, "This isn't even my final form").

Taking the same thing-to-protect as the person they learned the concept from prevents them from having to think critically about their own wants, needs, and desires as they relate to their actual life. (This is deceptively hard -- most people do not know what they want, and are very willing to substitute nice-sounding things for what they actually want.)

What I've really taken from the person who taught me the concept of a thing-to-protect, is a means-to-protect. If I hadn't been convinced that FAI was a good plan for achieving my values, I would be pursuing lesser plans to achieve my values. I almost started earning to give to charities spreading vegetarianism/veganism instead of MIRI. And I have thought pretty hard about whether this is a good means-to-protect.

Also, though I may not be "thing-to-protect"-level altruistic yet, I'm working on it. I'm more altruistic than I was a few years ago.

This isn't even my final form.

...it seems obvious to me that most people on LW are brutally abusing the concept of having a thing-to-protect, and thus have no real test for their rationality, making the entire community an exercise in doing ever-more-elaborate performance forms rather than a sparring ground.

Examples?

Comment by Mestroyer on Rationality Quotes April 2014 · 2014-04-02T15:46:24.370Z · LW · GW

The problem with "Act on what you feel in your heart" is that it's too generalizable. It proves too much, because of course someone else might feel something different and some of those things might be horrible.

From the outside, it looks like there's all this undefined behavior, and demons coming out of the nose, because you aren't looking at the exact details of what's going on with the feelings that are choosing their beliefs. Though a C compiler given an undefined construct may cause your program to crash, it will never literally cause demons to come out of your nose, and you could figure this out if you looked at the implementation of the compiler. It's still deterministic.

As an atheistic meta-ethical anti-realist, my utility function is basically whatever I want it to be. It's entirely internal. From the outside, someone who has a system where they follow something external and clearly specified could shout "Nasal demons!", but demons will never come out of my nose, and my internal, ever so frighteningly non-negotiable desires are never going to include planned famines. It has reliable internal structure.

The mistake is looking at a particular kind of specification that defines all the behavior, and then looking at a system not covered by that specification, but which is controlled by another specification you haven't bothered to understand, and saying "Who can possibly say what that system will do?"

Some processors (even x86) have instructions (such as bit rotate) which are useful for significant performance boosts in stuff like cryptography, and yet aren't accessible from C or C++; to use them, you have to perform hacks like writing the machine code out as bytes, casting its address to a function pointer, and calling it. That's undefined behavior with respect to the C/C++ standard. But it's perfectly predictable if you know what platform you're on.
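
To make that concrete, here is a minimal sketch of the kind of hack being described, assuming x86-64 Linux and the System V calling convention; the particular bytes, the mmap-based setup, and the rotl wrapper are illustrative choices, not anything the C standard sanctions:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* mov eax, edi ; mov ecx, esi ; rol eax, cl ; ret
           i.e. rotate the first argument left by the second argument. */
        static const unsigned char code[] = {
            0x89, 0xF8,   /* mov eax, edi */
            0x89, 0xF1,   /* mov ecx, esi */
            0xD3, 0xC0,   /* rol eax, cl  */
            0xC3          /* ret          */
        };

        /* Data arrays aren't executable on modern systems, so copy the
           bytes into a page mapped with execute permission. */
        void *mem = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED)
            return 1;
        memcpy(mem, code, sizeof code);

        /* Casting an object pointer to a function pointer is itself outside
           ISO C, but POSIX and mainstream compilers define it. */
        uint32_t (*rotl)(uint32_t, uint32_t) =
            (uint32_t (*)(uint32_t, uint32_t))mem;

        printf("%08" PRIx32 "\n", rotl(0x80000001u, 1)); /* prints 00000003 */

        munmap(mem, sizeof code);
        return 0;
    }

None of this is covered by the standard, but on this one platform the result is as deterministic as any other machine code.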

The utility functions of other people (people who aren't meta-ethical anti-realists) are not really negotiable either. You can't really give them a valid argument that will convince them not to do something evil if they happen to be psychopaths. They just have internal desires and things they care about, and they care a lot more than I do about having a morality which sounds logical when argued for.

And if you actually examine what's going on with the feelings of people with feeling-driven epistemology that makes them believe things, instead of just shouting "Nasal demons! Unspecified behavior! Infinitely beyond the reach of understanding!" you will see that the non-psychopathic ones have mostly-deterministic internal structure to their feelings that prevents them from believing that they should murder Sharon Tate. And psychopaths won't be made ethical by reasoning with them anyway. I don't believe the 9/11 hijackers were psychopaths, but that's the holy book problem I mentioned, and a rare case.

In most cases of undefined C constructs, there isn't another carefully-tuned structure that's doing the job of the C standard in making the behavior something you want, so you crash. And faith-epistemology does behave like this (crashing, rather than running hacky cryptographic code that uses the rotate instructions) when it comes to generating beliefs that don't have obvious consequences to the user. So it would have been a fair criticism to say "You believe something because you believe it in your heart, and you've justified not signing your children up for cryonics because you believe in an afterlife," because (A) they actually do that, (B) it's a result of them having an epistemology which doesn't track the truth.

Disclaimer: I'm not signed up for cryonics, though if I had kids, they would be.

Comment by Mestroyer on [deleted post] 2014-04-02T03:01:41.166Z

Is Year 2000-era computing power your true estimate for a level of computing power that is significantly safer than what comes after?

Comment by Mestroyer on Rationality Quotes April 2014 · 2014-04-01T20:20:23.968Z · LW · GW

This quote seems like it's lumping every process for arriving at beliefs besides reason into one. "If you don't follow the process that I understand and that is guaranteed not to produce beliefs like that, then I can't guarantee you won't produce beliefs like that!" But there are many such processes besides reason that could be going on in their "hearts" to produce their beliefs. The fact that they are all opaque and non-negotiable and not this particular one you trust not to make people murder Sharon Tate does not mean that they all have the same probability of producing plane-flying-into-building beliefs.

Consider the following made-up quote: "when you say you believe something is acceptable for some reason other than the Bible said so, you have completely justified Stalin's planned famines. You have justified Pol Pot. If it's acceptable for you, why isn't it acceptable for them? Why are you different? If you say 'I believe that gays should not be stoned to death and the Bible doesn't support me but I believe it in my heart', then it's perfectly okay to believe in your heart that dissidents should be sent to be worked to death in Siberia. It's perfectly okay to believe, because your secular morality says so, that all the intellectuals in your country need to be killed."

I would respond to it: "Stop lumping all moralities into two classes, your morality and all others. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually condone gulags."

And likewise I respond to Penn Jillette's quote: "Stop lumping all epistemologies into two classes, yours and the one where people draw beliefs from their 'hearts'. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually result in beliefs that drive people to fly planes into buildings."

The wishful-thinking, new-age, "all-powerful force of love" faith epistemology is actually pretty safe in terms of not driving people to violence who wouldn't already be inclined to it. A belief like that wouldn't make them feel good. Of course, faith plus ancient texts which condone violence can be more dangerous, though as we know empirically, for some reason, people driven to violence by their religions are rare these days, even in religions like that.

Comment by Mestroyer on The Ten Commandments of Rationality · 2014-03-31T07:06:30.004Z · LW · GW

To have a thing to protect is rare indeed. (Aside: If your thing-to-protect is the same as a notable celebrity, or as the person you learned the concept from, it is not your thing-to-protect.)

Really? What if the thing you protect is "all sentient beings," and that happens to be the same as the thing the person who introduced it to you or a celebrity protects? There are some pretty big common choices (Edited to remove inflationary language) of what a human would want to protect.

Beware value hipsterism.

Or, if by "thing to protect", you really mean "means to protect", and you're warning against having the same plan to protect the thing as a celebrity or person who introduced the idea to you, this sounds like "Celebrities and people who introduce people to the idea of means to protect things are never correct and telling the truth about the best available means to protect", which is obviously false.

Comment by Mestroyer on The Ten Commandments of Rationality · 2014-03-30T16:56:33.278Z · LW · GW

Thou shalt never engage in solipsism or defeatism, nor wallow in ennui or existential angst, or in any other way declare that thy efforts are pointless and that exerting thyself is entirely without merit. For just as it is true that matters may never get to the point where they cannot possibly get any worse, so is it true that no situation is impossible to improve upon. Verily, the most blessed of silver linings is the fact that the inherent incertitude of one’s own beliefs also implies that there is never cause for complete hopelessness and despair.

Absolute-certainty/universal applicability red flag raised.

Silver-lining claim red flag raised.

And by far, most importantly: map-territory conflation red flag raised.

Some possible situations truly can't be improved upon. The fact that you must always be uncertain about whether you are really in one is no help. Just a guarantee that in such a situation a rationalist will always have a little bit of false hope.

Upvoted anyway, most of these are good.

Comment by Mestroyer on Intelligence-disadvantage · 2014-03-16T11:03:43.506Z · LW · GW

Overthinking issues that are really very simple

Counter-signalling as a smart-person mistake

Valuing intelligence above all other qualities

Rigidly adhering to rules -- compare the two endings of "Three Worlds Collide" and the decision by which they diverge.

Expecting other people to always be rational

Got nothing for the last two. I don't think the last one is a mistake that very many people at all make. (I think being right about things has surprising benefits well past the point that most people can see it having benefits).

Other posts covering smart-person mistakes that spring to mind: http://lesswrong.com/lw/dxr/epiphany_addiction/ http://lesswrong.com/lw/j8/the_crackpot_offer/

And a lot of the general mistakes that LessWrong warns against are just person mistakes, rather than smart person or normal person mistakes. [edit: grammar]

Comment by Mestroyer on Rationality Quotes March 2014 · 2014-03-13T22:29:52.472Z · LW · GW

Context: Aang ("A") is a classic Batman's Rule (never kill) hero, as a result of his upbringing in Air Nomad culture. It appears to him that he must kill someone in order to save the world. He is the only one who can do it, because he's currently the one and only avatar. Yangchen ("Y") is the last avatar to have also been an Air Nomad, and has probably faced similar dilemmas in the past. Aang can communicate with her spirit, but she's dead and can't do things directly anymore.

The story would have been better if Aang had listened to her advice, in my opinion.

Comment by Mestroyer on Epilogue: Atonement (8/8) · 2014-03-13T03:08:34.854Z · LW · GW

And anyhow, why didn't they forcibly sedate every human until after the change? Then if they decided it wasn't worthwhile they could choose to die then.

It wouldn't be their own value system making the decision. It would be the modified version after the change.

Unrelatedly, you like Eliezer Yudkowsky's writing, huh? You should read HPMOR.

Comment by Mestroyer on Channel factors · 2014-03-12T18:15:43.773Z · LW · GW

Something that's helped me floss consistently: (a) getting a plastic holder thing, not the little kind where it's still extremely difficult to reach your back teeth, but a reusable one with a long handle that you wrap floss onto, and (b) keeping it next to my computer, within arm's reach.

Comment by Mestroyer on Open Thread for February 18-24 2014 · 2014-03-07T05:50:15.574Z · LW · GW

If you are told a billion dollars hasn't been taxed from people in a city, how many people getting to keep a thousand dollars (say) do you imagine? Probably not a million of them. How many hours not worked, or small things that they buy do you imagine? Probably not any.

But now that I think about it, I'd rather have an extra thousand dollars than be able to drink at a particular drinking fountain.

But I don't think fairness, the morality center, is necessarily fairness over differing amounts of harm. It could be fairness over differences in social status. You could have an inflated sense of fairness, so that you cared much more than the underlying difference in what people get would warrant.

Comment by Mestroyer on Open Thread for February 18-24 2014 · 2014-03-07T04:54:21.625Z · LW · GW

You're familiar with the idea of anthropomorphization, right? Well, by analogy to that, I would call what you did here "rationalistomorphization," a word I wish was added to LessWrong jargon.

This reaction needs only scope insensitivity to explain it; you don't need to invoke purity. Though I actually agree with you that liberals have a disgust moral center.

Comment by Mestroyer on Open Thread February 25 - March 3 · 2014-03-01T02:11:55.596Z · LW · GW

What is the best textbook on datamining? I solemnly swear that upon learning, I intend to use my powers for good.

Comment by Mestroyer on "Smarter than us" is out! · 2014-02-26T21:27:48.476Z · LW · GW

This sounds like bad instrumental rationality. If your current option is "don't publish it in paperback at all," and you are presented with an option you would be willing to take (publishing at a certain quality) if that quality were the best available, then the fact that there may be better options you haven't explored should never return your "best choice to make" to "don't publish it in paperback at all." Your only viable candidates should be "publish using a suboptimal option" and "do verified research about what the best option is, and then do that."

As they say, "The perfect is the enemy of the good."

Comment by Mestroyer on LINK: In favor of niceness, community, and civilisation · 2014-02-25T19:07:39.843Z · LW · GW

Downvoted for the fake utility function.

"I wont let the world be destroyed because then rationality can't influence the future" is an attempt to avoid weighing your love of rationality against anything else.

Think about it. Is it really that rationality isn't in control any more that bugs you, not everyone dying, or the astronomical number of worthwhile lives that will never be lived?

If humanity dies to a paperclip maximizer, which goes on to spread copies of itself through the universe to oversee paperclip production, each of those copies being rational beyond what any human can achieve, is that okay with you?

Comment by Mestroyer on LINK: In favor of niceness, community, and civilisation · 2014-02-25T02:53:33.849Z · LW · GW

Whether or not the lawful-goods of the world like Yvain are right, they are common. There are tons of people who want to side with good causes, but who are repulsed by the dark side even when it is used in favor of those causes. Maybe they aren't playing to win, but you don't play to win by saying you hate them for following their lawful code.

For many people, the lawful code of "I'm siding with the truth" comes before the good code of "I'm going to press whatever issue." When these people see a movement playing dirty, advocating arguments as soldiers, deciding whether to argue against an argument based on whether it's for your side rather than whether it's a good argument, and getting mad at people for pointing out bad arguments from your own side, they begin to suspect that your side is not the "Side of Truth." So you lose potential recruits. And the real Sith lords, not the ones who are trying to use the dark side for good, will have much less trouble hijacking your movement once the lawful-goods, their annoying code, and the social standards they impose are gone.

Leaving aside the honor among foes idea, and the "what if you're really the villain" idea, if your cause is really just, then although the lawful-goods are less effective than you, their existence is good for you. Not everything they do is good, but on balance they are a positive influence. You're not going to convince them to attempt to be dark side users for good like you are attempting to be, so stop giving them reasons to dislike you.

Even if you can convince them, the lawful-evils who think they are lawful-goods are listening to your arguments. Most people think they are good. It is hard to tell when you're not good. So the idea that only truly good people are bound by the lawful code is crazy. Lots of lawful evil is an unintentional corruption of lawful good, and this corruption doesn't unilaterally affect your goodness and your lawfulness. They could tell (or at least convince themselves) they weren't really good, if they didn't follow the lawful code, because they think like lawful good people in that respect. The lawful evil people who see you, and know you are opposed to them on the good/evil axis, think they see evil people saying "Forget this honor among enemies thing. We have no honor. Watch me put on this 'I am defectbot' shirt". And that is a much stronger argument to abandon the lawful code of rational argument and become the much more dangerous chaotic evil than what the lawful-goods hear, which is their chaotic good allies telling them to defect.

But in real modern human politics, it's more complicated, because although there is one lawful/chaotic axis, there are many good/evil axes, because there are many separate issues that people can get right or wrong. Arthur Chu thinks that the issue of overriding importance is social justice. So he demands that we drop all cooperation with people who are evil on that axis. He says we aren't playing to win. I can think of 3 issues (2 of them are actually broad categories of issues) that I am confident are more important than social justice, and which are easier to improve than the problems social justice wants to counter. In order of decreasing importance: existential risk, near-term animal suffering (including factory farming and wild animals), and mortality/aging.

In real life, you don't demand that your allies be on the same end of every good/evil axis as you. That is not playing to win. A better strategy (and the one Chu is employing) is to pick the most important axis and try to form a coalition based on that axis. Chu accuses LW of not playing to win; well, I'm just not playing to win along the social justice axis at the cost of everything else. I think different axes are more important.

And there's also the fact that for some causes, "lawful" people (people who play by the rules of rational discourse) are much better to have as allies. If we use bad statistics and dark arts to convince the masses to fund FAI research, they may as well fund Novamente as MIRI. Not all causes can benefit from irrational masses. Something like MIRI can't afford to even take one step down the path to the dark side. When you want to convince academics and experts of your cause, they will smell the dark arts on you and conclude you are a cult. And with the people you will attract by using dark arts, your organization will soon become one. The kind of people who you absolutely need to do object-level work for you are the kind of people who will never join you if you use the dark arts.

If you take a pluralistic "which axes are important" approach instead of the one that Chu takes, then there is a lot to be said for lawfulness, because it tends to promote goodness*, a little. And when you get a bunch of lawful-goods and lawful-evils together and nudge them all a little toward good through rational discussion (on different axes), that is pretty valuable, because almost everyone is evil on at least one axis. And such a community needs a policy like "we ask that you be lawful [follow standards of rational discourse], not that you be good [have gotten object-level questions of policy right]," because it is the only defensible Schelling point.

*If you haven't caught on to how I'm using the "law vs chaos" and "good vs evil" axes here by now, this may sound like moral realism, but what I mean by "law" is upholding Yvain-style standards of discourse. What I mean by "good" is not just being moral, but being moral and right, given that morality, about questions of ethics.

Comment by Mestroyer on How to teach to magical thinkers? · 2014-02-24T18:37:41.345Z · LW · GW

Science is tailored to counteract human cognitive biases. Aliens might or might not have the same biases. AIs wouldn't need science.

For example, science says you make the hypothesis, then you run the test. You're supposed to make a prediction, not explain why something happened in retrospect. This is to prevent hindsight bias and rationalization from changing what we think is a consequence of our hypotheses. But the One True Way does not throw out evidence because humans are too weak to use it.

Comment by Mestroyer on White Lies · 2014-02-22T20:40:02.264Z · LW · GW

But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can satisfy force us to pick one? I bet you can imagine tradeoffs between bald people being happy and people with hair being happy, but that doesn't mean you should change your value from "happiness" to one of the two. Which way you choose in each situation depends on how many bald people there are and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn't answer my last question. Why would most people meta-reflectively endorse this method of reflection?