Comments

Comment by Fleisch on Humans are not automatically strategic · 2011-12-12T09:58:15.251Z · LW · GW

As an aside: The interesting thing to remember about Lysistrata is that it was originally intended as humor, since the idea that women could withhold sex, and withhold it better than men could, was considered hilarious at the time. Not because they weren't allowed to, but because women were thought of as the hornier sex back then.

Comment by Fleisch on The curse of identity · 2011-11-20T22:51:36.117Z · LW · GW

I think that defocussing a bit and taking the outside view for a second might be clarifying, so let's not talk about what exactly it is that people do.

Kaj Sotala says that he has identified a major source of problems, with example problems (a)–(f), all very real ones, like charities failing and people being unable to work from home. Then you come along and say "there is no problem here," that everything boils down to us just using the wrong definition of motivation (or something). But what about the charities that can't find anyone to do their mucky jobs? What about the people who could offer great service and/or reduce their working hours by working from home, if only they could get themselves to do it? How does your argument solve these problems?

The reason I reacted to your post was not that I saw the exact flaw in your argument. I answered because I saw that your argument doesn't solve the problem at hand; in fact, it fails to even recognize it in the first place.

I think that you are probably overvaluing criticism. If so, you can increase the usefulness of your thoughts significantly by stopping yourself from paying much attention to flaws: try to identify the heart of the material first, and only apply criticism afterwards, and even then only if it's worth it.

Comment by Fleisch on The curse of identity · 2011-11-18T21:03:56.633Z · LW · GW

tl;dr: Signalling is extremely important to you. Doing away with your ability to signal will leave you helplessly desperate to get it back.

I think that this is a point made not nearly often enough in rationalist circles: Signalling is important to humans, and you are not exempt just because you know that.

Comment by Fleisch on The curse of identity · 2011-11-18T20:52:49.703Z · LW · GW

I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point.

Then you aren't talking about the same thing as Kaj Sotala. He talks about all the cases where it seems to you that you are deeply motivated, but the goal turns out to be, or gets turned into, nothing beyond strategic self-deception. Your point may be valid, but it is about something other than what his post is about.

Comment by Fleisch on The curse of identity · 2011-11-17T22:41:00.088Z · LW · GW

This is either a very obvious rationalization, or you don't understand Kaj Sotala's point, or both.

The problem Kaj Sotala described is that people hold lots of goals, important ones too, simply as a strategic feature, and are not deeply motivated to do something about them. This means that most of us who came together here because we think the world could really be better will in all likelihood not achieve much, because we're not deeply motivated to do something about the big problems. Do you really think there's no problem here? That would mean you don't really care about the big problems.

Comment by Fleisch on Existential Risk · 2011-11-15T16:11:23.794Z · LW · GW

There aren't enough nuclear weapons to destroy the world, not by a long shot. There aren't even enough nuclear weapons to constitute an existential risk in and of themselves, though they might still contribute strongly to the end of humanity.

EDIT: I reconsidered, and yes, there is a chance that a nuclear war and its aftereffects would permanently cripple humanity's potential (perhaps through extinction), which makes it an existential risk. The point I want to make, which Pfft made more clearly in a child comment, is that this is still something very different from what Luke's choice of words suggests.

How many people would die is of course somewhat speculative, but I think that if the war itself killed 10%, that would be a lot. More links on the subject: "The Effects of a Global Thermonuclear War"; "Nuclear Warfare 101, 102 and 103".

Comment by Fleisch on Memory, Spaced Repetition and Life · 2011-06-12T02:33:10.987Z · LW · GW

I think that you shouldn't keep false formulas, so as not to accidentally learn them. In general, this sounds like you could hit on memetically strong corruptions that could contaminate your knowledge.

Comment by Fleisch on Teachable Rationality Skills · 2011-06-01T10:02:17.652Z · LW · GW

(For some reason negotiation in situations of extreme power imbalance seems like it should have a different name, and I don't know what that should be.)

"Dominance" or "authority" spring to mind. In this video, Steven Pinker argues that there are three basic relationship types: authority, reciprocity, and communality. Negotiation under extreme power imbalance sounds like it uses the social rules for authority rather than those for reciprocity.

Comment by Fleisch on Self-programming through spaced repetition · 2011-05-31T10:37:50.925Z · LW · GW

Thank you for the link!

I think it would fit well into the introduction. You (or rather Luke Grecki) could just split the "spacing effect" link into two.

Comment by Fleisch on Self-programming through spaced repetition · 2011-05-25T10:12:52.543Z · LW · GW

This seems to be a useful technique, thanks for introducing it.

I have a bit of criticism concerning the article: it needs more introduction. Specifically, I would guess I'm not the only one who doesn't know what SR is in the first place; a few sentences of explanation would surely help.

Comment by Fleisch on Singularity FAQ · 2011-05-03T14:59:59.916Z · LW · GW

Thank you so much for writing this!

Comment by Fleisch on Rationality quotes: October 2010 · 2010-10-13T19:58:07.398Z · LW · GW

Probably not, but you wouldn't (need to) quote what he wrote here.

EDIT: Or rather, what he has written since he's been here, unless it's still novel to LW.

Comment by Fleisch on The Strangest Thing An AI Could Tell You · 2010-10-08T12:09:35.879Z · LW · GW

Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's only morally wrong to imagine someone suffering, but to be safe, you shouldn't do it at all. It follows that reading fiction (with conflict in it) is the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.

Comment by Fleisch on What Intelligence Tests Miss: The psychology of rational thought · 2010-07-12T19:58:49.433Z · LW · GW

I think he meant to write "invent".