Comments

Comment by derefr on Dangers of steelmanning / principle of charity · 2014-01-17T17:58:53.777Z

In such cases, it more often than not seems to me that the arguer has arrived at their conclusion through intuition, and is now working backward to defensible arguments -- arguments that would not have convinced the arguer themselves, had they not first had the intuition.

Comment by derefr on New Monthly Thread: Bragging · 2013-10-22T02:35:29.473Z

Indeed, even knowing that in general I'm not a very jealous person, I was surprised at my own reaction to this thread: I upvoted a far greater proportion of the comments here than I usually do. I guess I'm more compersive than I thought!

Comment by derefr on To what degree do you model people as agents? · 2013-08-30T08:06:13.127Z

There's a specific failure-mode related to this that I'm sure a lot of LW has encountered: for some reason, most people lose 10 "agency points" around their computers. This chart could basically be summarized as "just try being an agent for a minute, sheesh."

I wonder if there's something about the way people initially encounter computers that biases them against trying to apply their natural level of agency? Maybe, to coin an isomorphism, an "NPC death spiral"? It doesn't quite seem to be learned helplessness, since they still know the problem can be solved, and work toward solving it; they just think solving the problem absolutely requires delegating it to a Real Agent.

Comment by derefr on To what degree do you model people as agents? · 2013-08-30T07:50:27.953Z

A continuum is still a somewhat-unclear metric for agency, since it suggests agency is a static property.

I'd suggest modelling a sentience as a colony of basic Agents, each striving toward a particular utility-function primitive. (Pop psychology sometimes calls these "drives" or "instincts.") These basic Agents sometimes work together, like people do, toward common goals; or override one-another for competing goals.

Agency, then, is a bit like magnetism--it's a property that arises from your Agent-colony when you've got them all pointing the same way; when "enough of you" wants some particular outcome that there's no confusion about what else you could/should be doing instead. In effect, it allows your collection of basic Agents to be abstracted as a single large Agent with its own clear (though necessarily more complex) goals.
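
To make this concrete, here's a minimal sketch of the colony model in Python. The Drive class and the agency() measure are my own illustrative inventions, not an established formalism:

    from dataclasses import dataclass

    @dataclass
    class Drive:
        """One basic Agent: a utility-function primitive with a preferred outcome."""
        name: str
        preferred_outcome: str
        strength: float  # how loudly this drive votes

    def agency(colony: list[Drive]) -> float:
        """Agency as "magnetism": the fraction of total drive-strength behind
        the single most-supported outcome. 1.0 means the whole colony points
        the same way and can be abstracted as one large Agent."""
        totals: dict[str, float] = {}
        for drive in colony:
            totals[drive.preferred_outcome] = totals.get(drive.preferred_outcome, 0.0) + drive.strength
        return max(totals.values()) / sum(totals.values()) if totals else 0.0

    colony = [
        Drive("hunger", "go eat lunch", 0.9),
        Drive("status", "finish the project", 0.7),
        Drive("curiosity", "finish the project", 0.8),
    ]
    print(agency(colony))  # 0.625 -- a middling, conflicted level of agency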

Comment by derefr on To what degree do you model people as agents? · 2013-08-30T07:39:16.870Z

This seems to suggest that modelling people (who may be agents) as non-agents has only positive consequences. I would point out one negative consequence, which I'm sure anyone who has watched some schlock sci-fi is familiar with: you will only believe someone when they tell you you are caught in a time-loop if you already model them as an agent. Substitute anything else sufficiently mind-blowing and urgent, of course.

Since only PCs can save the world (nobody else bothers trying, after all), nobody will believe you are currently carrying the world on your shoulders if they think you're an NPC. This seems dangerous somehow.

Comment by derefr on To what degree do you model people as agents? · 2013-08-30T07:23:52.145Z

I note that this suggests that an AI that was as smart as an average human, but also as agenty as an average human, would still seem like a rather dumb computer program (it might be able to solve your problems, but it would suffer akrasia just like you would in doing so.) The cyberpunk ideal of the mobile exoself AI-agent, Getting Things Done for you without supervision, would actually require something far beyond an average-human equivalent to be considered "competent" at its job.

Comment by derefr on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-07-01T02:11:35.441Z

Not wanting to give anything away, I would remind you that what we have seen of Harry so far in the story was intended to resemble the persona of an 18-year-old Eliezer. Whatever Harry has done so far that you would consider to be "Beyond The Impossible", take measure of Eliezer's own life before and after a particular critical event. I would suggest that everything Harry has wrought until this moment has been the work of a child with no greater goal--and that, whatever supporting beams of the setting you feel are currently impervious to being knocked down, well, they haven't yet had a motivated rationalist give them even a moment of attention.

I mean, it's not like Harry can't extract a perfect copy of Hermione's material information-theoretic mass (both body and mind) using some combination of a fully-dissected time-turner, a pensieve containing complete braindumps of everyone else she's ever interacted with, a computer cluster manipulating the Mirror of Erised into flipping through alternate timelines to explore Hermione's reactions to various hypotheticals, and various other devices strewn about the HP continuum. He might end up with a new baby Hermione (who has Hermione's utility function and memories) whom he has to raise into being Hermione again, but just because something doesn't instantly restore her doesn't mean it isn't worth doing. Or he might end up with a "real" copy of Hermione running in his head, which he'll then allow to manifest as a parallel-alter, using illusion charms along with the same mental hardware he uses for occlumency.

In fact, he could have probably done either of those things before, completely lacking in the motivation he has now. With it? I have no idea what will happen. A narrative Singularity-event, one might say.

Comment by derefr on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T12:53:07.941Z

Would you want to give the reader closure for the arc of a character who is, as the protagonist states, going to be coming back to life?

Personally, this reminds me more than anything of Crono's death in Chrono Trigger. Nobody mourns him--mourning is something to do when you don't have control over space and time and the absolute resolve to harness that control. And so the audience, also, doesn't get a break to stop and think about the death. They just hurl themselves, and their avatar, face-first into solving it.

Comment by derefr on Meditation, insight, and rationality. (Part 2 of 3) · 2011-05-07T09:39:50.793Z

Why not? Sure, you might start to recurse and distract yourself if you try to picture the process as a series of iterative steps, just as you would when building any other kind of infinite data structure—but that's what declarative data-structure definitions were made for. :)

Instead of actually trying to construct each new label as you experience it, simply picture the sum total of your current attention as a digraph. Then, when you experience something, you add a label to the graph (pointing to the "real" experience, which isn't as easily visualized as the label—I picture objects in a scripting language's object space holding references to raw C structs here.) When you label the label itself, you simply attach a new label ('labelling') which points to the previous label, but also points to itself (a reflexive edge.) This would be such a regular occurrence in the graph that it would be easier to just visualize such label nodes as being definitionally attached to root labels, and thus able to be left out of any mental diagram, the same way hydrogen is left out of diagrams of organic molecules.
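
A minimal sketch of that digraph in Python, with hypothetical Experience/Label classes of my own invention; the reflexive "labelling" edge is the part that stops the regress:

    class Experience:
        """Stands in for the "raw C struct": the hard-to-visualize referent."""
        def __init__(self, description):
            self.description = description

    class Label:
        def __init__(self, text, target):
            self.text = text      # the visualizable handle
            self.target = target  # edge to an Experience, or to another Label
            self.meta = None

    def attach_label(target, text):
        node = Label(text, target)
        # The "labelling" meta-label points at the label it annotates, and
        # reflexively at itself, so the regress terminates by definition
        # rather than by iteration.
        meta = Label("labelling", node)
        meta.meta = meta  # reflexive edge
        node.meta = meta
        return node

    itch = attach_label(Experience("an itch on my left arm"), "itching")
    print(itch.text, "->", itch.target.description)  # itching -> an itch on my left arm
    print(itch.meta.meta is itch.meta)               # True: the label-of-labels points at itself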

Actually, that brings up an interesting point—is the labelling process suggested here inherently subvocally-auditory? Can we visualize icons representing our experiences rather than subvocalizing words representing them, or does switching from Linear to Gestalt change the effect this practice has on executive function?

Comment by derefr on On Being Okay with the Truth · 2011-05-02T04:32:26.979Z

In the sociological "let's all decide what norms to enforce" sense, sure, a lack of "morality" won't kill anyone. But in the more speculative-fictional "let's all decide how to self-modify our utility functions" sense, throwing away our actual morality—the set of things we do or do not cringe about doing—in ourselves, or in our descendants, is a very real possibility, and (to some people) a horrible idea to be fought with all one's might.

What I find unexpected about this is that libertarians (the free-will kind) tend to think in the second sense by default, because they assume that their free will gives them absolute control over their utility function, so if they manage to argue away their morality, then, by gum, they'll stop cringing! It seems you first have to guide people into realizing that they can't just consciously change what they instinctively cringe about, before they'll accept any argument about what they should be consciously scorning.

Comment by derefr on Harry Potter and the Methods of Rationality discussion thread, part 7 · 2011-01-29T07:19:21.607Z

Er, yes, edited.

Comment by derefr on Harry Potter and the Methods of Rationality discussion thread, part 7 · 2011-01-27T22:45:48.495Z

He's quite prepared in a Hero's Journey sense, though. In Harry's own mind, he has lost his mentor. Thus, he is now free to be a mentor. And what better way to grow, as a Hero and über-rationalist, than to teach others to do what you do?

Of course, Harry would say that he's already doing that with Draco—but in the same way that he usually holds back his near-mode instrumental-rationalist dark side, he's holding back the kind of insights that Draco would need to think the way Harry thinks; Harry is training Draco to be a scientist, but not an instrumental rationalist, and therefore, in the context of the story, not a Hero. (To put it another way: Draco will never one-box. He's a virtue-ethicist who is more concerned with "rationality" as just another virtue than with winning per se.)

Mentoring Hermione would be an entirely different matter: he would basically have to instill a dark side into her. Quirrel taught Harry how to lose—Harry would have to teach Hermione how to win.

If Eliezer has planned MoR as a five-act heroic fantasy, it will probably go like this: in the usual five-act form, acts 4 and 5 mirror the Hero's character development from acts 2 and 3 in another character, for the purposes of re-examining the (developed, and now mostly stagnant) Hero's growth and revealing by juxtaposition what using that particular character as Hero brought to the journey.

It seems more likely to be a three-act form at this point, though, with Azkaban as the central, act 2 ordeal. That's not to say the story is more than half-over already, though; Harry has just found his motivation for acting instead of reacting (to change the magical world such that Azkaban is no longer a part of it.)

Comment by derefr on Were atoms real? · 2010-12-15T12:53:13.453Z

"this kind of question-dissolving is not the standard, evolution-provided brain pathway."

Hawkins would agree.

Comment by derefr on Were atoms real? · 2010-12-15T12:32:42.925Z

Whatever substrate supports the computation inscribing your consciousness would necessarily be real, in whatever sense the word "real" could possibly have useful meaning. ("I think; thinking is an algorithm; therefore something is, in order to execute that algorithm.")

Interestingly, proposing a Tegmark multiverse makes the deepest substrate of consciousness "mathematics."

Comment by derefr on A sense of logic · 2010-12-14T13:36:46.764Z

We're built to play games. Until we hit the formal operational stage (at puberty), we basically have a bunch of individual, contextual constraint solvers operating mostly independently in our minds, one for each "game" we understand how to play—these can be real games, or things like status interactions or hunting. Basically, each one is a separately-trained decision-theoretical agent.

The formal operational psychological stage signals a shift where these agents become unified under a single, more general constraint-solving mechanism. We begin to see the meta-rules that apply across all games: things like mathematical laws, logical principles, etc. This generalized solver is expensive to build, and expensive to run (minds are almost never inside it if they can help it, rather staying inside the constraint-solving modes relevant to particular games), but rewards use, as anyone here can attest.
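
As a toy sketch of that architecture in Python (every name here is an illustrative invention of mine): cheap per-game solvers are tried first, and the expensive general solver is only a fallback:

    class GameSolver:
        """A contextual constraint solver, separately trained for one game."""
        def __init__(self, game, heuristics):
            self.game = game
            self.heuristics = heuristics  # situation -> cached move

        def solve(self, situation):
            return self.heuristics.get(situation)  # cheap lookup, no general reasoning

    class GeneralSolver:
        """The formal-operational layer: meta-rules that hold across all games."""
        def __init__(self, meta_rules):
            self.meta_rules = meta_rules  # callables: situation -> move or None

        def solve(self, situation):
            for rule in self.meta_rules:  # expensive search over meta-rules
                move = rule(situation)
                if move is not None:
                    return move
            return None

    def mind(specialists, general, situation):
        # Stay inside the cheap, game-specific modes whenever possible...
        for solver in specialists:
            move = solver.solve(situation)
            if move is not None:
                return move
        # ...and fall back to the costly general solver only when forced to.
        return general.solve(situation)

    chess = GameSolver("chess", {"opening": "play e4"})
    logic = GeneralSolver([lambda s: "enumerate possibilities" if s == "novel puzzle" else None])
    print(mind([chess], logic, "opening"))       # play e4 (cached, cheap)
    print(mind([chess], logic, "novel puzzle"))  # enumerate possibilities (expensive)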

When we are operating using this general solver, and we process an assertion that would suggest that we must restructure the general solver itself, we react in two ways:

Initially, we dread the idea. This is a shade of the same feeling you'd get if your significant other said, very much out of the blue and in very much the sort of tone associated with such things, "we need to talk." Your brain is negatively reinforcing, all at once, all the pathways that led you here, as far back as it can trace the causal chain. Your mind reels, thinking "oh crap, I should have studied [1 day ago], I shouldn't have gone out partying [1 week ago], I should have asked friends to form a study group [at the beginning of the semester], I never should have come to this school in the first place... why did I choose this damn major?"

Second, we alienate ourselves from the source of the assertion. We don't want to restructure; not only is it expensive, but our general solver was created as a product of the purified intersection of all experiments that led to success in all played games. That is to say, it is, without exception, the set of the most well-trusted algorithms and highly-useful abstractions in your brain. It's basically read-only. So, like an animal lashing out when something tries to touch its wounds, our minds lash out to stop the assertion from pressing too hard against something that would be both expensive and fruitless to re-evaluate. We turn down the level of identification/trust we have with whoever or whatever made the assertion, until they no longer need to be taken seriously. Serious breaches can cause us to think of the speaker as having a completely alien mental process—this is what some people say of the experience of speaking with sociopathic serial killers, for example.

Of course, the mind can only implement the second "barrier" step when the assertion is associated with something that can vary in trust, like a person or a TV program. If it comes directly as evidence from the environment, only the first reaction remains, intensifying as you internalize the idea that you may just have to sit down and throw out your mind.

Comment by derefr on If reductionism is the hammer, what nails are out there? · 2010-12-14T13:00:45.717Z

I would say that it is not that we want essences in our sexuality, but that gender and sexuality are essentialist by nature: the sexual drive is built on top of the parts of our brains that essentialize/abstract/encapsulate, and so reducing the concept would involve modifying the human utility function to desire the parts, rather than the pretended whole.

Or, to put it another way: a heterosexual blegg is not 50% attracted to something with 50% blegg features and 50% rube features; it is attracted only to pure rubes, and the closer something is to being a rube, without exactly being a rube, the less attractive it is. This is basically the Uncanny Valley at work: some of our drives want discrete gestalts, and the harder they have to work to construct them, the less favorably they'll evaluate the things they're constructing them from.
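
As a toy illustration (the curve itself is invented, not derived from anything): attraction as a function of "rube-ness" that rewards only the completed gestalt, with a valley just short of the category boundary:

    def attraction(rube_fraction):
        """rube_fraction: 0.0 = pure blegg, 1.0 = pure rube."""
        if rube_fraction >= 1.0:
            return 1.0  # a clean, discrete gestalt: maximally attractive
        # The closer something gets to being a rube without quite being one,
        # the harder the drive works to construct the gestalt, and the worse
        # the candidate scores: an uncanny valley near the boundary.
        return max(0.0, rube_fraction - 2.0 * rube_fraction ** 4)

    for f in (0.0, 0.25, 0.5, 0.9, 0.99, 1.0):
        print(f, round(attraction(f), 3))
    # 0.5 scores 0.375; 0.9 and 0.99 score 0.0; only 1.0 scores 1.0.
    # Attraction is nothing like a linear blend of features.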

Comment by derefr on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-01T21:14:46.051Z

It's pretty common, though. You wanted the other people reading to think of you as clever, and considered that to be "worth" making the author feel a bit bad. This is what the proxy-value of karma, as implemented by the Reddit-codebase discussion engine of this site, reflects: the author can only downvote once (and even then they are discouraged from doing so, unlike with, say, a Whuffie system), but the audience can upvote numerous times.
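
In sketch form (the numbers are invented): with one vote per reader, a quip that amuses many readers at one author's expense still nets positive karma, so the incentive gradient points toward playing to the gallery:

    def net_karma(amused_readers, slighted_authors=1):
        # Each amused reader can upvote once; the author can downvote once.
        return amused_readers - slighted_authors

    print(net_karma(amused_readers=12))  # 11: "defecting" against the author still pays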

Thinking back, I've had many discussions on the Internet that devolved into arguments where, although my interlocutor was trying to convince me of something, I had given up on convincing them of anything in particular, and was instead trying to convince any third parties reading the post that the other person was not to be trusted and that their advice was dangerous—at the expense of making myself seem even less trustworthy to the person I was nominally supposed to be convincing. This is what public fora do.

Comment by derefr on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-01T21:05:57.598Z

The errors of others, or the errors of those of superior social ranking? Do Korean teachers refrain from correcting students?

Comment by derefr on Memetic Hazards in Videogames · 2010-09-11T08:42:53.245Z

This is an example of Conservation of Detail, which is just another way to say that the contrapositive of your statement is true: if you don't need to take something in a game, then the designer won't have bothered to make it take-able (or even to include it.)

I always assume that there's all sorts of stuff lying around in an RPG house that you can't see, because your viewpoint character doesn't bother to take notice of it. It might just be because it's irrelevant, but it might also be for ethical reasons: your viewpoint character only "reports" things to you that his system of belief allows him to act upon.
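
A toy illustration of Conservation of Detail, with invented names: only quest-relevant items become take-able objects; everything else stays flavor text, or is omitted entirely:

    NEEDED_FOR_QUEST = {"rusty key", "healing herb"}

    class Room:
        def __init__(self, described_contents):
            # Only items the player will need become take-able objects.
            self.takeable = [x for x in described_contents if x in NEEDED_FOR_QUEST]

    kitchen = Room(["rusty key", "dirty dishes", "family photos", "healing herb"])
    print(kitchen.takeable)  # ['rusty key', 'healing herb']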

Comment by derefr on Hacking the CEV for Fun and Profit · 2010-06-08T02:22:01.237Z

This seems to track with Eliezer's fictional "conspiracies of knowledge": if we don't want our politicians to get their hands on our nuclear weapons (or the theory of their operation), why should they be allowed a say in what our FAI thinks?

Comment by derefr on It's all in your head-land · 2009-07-23T06:25:09.034Z

In software development, this is known as being "Agile." Originally, software was designed mostly in head-land (a "Big Design Up Front"), but gradually a different process was pushed wherein a smaller, prototype design would first be constructed, then evaluated for its effects in real-land, and then improved upon, repeatedly. I find it interesting that unlike in the world of sports, where "one step at a time" can be almost universally agreed upon, software development is rife with controversy over whether "Agile software development methods" have any real advantages.

Comment by derefr on Closet survey #1 · 2009-05-12T04:12:43.513Z

Here's a theory as to why: the experience may indeed be painful in the psychosocial context of our present society, but perhaps only in that context, or more specifically, because of that context.

That is, we have ideas of shame—that certain things are, or are not, shameful—that are culturally based, and when we do things that offend our (learned) sense of shame, we feel, and remember, the associated negative emotions, without necessarily remembering their cause. We associate the negative emotions with the circumstance, instead of the long-gone prior that caused us to feel ashamed in such circumstances. In some religions, you can feel ashamed for working on the Sabbath; in our society, you feel ashamed for having sex when society says you aren't "ready" to. (I admit that that's a bit of a stretched analogy.)

The more common reply to your argument, though, is that the children are reassigning a negative emotional weight to their memory of the experiences, after the fact, because the therapist/parent/whomever expects the experience to be negative. They don't have to prompt for this verbally; they may be using completely neutral language, or simply asking "what happened?" Either way, their body language will show their emotional reaction to every word (and if a horse can appear to do math by reading our body language, we're obviously not very good at concealing it.)

To demonstrate my meaning: if one of my friends punched me in the arm, I'd interpret that as playful at the time. If a stranger did it, I'd interpret it as hurtful. I literally feel more pain in the latter case, because of this expectation. Now, if, some time later that day, my friend insulted my race, or some other category to which I belong, in a way that implied he just wasn't my friend any more, I'd re-think that punch. I'd remember it hurting more.

Child abuse recountings are extreme versions of this. If you demonize the adult in the child's mind, everything they do is going to take on a negative connotation. They're going to start looking for the negative angle: a hug was really a rough squeeze; a tousle of the hair was really a hair-pulling, and so on. In this light, of course sex was a bad experience—it's extremely physical, with all sorts of pleasurable/painful connotations which can be switched around or played with to no end (for example, BDSM is simply a shared agreement on a set of altered connotations.)