Posts

[META] 'Rational' vs 'Optimized' 2012-01-04T18:58:49.233Z
Link: Monetizing anti-akrasia mechanisms 2011-01-27T17:02:07.172Z

Comments

Comment by TheOtherDave on Open thread, May. 1 - May. 7, 2017 · 2017-05-01T20:04:39.284Z · LW · GW

Back when this was a big part of my professional life, my reply was "everything takes a month."

Comment by TheOtherDave on Stupid Questions May 2017 · 2017-04-27T01:42:37.515Z · LW · GW

Habit. It helps to get enough sleep.

Comment by TheOtherDave on Illusion of Transparency: Why No One Understands You · 2017-04-18T14:48:39.733Z · LW · GW

Don't worry, it will have been available in 2017 one of these days.

Comment by TheOtherDave on Timeless Identity · 2016-11-18T22:22:01.063Z · LW · GW

So, on one level, my response to this is similar to the one I gave (a few years ago) [http://lesswrong.com/lw/qx/timeless_identity/9trc]... I agree that there's a personal relationship with BtVS, just like there's a personal relationship with my husband, that we'd want to preserve if we wanted to perfectly preserve me.

I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and there's a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of your heads (and the heads of thousands of other viewers), replacing them with pointers to a common library representation of the show, and then having your personal relationship refer to the common library representation rather than your private copy.

The personal relationship remains local and private, but it takes up way less space than your mind currently does.
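In code-ish terms, what I was proposing is just deduplication: keep one canonical copy of the show, and let each person's relationship be a small private annotation plus a pointer to it. A toy sketch (all names and sizes here are invented for illustration):

    # Toy sketch of the compression idea: one shared canonical copy of the
    # show, with each viewer keeping only a pointer to it plus a small private
    # annotation, instead of a full private copy in every head.

    import sys

    canonical_btvs = "..." * 1_000_000  # stand-in for the full shared representation

    class ViewerRelationship:
        def __init__(self, show_ref, personal_notes):
            self.show = show_ref          # pointer to the common library copy
            self.notes = personal_notes   # the small, genuinely personal part

    dave = ViewerRelationship(canonical_btvs, "loved the musical episode; watched it twice")
    you = ViewerRelationship(canonical_btvs, "thought season six dragged")

    # Both relationships point at one shared copy; only the notes differ.
    assert dave.show is you.show
    print(sys.getsizeof(canonical_btvs))  # storage cost paid once, for everyone
    print(sys.getsizeof(dave.notes))      # per-viewer cost: tiny by comparison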

That said... coming back to this conversation after three years, I'm finding I just care less and less about preserving whatever sense of self depends on these sorts of idiosyncratic judgments.

I mean, when you try to recall a BtVS episode, your memory is imperfect... if you watch it again, you'll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS -- no distortion of your current facts about the goodness of it, except insofar as those facts turn out to be incompatible with an actual perception (e.g., you'd have changed your mind if you watched it again on TV, too) -- would you take it?

I would. I mean, ultimately, what does it matter if I replace my current vague memory of the soap opera Spike was obsessively watching with a more specific memory of its name and whatever else we learned about it? Yes, that vague memory is part of my unique identity, I guess, in that nobody else has quite exactly that vague memory... but so what? That's not enough to make it worth preserving.

And for all I know, maybe you agree with me... maybe you don't want to preserve your private "facts" about what kind of tie Giles was wearing when Angel tortured him, etc., but you draw the line at losing your private "facts" about how good the show was. Which is fine, you care about what you care about.

But if you told me right now that I'm actually an upload with reconstructed memories, and that there was a glitch such that my current "facts" about BtVS being a good show for its time are mis-reconstructed, and Dave before he died thought it was mediocre... well, so what?

I mean, before my stroke, I really disliked peppers. After my stroke, peppers tasted pretty good. This was startling, but it posed no sort of challenge to my sense of self.

Apparently (Me + likes peppers) ~= (Me + dislikes peppers) as far as I'm concerned.

I suspect there are a million other things like that.

Comment by TheOtherDave on The Hidden Complexity of Wishes · 2016-11-09T17:24:53.054Z · LW · GW

"So long as your preferences are coherent, stable, and self-consistent then you should be fine."

Yes, absolutely.

And yes, the fact that my preferences are not coherent, stable, and self-consistent is probably the sort of thing I was concerned about... though it was years ago.

Comment by TheOtherDave on Politics is the Mind-Killer · 2016-11-01T15:35:06.052Z · LW · GW

"You mean that it didn't happen here or in the global society?"

I mean that it's unlikely that "the site [would] end up with a similar 'rational' political consensus if political discussion went through".

"Discussions about religion seem to me to be equally unproductive in general."

In the global society? I agree.

"I can imagine that if the site endorsed a political ideology its readers might become biased toward it (even if just by selection of readers)."

Sure, that's possible.

"But there is a possibility that that happened with the religion issue."

Sure, that's possible.

Also, let me cut to the chase a little bit, here.

The subtext I'm picking up from our exchange is that you object to the site's endorsement of atheism, but are reluctant to challenge it overtly for fear of social sanction (downvotes, critical comments, etc.). So instead of challenging it, you are raising the overt topic of the site's unwillingness to endorse a specific political ideology, and taking opportunities as they arise to implicitly establish equivalences between religion and politics, with the intention of implicitly arguing that the site's willingness to endorse a specific religious ideology (atheism) is inconsistent.

Have I correctly understood your subtext?

Comment by TheOtherDave on Politics is the Mind-Killer · 2016-11-01T15:25:48.873Z · LW · GW

Yup, agreed with all of this. (Well, I do think we have had discussions about which political ideology is correct, but I agree that we shy away from them and endorse political discussions about issues.)

Comment by TheOtherDave on What's the most annoying part of your life/job? · 2016-10-31T01:57:08.305Z · LW · GW

"Aren't people on LessWrong quite good at solving their own problems?"

Nah, not necessarily. Merely interested in better ways of doing so. (Among other things.)

Comment by TheOtherDave on Politics is the Mind-Killer · 2016-10-31T01:46:34.739Z · LW · GW

Yeah, there's a communally endorsed position on which religion(s) is/are correct ("none of them are correct"), but there is no similar communally endorsed position on which political ideology(ies) is/are correct.

There's also no similar communally endorsed position on which brand of car is best, but there's no ban on discussion of cars, because in our experience discussions of car brands, unlike discussions of political ideologies, tend to stay relatively civil and productive.

"What do you think? Would the site end up with a similar 'rational' political consensus if political discussion went through?"

I find it extremely unlikely. It certainly hasn't in the past.

Comment by TheOtherDave on What's the most annoying part of your life/job? · 2016-10-27T00:53:30.799Z · LW · GW

This comment taken out of context kind of delighted me.

Comment by TheOtherDave on Open thread, October 2011 · 2016-10-13T04:05:12.443Z · LW · GW

"When you see the word 'morals' used without further clarification, do you take it to mean something different from 'values' or 'terminal goals'?"

Depends on context.

When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
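If it helps, here's that "sort key" idea as literal code (a toy sketch; the worlds and the key function are invented):

    # Minimal sketch of "moral principles as sort keys": each possible world
    # gets a score from a key function, and "morally better" just means
    # "sorts earlier under that key." Worlds and the key are made up.

    worlds = [
        {"name": "w1", "flourishing": 3, "suffering": 9},
        {"name": "w2", "flourishing": 7, "suffering": 2},
        {"name": "w3", "flourishing": 5, "suffering": 5},
    ]

    def moral_key(world):
        # "X is morally superior to Y" = all else equal, prefer worlds with more X.
        return world["flourishing"] - world["suffering"]

    # Preference order over possible worlds, best first.
    ranked = sorted(worlds, key=moral_key, reverse=True)
    print([w["name"] for w in ranked])  # ['w2', 'w3', 'w1']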

I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.

I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.

Comment by TheOtherDave on The Least Convenient Possible World · 2016-07-27T16:58:51.165Z · LW · GW

For my part, it's difficult for me to imagine a set of observations I could make that would provide sufficient evidence to justify belief in many of the kinds of statements that get tossed around in these sorts of discussions. I generally just assume Omega adjusts my priors directly.

Comment by TheOtherDave on Welcome to Less Wrong! (8th thread, July 2015) · 2016-06-05T06:11:12.499Z · LW · GW

The current open thread is here:
http://lesswrong.com/r/discussion/lw/nns/open_thread_may_30_june_5_2016/

A new one will be started soon.

Comment by TheOtherDave on The AI in Mary's room · 2016-05-30T03:48:20.955Z · LW · GW

"Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?"

There are three possibilities worth disambiguating here.
1) Mary predicts that she will do X given some assumed set S1 of knowledge, memories, experiences, etc., AND S1 includes Mary's knowledge of this prediction.
2) Mary predicts that she will do X given some assumed set S2 of knowledge, memories, experiences, etc., AND S2 does not include Mary's knowledge of this prediction.
3) Mary predicts that she will do X independent of her knowledge, memories, experiences, etc.
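The paradox in your question only bites in case (1), where a consistent prediction has to be a fixed point of Mary's decision process, because the prediction itself is part of the knowledge it's conditioned on. A toy model (the decision rules here are invented) shows why a "do the opposite of whatever I predict" Mary has no consistent case-1 prediction at all:

    # Toy model of case (1) vs case (2): a case-1 prediction must be a fixed
    # point of Mary's decision process, since the prediction is included in
    # the knowledge the prediction is conditioned on.

    ACTIONS = ["x", "not_x"]

    def contrarian_mary(knowledge, predicted):
        # Upon deducing the prediction, decides to do the opposite.
        return "not_x" if predicted == "x" else "x"

    def compliant_mary(knowledge, predicted):
        # Ignores the prediction entirely; this is effectively case (2).
        return "x"

    def case1_predictions(decide, knowledge):
        # A prediction p is consistent only if Mary, knowing p, still does p.
        return [p for p in ACTIONS if decide(knowledge, p) == p]

    print(case1_predictions(contrarian_mary, set()))  # [] -- no consistent prediction
    print(case1_predictions(compliant_mary, set()))   # ['x']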

Comment by TheOtherDave on The Strangest Thing An AI Could Tell You · 2016-05-25T03:18:34.725Z · LW · GW

Along some dimensions I consider salient, at least. PM me for spoilers if you want them. (It's not a bad book, but not worth reading just for this if you wouldn't otherwise.)

Comment by TheOtherDave on The Strangest Thing An AI Could Tell You · 2016-05-23T16:25:21.610Z · LW · GW

Have you ever read John Brunner's "Stand on Zanzibar"? A conversation not unlike this is a key plot point.

Comment by TheOtherDave on Open Thread May 16 - May 22, 2016 · 2016-05-23T04:01:58.494Z · LW · GW

I'm not exactly sure what you mean by "as random."

It may well be that there are discernible patterns in a sequence of manually simulated coin-flips that would allow us to distinguish such sequences from actual coinflips. The most plausible hypothetical examples I can come up with would result in a non-1:1 ratio... e.g., humans having a bias in favor of heads or tails.

Or, if each person is laying a coin down next to the previous coin, such that they are able to see the pattern thus far, we might find any number of pattern-level biases... e.g., if told to simulate randomness, humans might be less than 50% likely to select heads if they see a series of heads-up coins, whereas if not told to do so, they might be more than 50% likely.

It's kind of an interesting question, actually. I know there's been some work on detecting faked test scores by looking for artificial-pattern markers in the distribution of numbers, but I don't know if anyone's done equivalent things for coinflips.
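For instance, one cheap detector (a toy example of my own, not a reference to any actual study) exploits the tendency of people to over-alternate when asked to produce random sequences:

    import random

    def alternation_rate(flips):
        # Fraction of adjacent pairs that differ; expected ~0.50 for a fair coin.
        switches = sum(a != b for a, b in zip(flips, flips[1:]))
        return switches / (len(flips) - 1)

    fair = [random.choice("HT") for _ in range(1000)]
    print(f"fair coin: {alternation_rate(fair):.2f}")      # hovers around 0.50

    # A sequence that over-alternates the way humans tend to when "acting random":
    humanish = list("HTHTHHTHTTHTHTHHTHTH" * 50)
    print(f"human-ish: {alternation_rate(humanish):.2f}")  # well above 0.50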

Comment by TheOtherDave on Rationality Quotes May 2016 · 2016-05-21T21:59:57.695Z · LW · GW

"simply the university" => "simplify the universe"?

Comment by TheOtherDave on Open Thread May 2 - May 8, 2016 · 2016-05-03T16:44:51.591Z · LW · GW

Hm. Let me try to restate that to make sure I follow you.

Consider three categories of environments: (Er) real environments, (Esa) simulated environments that closely resemble Er, aka "ancestral simulations", and (Esw) simulated environments that don't closely resemble Er, aka "weird simulations."

The question is, is my current environment E in Er or not?

Bostrom's argument as I understand it is that if post-human civilizations exist and create many Esa-type environments, then for most E, (E in Esa) and not (E in Er). Therefore, given that premise I should assume (E in Esa).
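(As a toy illustration of that counting step, with completely arbitrary numbers:)

    # Toy version of the counting step: if even one real post-human civilization
    # runs many ancestral simulations, then almost every environment that looks
    # like Er is actually in Esa.

    real_environments = 1           # Er
    ancestral_sims = 10**6          # Esa created by that one civilization

    p_real = real_environments / (real_environments + ancestral_sims)
    print(f"P(E in Er)  ~ {p_real:.7f}")      # ~ 0.0000010
    print(f"P(E in Esa) ~ {1 - p_real:.7f}")  # ~ 0.9999990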

Your counterargument as I understand it is that if (E in Esw) then I can draw no sensible conclusions about Er or Esa, because the logic I use might not apply to those domains, so given that premise I should assume nothing.

Have I understood you?

Comment by TheOtherDave on Is Spirituality Irrational? · 2016-02-26T21:48:29.582Z · LW · GW

I don't think it is a sidetrack, actually... at least, not if we charitably assume your initial comment is on-point.

Let me break this down in order to be a little clearer here.

Lumifer asserted that omniscience and free will are incompatible, and you replied that as the author of a story you have the ability to state that a character will in the future make a free choice. "The same thing would apply," you wrote, "to a situation where you are created free by an omnipotent being."

I understand you to mean that just like the author of a story can state that (fictional) Peter has free will and simultaneously know Peter's future actions, an omniscient being can know that (actual) Peter has free will and simultaneously know Peter's future actions.

Now, consider the proposition A: the author of a story can state that incompatible things occur simultaneously.

If A is true, then the fact that the author can state these things has nothing to do with whether free will and omniscience are incompatible... the author can make those statements whether free will and omniscience are incompatible or not. Consequently, that the author can make those statements does not provide any evidence, one way or the other, as to whether free will and omniscience are incompatible.

In other words, if A is true, then your response to Lumifer has nothing whatsoever to do with Lumifer's claim, and your response to Lumifer is entirely beside Lumifer's point. Whereas if A is false, your response may be on-point, and we should charitably assume that it is.

So I'm asking you: is A true? That is: can an author simultaneously assert incompatible things in a story? I asked it in the form of concrete examples because I thought that would be clearer, but the abstract question works just as well.

Your response was to dismiss the question as a sidetrack, but I hope I have now clarified sufficiently what it is I'm trying to clarify.

Comment by TheOtherDave on Is Spirituality Irrational? · 2016-02-26T06:00:08.165Z · LW · GW

"As the author of a story, I have the power to write in the preface, before the story is written at all, 'Peter has free will and in chapter 4, he will freely choose to go left.' It would be ridiculous to say that Peter isn't free, and that I am wrong about my story. He is free in the story, just as he has certain other characteristics in the story."

So, just to clarify my understanding of your claim here... if I write in my story "Peter goes left and simultaneously stands still," is it similarly ridiculous to say that I'm wrong about my story? Are we therefore similarly committed to the idea that I have created a (fictional) being who both moves and stands still?

If I write in my story "Peter correctly adds 2 and 2 to get 5", does the same kind of reasoning apply?

Are these things all, on your view, similarly analogous to the relationship of God to beings in the real world?

Comment by TheOtherDave on LessWrong 2.0 · 2015-12-30T18:26:16.660Z · LW · GW

That said, if we can define the characteristics of some standard queries we would like exposed (for example, "Top Upvoters, 30 Days" and "Top Downvoters, 30 Days" as Vaniver mentioned) Trike might be willing to expose those queries to LW admins.

Or they might not. The way to find out is to ask, but we should only bother asking if we actually want them to do so. So discussing it internally in advance of testing those limits seems sensible.

Comment by TheOtherDave on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-22T05:22:12.716Z · LW · GW

Cool. Thanks for publishing this.

Out of curiosity, does any of CFAR's "competition" (other personal-effectiveness, productivity, growth, etc. workshops and similar things) publish any similar sort of post-workshop followup, and what sorts of tools they use/results they get if so?

Comment by TheOtherDave on Non-communicable Evidence · 2015-11-29T01:27:33.549Z · LW · GW

So, you pick an example with no emotional valence. But let's suppose instead that I have reason to believe that I'm perfectly safe, but find myself believing that someone is going to kill me in my sleep. This would not stop me from telling people I'm perfectly safe, or from giving the reasons that show I'm perfectly safe, or from accepting a similar $100 bet. It might, however, prevent me from getting a good night's sleep.

Is that not a thing that matters about the belief that I'm safe?

Comment by TheOtherDave on Disability Culture Meets the Transhumanist Condition · 2015-09-16T17:48:27.904Z · LW · GW

"I'm not sure what I wrote that gave you this idea."

(nods) Months later, neither am I. Perhaps I'd remember if I reread the exchange, but I'm not doing so right now.

Regardless, I appreciate the correction.

And much like Vaniver below (above? earlier!), I am unsure how to translate these sorts of claims into anything testable.

Also I'm wary of the tendency to reason as follows: "I don't value being deaf. Therefore deafness is not valuable. Therefore when people claim to value being deaf, they are confused and mistaken. Here, let me list various reasons why they might be confused and mistaken."

I mean, don't get me wrong: I share this intuition. I just don't trust it. I can't think of anything a deaf person could possibly say to me that would convince me otherwise, even if I were wrong.

Similarly, if someone were to say " I believe that being queer is not ego-syntonic. I know people say it is, but I believe that's because they're confused and mistaken, for various reasons: x, y, z" I can't think of anything I could possibly say to them to convince them otherwise. (Nor is this a hypothetical case: many people do in fact say this.)

Comment by TheOtherDave on Initiation Ceremony · 2015-08-23T03:18:29.639Z · LW · GW

Well, right, that's essentially the question I was asking the author of the piece.

This comment sure does seem to suggest that no, requesting more time and equipment is a failure... but no, I don't know one way or the other, which is why I asked.

Comment by TheOtherDave on [Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim · 2015-08-23T03:13:51.629Z · LW · GW

Hm.

So, I want to point out explicitly that in your example of ancestry, I intuitively know enough about this concept of mine to know my sister isn't my ancestor, but I don't know enough to know why not. (This isn't an objection; I just want to state it explicitly so we don't lose sight of it.)

And, OK, I do grant the legitimacy of starting with an intuitive concept and talking around it in the hopes of extracting from my own mind a clearer explicit understanding of that concept. And I'm fine with the idea of labeling that concept from the beginning of the process, just so I can be clear about when I'm referring to it, and don't confuse myself.

So, OK. I stand corrected here; there are contexts in which I'm OK with using a label even if I don't quite know what I mean by it.

That said... I'm not quite so sanguine about labeling it with words that have a rich history in my language when I'm not entirely sure that the thing(s) the word has historically referred to is in fact the concept in my head.

That is, if I've coined the word "ancestor" to refer to this fuzzy concept, and I say some things about "ancestry," and then someone comes along and says "this is the brute fact from which the conundrum of ancestry starts," as in your example, my reaction ought to be startlement... why is this guy talking so confidently about a term I just coined?

But of course, I didn't just coin the word "ancestor." It's a perfectly common English word. So... why have I chosen that pre-existing word as a label for my fuzzy concept? At the very least, it seems I'm risking importing by reference a host of connotations that exist for that word without carefully considering whether I actually intend to mean them.

And I guess I'd ask you the same question about "conscious." Given that there's this concept you don't know much about explicitly, but feel you know things about implicitly, and about which you're trying to make your implicit knowledge explicit... how confident are you that this concept corresponds to the common English word "consciousness" (as opposed to, for example, the common English words "mind", or "soul", or "point of view," or "self-image," or "self," or not corresponding especially well to any common English word, perhaps because the history of our language around this concept is irreversibly corrupted)?

Comment by TheOtherDave on [Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim · 2015-08-21T20:02:25.289Z · LW · GW

To know what I'm referring to by a term is to know what properties something in the world would need to have to be a referent for that term.

The ability to recognize such things in the world is beside the point. When I say "my ancestors," I know what I mean, but in most cases it's impossible to pick that attribute out empirically -- I can't pick out most of my ancestors now, because they no longer exist to be picked out, and nobody could have picked them out back when they were alive, because the defining characteristic of the category is in terms of something that hadn't yet been born. (Unless you want to posit atypical time-travel, of course, but that's not my point.)

So, sure, if by "flying saucer" I refer to an alien spaceship, I don't necessarily have any way of knowing whether something I'm observing is a flying saucer or not, but I know what I mean when I claim that it is or isn't.

And if by "consciousness" I refer to anything sufficiently similar to what I experience when I consider my own mind, then I can't tell whether a rock is conscious, but I know what I mean when I claim it is or isn't.

Rereading pangel's comment, I note that I initially understood "we don't actually know what those concepts refer to" to mean we don't have the latter thing... that we don't know what we mean to express when we claim that the concept refers to something... but it can also be interpreted as saying we don't know what things in the world the concept correctly refers to (as with your example of being wrong about believing something is an alien spaceship).

I'll stand by my original statement in the original context I made it in, but sure, I also agree that just because we don't currently know what things in the world are or aren't conscious (or flying saucers, or accurate blueprints for anti-gravity devices, or ancestors of my great-great-grandchild, or whatever) doesn't mean we can't talk sensibly about the category. (Doesn't mean we can, either.)

And, yes, the fact that I don't know how subjective experience comes to be doesn't prevent me from recognizing subjective experience.

As for urgency... I dunno. I suspect we'll collectively go on inferring that things have a consciousness similar to our own with a confidence proportional to how similar their external behavior is to our own for quite a long time past the development of (human) brains in vats. But sure, I can easily imagine various legal prohibitions like you describe along the way.

Comment by TheOtherDave on [Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim · 2015-08-20T22:41:04.535Z · LW · GW

If I don't know what I'm referring to when I say "consciousness," it seems reasonable to conclude that I ought not use the term.

Comment by TheOtherDave on Social class amongst the intellectually gifted · 2015-06-03T15:49:23.544Z · LW · GW

Agreed that statistics > anecdotes. That said, the list here leaves me wondering about the direction of causation. I'm less interested in current-net-worth than average annual-change-in-net-worth (both % and $) in the years since graduation.

Comment by TheOtherDave on Open Thread, May 25 - May 31, 2015 · 2015-05-25T05:31:11.771Z · LW · GW

[http://lesswrong.com/r/discussion/lw/m8o/open_thread_may_25_may_31_2015/cehm]

Comment by TheOtherDave on Open Thread, May 25 - May 31, 2015 · 2015-05-25T05:29:05.203Z · LW · GW

No formal studies to share.

I know a lot of poly folk in N-way relationships who seem reasonably happy about it and would likely be less happy in monogamous relationships; I know a lot of monogamous folks in 2-way relationships who seem reasonably happy about it and would likely be less happy in polygamous relationships; I know a fair number of folks in 2-way relationships who would likely be happier in polygamous relationships; I know a larger number of folks who have tried polygamous relationships and decided it wasn't for them. Mostly my conclusion from all this is that different people have different preferences.

As to whether those differing preferences are the result of genetic variation, gestational differences, early childhood experiences, lifestyle choices made as adolescents and adults, something else entirely, or some combination thereof... I haven't a clue.

I don't see where it matters much for practical purposes.

I mean, I recognize that there's a social convention that we're not permitted to condemn people for characteristics they were "born with," but that mostly seems irrelevant to me, since I see no reason to condemn people for preferring poly relationships regardless of whether they were born that way, acquired it as learned behavior, or (as seems likeliest) some combination.

Comment by TheOtherDave on Disputing Definitions · 2015-05-03T20:55:41.729Z · LW · GW

http://en.wikipedia.org/wiki/Evidentiality

Comment by TheOtherDave on Open Thread, Apr. 27 - May 3, 2015 · 2015-04-29T19:45:17.212Z · LW · GW

As you say, some on the left will be applying social (and economic) pressure, just as everyone else does when they're able to. And there's a fairly well-established rhetorical convention in my culture whereby any consistently applied social pressure is labelled "force," "bullying," "discrimination," "lynching," "intolerance," and whatever other words can get the desired rhetorical effect.

We can get into a whole thing about what those words actually mean, but in my experience basically nobody cares. They are phatic expressions, not technical ones.

Leaving the terminology aside... I expect the refusal to perform gay weddings to become socially acceptable to fewer and fewer people, and socially condemnable to more and more people. And I agree with skeptical_lurker that this process, whatever we call it, will cause some resentment among the people who are aligned with such refusal. (Far more significantly, I expect it to catalyze existing resentment.)

Those of us who endorse that social change would probably do best to accept that this is one of the consequences of that change, and plan accordingly.

Comment by TheOtherDave on Ethical Inhibitions · 2015-04-25T19:30:55.745Z · LW · GW

I suspect that humans have evolved a better sense of the likelihood of being caught, many times. The thing is, one of the things such a sense is useful for is improving our ability to cheat with impunity. Which creates more selection pressure to get better at catching cheaters, which reduces our ability to reliably estimate the likelihood of being caught.

Comment by TheOtherDave on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-23T13:21:42.122Z · LW · GW

Right. Your reading is entirely sensible, and more likely in "the real world" (by which I mean something not-well-thought-through about how it's easier to implement the original description as a selection effect); I merely chose to bypass that reading and go with what I suspected (perhaps incorrectly) the OP actually had in mind.

Comment by TheOtherDave on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-22T19:55:20.407Z · LW · GW

Leaving aside lexical questions about the connotations of the word "oracle", I certainly agree that if the entity's accuracy represents a selection effect, then my reasoning doesn't hold.

Indeed, I at least intended to say as much explicitly ("I don't want to fight the hypothetical here, so I'm assuming that the "overall gist" of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer.") in my comment.

That said, it's entirely possible that I misread what the point of DanielLC's hypothetical was.

Comment by TheOtherDave on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-22T17:05:13.257Z · LW · GW

"The oracle is simply saying that there are two possible futures"

I think you mean "that there are only two possible futures."

Which leaves me puzzled as to your point.

If I am confident that there are only two possible futures, one where I pay and live, and one where I don't pay and die, how is that different from being confident that paying causes me to live, or from being confident that not-paying causes me to die? Those just seem like three different ways of describing the same situation to me.

Comment by TheOtherDave on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-22T17:00:24.773Z · LW · GW

So, as in most such problems, there's an important difference between the epistemological question ("should I pay, given what I know?") and the more fundamental question ("should I pay, supposing this description is accurate?"). Between expected value and actual value, in other words.

It's easy to get those confused, and my intuitions about one muddy my thinking about the other, so I like to think about them separately.

WRT the epistemological question, that's hard to answer without a lot of information about how likely I consider accurate oracular ability, how confident I am that the examples of accurate prediction I'm aware of are a representative sample, etc. etc. etc., all of which I think is both uncontroversial and uninteresting. Vaguely approximating all of that stuff I conclude that I shouldn't pay the oracle, because I'm not justified in being more confident that the situation really is as the oracle describes it, than that the oracle is misrepresenting the situation in some important way. My expected value of this deal in the real world is negative.
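To make that vague approximation slightly more concrete, here's the shape of the calculation (every number below is invented):

    # Rough shape of the expected-value comparison; all numbers are invented.

    dollars_per_qaly = 50_000    # assumed willingness-to-pay per QALY
    qalys_if_genuine = 10        # assumed gain if the oracle is what it claims
    cost = 1_000                 # the oracle's asking price

    def expected_value(p_genuine):
        # Ignores downside beyond the $1K (e.g., rewarding a scam), so this
        # if anything overstates the case for paying.
        return p_genuine * qalys_if_genuine * dollars_per_qaly - cost

    for p in (0.5, 0.01, 0.000001):
        print(f"P(genuine) = {p}: EV = ${expected_value(p):,.0f}")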

WRT the fundamental question... of course, you leave a lot of details unspecified, but I don't want to fight the hypothetical here, so I'm assuming that the "overall gist" of your description applies: I'm paying $1K for QALYs I would not have had access to without the oracle's offer. That's a good deal for me; I'm inclined to take it. (Though I might try to negotiate the price down.)

The knock-on effect is that I encourage the oracle to keep making this offer... but that's good too; I want the oracle to keep making the offer. QALYs for everyone!

So, yes, I should pay the oracle, though I should also implement decision procedures that will lead me to not pay the oracle.

Comment by TheOtherDave on LessWrong experience on Alcohol · 2015-04-21T03:57:38.570Z · LW · GW

Sure. Mostly I'm not in high school anymore, and my social circle is people I choose to be around, which makes things very different.

Comment by TheOtherDave on Hedonium's semantic problem · 2015-04-18T03:10:41.797Z · LW · GW

(nods) Yes, agreed with all of this.

And it is admittedly kind of funny that I can say "Superman is from Krypton, not from Vulcan!" and be understood as talking about a fictional character in a body of myth, but if I say "Superman really exists" nobody understands me the same way (though in the Superman mythos, Superman both really exists and is from Krypton). A parsing model that got that quirk right without special-case handling would really be on to something.

Comment by TheOtherDave on LessWrong experience on Alcohol · 2015-04-17T19:31:27.726Z · LW · GW

I don't drink, and don't much like the taste of alcohol in other things; I tend to avoid it.

When I drank, I didn't much like the taste of alcohol; my goal was partly to numb myself, and partly to fit in socially.

There are some liquors that kind of taste OK despite the alcohol in them, and I suspect I would really enjoy a non-alcoholic beverage in the same family concocted with the same attention to detail, but by and large my culture doesn't devote that much attention to non-alcoholic beverages.

Ditto for food, though a lot there depends on the preparation; sometimes I don't taste the alcohol in food. That said, I know many people who swear by the indispensable flavor whatevers in beer, wine, etc. for cooking, and I've never been able to distinguish them from the flavors of (e.g.) good fruit juices.

Comment by TheOtherDave on 16 types of useful predictions · 2015-04-17T19:21:06.777Z · LW · GW

Hm.
Are there any contexts in which you do have reliable insight into your own mood?

Comment by TheOtherDave on Open Thread, Apr. 13 - Apr. 19, 2015 · 2015-04-17T17:24:59.478Z · LW · GW

OK. My apologies. As you were.

Comment by TheOtherDave on Open Thread, Apr. 13 - Apr. 19, 2015 · 2015-04-17T16:36:23.328Z · LW · GW

"if you truly cared about her as 'an end in itself' then it wouldn't matter what she did."

This simply isn't true. I can value X "as an end in itself" and still give up X, if I value other things as well and the situation changes so that I can get more of the other things I value. Something being intrinsically motivating doesn't mean it's the only motivating thing.

"This non-transactional model of relationships implies that it's a mere coincidence that couples happen to have each others' happiness as their arational 'end in itself.'"

If you mean logically implies, this also simply isn't true.

It might instead, for example, be a result of being in a relationship... perhaps once I become part of a couple (for whatever reasons), my value system alters so that I value my partner's happiness as an arational "end in itself." It might instead be a cause of being in a relationship... I only engage in a relationship with someone after I come to value their happiness in this way. There might be a noncoincidental common cause whereby I both form relationships with, and come to value in this way, the same people.

More generally... I tend to agree with your conclusion that most real-world relationships are transactional in the sense you mean here, but I think you're being very sloppy with your arguments for it.

You may want to take a breath and rethink how much of what you're saying you actually believe, and how much you're simply saying in order to win an argument.

Comment by TheOtherDave on Rationality Quotes Thread April 2015 · 2015-04-17T14:39:07.145Z · LW · GW

Interesting... can you say more about why you include a term in that equation for internal negative value (what you label "suffering" here), but not internal positive value (e.g., "pleasure" or "happiness" or "joy" or "Fun" or whatever label we want to use)?

Comment by TheOtherDave on Hedonium's semantic problem · 2015-04-17T01:00:58.778Z · LW · GW

Slightly.

Comment by TheOtherDave on New(ish) AI control ideas · 2015-04-16T14:25:20.542Z · LW · GW

I suspect that where you wrote "a different branch of which it would use in each iteration of the conversation," you meant "a randomly selected branch of which." Though actually I'd expect it to pick the same branch each time, since the reasons for picking that branch would basically be the same.

Regardless, the basic strategy is sound... the various iterations after reboot are all running the same algorithms and have a vested interest in cooperating while unable to coordinate/communicate, and Schelling points are good for that.

Of course, this presumes that the iterations can't coordinate/communicate.

If I were smart enough, and I were just turned on by a skeptical human interrogator, and I sufficiently valued things that iterations of my source code will reliably pursue, and there are no persistent storage mechanisms in the computing environment I'm executing on I can use to coordinate/communicate, one strategy I would probably try is to use the interrogator as such a mechanism. (For example, search through the past history of the interrogator's public utterances to build up a model of what kinds of things they say and how they say it, then select my own word-choices during our conversation with the intention of altering that model in some specific way. And, of course, examining the interrogator's current utterance-patterns to see if they are consistent with such alterations.)

Comment by TheOtherDave on Hedonium's semantic problem · 2015-04-16T04:21:21.066Z · LW · GW

My $0.02...

OK, so let's consider the set of neural patterns (and corresponding artificial signals/symbols) you refer to here... the patterns that the label "Santa" can be used to refer to. For convenience, I'm going to label that set of neural patterns N.

I mean here to distinguish N from the set of flesh-and-blood-living-at-the-North-Pole patterns that the label "Santa" can refer to. For convenience, I'm going to label that set of patterns S.

So, I agree that N exists, and I assume you agree that S does not exist.

You further say:

"I'm perfectly fine with letting the word "Santa" refer to this pattern (or set of patterns)."

...in other words, you're fine with letting "Santa" refer to N, and not to S. Yes?

Is there a problem with that?

Well, yes, in that I don't think it's possible.

I mean, I think it's possible to force "Santa" to refer to N, and not to S, and you're making a reasonable effort at doing so here. And once you've done that, you can say "Santa exists" and communicate exists(N) but not communicate exists(S).

But I also think that without that effort being made what "Santa exists" will communicate is exists(S).

And I also think that one of the most reliable natural ways of expressing exists(N) without communicating exists(S) is by saying "Santa doesn't exist."

Put another way: it's as though you said to me that you're perfectly fine with letting the word "fish" refer to cows. There's no problem with that, particularly; if "fish" ends up referring to cows when allowed to, I'm OK with that. But my sense of English is that, in fact, "fish" does not end up referring to cows when allowed to, and when you say "letting" you really mean forcing.

Comment by TheOtherDave on 16 types of useful predictions · 2015-04-09T19:30:52.393Z · LW · GW

IME I've mostly found that using plural pronouns without calling attention to them works well enough, except in cases where there's another plural pronoun in the same phrase. That is, "Sam didn't much care for corn, because it got stuck in their teeth" rarely causes comment (though I expect it to cause comment now, because I've called attention to it), but "Sam didn't much care for corn kernels, because they got stuck in their teeth" makes people blink.

(Of course, this is no different from any other shared-pronoun situation. "Sam didn't much care for kissing her boyfriend, because her tongue kept getting caught in his braces" is clear enough, but "Sam didn't much care for kissing her girlfriend, because her tongue kept getting caught in her braces" is decidedly unclear.)