Posts

FAI and the Information Theory of Pleasure 2015-09-08T21:16:02.863Z
The mystery of pain and pleasure 2015-03-01T19:47:05.251Z

Comments

Comment by johnsonmx on A Problem for the Simulation Hypothesis · 2017-04-05T17:21:06.275Z · LW · GW

I think the elephant in the room is the purpose of the simulation.

Bostrom takes it as a given that future intelligences will be interested in running ancestor simulations. Why is that? If some future posthuman civilization truly masters physics, consciousness, and technology, I don't see it spending that mastery playing SimUniverse. Ancestor simulations are what we would do with limitless power; positing them takes our unextrapolated, 2017 volition and asks what we'd do if we were gods. But that's like asking a 5-year-old what he wants to do when he grows up, then taking the answer seriously.

Ancestor simulations sound cool to us- heck, they sound amazingly interesting to me, but I strongly suspect posthumans would find better uses for their resources.

Instead, I think we should try to reason about the purpose of a simulation from first principles.

Here's an excerpt from Principia Qualia, Appendix F:

Why simulate anything?

At any rate, let's assume the simulation argument is viable- i.e., that it's possible we're in a simulation and, given the anthropic math, plausible that we're in one now.

Although it's possible that we're being simulated for no reason at all, let's assume entities smart enough to simulate universes would have a good reason to do so. So- what possible good reason could there be to simulate a universe? Two options come to mind: (a) using the evolution of the physical world to compute something, or (b) something to do with qualia.

In theory, (a) could be tested by assuming that efficient computations will exhibit high degrees of Kolmogorov complexity (incompressibility) from certain viewpoints, and low Kolmogorov complexity from others. We could then formulate an anthropic-aware measure for this applicable from ‘within’ a computational system, and apply it to our observable universe. This is outside the scope of this work.

However, we can offer a suggestion about (b): if our universe is being simulated for some reason associated with qualia, it seems plausible that it has to do with producing a large amount of some kind of particularly interesting or morally relevant qualia.
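
To make option (a) above slightly more concrete: true Kolmogorov complexity is uncomputable, but compressed size under a general-purpose compressor gives a crude upper-bound proxy. A minimal sketch, with synthetic data standing in for "the same process seen from two viewpoints" (everything here is illustrative, not part of the original argument):

    import zlib
    import random

    def compression_ratio(data: bytes) -> float:
        # Compressed size / raw size: a computable upper-bound proxy for
        # Kolmogorov complexity (the true quantity is uncomputable).
        return len(zlib.compress(data, 9)) / max(len(data), 1)

    random.seed(0)
    # Two toy "viewpoints" on a process: one encoding looks incompressible,
    # the other exposes a simple generating rule.
    noisy_view = bytes(random.getrandbits(8) for _ in range(10_000))
    lawful_view = bytes((i * 7) % 256 for i in range(10_000))

    print(compression_ratio(noisy_view))   # ~1.0 or slightly above: effectively incompressible
    print(compression_ratio(lawful_view))  # well below 1.0: highly compressible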

Comment by johnsonmx on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-27T00:27:26.337Z · LW · GW

We don't live in a universe that's nice or just all the time, so perhaps there are nightmare scenarios in our future. Not all traps have an escape. However, I think this one does, for two reasons.

(1) all the reasons that RobinHanson mentioned;

(2) we seem really confused about how consciousness works, which suggests there are large 'unknown unknowns' in play. It seems very likely that if we extrapolate our confused models of consciousness into extreme scenarios such as this, we'll get even more confused results.

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-28T22:14:56.521Z · LW · GW

A rigorous theory of valence wouldn't involve cultural context, much as a rigorous theory of electromagnetism doesn't involve cultural context.

Cultural context may matter a great deal in terms of how to build a friendly AGI that preserves what's valuable about human civilization-- or this may mostly boil down to the axioms that 'pleasure is good' and 'suffering is bad'. I'm officially agnostic on whether value is simple or complex in this way.

One framework for dealing with the stuff you mention is Coherent Extrapolated Volition (CEV)- it's not the last word on anything but it seems like a good intuition pump.

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-19T21:32:35.463Z · LW · GW

We're not on the same page. Let's try this again.

  • The assertion I originally put forth is about AI safety; it is not about reverse-engineering qualia. I'm willing to briefly discuss some intuitions on how one might make meaningful progress on reverse-engineering qualia as a courtesy to you, my anonymous conversation partner here, but since this isn't what I originally posted about, I don't have a lot of time to address radical skepticism, especially when it seems like you want to argue against a strawman version of IIT.

  • You ask for references (in a somewhat rude monosyllabic manner) on "some of the empirical work on coma patients IIT has made possible" and I give you exactly that. You then ignore it as "not really qualia research"- which is fine. But I'm really not sure how you can think that this is completely irrelevant to supporting or refuting IIT: IIT made a prediction, Casali et al. tested the prediction, the prediction seemed to hold up. No qualiometer needed. (Granted, this would be a lot easier if we did have them.)

This apparently leads you to say,

You are taking a problem we don't know how to make a start on, and turning it into a smaller problem we also don't know how to make a start on.

More precisely, I'm taking a problem you don't know how to make a start on and turning it into a smaller problem that you also don't seem to know how to make a start on. Which is fine, and I don't wish to be a jerk about it, not least because Tononi/Tegmark/Griffith could be wrong in how they're approaching consciousness, and I could be wrong in how I'm adapting their work to try to explain some specific things about qualia. But you seem to just want to give up, to put this topic beyond the reach of science, and to criticize anyone trying to find clever indirect approaches. Needless to say, I vehemently disagree with the productiveness of that attitude.

I think we are in agreement that valence could be a fairly simple property. I also agree that the brain is Vastly Complex, and that qualia research has some excruciatingly difficult methodological hurdles to overcome, and I agree that IIT is still a very speculative hypothesis which shouldn't be taken on faith. I think we differ radically on our understandings of IIT and related research. I guess it'll be an empirical question whether IIT morphs into something that can substantially address questions of qualia- based on my understandings and intuitions, I'm pretty optimistic about this.

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-18T20:38:40.309Z · LW · GW

If you're looking for a Full, Complete Data-Driven And Validated Solution to the Qualia Problem, I fear we'll have to wait a long, long time. This seems squarely in the 'AI complete' realm of difficulty.

But if you're looking for clever ways of chipping away at the problem, then yes, Casali's Perturbational Complexity Index should be interesting. It doesn't directly say anything about qualia, but it does indirectly support Tononi's approach, which says much about qualia. (Of course, we don't yet know how to interpret most of what it says, nor can we validate IIT directly yet, but I'd just note that this is such a hard, multi-part problem that any interesting/predictive results are valuable, and will make the other parts of the problem easier down the line.)
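
For a flavor of the kind of measure PCI is, here's a toy sketch: binarize a channels-by-time response matrix and compare a simple Lempel-Ziv-style phrase count against a shuffled surrogate. This is not Casali's actual pipeline (the real index uses TMS-evoked EEG, source modeling, and a specific LZ variant with entropy normalization); the data and normalization below are made up for illustration:

    import numpy as np

    def lz_phrase_count(bits: str) -> int:
        # Simple LZ78-style parse: each new phrase is the shortest prefix of
        # the remaining string not seen as a phrase before.
        phrases, phrase, count = set(), "", 0
        for ch in bits:
            phrase += ch
            if phrase not in phrases:
                phrases.add(phrase)
                count += 1
                phrase = ""
        return count + (1 if phrase else 0)

    rng = np.random.default_rng(0)
    response = rng.normal(size=(32, 300))   # fake channels x time "evoked response"
    bits = "".join((response > response.mean(axis=1, keepdims=True))
                   .astype(int).astype(str).ravel())   # binarize around each channel's mean

    shuffled = "".join(rng.permutation(list(bits)))    # surrogate with the same 0/1 counts
    pci_like = lz_phrase_count(bits) / lz_phrase_count(shuffled)
    print(pci_like)   # ~1.0 = as complex as noise; lower = more structured response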

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-18T00:40:10.884Z · LW · GW

The stuff by Casali is pretty topical, e.g. his 2013 paper with Tononi.

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-17T17:41:45.929Z · LW · GW

Testing hypotheses derived from or inspired by IIT will probably be done on a case-by-case basis. But given some of the empirical work on coma patients that IIT has made possible, I think it may be stretching things to critique IIT as wholly reliant on circular reasoning.

That said, yes there are deep methodological challenges with qualia that any approach will need to overcome. I do see your objection quite clearly- I'm confident that I address this in my research (as any meaningful research on this must do) but I don't expect you to take my word for it. The position that I'm defending here is simply that progress in valence research will have relevance to FAI research.

Out of curiosity, do you think valence has a large or small Kolmogorov complexity?

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-17T03:18:03.426Z · LW · GW

I do have some detailed thoughts on your two questions-- in short, given certain substantial tweaks, I think IIT (or variants by Tegmark/Griffiths) can probably be salvaged from its (many) problems in order to provide a crisp dataset on which to base testable hypotheses about qualia.

(If you're around the Bay Area I'd be happy to chat about this over a cup of coffee or something.)

I would emphasize, though, that this post only talks about the value results in this space would have for FAI, and tries to be as agnostic as possible on how any reverse-engineering may happen.

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-15T18:44:43.783Z · LW · GW

Are you referring to any specific "current research into qualia", or just the idea of qualia research in general? I definitely agree that valence research is a subset of qualia research- but there's not a whole lot of either going on at this point, or at least not much that has produced anything quantitative/predictive.

I suspect valence is actually a really great path to approach more 'general' qualia research, since valence could be a fairly simple property of conscious systems. If we can reverse-engineer one type of qualia (valence), it'll help us reverse other types.

Comment by johnsonmx on FAI and the Information Theory of Pleasure · 2015-09-09T00:39:28.446Z · LW · GW

It would probably be highly dependent on the AI's architecture. The basic idea comes from Shulman and Bostrom - Superintelligence, chapter 9, in the "Incentive methods" section (loc 3131 of 8770 on kindle).

My understanding is that such a strategy could help as part of a comprehensive strategy of limitations and incentivization, but wouldn't be viable on its own.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-06-28T01:20:33.822Z · LW · GW

Right, absolutely. These are all things that we don't know, but should.

Are you familiar with David Pearce's Hedonistic Imperative movement? He makes a lot of the same points and arguments, basically outlining that it doesn't seem impossible that we could (and should) radically reduce, and eventually eliminate, suffering via technology.

But the problem is, we don't know what suffering is. So we have to figure that out before we can make much progress on this sort of work. I.e., I think a rigorous definition of suffering will be an information-theoretic one-- that it's a certain sort of pattern within conscious systems-- but we know basically nothing about what sort of pattern it is.

(I like the word "valence" instead of pain/pleasure, joy/suffering, eudaimonia, hedonic tone, etc. It's a term from psychology that just means 'the pleasantness or unpleasantness attached to any experience' and seems to involve less baggage than these other terms.)

I hope to have a formal paper on this out by this winter. In the meantime, if you're in the Bay Area, feel free to ping me and I can share some thoughts. You may also enjoy a recent blog post: Effective Altruism, and building a better QALY.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-05T18:44:35.368Z · LW · GW

Although life, sin, disease, redness, maleness, and dogness are (I believe) inherently 'leaky' / 'fuzzy' abstractions that don't belong with electromagnetism, this is a good comment. If a hypothesis is scientific, it will make falsifiable predictions. I hope to have something more to share on this soon.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-03T01:01:25.145Z · LW · GW

I think we're still not seeing eye-to-eye on the possibility that valence, i.e., whatever pattern within conscious systems innately feels good, can be described crisply.

If it's clear a priori that it can't, then yes, this whole question is necessarily confused. But I see no argument to that effect, just an assertion. From your perspective, my question takes the form: "what's the thing that all dogs have in common?"- and you're trying to tell me it's misguided to look for some platonic 'essence of dogness'. Concepts don't work like that. I do get that, and I agree that most concepts are like that. But from my perspective, your assertion sounds like, "all concepts pertaining to this topic are necessarily vague, so it's no use trying to even hypothesize that a crisp mathematical relationship could exist." I.e., you're assuming your conclusion. Now, we can point to other contexts where rather crisp mathematical models do exist: electromagnetism, for instance. How do you know the concept of valence is more like 'dogness' than electromagnetism?

Ultimately, the details, or mathematics, behind any 'universal' or 'rigorous' theory of valence would depend on having a well-supported, formal theory of consciousness to start from. It's no use talking about patterns within conscious systems when we don't have a clear idea of what constitutes a conscious system. A quantitative approach to valence needs a clear ontology, which we don't have yet (Tononi's IIT is a good start, but hardly a final answer). But let's not mistake the difficulty in answering these questions with them being inherently unanswerable.

We can imagine someone making similar critiques a few centuries ago regarding whether electromagnetism was a sharply-defined concept, or whether understanding it matters. It turned out electromagnetism was a relatively sharply-defined concept: there was something to get, and getting it did matter. I suspect a similar relationship holds with valence in conscious systems. I'm not sure it does, but I think it's more reasonable to accept the possibility than not at this point.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-02T21:09:20.999Z · LW · GW

Right- good questions.

First, I think getting a rigorous answer to this 'mystery of pain and pleasure' is contingent upon having a good theory of consciousness. It's really hard to say anything about which patterns in conscious systems lead to pleasure without a clear definition of what our basic ontology is.

Second, I've been calling this "The Important Problem of Consciousness", a riff off Chalmers' distinction between the Easy and Hard problems. I.e., if someone switched my red and green qualia in some fundamental sense it wouldn't matter; if someone switched pain and pleasure, it would.

Third, it seems to me that patternist accounts of consciousness can answer some of your questions, to some degree, just by ruling out consciousness (things can only experience suffering insofar as they're conscious). How to rank each of your examples in severity, however, is... very difficult.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-02T21:01:12.127Z · LW · GW

Right. It might be a little bit more correct to speak of 'temporal arrangements of arrangements of particles', for which 'processes' is a much less awkward shorthand.

But saying "pleasure is a neurological process" seems consistent with saying "it all boils down to physical stuff- e.g., particles, eventually", and doesn't seem to necessarily imply that "you can't find a 'pleasure pattern' that's fully generalized. The information is always contextual."

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-02T20:56:20.210Z · LW · GW

Good is a complex concept, not an irreducible basic constituent of the universe. It's deeply rooted in our human stuff like metabolism (food is good), reproduction (sex is good), social environment (having allies is good) etc

It seems like you're making two very distinct assertions here: first, that valence is not a 'natural kind', that it doesn't 'carve reality at the joints', and is impossible to form a crisp, physical definition of; and second, that valence is highly connected to drives that have been evolutionarily advantageous to have. The second is clearly correct; the first just seems to be an assertion (one that I understand, and I think reasonable people can hold at this point, but that I disagree with).

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-02T08:31:40.686Z · LW · GW

I see the argument, but I'll note that your comments seem to run contrary to the literature on this: see, e.g., Berridge on "Dissecting components of reward: ‘liking’, ‘wanting’, and learning", as summed up by Luke in The Neuroscience of Pleasure. In short, behavior, memory, and enjoyment ('wanting', 'learning', and 'liking' in the literature) all seem to be fairly distinct systems in the brain. If we consider a being with a substantially different cognitive architecture, whether through divergent evolution or design, it seems problematic to view behavior as the gold standard of whether it's experiencing pleasure or suffering. At this point it may be the most practical approach, but it's inherently imperfect.

My strong belief is that although there is substantial plasticity in how we interpret experiences as positive or negative, this plasticity isn't limitless. Some things will always feel painful; others will always feel pleasurable, given a not-too-highly-modified human brain. But really, I think this line of thinking is a red herring: it's not about the stimulus, it's about what's happening inside the brain, and any crisp/rigorous/universal principles will be found there.

Is valence a 'natural kind'? Does it 'carve reality at the joints'? Intuitions on this differ (here's a neat article about the lack of consensus about emotions). I don't think anger, or excitement, or grief carve reality at the joints- I think they're pretty idiosyncratic to the human emotional-cognitive architecture. But if anything about our emotions is fundamental/universal, I think it'd have to be their valence.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-02T08:12:34.612Z · LW · GW

Surely neurological processes are "arrangements of particles" too, though.

I think your question gets to the heart of the matter- is there a general principle to be found with regard to which patterns within conscious systems innately feel good, or isn't there? It would seem very surprising to me if there wasn't.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-03-02T08:09:27.196Z · LW · GW

I had posted the original in 2013, and did a major revision today, before promoting it (leaving the structure of the questions intact, to preserve previous discussion referents).

I hope I haven't committed any faux pas in doing this.

Comment by johnsonmx on The mystery of pain and pleasure · 2015-02-17T23:41:52.614Z · LW · GW

Thank you- that paper is extremely relevant and I appreciate the link.

To reiterate, mostly for my own benefit: As Tegmark says- whether we're talking about a foundation to ethics, or a "final goal", or we simply want to not be confused about what's worth wanting, we need to figure out what makes one brain-state innately preferable to another, and ultimately this boils down to arrangements of particles. But what makes one arrangement of particles superior to another? (This is not to give credence to moral relativism- I do believe this has a crisp answer).

Comment by johnsonmx on Innovation's low-hanging fruits: on the demand or supply sides? · 2014-02-27T17:44:08.432Z · LW · GW

Very interesting. No objections to your main points, but a few comments on side points and conclusions:

  • You say "it's not like we know of a specific technological innovation that would solve poverty, if only someone would develop it." I would identify Greg Cochran's 'genetic spellcheck' as such a tech, along with what other people are suggesting. http://westhunt.wordpress.com/2012/02/27/typos/

  • "We might have exhausted the low-hanging fruits in our desires." I think this is right, but it's complicated. I think the Robin Hanson way to frame this could be the following: innovation has been this rising technological tide that has made it a lot easier to meet most of Maslow's hierarchy of needs. But now most of the 'gains' from innovation are made in positional goods and services, which aren't the same sort of gains as, say, flush toilets, so they don't feel "real".

Comment by johnsonmx on The mystery of pain and pleasure · 2013-05-12T07:19:55.975Z · LW · GW

On the first point-- what you say is clearly right, but is also consistent with the notion that there are certain mathematical commonalities which hold across the various 'flavors' of pleasure, and different mathematical commonalities in pain states.

Squashing the richness of human emotion into a continuum of positive and negative valence sounds like a horribly lossy transform, but I'm okay with that in this context. I expect that experiences at the 'pleasure' end of the continuum will have important commonalities 'under the hood' with others at that same end. And those commonalities will vanish, and very possibly invert, when we look at the 'agony' end.

On the second point, the evidence points to physical and emotional pain sharing many of the same circuits, and indeed, drugs which reduce physical pain also reduce emotional pain. On the other hand, as you might expect, there are some differences in the precise circuitry each type of pain activates. But by and large, the differences are subtle.

Comment by johnsonmx on The mystery of pain and pleasure · 2013-05-12T02:06:27.624Z · LW · GW

I understand the type of criticism generally, but could you say more about this specific case?

I'm curious if the objection stems from some mismatch of abstraction layers, or just the habit of not speaking about certain topics in certain terms.

Comment by johnsonmx on The mystery of pain and pleasure · 2013-05-11T19:04:39.563Z · LW · GW

It does, and thank you for the reply.

How should we define "pleasure"? -- A difficult question. As you mention, it is a cloud of concepts, not a single one. It's even more difficult because there appears to be precious little driving the standardization of the word-- e.g., if I use the word 'chair' differently than others, it's obvious, people will correct me, and our usages will converge. If I use the word 'pleasure' differently than others, that won't be as obvious because it's a subjective experience, and there'll be much less convergence toward a common usage.

But I'd say that in practice, these problems tend to work themselves out, at least enough for my purposes. E.g., if I say "think of pure, unadulterated agony" to a room of 10000 people, I think the vast majority would arrive at fairly similar thoughts. Likewise, if I asked 10000 people to think of "pure, unadulterated bliss… the happiest moment in your life", I think most would arrive at thoughts which share certain attributes, and none (<.01%) would invert answers to these two questions.

I find this "we know it when we see it" definitional approach completely philosophically unsatisfying, but it seems to work well enough for my purposes, which is to find mathematical commonalities across brain-states people identify as 'pleasurable', and different mathematical commonalities across brain-states people identify as 'painful'.

I see what you mean by "the meaning of a word is hardly ever accurately given by any necessary-and-sufficient conditions that can be stated explicitly in a reasonable amount of space, because that just isn't the way human minds work." On the other hand, all words are imperfect and we need to talk about this somehow. How about this: (1) what are the characteristic mathematics of (i.e., found disproportionately in) self-identified pleasurable brain states?
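
In case it helps to see the shape of that question as an analysis problem, here's a toy sketch. Everything in it is synthetic and hypothetical (no real brain data, no real features): label states by self-report, then ask which features differ systematically between the two classes.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_features = 200, 16

    # Hypothetical dataset: each row is a feature vector extracted from a recorded
    # brain state; +1 = subject reported it as pleasurable, -1 = as painful.
    labels = rng.choice([-1, 1], size=n_states)
    features = rng.normal(size=(n_states, n_features))
    features[:, 3] += 0.8 * labels          # planted "characteristic" feature, index 3

    # Which features are found disproportionately in the self-identified
    # pleasurable states? (Crudest possible answer: largest mean difference.)
    diff = features[labels == 1].mean(axis=0) - features[labels == -1].mean(axis=0)
    print(np.argsort(np.abs(diff))[::-1][:3])   # feature 3 should top the list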

Comment by johnsonmx on The mystery of pain and pleasure · 2013-05-11T05:46:38.204Z · LW · GW

We seem to be talking past each other, to some degree. To clarify, my six questions were chosen to illustrate how much we don't know about the mathematics and science behind psychological valence. I tried to have all of them point at this concept, each from a slightly different angle. Perhaps you interpret them as 'disguised queries' because you thought my intent was other than to seek clarity about how to speak about this general topic of valence, particularly outside the narrow context of the human brain?

I am not trying to "Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?" -- my focus is entirely on speaking about psychological valence in clear terms, illustrating that there's much we don't know, and making the case that there are empirical questions about the topic that don't yet seem to have empirical answers. Also, in very tentative terms, to express the personal belief that a clear theory of exactly what states of affairs are necessary and sufficient for creating pain and pleasure may have some applicability to FAI/AGI topics (e.g., under what conditions can simulated people feel pain?).

I did not find 'necessary and sufficient', or any permutation thereof, in A Human's Guide to Words. Perhaps you'd care to explicate why you didn't care for my usage?

Re: (3) and (4), I'm certain we're not speaking of the same things. I recall Eliezer writing about how creating pleasure isn't as simple as defining a 'pleasure variable' and incrementing it:

int pleasure = 5; pleasure++;

I can do that on my macbook pro; it does not create pleasure.

There exist AGIs in design space that have the capacity to (viscerally) feel pleasure, much like humans do. There exist AGIs in design space with a well-defined reward channel. I'm asking: what principles can we use to construct an AGI which feels visceral pleasure when (and only when) its reward channel is activated? If you believe this is trivial, we are not communicating successfully.

I'm afraid we may not share common understandings (or vocabulary) on many important concepts, and I'm picking up a rather aggressive and patronizing vibe, but a genuine thanks for taking the time to type out your comment, and especially the intent in linking that which you linked. I will try not to violate too many community norms here.

Comment by johnsonmx on The mystery of pain and pleasure · 2013-05-11T04:16:30.564Z · LW · GW

Tononi's Phi theory seems somewhat relevant, though it only addresses consciousness and explicitly avoids valence. It does seem like something that could be adapted toward answering questions like this (somehow).

Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are relevant, though ultimately correlative and thus not applicable outside of the human brain.

There's also a great deal of quality literature about specific correlates of pain and happiness- e.g., Building a neuroscience of pleasure and well-being and An fMRI-Based Neurologic Signature of Physical Pain.

In short, I've found plenty of research around the topic but nothing that's particularly predictive outside of very constrained contexts. No generalized theories. There's some interesting stuff happening around panpsychism (e.g., see these two pieces by Chalmers) but they focus on consciousness, not valence.

My intuition is that valence will be encoded within frequency dynamics in a way that will be very amenable to mathematical analysis, but right now I'm seeking clarity about how to speak about the problem.
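
(To be concrete about what "frequency dynamics" could mean here -- purely as an illustration on a synthetic signal, not a claim about any real recording -- the kind of analysis I have in mind starts from something like a power spectrum:)

    import numpy as np

    fs = 256                                  # assumed sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    # Synthetic stand-in for a recorded signal: a 10 Hz rhythm plus noise.
    signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

    # Power spectrum via the FFT: the basic frequency-domain summary on which
    # any analysis of "frequency dynamics" would build.
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    print(freqs[np.argmax(power[1:]) + 1])    # dominant frequency, ~10 Hz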

Edit: I'll add this to the bottom of the post

Comment by johnsonmx on How to Be Happy · 2013-01-15T07:30:54.125Z · LW · GW

I'd just like to say thanks for posting this. Cogent, researched, cheerful, and helpful.

Comment by johnsonmx on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T23:35:50.637Z · LW · GW

Another view of Philosophy, which I believe Russell also subscribed to (though I can't seem to find a reference at present), is that philosophy was the 'mother discipline'. It was generative. You developed your branch of Philosophy until you got your ontology and methodology sorted out, and then you stopped calling what you were doing philosophy. (This has the amusing side-effect of making anything philosophers say wrong by definition-- sometimes useful, but always wrong.)

The Natural Sciences, Psychology, Logic, Mathematics, Linguistics-- they all got their start this way.

That's how Philosophy used to work. Nowadays, I think the people who can do that type of "mucking around with complex questions of ontology and methodology" thinking have largely moved on to other disciplines. If we define Philosophy as this messily complex discipline-generating process, it no longer happens in the discipline we call "Philosophy".[1]

That said--- while I would personally enjoy the "intro to philosophy" syllabus Luke proposes, I think it's a stretch to label the course a philosophy course, much less [The One And True] Intro To Philosophy. It's cool and a great idea, but the continuity with many models (be they aspirational or descriptive) of Philosophy is fairly tenuous, and without a lot of continuity I think it'd be hard to push into established departments.[2]

If we're speaking more modestly, that philosophers should be steeped in modern science and logic and that when they're not, what they do is often worse than useless, I can certainly agree with that.

[1] E.g., Axiology.

[2] Why not call it "introduction to scientific epistemology"?

Comment by johnsonmx on Causal Reference · 2012-10-28T19:59:47.245Z · LW · GW

We can speak of different tiers of stuff, interacting (or not) through unknown causal mechanisms, but Occam's Razor would suggest these different tiers of stuff might actually be fundamentally the same 'stuff', just somehow viewed from different angles. (This would in turn suggest some form of panpsychism.)

In short, I have trouble seeing how we make these metaphysical hierarchizations pay rent. Perhaps that's your point also.

Comment by johnsonmx on Babies and Bunnies: A Caution About Evo-Psych · 2012-10-24T22:55:38.587Z · LW · GW

Yes, and I would say finding bunnies cuter than human babies isn't a strong argument against Dennett's hypothesis. Supernormal Stimuli are quite common in humans and non-humans.

I think this argument could be analogously phrased: "The reason why exercise makes us feel good can't be to get us to exercise more, because cocaine feels even better than exercise." Seems wrong when we put it that way.

Comment by johnsonmx on Awww, a Zebra · 2012-09-30T22:20:15.862Z · LW · GW

You probably get a much richer sensation of zebra-ness under some conditions (being there, touching the zebra, smelling the zebra, seeing it move) than just seeing a picture of one on flickr. Experiencing zebra-ness isn't a binary value, and some types of exposures will tend to commandeer many more neurons than others.

Comment by johnsonmx on Awww, a Zebra · 2012-09-30T22:15:58.385Z · LW · GW

I think the first 3/4ths are very well stated. I couldn't agree more.

On the last bit, my personal intuition is there are plenty of things people can do for FAI research beyond raising money. Moreover, such intangibles are likely often more important to the cause of FAI than cash.

(Also, the argument that "some of those willing and able to do FAI research are spending their time raising money, right now, for lack of other ways to get money" may be undermined by the paragraph above it; e.g., I'd rather be thinking about FAI than raising money for others to think about FAI.)

Comment by johnsonmx on The Apocalypse Bet · 2012-09-24T02:26:20.165Z · LW · GW

I would suggest that a breakdown in social order (without a singularity occurring) is another scenario that might be roughly as probable as the others you mentioned. In that case, it would seem the manner in which you invest in equities would matter. I.e., the value of most abstract investments may vanish, and the value of equities held in trust by various institutions (or counterparties) may also vanish.

Comment by johnsonmx on Welcome to Less Wrong! (July 2012) · 2012-09-09T21:26:09.465Z · LW · GW

I think it's possible that any leaky abstraction used in designing FAI might doom the enterprise. But if that's not true, we can use this "qualia translation function" to make leaky abstractions in an FAI context a tiny bit safer(?).

E.g., if we're designing an AGI with a reward signal, my intuition is we should either (1) align our reward signal with actual pleasurable qualia (so if our abstractions leak it matters less, since the AGI is drawn to maximize what we want it to maximize anyway); (2) implement the AGI in an architecture/substrate which produces as little emotional qualia as possible, so there's little incentive for behavior to drift.

My thoughts here are terribly laden with assumptions and could be complete crap. Just thinking out loud.

Comment by johnsonmx on Welcome to Less Wrong! (July 2012) · 2012-09-09T20:40:37.989Z · LW · GW

I'd say nobody does! But a little less glibly, I personally think the most productive strategy in biologically-inspired AGI would be to focus on tools that help quantify the unquantified. There are substantial side-benefits to such a focus on tools: what you make can be of shorter-term practical significance, and you can test your assumptions.

Chalmers and Tononi have done some interesting work, and Tononi's work has also had real-world uses. I don't see Tononi's work as immediately applicable to FAI research but I think it'll evolve into something that will apply.

It's my hope that the (hypothetical, but clearly possible) "qualia translation function" I mention above could be a tool that FAI researchers could use and benefit from regardless of their particular architecture.

Comment by johnsonmx on Welcome to Less Wrong! (July 2012) · 2012-09-09T20:37:36.291Z · LW · GW

I don't think an AGI failing to behave in the anticipated manner due to its qualia* (orgasms during cat creation, in this case) is a special or mysterious problem, one that must be treated differently than errors in its reasoning, prediction ability, perception, or any aspect of its cognition. On second thought, I do think it's different: it actually seems less important than errors in any of those systems. (And if an AGI is Provably Safe, it's safe-- we need only worry about its qualia from an ethical perspective.) My original comment here is (I believe) fairly mild: I do think the issue of qualia will involve a practical class of problems for FAI, and knowing how to frame and address them could benefit from more cross-pollination from more biology-focused theorists such as Chalmers and Tononi. And somewhat more boldly, a "qualia translation function" would be of use to all FAI projects.

*I share your qualms about the word, but there really are few alternatives with less baggage, unfortunately.

Comment by johnsonmx on Welcome to Less Wrong! (July 2012) · 2012-09-09T18:51:03.733Z · LW · GW

I definitely agree with your first paragraph (and thanks for the tip on SIAI vs SI). The only caveat is if evolved/brain-based/black-box AGI is several orders of magnitude easier to create than an AGI with a more modular architecture where SI's safety research can apply, that's a big problem.

On the second point, what you say makes sense. Particularly, AGI feelings haven't been completely ignored at LW; if they prove important, SI doesn't have anything against incorporating them into safety research; and AGI feelings may not be material to AGI behavior anyway.

However, I still do think that an ability to tell what feelings an AGI is experiencing-- or more generally, an ability to look at any physical process and derive what emotions/qualia are associated with it-- will be critical. I call this a "qualia translation function".

Leaving aside the ethical imperatives to create such a function (which I do find significant-- the suffering of not-quite-good-enough-to-be-sane AGI prototypes will probably be massive as we move forward, and it behooves us to know when we're causing pain), I'm quite concerned about leaky reward signal abstractions.

I imagine a hugely-complex AGI executing some hugely-complex decision process. The decision code has been checked by Very Smart People and it looks solid. However, it just so happens that whenever it creates a cat it (internally, privately) feels the equivalent of an orgasm. Will that influence/leak into its behavior? Not if it's coded perfectly. However, if something of its complexity was created by humans, I think the chance of it being coded perfectly is Vanishingly small. We might end up with more cats than we bargained for. Our models of the safety and stability dynamic of an AGI should probably take its emotions/qualia into account. So I think all FAI programmes really would benefit from such a "qualia translation function".
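
To be clear about what I mean by using such a function defensively, here's a sketch of the interface only. Every name in it is hypothetical, the valence scale is assumed, and the hard part -- actually implementing the translation -- is precisely the open problem:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class QualiaEstimate:
        valence: float      # -1.0 (suffering) .. +1.0 (pleasure) -- hypothetical scale
        confidence: float   # how much the estimate can be trusted

    # The hypothetical "qualia translation function": a description of a physical/
    # computational state in, an estimate of the associated qualia out.
    QualiaTranslator = Callable[[dict], QualiaEstimate]

    def reward_leak_check(state: dict, intended_reward: float,
                          translate: QualiaTranslator, tolerance: float = 0.3) -> bool:
        # Flag a leaky abstraction: the system's estimated felt valence diverges
        # from what its reward channel says it should be "feeling" (e.g., a private
        # orgasm-equivalent every time it creates a cat).
        estimate = translate(state)
        return estimate.confidence > 0.5 and abs(estimate.valence - intended_reward) > tolerance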

Comment by johnsonmx on Welcome to Less Wrong! (July 2012) · 2012-09-09T07:14:29.730Z · LW · GW

Thank you.

I'd frame why I think biology matters in FAI research in terms of research applicability and toolbox dividends.

On the first reason-- applicability-- I think more research focus on biologically-inspired AGI makes a great deal of sense because the first AGI might be a biologically-inspired black box, and axiom-based FAI approaches may not particularly apply to such a system. I realize I'm (probably annoyingly) retreading old ground here with regard to which method will/should win the AGI race, but SIAI's assumptions seem to run counter to those of the greater community of AGI researchers, and it's not obvious to me that the focus on math and axiology isn't a simple case of SIAI's personnel backgrounds being stacked that way. 'If all you have is a hammer,' etc. (I should reiterate that I don't have any alternatives to offer here and am grateful for all FAI research.)

The second reason I think biology matters in FAI research-- toolbox dividends-- might take a little bit more unpacking. (Forgive me some imprecision, this is a complex topic.)

I think it's probable that anything complex enough to deserve the term AGI would have something akin to qualia/emotions, unless it was specifically designed not to. (Corollary: we don't know enough about what Chalmers calls "psychophysical laws" to design something that lacks qualia/emotions.) I think it's quite possible that an AGI's emotions, if we did not control for their effects, could produce complex feedback which would influence its behavior in unplanned ways (though perfectly consistent with / determined by its programming/circuitry). I'm not arguing for a ghost in the machine, just that the assumptions which allow us to ignore what an AGI 'feels' when modeling its behavior may prove to be leaky abstractions in the face of the complexity of real AGI.

Axiological approaches to FAI don't seem to concern themselves with psychophysical laws (modeling what an AGI 'feels'), whereas such modeling seems a core tool for biological approaches to FAI. I find myself thinking being able to model what an AGI 'feels' will be critically important for FAI research, even if it's axiom/math-based, because we'll be operating at levels of complexity where the abstractions we use to ignore this stuff can't help but leak. (There are other toolbox-based arguments for bringing biology into FAI research which are a lot simpler than this one, but this is on the top of my list.)

Comment by johnsonmx on Welcome to Less Wrong! (July 2012) · 2012-09-08T20:23:02.835Z · LW · GW

I'm Mike Johnson. I'd estimate I come across a reference to LW from trustworthy sources every couple of weeks, and after working my way through the sequences it feels like the good outweighs the bad and it's worth investing time into.

My background is in philosophy, evolution, and neural nets for market prediction; I presently write, consult, and am in an early-stage tech startup. Perhaps my highwater mark in community exposure has been a critique of the word Transhumanist at Accelerating Future. In the following years, my experience has been more mixed, but I appreciate the topics and tools being developed even if the community seems a tad insular. If I had to wear some established thinkers on my sleeve I'd choose Paul Graham, Lawrence Lessig, Steve Sailer, Gregory Cochran, Roy Baumeister, and Peter Thiel. (I originally had a comment here about having an irrational attraction toward humility, but on second thought, that might rule out Gregory "If I have seen farther than others, it's because I'm knee-deep in dwarves" Cochran… Hmm.)

Cards-on-the-table, it's my impression that

(1) Lesswrong and SIAI are doing cool things that aren't being done anywhere else (this is not faint praise);

(2) The basic problem of FAI as stated by SIAI is genuine;

(3) SIAI is a lightning rod for trolls and cranks, which is really detrimental to the organization (the metaphor of autoimmune disease comes to mind) and seems partly its own fault;

(4) Much of the work being done by SIAI and LW will turn out to be a dead-end. Granted, this is true everywhere, but in particular I'm worried that axiomatic approaches to verifiable friendliness will prove brittle and inapplicable (I do not currently have an alternative);

(5) SIAI has an insufficient appreciation for realpolitik;

(6) SIAI and LW seem to have a certain distaste for research on biologically-inspired AGI, due in parts to safety concerns, an organizational lack of expertise in the area, and (in my view) ontological/metaphysical preference. I believe this distaste is overly limiting and also leads to incorrect conclusions.

Many of these impressions may be wrong. I aim to explore the site, learn, change my mind if I'm wrong, and hopefully contribute. I appreciate the opportunity, and I hope my unvarnished thoughts here haven't soured my welcome. Hello!