Posts

Comments

Comment by Jef_Allbright on Dunbar's Function · 2008-12-31T20:45:44.000Z · LW · GW

I think it bears repeating here:

Influence is only one aspect of the moral formula; the other aspect is the particular context of values being promoted.

These can be quite independent, as with a tribal chief, with substantial influence, acting to promote the perceived values of his tribe, vs. the chief acting to promote his narrower personal values. [Note that the difference is not one of fitness but of perceived morality. Fitness is assessed only indirectly within an open context.]

Comment by Jef_Allbright on A New Day · 2008-12-31T18:55:09.000Z · LW · GW

Excellent advice Eliezer!

I have a game I play every few months or so. I get on my motorcycle, usually on a Friday, pack spare clothes and toiletries, and head out in a random direction. At almost every branch in the road I choose randomly, and take my time exploring and enjoying the journey. After a couple of days, I return hugely refreshed, creative potential flowing.

Comment by Jef_Allbright on Dunbar's Function · 2008-12-31T17:33:51.000Z · LW · GW
But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band... If you lived in a world the size of a hunter-gatherer band, then it would be easier to find something important at which to be the best - or do something that genuinely struck you as important, without becoming lost in a vast crowd of others with similar ideas.

Can you see the contradiction, bemoaning that people are now "less in control" while exercising ever-increasing freedom of expression? Harder to "find something important" with so many more opportunities available? Can you see the confusion over context that is increasingly not ours to control?

Eliezer, here again you demonstrate your bias in favor of the context of the individual. Dunbar's (and others') observations on organizational dynamics apply generally, while your interpretation appears to speak quite specifically of your experience of Western culture and your own perceived place in the scheme of things.

Plentiful contrary views exist to support a sense of meaning, purpose, and pride implicit in the recognition of competent contribution to community, without the (assumed) need to be seen as extraordinary. Especially in modern Japan and elsewhere in Asia, the norm is still to bask in recognition of competent contribution and to recoil from any suggestion that one might substantially stand out. False modesty this is not. In Western society too, examples of fulfillment and recognition through service run deep, although this is belied by the (entertainment) media.

Within any society, recognition confers added fitness, but to satisfice it is not necessary to be extraordinary.

But if people keep getting smarter and learning more - expanding the number of relationships they can track, maintaining them more efficiently...[relative to the size of the interacting population]..then eventually there could be a single community of sentients, and it really would be a single community.

Compare:

But as the cultural matrix keeps getting smarter—supporting increasing degrees of freedom with increasing probability—then eventually you could see self-similarity of agency over increasing scale, and it really would be a fractal agency.

Well, regardless of present point of view—wishing all a rewarding New Year!

Comment by Jef_Allbright on What I Think, If Not Why · 2008-12-11T20:46:40.000Z · LW · GW

Ironic, such passion directed toward bringing about a desirable singularity, rooted in an impenetrable singularity of faith in X. X yet to be defined, but believed to be [meaningful|definable|implementable] independent of future context.

It would be nice to see an essay attempting to explain an information or systems-theoretic basis supporting such an apparent contradiction (definition independent of context.)

Or, if the one is arguing for a (meta)invariant under a stable future context, an essay on the extended implications of such stability, if the one would attempt to make sense of "stability, extended."

Or, a further essay on the wisdom of ishoukenmei, distinguishing between the standard meaning of giving one's all within a given context, and your adopted meaning of giving one's all within an unknowable context.

Eliezer, I recall that as a child you used to play with infinities. You know better now.

Comment by Jef_Allbright on The Mechanics of Disagreement · 2008-12-10T15:38:43.000Z · LW · GW

Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring with its infinite regress involving the rational, self-interested actor at the core.

Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.
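
To make the mapping concrete, here is a minimal sketch (the priors and likelihoods are arbitrary placeholders, not characteristics of any real instrument): two agents sharing the same prior and the same likelihood function, reading the same evidence, necessarily report the same posterior.

    def posterior(prior, lik_true, lik_false):
        # Ordinary Bayesian update: P(H | E) from a shared prior and likelihood function.
        num = prior * lik_true
        return num / (num + (1 - prior) * lik_false)

    # Two "instruments" of the same nature, in the same environment, measuring the same object:
    agent_a = posterior(prior=0.2, lik_true=0.9, lik_false=0.3)
    agent_b = posterior(prior=0.2, lik_true=0.9, lik_false=0.3)
    print(agent_a == agent_b)  # True: agreement follows from shared nature plus shared data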

Comment by Jef_Allbright on Disjunctions, Antipredictions, Etc. · 2008-12-09T19:56:22.000Z · LW · GW

I'll second jb's request for denser, more highly structured representations of Eliezer's insights. I read all this stuff, find it entertaining and sometimes edifying, but disappointing in that it's not converging on either a central thesis or central questions (preferably both.)

Comment by Jef_Allbright on Is That Your True Rejection? · 2008-12-06T19:03:30.000Z · LW · GW

Crap. Will the moderator delete posts like that one, which appear to be so off the Mark?

Comment by Jef_Allbright on Is That Your True Rejection? · 2008-12-06T18:24:50.000Z · LW · GW

billswift wrote:

…but the self-taught will simply extend their knowledge when a lack appears to them.

Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, due to the particular biases which tend to predominate among those whose very nature tends to enforce relatively narrow (albeit deep) scope of interaction, the emphasis should be not on "will simply extend" but on "when a lack appears."

In this forum, and others like it, we characteristically fail to distinguish between the relative ease of learning from the already abstracted explicit and latent regularities in our environment and the fundamentally hard (and increasingly harder) problem of extracting novelty of pragmatic value from an exponentially expanding space of possibilities.

Therein lies the problem—and the opportunity—of increasingly effective agency within an environment of even more rapidly increasing uncertainty. There never was or will be safety or certainty in any ultimate sense, from the point of view of any (necessarily subjective) agent. So let us each embrace this aspect of reality and strive, not for safety but for meaningful growth.

Comment by Jef_Allbright on Worse Than Random · 2008-11-11T20:42:58.000Z · LW · GW

A few posters might want to read up on Stochastic Resonance, which was surprisingly surprising a few decades ago. I'm getting a similar impression now from recent research in the field of Compressive Sensing, which ostensibly violates the Nyquist sampling limit, highlighting the immaturity of the general understanding of information theory.
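
For those unfamiliar with it, here is a minimal sketch of stochastic resonance (the signal, threshold, and noise levels are illustrative choices of mine, not drawn from any paper): a periodic signal too weak to cross a hard detection threshold is invisible without noise, becomes detectable at a moderate noise level, and is drowned out again when the noise grows too large.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 10_000)
    signal = 0.8 * np.sin(2 * np.pi * t)   # peak amplitude below the threshold
    threshold = 1.0

    def detector_output(noise_std):
        # Hard threshold detector: fires 1 whenever signal + noise crosses the threshold.
        noisy = signal + rng.normal(0.0, noise_std, size=t.shape)
        return (noisy > threshold).astype(float)

    for noise_std in (0.0, 0.3, 1.0, 3.0):
        out = detector_output(noise_std)
        # Correlation with the clean signal peaks at an intermediate noise level.
        corr = np.corrcoef(out, signal)[0, 1] if out.std() > 0 else 0.0
        print(f"noise_std={noise_std:.1f}  correlation with signal={corr:.3f}")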

In my opinion, there's nothing especially remarkable here other than the propensity to conflate the addition of noise to data, with the addition of "noise" (a stochastic element) to (search for) data.

This confusion appears to map very well onto the cybernetic distinction between intelligently knowing the answer and intelligently controlling for the answer.

Comment by Jef_Allbright on Worse Than Random · 2008-11-11T19:32:34.000Z · LW · GW

And that's why I always say that the power of natural selection comes from the selection part, not the mutation part.

And the power of the internal combustion engine comes from the fuel part... Right, or at least, not even wrong. It seems that my congratulations a few months ago for your apparent imminent escape from simple reductionism were premature.

Comment by Jef_Allbright on Ask OB: Leaving the Fold · 2008-11-09T22:58:41.000Z · LW · GW

Jo -

Above all else, be true to yourself. This doesn't mean you must or should be bluntly open with everyone about your own thoughts and values; on the contrary, it means taking personal responsibility for applying your evolving thinking as a sharp instrument for the promotion of your evolving values.

Think of your values-complex as a fine-grained hierarchy, with some elements more fundamental and serving to support a wider variety of more dependent values. For example, your better health, both physical and mental, is probably more fundamental and necessary to support better relationships, and a relatively few deeper relationships will tend to support a greater variety of subsidiary values than would a larger number of more shallow relationships, and so on.

Of course no one can compute and effectively forecast the future in such complex terms, but to the extent you can clarify for yourself the broad outlines, in principle, of (1) your values and (2) your thinking on how to promote those values into the future you create, then you'll tend to proceed in the direction of increasing optimality. Wash, rinse, repeat.

We wish you the best. Your efforts toward increasingly intelligent creation of an increasingly desirable world contribute to us all.

Comment by Jef_Allbright on Recognizing Intelligence · 2008-11-08T04:49:10.000Z · LW · GW

In my opinion, EY's point is valid—to the extent that the actor and observer intelligence share neighboring branches of their developmental tree. Note that for any intelligence rooted in a common "physics", this says less about their evolutionary roots and more about their relative stages of development.

Reminds me a bit of the jarred feeling I got when my ninth grade physics teacher explained that a scrambled egg is a clear and generally applicable example of increased entropy. [Seems entirely subjective to me, in principle.] Also reminiscent of Kardashev with his "obvious" classes of civilization, lacking consideration of the trend toward increasing ephemeralization of technology.

Comment by Jef_Allbright on Complexity and Intelligence · 2008-11-04T01:33:56.000Z · LW · GW

@pk I don't understand. Am I too dumb or is this gibberish?

It's not so complicated; it's just that we're so formal...

Comment by Jef_Allbright on Complexity and Intelligence · 2008-11-03T23:46:02.000Z · LW · GW

It might be worthwhile to note that cogent critiques of the proposition that a machine intelligence might very suddenly "become a singleton Power" do not deny the inefficiencies of the human cognitive architecture that invite improvement via recursive introspection and recoding, nor do they deny the improvements easily available via substitution and expansion of more capable hardware and I/O.

They do, however, highlight the distinction between a vastly powerful machine madly exploring vast reaches of a much vaster "up-arrow" space of mathematical complexity, and a machine of the same power bounded in growth of intelligence -- by definition necessarily relevant -- due to starvation for relevant novelty in its environment of interaction.

If, Feynman-like, we imagine the present state of knowledge about our world in terms of a distribution of vertical domains, like silos, some broader with relevance to many diverse facets of real-world interaction, some thin and towering into the haze of leading-edge mathematical reality, then we can imagine the powerful machine quickly identifying and making a multitude of latent connections and meta-connections, filling in the space between the silos and even somewhat above -- but to what extent, given the inevitable diminishing returns among the latent, and the resulting starvation for the novel?

Given such boundedness, speculation is redirected to growth in ecological terms, and the Red Queen's Race continues ever faster.

Comment by Jef_Allbright on BHTV: Jaron Lanier and Yudkowsky · 2008-11-03T17:24:00.000Z · LW · GW

Frelkins and Marshall pretty well sum up my impressions of the exchange between Jaron and EY.

Perhaps pertinent, I'd suggest an essay on OvercomingBias on our unfortunate tendency to focus on the other's statements, rather than focusing on a probabilistic model of the likelihood function generating those statements. Context is crucial to meaning, but must be formed rather than conveyed. Ironically—but reflecting the fundamentally hard value of intelligence—such contextual asymmetry appears to work against those who would benefit the most.

More concretely, I'm referring to the common tendency to shake one's head in perplexity and say "He was so wrong, he didn't make much sense at all," in comparison with laughing and saying "I can see how he thinks that way, within his context (which I may have once shared.)"

Comment by Jef_Allbright on Economic Definition of Intelligence? · 2008-10-29T21:27:47.000Z · LW · GW

My (not so "fake") hint:

Think economics of ecologies. Coherence in terms of the average mutual information of the paths of trophic I/O provides a measure of relative ecological effectiveness (absent prediction or agency.) Map this onto the information I/O of a self-organizing hierarchical Bayesian causal model (with, for example, four major strata for human-level environmental complexity) and you should expect predictive capability within a particular domain, effective in principle, in relation to the coherence of the hierarchical model over its context.
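
As a very rough sketch of the kind of coherence measure I have in mind (the joint distribution below is a made-up placeholder, not data from any real ecology or model), average mutual information between an input path and an output path can be computed directly from their joint distribution:

    import numpy as np

    # I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
    joint = np.array([[0.30, 0.05],
                      [0.10, 0.55]])          # hypothetical p(input, output)
    p_in = joint.sum(axis=1, keepdims=True)   # marginal over inputs
    p_out = joint.sum(axis=0, keepdims=True)  # marginal over outputs

    mutual_information = np.sum(joint * np.log2(joint / (p_in * p_out)))
    print(f"I(input; output) = {mutual_information:.3f} bits")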

As to comparative evaluation of the intelligence of such models without actually running them, I suspect this is similar to trying to compare the intelligence of phenotypical organisms by comparing the algorithmic complexity of their DNA.

Comment by Jef_Allbright on Which Parts Are "Me"? · 2008-10-23T16:38:00.000Z · LW · GW

@Tim Tyler: "That's no reason not to talk about goals, and instead only mention something like "utility"."

Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it.) Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also due to critical underspecification of priors, we find that ultimately (in the bigger picture) rational decision-making is not so much about "expected utility" or "goals" as it is about promoting a present model of evolving values into one's future, via increasingly effective interaction with one's (necessarily local) environment of interaction. Wash, rinse, repeat. Certainty, goals, and utility are always only a special case, applicable to the extent that the context is adequately specifiable. This is the key to so-called "paradoxes" such as the Prisoner's Dilemma and Parfit's Repugnant Conclusion as well.

Tim, this forum appears to be over-heated and I'm only a guest here. Besides, I need to pack and get on my motorcycle and head up to San Jose for Singularity Summit 08 and a few surrounding days of high geekdom.

I'm (virtually) outta here.

Comment by Jef_Allbright on Which Parts Are "Me"? · 2008-10-23T16:01:00.000Z · LW · GW

@Eliezer: There's emotion involved. I enjoy calling people's bluffs.

Jef, if you want to argue further here, I would suggest explaining just this one phrase "functional self-similarity of agency extended from the 'individual' to groups".

Eliezer, it's clear that your suggestion isn't friendly, and I intended not to argue, but rather, to share and participate in building better understanding. But you've turned it into a game which I can either play, or allow you to use it against me. So be it.

The phrase is a simple one, but stripped of context, as you've done here, it may indeed appear meaningless. So to explain, let's first restore context.

Your essay, Which Parts are "Me", highlighted some interesting and significant similarities -- and differences -- in our thinking. Interesting, because they match an epistemological model I held tightly and would still defend against simpler thinking, and significant, because a coherent theory of self, or rather agency, is essential to a coherent meta-ethics.

So I wrote (after trying to establish some similarity of background):

"At some point about 7 years later (about 1985) it hit me one day that I had completely given up belief in an essential "me", while fully embracing a pragmatic "me". It was interesting to observe myself then for the next few years; every 6 months or so I would exclaim to myself (if no one else cared to listen) that I could feel more and more pieces settling into a coherent and expanding whole. It was joyful and liberating in that everything worked just as before, but I had to accommodate one less hypothesis, and certain areas of thinking, meta-ethics in particular, became significantly more coherent and extensible. [For example, a piece of the puzzle I have yet to encounter in your writing is the functional self-similarity of agency extended from the "individual" to groups.]"

So I offered a hint, of an apparently unexplored (for you) direction of thought, which, given a coherent understanding of the functional role of agency, might benefit your further thinking on meta-ethics.

The phrase represents a simple concept, but rests on a subtle epistemic foundation which, as Mathew C pointed out, tends to bring out vigorous defenses in support of the Core Self. Further to the difficulty, an epistemic foundation cannot be conveyed, but must be created in the mind of the thinker, as described pretty well recently by Meltzer in a paper that "stunned" Robin Hanson, entitled Pedagogical Motives for Esoteric Writing. So, the phrase is simple, but the meaning depends on background, and along the road to acquiring that background, there is growth.

To break it down: "Functional self-similarity of agency extended from the 'individual' to groups."

"Functional" indicates that I'm referring to similarity in terms of function, i.e. relations of output to input, rather than e.g. similarities of implementation, structure, or appearance. More concretely [I almost neglected to include the concrete.] I'm referring to the functional aspects of agency, in essence, action on behalf of perceived interests (an internal model of some sort) in relation to which the agent acts on its immediate environment so as to (tend to) null out any differences.

"Self-similarity" refers to some entity replicated, conserved, re-used over a range of scale. More concretely, I'm referring to patterns of agency which repeat -- in functional terms, even though the implementation may be quite different in structure, substrate, or otherwise.

"Extended from the individual to groups" refers to the scale of the subject, in other words, that functional self-similarity of agency is conserved over increasing scale from the common and popularly conceived case of individual agency, extending to groups, groups of groups, and so on. More concretely, I'm referring to the essential functional similarities, in terms of agency, which are conserved when a model scales for example, from individual human acting on its interests, to a family acting on its interests, to tribe, company, non-profit, military unit, city-state, etc. especially in terms of the dynamics of its interactions with entities of similar (functional) scale, but also with regard to the internal alignments (increasing coherence) of its own nature due to selection for "what works."

As you must realize, regularities observed over increasing scale tend to indicate an increasingly profound principle. That was the potential value I offered to you.

In my opinion, the foregoing has a direct bearing on a coherent meta-ethics, and is far from "fake". Maybe we could work on "increasing coherence with increasing context" next?

Comment by Jef_Allbright on Which Parts Are "Me"? · 2008-10-23T00:28:57.000Z · LW · GW

Mathew C: "And the biggest threat, of course, is the truth that the self is not fundamentally real. When that is clearly seen, the gig is up."

Spot on. That is by far the biggest impasse I have faced anytime I try to convey a meta-ethics denying the very existence of the "singularity of self" in favor of the self of agency over increasing context. I usually downplay this aspect until after someone has expressed a practical level of interest, but it's right there out front for those who can see it.

Thanks. Nice to be heard...

Based on the disproportionate reaction from our host, I'm going to sit quietly now.

Comment by Jef_Allbright on Which Parts Are "Me"? · 2008-10-23T00:21:17.000Z · LW · GW

@Cyan: "... you're going to need more equations and fewer words."

Don't you see a lower-case sigma representing a series every time I say "increasingly"? ;-)

Seriously though, I read a LOT of technical papers and it seems to me many of the beautiful LaTeX equations and formulas are there only to give the impression of rigor. And there are few equations that could "prove" anything in this area of inquiry.

What would help my case, if it were not already long lost in Eliezer's view, is to have provided examples, references, and commentary along with each abstract formulation. I lack the time to do so, so I've always considered my "contributions" to be seeds of thought to grow or not depending on whether they happen to find fertile soil.

Comment by Jef_Allbright on Which Parts Are "Me"? · 2008-10-22T23:14:45.000Z · LW · GW

@Eliezer: I can't imagine why I might have been amused at your belief that you are what a grown-up Eliezer Yudkowsky looks like.

No, but of course I wasn't referring to similarity of physical appearance, nor do I characteristically comment at such a superficial level. Puhleease.

I don't know if I've mentioned this publicly before, but as you've posted in this vein several times now, I'll go ahead and say it:

functional self-similarity of agency extended from the 'individual' to groups

I believe that the difficult-to-understand, high-sounding ultra-abstract concepts you use with high frequency and in great volume, are fake. I don't think you're a poor explainer; I think you have nothing to say.

If I don't give you as much respect as you think you deserve, no more explanation is needed than that, a conclusion I came to years ago.

Well, that explains the ongoing appearance of disdain and dismissal. But my kids used to do something similar, and then I was sometimes gratified to see, shortly after, an echo of my concepts in their own words.

Let me expand on my "fake" hint of a potential area of growth for your moral epistemology:

If you can accept that the concept of agency is inherent to any coherent meta-ethics, then we might proceed. But, you seem to preserve and protect a notion of agency that can't be coherently modeled.

You continue to posit agency that exploits information at a level unavailable to the system, and wave it away with hopes of math that "you don't yet have." Examples are your post today, which has a "real self" somehow dominating lesser aspects of self as if they were quite independent systems, and your "profound" but unmodelable interpretation of ishoukenmei, which bears only a passing resemblance to the very realistic usage I learned while living in Japan.

You continue to speak (and apparently think) in terms of "goals", even when such "goals" can't be effectively specified in the uncertain context of a complex evolving future, and you don't seem to consider the cybernetic or systems-theoretic reality that ultimately no system of interesting complexity, including humans, actually attains long-term goals so much as it simply tries to null out the difference between its (evolving) internal model and its perceptions of its present reality. All the intelligence is in the transform function effecting its step-wise actions. And that's good enough, but never absolutely perfect. But the good enough that you can have is always preferable to the absolutely perfect that you can never have (unless you intend to maintain a fixed context.)
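
To make the cybernetic point concrete, here is a minimal sketch (the reference, gain, and drift are arbitrary placeholders): the agent never attains the reference as a finished goal; it simply keeps acting, step by step, to null out the gap between its reference and its current perception, and all the interesting structure lives in the transform from error to action.

    def run_agent(steps=20, reference=10.0, gain=0.3, drift=-0.2):
        state = 0.0
        for step in range(steps):
            perception = state            # what the agent currently observes
            error = reference - perception
            action = gain * error         # the "transform function": error -> action
            state += action + drift       # the environment responds, imperfectly
            print(f"step={step:2d}  state={state:6.2f}  error={error:6.2f}")

    run_agent()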

You posit certainty (e.g. friendliness) as an achievable goal, and use rigorous-sounding terms like "invariant goal" in regard to decision-making in an increasingly uncertain future, but blatantly and blithely ignore concerns addressed to you over the years by myself and others as to how you think that this can possibly work, given the ineluctable combinatorial explosion, and the fundamentally critically underspecified priors.

I realize it's like a Pascal's Wager for you, and I admire your contributions in a sense somewhat tangential to your own, but like an isolated machine intelligence of high processing power but lacking an environment of interaction of complexity similar to its own - eventually you run off at high speed exploring quite irrelevant reaches of possibility space.

As to my hint to you today, if you have a workable concept of agency, then you might profit from consideration of the functional self-similarity of agencies composed of agencies, and so on, self-similar with increasing scale, and how the emergent (yeah, I know you dismiss "emergence" too) dynamics will tend to be perceived as increasingly moral (from within the system, as each of us necessarily is) due to the multi-level selection and therefore alignment for "what works" (nulling out the proximal difference between their model and their perceived reality, wash, rinse, repeat) by agents each acting in their own interest within an ecology of competing interests.

Sheesh, I may be abstract, I may be a bit too out there to relate to easily, but I have a hard time with "fake."

I meant to shake your tree a bit, in a friendly way, but not to knock you out of it. I've said repeatedly that I appreciate the work you do and even wish I could afford to do something similar. I'm a bit dismayed, however, by the obvious emotional response and meanness from someone who prides himself on sharpening the blade of his rationality by testing it against criticism.

Comment by Jef_Allbright on Which Parts Are "Me"? · 2008-10-22T20:26:31.000Z · LW · GW

Matthew C quoting Einstein: "A human being is a part of the whole, called by us, "Universe," a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest -- a kind of optical delusion of his consciousness."

Further to this point, and Eliezer's description of the Rubicon: It seems that recognizing (or experiencing) that perceived separation is a step necessary to its eventual resolution. Those many who've never even noticed to ask the question will not notice the answer, no matter how close to them it may be.

Comment by Jef_Allbright on Which Parts Are "Me"? · 2008-10-22T19:45:53.000Z · LW · GW

Eliezer, A few years ago I sat across from you at dinner and mentioned how much you reminded me of my younger self. I expected, incorrectly, that you would receive this with the appreciation of a person being understood, but saw instead on your face an only partially muted expression of snide mirth. For the next hour you sat quietly as the conversation continued around us, and on my drive home from the Bay Area back to Santa Barbara I spent a bit more time reflecting on the various interactions during the dinner and updating my model of others and you.

For as long as I can remember, well before the age of 4, I've always experienced myself from both within and without as you describe. On rare occasions I've found someone else who knows what I'm talking about, but I can't say I've ever known anyone closely for whom it's such a strong and constant part of their subjective experience as it has been for me. The emotions come and go, in all their intensity, but they are both felt and observed. The observations of the observations are also observed, and all this up to typically and noticeably, about 4 levels of abstraction. (Reflected in my natural writing style as well.)

This leads easily and naturally to a model representing a part of oneself dealing with another part of oneself. Which worked well for me up until about the age of 18, when a combination of long-standing unsatisfied questions of an epistemological nature on the nature of induction and entropy, readings (Pirsig, Buckminster Fuller, Hofstadter, Dennett, and some of the more coherent and higher-integrity books on Buddhism) led me to question and then reorganize my model of my relationship to my world. At some point about 7 years later (about 1985) it hit me one day that I had completely given up belief in an essential "me", while fully embracing a pragmatic "me". It was interesting to observe myself then for the next few years; every 6 months or so I would exclaim to myself (if no one else cared to listen) that I could feel more and more pieces settling into a coherent and expanding whole. It was joyful and liberating in that everything worked just as before, but I had to accommodate one less hypothesis, and certain areas of thinking, meta-ethics in particular, became significantly more coherent and extensible. [For example, a piece of the puzzle I have yet to encounter in your writing is the functional self-similarity of agency extended from the "individual" to groups.]

Meanwhile I continued in my career as a technical manager and father, and had yet to read Cosmides and Tooby, Kahneman and Tversky, E.T. Jaynes or Judea Pearl -- but when I found them they felt like long lost family.

I know of many reasons why it's difficult, nigh impossible, to convey this conceptual leap, and hardly any reason why one would want to make it, other than one whose values already drive him to continue to refine his model of reality.

I offer this reflection on my own development, not as a "me too" or any sort of game of competition of perceived superiority, but only as a gentle reminder that, as you've already seen in your previous development, what appears to be a coherent model now, can and likely will be upgraded (not replaced) to accommodate a future, expanded, context of observations.

Comment by Jef_Allbright on Ethical Inhibitions · 2008-10-20T23:59:05.000Z · LW · GW

@G: " if ethics were all about avoiding "getting caught", then the very idea that there could be an ethical "right thing to do" as opposed to what society wants one to do would be incoherent."

Well, I don't think anyone here actually asserted that the basis of ethics was avoiding getting caught, or even fear of getting caught. It seems to me that Eliezer posited an innate moral sense inhibiting risk-taking in the moral domain, and in my opinion this is more a reflection of his early childhood environment of development than any innate moral sense such as pride or disgust. Even though I think Eliezer was working from the wrong basis, I think he's offered a valid observation on the apparent benefit of "deep wisdom" with regard to tending to avoid "black swans."

But there seems to be an even more direct problem with your query, in that it's strictly impractical in terms of the information model it would entail, that individual agents would somehow be equipped with the same model of "right" as the necessarily larger model supported by society.

Apologies in advance, but I'm going to bow out of this discussion now due to diminishing returns and sensitivity to our host.

Comment by Jef_Allbright on Ethical Inhibitions · 2008-10-20T21:51:11.000Z · LW · GW

@George Weinberg: "...from an evolutionary perspective: why do we have a sense that we ought to do what is right as opposed to what society wants us to do?"

In other words, why don't humans function as mindless drones serving the "greater good" of their society? Like ants or bees? Well, if you were an ant or a bee, even one capable of speculating on evolutionary theory, you wouldn't ask that question, but rather its obverse. ;-)

Peter Watts wrote an entertaining bit of fiction, Blindsight, on a similar question, but to ask why evolution would do X rather than Y imputes an inappropriate teleology.

Otherwise, if you were asking as to the relative merits of X versus Y, I think the most powerful answer would hinge on the importance of diversity at multiple levels for robust adaptability, rather than highest degree of adaptation.

And, it might help to keep in mind that biological organisms are adaptation executers, not fitness maximizers, and also that evolutionary economics favors satisficing over "optimizing."

Comment by Jef_Allbright on Ethical Inhibitions · 2008-10-20T20:48:03.000Z · LW · GW

@Caledonian: "...we must therefore conclude that a fatal flaw exists in our model..."

It's not necessarily that a "fatal flaw" exists in a model, but that all models are necessarily incomplete.

Eliezer's reasoning is valid and correct -- over a limited context of observations supporting meaning-making. It may help to consider that groups promote individual members, biological organisms promote genes, genes promote something like "material structures of increasing synergies"...

In cybernetic terms, in the bigger picture, there's nothing particularly privileged about the role of the gene, nor about biological evolutionary processes as a special case of a more fundamental organizing principle.

Comment by Jef_Allbright on Ethical Inhibitions · 2008-10-20T19:42:10.000Z · LW · GW

Eliezer: "The problem is that it's nigh mathematically impossible for group selection to overcome a countervailing individual selection pressure..."

While Eliezer's point here is quite correct within its limited context of individual selection versus group selection, it seems obvious, supported by numerous examples in nature around us, that his case is overly simplistic, failing to address multi-level or hierarchical selection effects, and in particular, the dynamics of selection between groups.

This would appear to bear also on the difficulty of comprehending selection between (and also within) multi-level agencies in the moral domain.

Comment by Jef_Allbright on Protected From Myself · 2008-10-19T21:46:15.000Z · LW · GW

odf23ds: "Ack. Could you please invent some terminology so you don't have to keep repeating this unwieldy phrase?"

I'm eager for an apt idiom for the concept, and one also for "increasing coherence over increasing context."

It seems significant, and indicative of our cultural unfamiliarity -- even discomfort -- with concepts of systems, information, and evolutionary theory, that we don't have such shorthand.

But then I look at the gross misunderestimation of almost every issue of any complexity at every level of supposed sophistication of social decision-making, and then geek speak seems not so bad.

Suggestions?

Comment by Jef_Allbright on Protected From Myself · 2008-10-19T19:14:52.000Z · LW · GW

Russell: "ethics consists of hard-won wisdom from many lifetimes, which is how it is able to provide me with a safety rail against the pitfalls I have yet to encounter in my single lifetime."

Yes, generations of selection for "what works" encoded in terms of principles tends to outweigh assessment within the context of an individual agent in terms of expected utility -- to the extent that the present environment is representative of the environment of adaptation. To the extent it isn't, then the best one can do is rely on the increasing weight of principles perceived hierarchically as increasingly effective over increasing scope of consequences, e.g. action on the basis of the principle known as the "law of gravity" is a pretty certain bet.

Comment by Jef_Allbright on Dark Side Epistemology · 2008-10-18T13:43:33.000Z · LW · GW

I'm in strong agreement with Peter's examples above. I would generalize by saying that the epistemic "dark side" tends to arise whenever there's an implicit discounting of the importance of increasing context. In other words, whenever, for the sake of expediency, "the truth", "the right", "the good", etc., is treated categorically rather than contextually (or equivalently, as if the context were fixed or fully specified.)

Comment by Jef_Allbright on Ends Don't Justify Means (Among Humans) · 2008-10-16T00:00:44.000Z · LW · GW

Phil: "Is that on this specific question, or a blanket "I never respond to Phil or Jef" policy?"

I was going to ask the same question, but assumed there'd be no answer from our gracious host. Disappointing.

Comment by Jef_Allbright on Ends Don't Justify Means (Among Humans) · 2008-10-15T22:05:12.000Z · LW · GW

Eliezer: "I'm not responding to Phil Goetz and Jef Allbright. And you shouldn't infer my positions from what they seem to be arguing with me about - just pretend they're addressing someone else."

Huh. That doesn't feel very nice.

Comment by Jef_Allbright on Ends Don't Justify Means (Among Humans) · 2008-10-15T20:45:10.000Z · LW · GW

@Cyan: "Hostile hardware", meaning that an agent's values-complex (essentially the agent's nature, driving its actions) contains elements misaligned (even to the extent of being in internal opposition on some level(s) of the complex hierarchy of values) is addressed by my formulation in the "increasing coherence" term. Then, I did try to convey how this is applicable to any moral agent, regardless of form, substrate, or subjective starting point.

I'm tempted to use n's very nice elucidation of the specific example of political corruption to illustrate my general formulation (politician's relatively narrow context of values, relatively incoherent if merged with his constituents' values, scope of consequences amplified disproportionately by the increased instrumental effectiveness of his office) but I think I'd better let it go at this. [Following the same moral reasoning applied to my own relatively narrow context of values with respect to the broader forum, etc.]

Comment by Jef_Allbright on Ends Don't Justify Means (Among Humans) · 2008-10-15T18:17:00.000Z · LW · GW

@Cyan: Substituting "consider only actions that have predictable effects..." is for me much clearer than "limit the universe of discourse to actions that have predictable effects..." ["and note that Eliezer's argument still makes strong claims about how humans should act."]

But it seems to me that I addressed this head-on at the beginning of my initial post, saying "Of course the ends justify the means -- to the extent that any moral agent can fully specify the ends."

The infamous "Trolley Paradox" does not demonstrate moral paradox at all. It does, however, highlight the immaturity of the present state of our popular framework for moral reasoning. The Trolley problem is provided as if fully specified, and we are supposed to be struck by the disparity between the "true" morality of our innate moral sense, and the "true" morality of consequentialist reasoning. The dichotomy is false; there is no paradox.

All paradox is a matter of insufficient context. In the bigger picture, all the pieces must fit. Or as Eliezer has repeated recently, "it all adds up to normalcy." So in my posts on this topic, I proceeded to (attempt to) convey a larger and more coherent context making sense of the ostensible issue.

Problem is, contexts (being subjective) can't be conveyed. Best that can be done is to try to enrich the (discursive - you're welcome) environment sufficiently that you might form a comprehensibly congruent context in relevant aspects of your model of the world.

Comment by Jef_Allbright on Ends Don't Justify Means (Among Humans) · 2008-10-15T15:55:02.000Z · LW · GW

Cyan: "...tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions."

On the contrary, promotion into the future of a [complex, hierarchical] evolving model of values of increasing coherence over increasing context, would seem to be central to the topic of this essay.

Fundamentally, any system, through interaction with its immediate environment, always only expresses its values (its physical nature.) "Intention", corresponding to "free-will" is merely derivative and for practical purposes in regard to this analysis of the system dynamics, is just "along for the ride."

But to the extent that the system involves a reflexive model of its values -- an inherently subjective view of its nature -- then increasing effectiveness in principle, indirectly assessed in terms of observations of those values being promoted over increasing external scope of consequences, tends to correspond with increasing coherence of the (complex, hierarchical) inter-relationships of the elements within the model, over increasing context of meaning-making (increasing web of supporting evidence.) Wash, rinse, repeat with ongoing interaction --> selection for "that which tends to work" --> updating of the model...

"Morality" enters the picture only in regard to groups of agents. For a single, isolated, agent "morality" doesn't apply; there is only the "good" of that which is assessed as promoting that agent's (present, but evolving) values-complex. At the other end of the scale of subjectivity, in the god's-eye view, there is no morality since all is simply and perfectly as it is.

But along that scale, regardless of the subjective starting point (whether human agency of various scale, other biological, or machine-phase agency) action will tend to be assessed as increasingly moral to the extent that it is assessed as promoting, in principle, (1) a subjective model of values increasingly coherent over increasing context (of meaning-making, evidential observation) over (2) increasing scope of objective consequences.

Evolutionary processes have encoded this accumulating "wisdom" slowly and painfully into the heuristics supporting the persistence of the physical, biological and cultural branch with which we self-identify. With the ongoing acceleration of the Red Queen's Race, I see this meta-ethical theory becoming ever more explicitly applicable to "our" ongoing growth as intentional agents of whatever form or substrate.


Cyan: "...limit the universe of discourse to actions which have predictable effects..."

I'm sorry, but my thinking is based almost entirely in systems and information theory, so when terms like "universe of discourse" appear, my post-modernism immune response kicks in and I find myself at a loss to continue. I really don't know what to do with your last statement.

Comment by Jef_Allbright on Ends Don't Justify Means (Among Humans) · 2008-10-14T22:43:52.000Z · LW · GW

Phil: "I don't know what "a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences" means."

You and I engaged briefly on this four or five years ago, and I have yet to write the book. [Due to the explosion of branching background requirements that would ensue.] I have, however, effectively conveyed the concept face to face to very small groups.

I keep seeing Eliezer orbiting this attractor, and then veering off as he encounters contradictions to a few deeply held assumptions. I remain hopeful that the prodigious effort going into the essays on this site will eventually (and virtually) serve as that book.

Comment by Jef_Allbright on Ends Don't Justify Means (Among Humans) · 2008-10-14T22:12:34.000Z · LW · GW

There's really no paradox, nor any sharp moral dichotomy between human and machine reasoning. Of course the ends justify the means -- to the extent that any moral agent can fully specify the ends.

But in an interesting world of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term specific consequences. Rather the moral agent must necessarily fall back on heuristics, fundamentally hard-to-gain wisdom based on increasingly effective interaction with relevant aspects of the environment of interaction, promoting in principle a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences.

Comment by Jef_Allbright on Beyond the Reach of God · 2008-10-04T17:43:44.000Z · LW · GW

"I don't think that even Buddhism allows that."

Remove whatever cultural or personal contextual trappings you find draped over a particular expression of Buddhism, and you'll find it very clear that Buddhism does "allow" that, or more precisely, un-asks that question.

As you chip away at unfounded beliefs, including the belief in an essential self (however defined), or the belief that there can be a "problem to be solved" independent of a context for its specification, you may arrive at the realization of a view of the world flipped inside-out, with everything working just as before, less a few paradoxes.

The wisdom of "adult" problem-solving is not so much about knowing the "right" answers and methods, but about increasingly effective knowledge of what doesn't work. And from the point of view of any necessarily subjective agent in an increasingly uncertain world, that's all there ever was or is.

Certainty is always a special case.

Comment by Jef_Allbright on Trying to Try · 2008-10-01T23:20:09.000Z · LW · GW

It seems you've missed the point here, one common to Eastern Wisdom and to systems theory. The "deep wisdom" which you would mock refers to the deep sense that there is no actual "self" separate from that which acts, so thinking in terms of "trying" is an incoherent and thus irrelevant distraction. Other than its derivative implication that to squander attention is to reduce one's effectiveness, it says nothing about the probability of success, which in systems-theoretic terms is necessarily outside the agent's domain.

Reminds me of the frustratingly common incoherence of people thinking that they decide intentionally according to their innate values, in ignorance of the reality that they are nothing more nor less than the values expressed by their nature.

Comment by Jef_Allbright on Friedman's "Prediction vs. Explanation" · 2008-09-29T15:40:11.000Z · LW · GW

Among the many excellent, and some inspiring, contributions to OvercomingBias, this simple post, together with its comments, is by far the most impactful for me. It's scary in almost the same way as how the general public approaches the selection of its elected representatives and leaders.

Comment by Jef_Allbright on Competent Elites · 2008-09-27T13:10:34.000Z · LW · GW

For me, a highlight of each year is a multi-day gathering of about 40 individuals selected for their intelligence, integrity and passion to make the world a better place. We share our current thinking and projects and actively refine and synergize plans for the year ahead. Nearly everyone there displays perceptiveness, creativity, joy of life, "sparkle", well above the norm, but -- these qualities are NOT highly predictive of effectiveness outside the individual's preferred environment.

Comment by Jef_Allbright on The Level Above Mine · 2008-09-26T19:22:25.000Z · LW · GW

@Roland

I suppose you could google "(arrogant OR arrogance OR modesty) eliezer yudkowsky" and have plenty to digest. Note that the arrogance at issue is neither dishonest nor unwarranted, but it is an impairment, and a consequence of trade-offs which, from within a broader context, probably wouldn't be taken in the same way.

That's as far as I'm willing to entertain this line of inquiry, whose ostensibly neutral request for facts appears to belie an undercurrent of offense.

Comment by Jef_Allbright on The Level Above Mine · 2008-09-26T15:50:02.000Z · LW · GW

Eliezer, I've been watching you with interest since 1996 due to your obvious intelligence and "altruism." From my background as a smart individual with over twenty years managing teams of Ph.D.s (and others with similar non-degreed qualifications) solving technical problems in the real world, you've always struck me as near but not at the top in terms of intelligence. Your "discoveries" and developmental trajectory fit easily within the bounds of my experience of myself and a few others of similar aptitudes, but your (sheltered) arrogance has always stood out. I wish you continued progress, not so much in ever-sharper analysis, but in ever more effective synthesis of the leading-edge subjects you pursue.

Comment by Jef_Allbright on The True Prisoner's Dilemma · 2008-09-04T16:00:47.000Z · LW · GW

I see this discussion over the last several months bouncing around, teasingly close to a coherent resolution of the ostensible subjective/objective dichotomy applied to ethical decision-making. As a perhaps pertinent meta-observation, my initial sentence may promulgate the confusion with its expeditious wording of "applied to ethical decision-making" rather than a more accurate phrasing such as "applied to decision-making assessed as increasingly ethical over increasing context."

Those who in the current thread refer to the essential element of empathy or similarity (of self models) come close. It's important to realize that any agent always only expresses its nature within its environment -- assessments of "rightness" arise only in the larger context (of additional agents, additional experiences of the one agent, etc.)

Our language and our culture reinforce an assumption of an ontological "rightness" that pervades our thinking on these matters. An even greater (perceived) difficulty is that to relinquish ontological "rightness" entails ultimately relinquishing an ontological "self". But to relinquish such ultimately unfounded beliefs is to gain clarity and coherence while giving up nothing actual at all.

"Superrationality" is an effective wrapper around these apparent dilemmas, but even proponents such as Hofstadter confused description with prescription in this regard. Paradox is always only a matter of insufficient context. In the bigger picture all the pieces must fit. [Or as Eliezer has taken to saying recently: "It all adds up to normalcy."

Apologies if my brief pokings and proddings on this topic appear vague or even mystical. I can only assert within this limited space and bandwidth that my background in science, engineering and business is far from that of one who could harbor vagueness, relativism, mysticism, or postmodernist patterns of thought. I appreciate the depth and breadth of Eliezer's written explorations of this issue whereas I lack the time to do so myself.

Comment by Jef_Allbright on The Meaning of Right · 2008-07-29T14:45:14.000Z · LW · GW

Watching the ensuing commentary, I'm drawn to wishfully imagine a highly advanced Musashi, wielding his high-dimensional blade of rationality such that in one stroke he delineates and separates the surrounding confusion from the nascent clarity. Of course no such vorpal katana could exist, for if it did, it would serve only to better clear the way for its successors.

I see a preponderance of viewpoints representing, in effect, the belief that "this is all well and good, but how will this guide me to the one true prior, from which Archimedean point one might judge True Value?"

I see some who, given a method for reliably discarding much which is not true, say scornfully in effect "How can this help me? It says nothing whatsoever about Truth itself!"

And then there are the few who recognize we are each like leaves of a tree rooted in reality, and while we should never expect exact agreement between our differing subjective models, we can most certainly expect increasing agreement -- in principle -- as we move toward the root of increasing probability, pragmatically supporting, rather than unrealistically affirming, the ongoing growth of branches of increasing possibility. [Ignoring the progressive enfeeblement of the branches necessitating not just growth but eventual transformation.]

Eliezer, I greatly appreciate the considerable time and effort you must put into your essays. Here are some suggested topics that might help reinforce and extend this line of thought:

  • Two communities, separated by a chasm: Would it be seen as better (perhaps obviously) to build a great bridge between them, or to consider the problem in terms of an abstract hierarchy of values, for example involving impediments to transfer of goods, people, ... ultimately information, for which building a bridge is only a special-case solution? In general, is any goal not merely a special case (and utterly dependent on its specifiability) of values-promotion?

  • Fair division, etc. Probably nearly all readers of Overcoming Bias are familiar with a principled approach to fair division of a cake into two pieces (sketched just after this list), and higher-order solutions have been shown to be possible with attendant computational demands. Similarly, Rawls proposed that we ought to be satisfied with social choice implemented by best-known methods behind a veil of ignorance as to specific outcomes in relation to specific beneficiaries. Given the inherent uncertainty of specific future states within any evolving system of sufficient complexity to be of moral interest, what does this imply about shifting moral attention away from expected consequences, and toward increasingly effective principles reasonably optimizing our expectation of improving, but unspecified and indeed unspecifiable, consequences? Bonus question: How might this apply to Parfit's Repugnant Conclusion and other well-recognized "paradoxes" of consequentialist utilitarianism?

  • Constraints essential for meaningful growth: Widespread throughout the "transhumanist" community appears the belief that considerable, if not indefinite progress can be attained via the "overcoming of constraints." Paradoxically, the accelerating growth of possibilities that we experience arises not with overcoming constraints, but rather embracing them in ever-increasing technical detail. Meaningful growth is necessarily within an increasingly constrained possibility space -- fortunately there's plenty of fractal interaction area within any space of real numbers -- while unconstrained growth is akin to a cancer. An effective understanding of meaningful growth depends on an effective understanding of the subjective/objective dichotomy.
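
On the fair-division point above, a minimal sketch of two-agent cut-and-choose (the valuation functions are hypothetical step densities I made up): the cutter splits at a point she values as equal, the chooser takes whichever piece he prefers, and neither envies the other.

    def value(piece, density):
        # piece is (start, end) on a [0, 1] cake; density is a list of (a, b, weight) steps.
        start, end = piece
        return sum(w * max(0.0, min(end, b) - max(start, a)) for (a, b, w) in density)

    cutter_density  = [(0.0, 0.5, 1.0), (0.5, 1.0, 1.0)]   # values the cake uniformly
    chooser_density = [(0.0, 0.5, 0.5), (0.5, 1.0, 1.5)]   # prefers the right half

    cut = 0.5                                               # equal halves by the cutter's own lights
    left, right = (0.0, cut), (cut, 1.0)
    chooser_pick = right if value(right, chooser_density) >= value(left, chooser_density) else left
    cutter_keep = left if chooser_pick == right else right
    print("chooser takes", chooser_pick, "| cutter keeps", cutter_keep)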

Thanks again for your substantial efforts.

Comment by Jef_Allbright on The Meaning of Right · 2008-07-29T04:12:04.000Z · LW · GW

Eliezer, it's a pleasure to see you arrive at this point. With an effective understanding of the subjective/objective aspects supporting a realistic metaethics, I look forward to your continued progress and contributions in terms of the dynamics of increasingly effective evolutionary (in the broadest sense) development for meaningful growth, promoting a model of (subjective) fine-grained, hierarchical values with increasing coherence over increasing context of meaning-making, implementing principles of (objective) instrumental action increasingly effective over increasing scope of consequences. Wash, rinse, repeat...

There's no escape from the Red Queen's race, but despite the lack of objective milestones or markers of "right", there's real progress to be made in the direction of increasing rightness.

Society has been doing pretty well at the increasingly objective model of instrumental action commonly known as warranted scientific knowledge. Now if we could get similar focus on the challenges of values-elicitation, inductive biases, etc., leading to an increasingly effective (and coherent) model of agent values...

  • Jef
Comment by Jef_Allbright on Probability is in the Mind · 2008-03-12T14:10:23.000Z · LW · GW

In other words, probability is not likelihood.
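
A minimal sketch of the distinction, using a binomial model with made-up numbers: the same expression P(data | theta), read as a function of the data with theta fixed, is a probability distribution and sums to one; read as a function of theta with the data fixed, it is a likelihood and need not sum to anything in particular.

    from math import comb

    def binom_pmf(k, n, theta):
        # P(k successes in n trials | success probability theta)
        return comb(n, k) * theta**k * (1 - theta)**(n - k)

    n = 10
    # Theta fixed, data varying: a probability distribution over k.
    probabilities = [binom_pmf(k, n, theta=0.3) for k in range(n + 1)]
    # Data fixed (k = 7), theta varying: a likelihood function over theta.
    likelihoods = [binom_pmf(7, n, theta=i / 10) for i in range(11)]

    print(sum(probabilities))  # ~1.0 -- genuinely a probability distribution
    print(sum(likelihoods))    # not 1 -- likelihoods need not normalize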

Comment by Jef_Allbright on Circular Altruism · 2008-01-25T15:02:00.000Z · LW · GW

Anon wrote: "Any question of ethics is entirely answered by arbitrarily chosen ethical system, therefore there are no "right" or "better" answers."

Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.

Comment by Jef_Allbright on Circular Altruism · 2008-01-23T20:06:00.000Z · LW · GW

Once again we've highlighted the immaturity of present-day moral thinking -- the kind that leads inevitably to Parfit's Repugnant Conclusion. But any paradox is merely a matter of insufficient context; in the bigger picture all the pieces must fit.

Here we have people struggling over the relative moral weight of torture versus dust specks, without recognizing that there is no objective measure of morality, but only objective measures of agreement on moral values.

The issue at hand can be modeled coherently in terms of the relevant distances (regardless of how highly dimensional, or what particular distance metric) between the assessor's preferred state and the assessor's perception of the alternative states. Regardless of the particular (necessarily subjective) model and evaluation function, there must be some scalar distance between the two states within the assessor's model (since a rational assessor can have only a single coherent model of reality, and the alternative states are not identical.) Furthermore, introducing a multiplier on the order of a googolplex overwhelms any possible scale in any realizable model, leading to an effective infinity, forcing one (if one's reasoning is to be coherent) to view that state as dominant.
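
A crude numeric illustration of that dominance, worked in log10 terms because a googolplex has no direct floating-point representation (the disutility figures are arbitrary placeholders):

    # Even an absurdly tiny per-event disutility, multiplied by a googolplex
    # (10**(10**100)) of events, dwarfs any plausible single-event disutility.
    log10_multiplier = 10**100        # log10 of a googolplex
    log10_speck = -100.0              # hypothetical per-speck disutility: 10**-100
    log10_torture = 50.0              # hypothetical single-torture disutility: 10**50

    log10_total_specks = log10_multiplier + log10_speck
    print(log10_total_specks > log10_torture)   # True: the multiplier swamps any realizable scale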

All of this (as presented by Eliezer) is perfectly rational -- but merely a special case and inappropriate to decision-making within a complex evolving context where actual consequences are effectively unpredictable.

If one faces a deep and wide chasm impeding desired trade with a neighboring tribe, should one rationally proceed to achieve the desired outcome: an optimum bridge?

Or should one focus not on perceived outcomes, but rather on most effectively expressing one's values-complex: i.e., valuing not the bridge, but effective interaction (including trade), and proceeding to exploit best-known principles promoting interaction, for example communications, air transport, replication rather than transport... and maybe even a bridge?

The underlying point is that within a complex evolutionary environment, specific outcomes can't be reliably predicted. Therefore to the extent that the system (within its environment of interaction) cannot be effectively modeled, an optimum strategy is one that leads to discovering the preferred future through the exercise of increasingly scientific (instrumental) principles promoting an increasingly coherent model of evolving values.

In the narrow case of a completely specified context, it's all the same. In the broader, more complex world we experience, it means the difference between coherence and paradox.

The Repugnant Conclusion fails (as does all consequentialist ethics when extrapolated) because it presumes to model a moral scenario incorporating an objective point of view. Same problem here.

Comment by Jef_Allbright on Circular Altruism · 2008-01-23T02:23:31.000Z · LW · GW

"I think people would be more comfortable with your conclusion if you had some way to quantify it; right now all we have is your assertion that the math is in the dust speck's favor."

The actual tipping point depends on your particular subjective assessment of relative utility. The actual tipping point doesn't matter; what matters is that there is crossover at some point, therefore such reasoning about preferences, like San Jose --> San Francisco --> Oakland --> San Jose is incoherent.
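
A standard way to see the incoherence, sketched below with a made-up fee and the route above: an agent holding that cyclic preference will pay for each step it regards as an improvement and arrive back where it started, strictly poorer.

    preference_cycle = [("San Jose", "San Francisco"),
                        ("San Francisco", "Oakland"),
                        ("Oakland", "San Jose")]
    location, money, fee = "San Jose", 100.0, 1.0

    for current, preferred in preference_cycle:
        assert location == current
        location = preferred      # the agent regards each move as an improvement...
        money -= fee              # ...and pays a small fee for it

    print(location, money)        # back at San Jose, with less money than before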