Posts
Comments
I love this post! I've been thinking about it a lot. I think it's mostly wrong, but it's been very stimulating. The main problem is that it wobbles between treating agency in the stated sense of "a capacity to act and make choices", and treating it as intelligence, and treating it as personhood. For one thing, a game player whose moves are predictable will seem dumb, but they still have agency in the stated sense.
Let me see if I can sketch out some slightly improved categories. A predictably moving pendulum is non-agentic, and therefore also not in the class of things that we see as stupid or intelligent: we don't use inverse planning to infer a pendulum's beliefs, preferences, goals, or behavioral intents. When we see an event that's harder to predict or retrodict, like the swaying of a weather vane, the wandering of a tornado, or the identity of a card drawn from a deck, we reach for explanations, and planning-process explanations may occasionally be momentarily credible, and may stay credible for longer if someone is indeed pulling the strings.
As for the relationship between estimation of intelligence and the algorithmic comprehensibility of a goal-directed agent, I propose that AlphaGo seems intelligent even to the people who wrote it while they are playing against it, and that it's rather a judge's uncertainty about individual moves or actions which is a determining factor (alongside the success of the move) in judging intelligence. Or maybe uncertainty about the justification of the move? Either way, a person's ability to examine (or even edit) a mind in detail should not necessarily cause the person to see the mind as being inanimate or non-agentic or non-consequentialist or non-intelligent or non-personlike or unworthy of moral consideration. Whew! I'm glad that the moral status of a thing doesn't depend on an attribution that I can only make in ignorance. That would be devastating!
To recap in more direct terms, predictable things can have agency. Choice-processes are a kind of explanation which may be considered when examining an unpredictable thing, but that doesn't mean that prediction uncertainty always or typically leads to attributions of agency or intelligence (and I really think that a hypothesis space of "agenty or random" is very impoverished).
As for the cognitive antecedents of personification of inanimate things and the conditions that make useful an impersonal, debugging mindset in dealing with people, those are very interesting. From your examples, it seems like there's an implicit belief when we attribute personhood that the thing will respond predictably to displays of anger, and that an impersonal stance is useful or appropriate when a person won't alter their tendency toward a behavior when they hear expressions of negative moral judgement. This would mean that when a judge makes a well-founded attribution of personhood, what they're doing is recognizing that a thing's behavior is well explained by something like "a decision process that considers social norms and optimizes its behavior with an eye toward social consequences". As for what leads to unfounded attributions of personhood, like getting uselessly angry at a beeping smoke alarm, that's still an open question in my mind! Or maybe getting angry at a smoke alarm isn't a kind of personification, and hitting inanimate things that won't stop noising at you is actually a fine response? Hmmmm.
A while ago I asked myself, "If agency is bugs and uncertainty, does that tell us anything about akrasia?" Well, I no longer think that agency is bugs and uncertainty, but re-reading the comments in the thread gave me some new insight into the topic. Oh shit, it's 3:00 am. Maybe I'll sleep on it and write it up tomorrow.
These are the exact three points that I wanted to voice. The fewer steps there are between entering lesswrong and seeing articles, the fewer steps there are between entering lesswrong and participating in discussions. That our landing page is a navigation list and not a set of recent articles, as any other group blog would have, has irked me since the previous skull graphic was introduced.
The International Phonetic Alphabet was originally meant to be used as a natural language writing system (for example, the journal of the International Phonetic Association was originally written in IPA: http://phonetic-blog.blogspot.com/2012/06/100-years-ago.html). Between IPA's theoretical (physiological) grounding, its wide use by linguists, and its near-legibility by untrained English literati, IPA is over-determined as the obvious choice for a reformed orthography, if English were ever made to conform phonetically to a standard pronunciation. That said, it's not going to happen, because spelling reform is not urgent to anyone with the capital to try it. Like, someone could make a browser extension that would replace words with their IPA spellings, so that an online community could familiarize themselves with the new spelling, but no one has made that, or paid for it to be made, and this places a strong upper bound on how much anyone cares about spelling reform.
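To gesture at how small the missing artifact is, here is a minimal sketch of what such an extension's content script could look like; the tiny IPA dictionary and its transcriptions are placeholders, and a real version would need a full pronunciation lexicon keyed to a chosen reference accent:

```typescript
// Minimal sketch of the hypothetical respelling extension described above.
// The dictionary is a placeholder with a few example entries; real coverage
// would require a full pronunciation lexicon and a chosen reference accent.

const ipaDictionary: Record<string, string> = {
  thought: "θɔːt",
  enough: "ɪˈnʌf",
  knight: "naɪt",
};

// Replace whole words with their IPA spellings, leaving unknown words alone.
function respell(text: string): string {
  return text.replace(/[A-Za-z]+/g, (word) => ipaDictionary[word.toLowerCase()] ?? word);
}

// Walk every text node on the page and rewrite it, as a content script would.
function respellPage(root: Node): void {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let node: Node | null;
  while ((node = walker.nextNode())) {
    node.textContent = respell(node.textContent ?? "");
  }
}

respellPage(document.body);
```

Something on this order of complexity, plus a dictionary, is roughly the whole project, which is what makes its absence telling.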
I have a few recurrent self-actualization fantasies that make use of fanciful abilities and resources. Sometimes the ability is time travel, which made this tweet by Liron Shapira stand out to me:
A time machine is a mechanism that lets you pretend like something far from you is actually near you, with respect to causal distance.
Likewise with your telekinesis and "vanishing into the floor", I propose that daydreams (recurrent, unproductive consideration of situations whose plans are, in reality, non-actionable because they rely on fanciful skills and resources) commonly serve as agency-superstimuli: imagined successes, relying on expanded abilities (such as those which reduce the effort, cost, or uncertainty of achieving some material effect), produce an inference, valid within the pretense, of one's own exceptional personal character.
Maybe it's worth distinguishing "wishing for an outcome", and "imagining the experience of the desired outcome (eating breakfast)", and "imagining a fantastical plan for achieving the outcome" as having different effects on one's motivation / decisions.
For your copywriting example, you list a few interesting techniques which show up later, in only abbreviated form, in the section of responses to Type 3 problems. Rephrasing and expanding a little bit, if you're worried about poor task performance you might motivate yourself by: 1) highlighting to yourself that you are uncertain about your performance quality, rather than certain that it will be bad, and that you're thus neglecting the possibility that you will do well at the task, 2) highlighting your comparative advantage in solving the problem for reasons other than skill (such as being in a unique position / time / place to solve the problem, or having special access to relevant resources, or having a title with related useful liberties / authorities), or 3) highlighting your (role-consonant) duty to try / to perform, while trivializing your duty to evaluate your performance (perhaps diffusing that responsibility by deciding that it belongs to some non-specified others).
I might add that questioning whether your own performance will be adequate / sufficient probably has at least these three functions: 1) to make you change/improve your plans, or give up on your plans if they seem inadequate, 2) to motivate you to ask other people for information about your current/future performance, and 3) to excuse future failure ("I knew I couldn't do this. I kept saying I didn't know how. Everyone heard me. I shouldn't have been forced to do this. This isn't my fault. This outcome shouldn't/doesn't justify an inference decreasing anyone's estimation of my skill / social standing."). (Please note that I'm using "you" rhetorically. I don't know the specifics of your work with CFAR, haven't perceived any failure, and am not trying to accuse you of any - I don't know what you'd call that, "epistemic misfeasance" maybe.)
These suggest a few ways to suppress (the decision relevance / import of) such worries: 1) making yourself more resolute that a) nothing more can/should be done to improve your plans / your expected future performance, and that b) "backing out" would be more costly than continuing, 2) asking aloud for others to help evaluate / improve your performance, or 3) verifying (to your satisfaction, in advance of the performance) a communal perception of the validity of your excuse, by (confirming that others will not / persuading others that they should not) make a judgment of bad character, given whatever circumstances are in effect.
I think one of the most interesting parts of this post is your conceptualization of System I and System II as not just being parts of your decision-making apparatus, but as being separate persons with their own preferences, beliefs, and signature behavioral characteristics. Is there other literature which suggests that dual processes of decision making are paired with dual processes of motivation (appetitive/aversive drives (and also maybe some preference-like psychological state behind habitual / scripted action) vs. reflective / higher-order, verbally endorsed, ego-syntonic preferences)?
To say that justice matters intrinsically is to say that sometimes, for justice's sake, we should do things that would make people worse off than if justice were not an issue. Or, more accurately, there will at least sometimes be policies trading some welfare (or any other component of utility) for some justice that are just as good as policies which make no such trade (according to the enlarged set of concerns that includes justice).
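For concreteness, here is one way to cash that out; the functions W and J and the weight λ are my own notation, not anything from the post:

```latex
% Overall goodness V weighs welfare W and justice J:
V(p) = W(p) + \lambda\, J(p), \qquad \lambda > 0
% Two policies can then trade welfare for justice and come out equal:
W(p_1) > W(p_2), \quad J(p_1) < J(p_2), \quad \text{yet} \quad V(p_1) = V(p_2)
```

On this toy picture, justice mattering intrinsically just means λ is positive, which is what makes such ties possible.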
This comment is good, but it could be improved by using symmetric terms to describe the two conditions.
Objectified: Others will...
1) give you few freedoms or choices,
2) dominate you, make decisions for you, control you,
3) have uses for you,
4) initiate romance with little confirmation of your participatory consent
5) want/expect you to care about their well being
6) not care about your well being
7) support you with resources / financially
8) value you for your attractiveness, help, concern, (and child raising and housekeeping)
a) rather than for your financial support or decision making / control
9) want you to value them for their financial support and decision making / control
a) rather than for their attractiveness, help, concern
Subjectified: Others will...
1) give you many freedoms and choices,
2) submit to you, rely on you to make decisions for them, want you to control them
3) want you to use them for things,
4) want you to initiate romance with little confirmation of their participatory consent
5) care about your well being
6) want/expect you to not care about their well being
7) depend on you for resources / financially
8) value you for your financial support and decision making / control
a) rather than for your attractiveness, help, concern
9) want you to value them for their attractiveness, help, concern, (and child raising and housekeeping)
a) rather than for their financial support or decision making / control
Is that a fair, symmetric restatement of your points?
Jung didn't really play the science game, and Jungians continue not to play the science game, but if we squint a little, we can see that some of his ideas have been vindicated by science.
The term ‘archetype’ is often misunderstood as meaning certain definite mythological images or motifs. ... The archetype is a tendency to form such representations of a motif. ... My critics have incorrectly assumed that I am dealing with "inherited representations"
From Jung's Approaches to the Unconscious (1964), quoted in Jones, Mixed Metaphors and Narrative Shifts (2003).
In this narrow sense, Jung's archetypes are uncontroversial: much reasoning in humans is framed in cognitive science as proceeding by a mechanism of counterfactual simulation called manipulation of mental imagery. Kosslyn, for example, was a pioneer of studies of (visual-modality) mental imagery manipulation. The knowledge used in this reasoning, stored in neural degrees of freedom such as firing biases or synaptic connectivity, may be called abstract, semantic, or conceptual when it is generalized across many episodic contexts and when its access does not strongly evoke vivid, specific memories of the experiences from which it was generalized (i.e., insofar as you can reason about dogs without thinking of the last dog you saw, your dog knowledge may be called semantic rather than episodic). Features of concepts which are typical or discriminative of instances of those concepts are sometimes called prototypical, stereotypical, or (occasionally) archetypal. On the science side, the study of cognitive biases relating to typical and discriminative features in conceptual reasoning was pioneered by Rosch. In addition to cached knowledge which is source-situated (exemplar knowledge from episodic memories of individual entities), there may be cached abstract knowledge, such as a mental image of a bird which takes its character from typical or discriminative features of the concept (perhaps specified randomly, or left unspecified, in regard to features which vary widely across birds), rather than being a cut-and-paste mixture of bird-instance parts.
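To make the exemplar-versus-prototype contrast concrete, here is a toy sketch; the bird "features" and their numbers are invented purely for illustration:

```typescript
// Toy sketch of the exemplar-vs-prototype contrast described above.
// The bird "features" and their values are invented for illustration only.

type Features = number[]; // e.g. [size, wing length, beak curvature]

// Exemplar-style cache: individual remembered instances.
const rememberedBirds: Features[] = [
  [0.2, 0.3, 0.1], // a sparrow I saw
  [0.9, 1.0, 0.4], // an eagle I saw
  [0.3, 0.2, 0.1], // a robin I saw
];

// Prototype-style cache: a single abstract "image" built from typical
// feature values, not identifiable with any particular remembered bird.
function prototype(instances: Features[]): Features {
  const dims = instances[0].length;
  const proto = new Array<number>(dims).fill(0);
  for (const inst of instances) {
    for (let d = 0; d < dims; d++) proto[d] += inst[d] / instances.length;
  }
  return proto;
}

console.log(prototype(rememberedBirds));
// -> roughly [0.47, 0.5, 0.2]: typical of the birds, identical to none of them
```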
This is all pretty strongly based on visual-modality knowledge of physical objects, and it doesn't always generalize exceptionally well to other modalities. Like, Ullman does nice research on how the declarative / procedural division of memory types maps well onto a lexical / grammatical division of language capacities; a lexicon, Ullman says, is cached linguistic knowledge, and certainly I can recall cached lexical entities like idioms which are not episodically situated (I can think of "sick as a dog" without thinking of anyone saying that phrase), yet my introspection suggests that when I think about idioms generally, I'm not usually looking to "sick as a dog" to guide my reasoning. And it's much harder to find a reasonable interpretation wherein "sick as a dog" is a conjunction of discriminative features of idioms than it is to understand how a mental image of a bird could have typical features of birds and yet not be identifiable as belonging to a recognized genus.
Parts of the Jungian archetype construct other than "a capacity to reason with non-situated mental images having discriminative features" probably haven't been borne out so well by scientific inquiry.
Sometimes I feel uncomfortable talking to strangers, and will put off scheduling appointments. Today, after a few days of trying to beat myself into getting a haircut at a barber's shop or salon, I decided to cut my own hair instead. I'm very pleased with the results, and I will probably make a habit of cutting my own hair from now on. I know this solution doesn't generalize to other appointments, such as medical examinations, but I'm very glad to have put that one source of distress to rest.
Relevant in winter, when air is dry and noses are frequently blown: Placing petroleum jelly in one's nostrils for moisture, despite being icky, is a superior experience to a nosebleed.
Not learning combat skills as a commitment to avoid conflict is a nice mirror image of Schelling's Xenophon example, where cutting off your ability to retreat is a way to commit yourself to winning a fight.
Pathological counterexample: "Passive propulsion in vortex wakes" by Beal et al. PDF
It seems to me that the less personal the MPs are, and the fewer opportunities we allow for anthropomorphic persuasion between them (through appeals such as issue framing, pleading, signaling loyalty to a coalition, ingratiation, defamation, challenges to an MP's status, and deceit (e.g. unreliable statements by MPs about their private information relevant to the probable consequences of acts resulting from the passage of bills)), the more we encapsulate the hard problems of moral reasoning away inside the MPs.
Even persuasive mechanisms more amenable to formalization - like agreements between MPs to reallocate their computational resources, or like risk-sharing agreements between MPs based on their expectations that they might lose future influence in the parliament if the agent changes its assignment of probabilities to the MPs' moral correctness based on its observation of decision consequences - even these sound to me, in the absence of reasons why they should appear in a theory of how to act given a distribution over self-contained moral theories, like complications that will impede crisp mathematical reasoning, introduced mainly for their similarity to the mechanisms that function in real human parliaments.
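For contrast, the kind of crisp, persuasion-free baseline I have in mind is on the order of credence-weighted aggregation over fixed theories. A minimal sketch follows; the theory names, credences, and scores are invented purely for illustration, and I'm not claiming this is the parliamentary model the post intends:

```typescript
// Minimal sketch of persuasion-free aggregation over moral theories.
// The theories, credences, and option scores are invented for illustration.

interface MoralTheory {
  name: string;
  credence: number;                  // probability the theory is correct
  score: (option: string) => number; // how strongly it endorses an option
}

const parliament: MoralTheory[] = [
  { name: "toy-utilitarianism", credence: 0.6, score: (o) => (o === "donate" ? 1.0 : 0.2) },
  { name: "toy-deontology", credence: 0.4, score: (o) => (o === "keep-promise" ? 1.0 : 0.1) },
];

// Choose the option with the highest credence-weighted score. No bargaining,
// framing, or loyalty signaling between theories - just arithmetic over the
// fixed distribution.
function chooseAction(theories: MoralTheory[], options: string[]): string {
  let best = options[0];
  let bestValue = -Infinity;
  for (const option of options) {
    const value = theories.reduce((sum, t) => sum + t.credence * t.score(option), 0);
    if (value > bestValue) {
      bestValue = value;
      best = option;
    }
  }
  return best;
}

console.log(chooseAction(parliament, ["donate", "keep-promise"])); // -> "donate" with these made-up numbers
```

Anything richer than this - reallocation of compute, risk-sharing over future influence - would then need a justification of its own rather than inheriting one from the analogy with human parliaments.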
Or am I off base, and your scare quotes around "personality" mean that you're talking about something else? Because what I'm picturing is basically someone building cognitive machinery for emotions, concepts, habits and styles of thinking, et cetera, on top of moral theories.