Posts

PaulK's Shortform 2022-12-15T23:50:25.539Z

Comments

Comment by PaulK on When and why should you use the Kelly criterion? · 2023-11-08T13:50:30.843Z · LW · GW

I wonder if you can recover Kelly from linear utility in money, plus a number of rounds unknown to you and chosen probabilistically from a distribution.
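
Here's a rough simulation sketch one could use to poke at this numerically. Everything in it -- the even-money bet, the win probability, the geometric stopping rule -- is an illustrative assumption rather than a claim about the right setup; it just compares average final wealth (i.e. linear utility) across fixed betting fractions when the number of rounds is random.

```python
import random

# Toy setup: an even-money bet with win probability p, betting a fixed fraction
# f of current wealth each round, with the number of rounds unknown in advance
# and geometrically distributed (stop with probability stop_prob after each
# round). We compare average final wealth (linear utility) across betting
# fractions, including the Kelly fraction 2p - 1 for even-money odds.

def mean_final_wealth(f, p=0.6, stop_prob=0.05, trials=50_000):
    total = 0.0
    for _ in range(trials):
        wealth = 1.0
        while True:
            wealth *= (1 + f) if random.random() < p else (1 - f)
            if random.random() < stop_prob:
                break
        total += wealth
    return total / trials

if __name__ == "__main__":
    kelly = 2 * 0.6 - 1  # 0.2 for p = 0.6 at even odds
    for f in [0.05, 0.1, kelly, 0.5, 1.0]:
        print(f"f = {f:.2f}: mean final wealth ~ {mean_final_wealth(f):.3f}")
```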

Comment by PaulK on How could AIs 'see' each other's source code? · 2023-06-03T23:56:07.201Z · LW · GW

In the soaking-up-extra-compute case? Yeah, for sure, I can only really picture it (a) on a very short-term basis, for example maybe while linking up tightly for important negotiations (but even here, not very likely). Or (b) in a situation with high power asymmetry. For example, maybe there's a story where 'lords' delegate work to their 'vassals', but the workload intensity is variable, so the vassals have leftover compute, and the lords demand that they spend it on something like blockchain mining. To compensate for the vulnerability this induces, the lords would also provide protection.

Comment by PaulK on How could AIs 'see' each other's source code? · 2023-06-03T23:50:10.025Z · LW · GW
  1. Yup, all that would certainly make it more complicated. In a regime where this kind of tightly-controlled delegation were really important, we might also demand our counterparties standardize their hardware so they can't play tricks like this.
  2. I was picturing a more power-asymmetric situation, more like a feudal lord giving his vassals lots of busywork so they don't have time to plot anything.
Comment by PaulK on How could AIs 'see' each other's source code? · 2023-06-03T00:41:23.894Z · LW · GW

We might develop schemes for auditable computation, where one party can come in at any time and check the other party's logs. The logs should conform to the source code that the second party is supposed to be running, and also to any observable behavior that the second party has displayed. It's probably possible to make logging and behavioral signalling sufficiently rich that the first party can be convinced that that code is indeed being run (without it being too hard to check -- maybe with some kind of probabilistically checkable proof).

However, this only provides a positive proof that certain code is being run, not a negative proof that no other code is being run at the same time. This part, I think, inherently requires knowing something about the other party's computational resources. But if you can know about those, then it might be possible. For a perhaps dystopian example: if you know your counterparty has compute A, and the program you want them to run takes compute B, then you could demand they do something difficult but easily checkable, like inverting hash functions, that'll soak up around A-B of their compute, so they have nothing left over to do anything secret with.
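
Here's a toy sketch of that soaking-up idea. All the numbers and function names are made-up illustrations, and calibrating against real hardware would be far messier; it just shows the shape of "hand out hash-inversion puzzles sized to the surplus A-B, and check the solutions".

```python
import hashlib
import os

# Toy "compute soaking": if the counterparty has roughly A hashes of capacity
# per audit window and the agreed workload uses roughly B, the auditor hands
# out proof-of-work puzzles whose total expected cost is about A - B, leaving
# little left over for anything secret.

def required_difficulty_bits(A_hashes: float, B_hashes: float, puzzles: int) -> int:
    """Pick a per-puzzle difficulty so `puzzles` puzzles cost roughly A - B hashes."""
    surplus_per_puzzle = max((A_hashes - B_hashes) / puzzles, 1.0)
    # Expected work for d leading zero bits is ~2**d hashes, so take ~log2.
    return max(int(surplus_per_puzzle).bit_length() - 1, 1)

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose SHA-256 (with the challenge) has the required leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

if __name__ == "__main__":
    bits = required_difficulty_bits(A_hashes=2e5, B_hashes=5e4, puzzles=10)
    challenge = os.urandom(16)
    nonce = solve(challenge, bits)
    print(f"difficulty bits: {bits}, verified: {verify(challenge, nonce, bits)}")
```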

Comment by PaulK on Is "Strong Coherence" Anti-Natural? · 2023-04-11T19:10:13.031Z · LW · GW

Sorry, I guess I didn't make the connection to your post clear. I substantially agree with you that utility functions over agent-states aren't rich enough to model real behavior. (Except, maybe, at a very abstract level, a la predictive processing -- which I don't understand well enough to make the connection precise.)

Utility functions over world-states -- which is what I thought you meant by 'states' at first -- are in some sense richer, but I still think inadequate.

And I agree that utility functions over agent histories are too flexible.

I was sort of jumping off to a different way to look at value, one which might have some of the desirable coherence of the utility-function-over-states framing, but without its rigidity.

And this way is something like, viewing 'what you value' or 'what is good' as something abstract, something to be inferred, out of the many partial glimpses of it we have in the form of our extant values.

Comment by PaulK on Is "Strong Coherence" Anti-Natural? · 2023-04-11T07:40:38.794Z · LW · GW

Oh, huh, this post was on the LW front page, and dated as posted today, so I assumed it was fresh, but the replies' dates are actually from a month ago.

Comment by PaulK on Is "Strong Coherence" Anti-Natural? · 2023-04-11T07:38:11.518Z · LW · GW

(A somewhat theologically inspired answer:)

Outside the dichotomy of values (in the shard-theory sense) vs. immutable goals, we could also talk about valuing something that is in some sense fixed, but "too big" to fit inside your mind. Maybe a very abstract thing. So your understanding of it is always partial, though you can keep learning more and more about it (and you might shift around, feeling out different parts of the elephant). And your acted-on values would appear mutable, but there would actually be a, perhaps non-obvious, coherence to them.

It's possible this is already sort of a consequence of shard theory? In that learned values would cohere with (perhaps very abstract or complex) invariant structure in the environment?

Comment by PaulK on A stylized dialogue on John Wentworth's claims about markets and optimization · 2023-03-26T22:15:04.226Z · LW · GW

I still don't know exactly what parts of my comment you're responding to. Maybe talking about a concrete sub-agent coordination problem would help ground this more.

But as a general response: in your example it sounds like you already have the problem very well narrowed down, to 3 possibilities with precise probabilities. What if there were 10^100 possibilities instead? Or uncertainty where the full real thing is not contained in the hypothesis space?

Comment by PaulK on A stylized dialogue on John Wentworth's claims about markets and optimization · 2023-03-26T16:53:56.606Z · LW · GW

This is for logical coordination? How does it help you with that?

Comment by PaulK on A stylized dialogue on John Wentworth's claims about markets and optimization · 2023-03-26T01:32:32.016Z · LW · GW

IMO, coordination difficulties among sub-agents can't be waved away so easily. The solutions named, side-channel trades and counterfactual coordination, are both limited.

I would frame the nature of their limits, loosely, like this. In real minds (or at least the human ones we are familiar with), the stuff we care about lives in a high-dimensional space. A mind could be said to be, roughly, a network spanning such a space. A trade between elements (~sub-agents) that are nearby in this space will not be too hard to do directly. But for long-distance trades, side-channel reward will need to flow through a series of intermediaries -- this might involve several changes of local currencies (including traded favors or promises). Each local exchange needs to be worthwhile to its participants, and not overload the relationships that it's piggybacking on.

These long-distance trades can be really difficult to set up sometimes -- the same way it would have been hard for a random villager in medieval France to send the equivalent of $10 to a random villager in China.

The difficulty depends on things like the size / dimensionality of the space; how well-connected it is; and how much slack is available in the relevant places in the system (for the intermediate elements to wiggle around enough to make all the local trades possible). Note that the need for slack makes this a holistic constraint: if you just have one really important trade to make, then sure, you can probably make it happen, by using up lots of slack (locking a lot of intermediate elements into orientations optimized for that big trade). But you can't do that for every possible trade. So these issues really show up when you have a lot of heterogeneous trades to make.

Counterfactual ("logical" ) coordination has similar issues. If A and B want to counterfactually coordinate, but they're far apart in this mind-space, then they can only communicate or understand one another in a limited way, via intermediaries (or via the small # of dimensions they do share). This just makes things harder -- hard to get shared meaning, hard to agree on what's fair, hard to find a solution together that will generalize well instead of being brittle.

BTW, I'm not denying that intelligence (whatever that might mean) helps with all this, but I am denying that it's a panacea.

Comment by PaulK on Tabooing "Frame Control" · 2023-03-20T19:36:29.069Z · LW · GW

Probably some students will actually be quite bothered by this and be left with lingering, subtle confusion and discomfort. It is, in a sense, taking a shortcut past all the objections and alternatives that real humans had historically to these ideas. And IMO some students will be much better served by going the long way around, studying the ideas along with their history.

Comment by PaulK on Tabooing "Frame Control" · 2023-03-20T19:31:43.324Z · LW · GW

One response to frame-control-y situations is, instead of making accusations that (as you say) can lead to a he-said-she-said situation, to personally fall back to a more careful, defensive posture vis-à-vis framing -- accepting that there seem to be strong framing differences among the people involved, and communicating this posture to others. In other words, accepting when it seems too hard to directly create common knowledge about what is happening at the level of framing.

Comment by PaulK on Local Memes Against Geometric Rationality · 2022-12-21T09:27:31.308Z · LW · GW

Random question, tangential to this post in particular (but not the series): should we expect genes to be doing something like geometric rationality in their propagation? When a new gene emerges and starts to spread, even if it greatly increases host fitness on average, its # of copies could easily drop to 0 by chance. So it "should want" to be cautious, like a Kelly bettor, and maximize its growth geometrically rather than arithmetically.

Not sure quite how that logic should cash out though. For one, genes that make their hosts more cautious (reduce fitness variance) should be systematically advantaged by this effect, at least during their early growth phase. More speculatively, to take advantage of this effect optimally, genes should somehow suss out how large their population (# of copies) is and push their host to be risk-taking vs. cautious in a way that's calibrated to that. Which is maybe biologically plausible?
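
Here's a minimal branching-process sketch of the early-growth-phase point (my own toy model, not real population-genetics math): two variants with the same arithmetic-mean offspring number, where the lower-variance one escapes early extinction much more often.

```python
import random

# Toy branching process: each copy of a new variant leaves a random number of
# copies in the next generation. Both variants below have the same arithmetic
# mean offspring number (1.1), but different variance.

def risky():
    # mean 1.1, high variance: 2 offspring with prob 0.55, otherwise 0
    return 2 if random.random() < 0.55 else 0

def cautious():
    # mean 1.1, low variance: 0 with prob 0.05, 2 with prob 0.15, 1 with prob 0.80
    r = random.random()
    return 0 if r < 0.05 else (2 if r < 0.20 else 1)

def escapes_extinction(offspring, start=1, generations=200, cap=2000):
    n = start
    for _ in range(generations):
        if n == 0:
            return False
        if n >= cap:  # treat a large population as having escaped the risky early phase
            return True
        n = sum(offspring() for _ in range(n))
    return n > 0

if __name__ == "__main__":
    trials = 500
    for name, dist in [("risky", risky), ("cautious", cautious)]:
        rate = sum(escapes_extinction(dist) for _ in range(trials)) / trials
        print(f"{name}: escapes early extinction in ~{rate:.0%} of runs")
```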

I don't actually know much about population genetics though, and would be curious to hear from anyone who does.

Comment by PaulK on PaulK's Shortform · 2022-12-15T23:50:25.963Z · LW · GW

Is there an arithmetic vs. geometric rationality thing (a la Scott Garrabrant's recent series) going on with genes?

Like, at equilibrium, the ratio of different genetic variants should be determined by the arithmetic expectation of the number of copies they pass on to the next generation. But for new variants just starting out, the population size (# of copies of that variant) could easily hit 0 and get wiped out, so it should be more cautious -- the population should want to maximize the geometric expectation of its growth rate, like a Kelly bettor.

Does this make sense? I don't know actual population genetics math.

Comment by PaulK on Geometric Exploration, Arithmetic Exploitation · 2022-11-30T20:50:48.576Z · LW · GW

Wow, I came here to say literally the same thing about commensurability: that perhaps AM is for what's commensurable, and GM is for what's incommensurable.

Though, one note is that to me it actually seems fine to consider different epistemic viewpoints as incommensurate. These might be like different islands of low K-complexity, that each get some nice traction on the world but in very different ways, and where the path between them goes through inaccessibly-high K-complexity territory.

Comment by PaulK on Geometric Rationality is Not VNM Rational · 2022-11-28T19:28:29.835Z · LW · GW

Another setting that seems natural and gives rise to multiplicative utility is if we are trying to cover as much of a space as possible, and we divide it dimension-wise into subspaces, each tracked by a subagent. To get the total size covered, we multiply together the sizes covered within each subspace.

We can kinda shoehorn unequal weighting in here if we have each sub-agent track not just the fractional or absolute coverage of their subspace, but the per-dimension geometric average of their coverage.

For example, say we're trying to cover a 3D cube that's 10x10x10, with subagent A minding dimension 1 and subagent B minding dimensions 2 and 3. A particular outcome might involve A having 4/10 coverage and B having 81/100 coverage, for a total coverage of (4/10)*(81/100), which we could also phrase as (4/10)*(9/10)^2.
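
A quick concrete check of that arithmetic (same numbers as above, nothing new):

```python
import itertools

# A minds dimension 1 of a 10x10x10 cube and covers 4 of its 10 values; B minds
# dimensions 2 and 3 and covers 81 of those 100 cells. The jointly covered
# region is the product of the two covered sets, so the fractions multiply.

a_covered = set(range(4))                               # 4/10 along dimension 1
b_covered = set(itertools.product(range(9), range(9)))  # 81/100 on dimensions 2 and 3

covered = {(x, y, z) for x in a_covered for (y, z) in b_covered}
print(len(covered) / 1000)                  # 0.324 = (4/10) * (81/100)

# B's per-dimension geometric average of coverage is sqrt(81/100) = 9/10,
# so the same total can be rewritten as (4/10) * (9/10)**2.
print((4 / 10) * ((81 / 100) ** 0.5) ** 2)  # 0.324 again
```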

I'm not sure how to make uncertainty work correctly within each factor though.

Comment by PaulK on Geometric Rationality is Not VNM Rational · 2022-11-28T18:42:37.632Z · LW · GW

These are super interesting ideas, thanks for writing the sequence!

I've been trying to think of toy models where the geometric expectation pops out -- here's a partial one, which is about conjunctivity of values:

Say our ultimate goal is to put together a puzzle (U = 1 if we can, U = 0 if not), for which we need 2 pieces. We have sub-agents A and B who care about the two pieces respectively, each of whose utility for a state is its probability estimate for finding its piece there. Then our expected utility for a state is the product of their utilities (assuming this is a one-shot game, so we need to find both pieces at once), and so our decision-making will be geometrically rational.
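
Here's a minimal sketch of that toy model, with invented states and probabilities:

```python
# Utility is 1 only if both pieces turn up; the sub-agents' utilities are their
# probabilities of finding their own piece in a given state; independence makes
# the expected utility of a state the product of those probabilities, so the
# ranking is the geometric-mean-style one (balanced states beat lopsided ones).

states = {
    "attic":    {"A": 0.9, "B": 0.1},
    "basement": {"A": 0.5, "B": 0.5},
    "garage":   {"A": 0.3, "B": 0.8},
}

def expected_utility(probs):
    # P(piece 1 found) * P(piece 2 found), one-shot, assumed independent
    return probs["A"] * probs["B"]

best = max(states, key=lambda s: expected_utility(states[s]))
print(best, expected_utility(states[best]))  # "basement" (0.25) beats the lopsided options
```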

This easily generalizes to an N-piece puzzle. But I don't know how to extend this interpretation to allow for unequal weighting of agents.

Comment by PaulK on Here's the exit. · 2022-11-21T22:04:27.844Z · LW · GW

I also think that the fact that AI safety thinking is so much driven by these fear + distraction patterns, is what's behind the general flail-y nature of so much AI safety work. There's a lot of, "I have to do something! This is something! Therefore, I will do this!"

Comment by PaulK on Here's the exit. · 2022-11-21T22:01:45.504Z · LW · GW

I think your diagnosis of the problem is right on the money, and I'm glad you wrote it. 

As for your advice on what a person should do about this, it has a strong flavor of: quit doing what you're doing and go in the opposite direction. I think this is going to be good for some people but not others. Sometimes it's best to start where you are. Like, one can keep thinking about AI risk while also trying to become more aware of the distortions that are being introduced by these personal and collective fear patterns.

That's the individual level though, and I don't want that to deflect from the fact that there is this huge problem at the collective level. (I think rationalist discourse has a libertarian-derived tendency to focus on the former and ignore the latter.)

Comment by PaulK on Autonomy as taking responsibility for reference maintenance · 2022-08-19T19:27:56.390Z · LW · GW

Nice essay, makes sense to me! Curious how you see this playing into machine intelligence.

One thought is that "help maintain referential stability", or something in that ballpark, might be a good normative target for an AI. Such an AI would help humans think, clarify arguments, recover dropped threads of meaning. (Of course, done naively, this could be very socially disruptive, as many social arrangements depend on the absence of clear flows of meaning.)

Comment by PaulK on why assume AGIs will optimize for fixed goals? · 2022-06-13T04:52:40.256Z · LW · GW

I agree with that.

Comment by PaulK on why assume AGIs will optimize for fixed goals? · 2022-06-11T20:47:38.527Z · LW · GW

As a slightly tangential point, I think if you start thinking about how to cast survival / homeostasis in terms of expected-utility maximization, you start having to confront a lot of funny issues, like, "what happens if my proxies for survival change because I self-modified?", and then more fundamentally, "how do I define / locate the 'me' whose survival I am valuing? what if I overlap with other beings? what if there are multiple 'copies' of me?". Which are real issues for selfhood IMO.

Comment by PaulK on why assume AGIs will optimize for fixed goals? · 2022-06-11T20:36:29.413Z · LW · GW

> There is no way for the pursuit of homeostasis to change through bottom-up feedback from anything inside the wrapper. The hierarchy of control is strict and only goes one way.

Note that people do sometimes do things like starve themselves to death or choose to become martyrs in various ways, for reasons that are very compelling to them. I take this as a demonstration that homeostatic maintenance of the body is in some sense "on the same level" as other reasons / intentions / values, rather than strictly above everything else.

Comment by PaulK on why assume AGIs will optimize for fixed goals? · 2022-06-11T07:01:06.874Z · LW · GW

I do see the inverse side: a single fixed goal would be something in the mind that's not open to critique, hence not truly generally intelligent from a Deutschian perspective (I would guess; I don't actually know his work well).

To expand on the "not truly generally intelligent" point: one way this could look is if the goal included some tacit assumptions about the universe that turned out later not to be true in general -- e.g. if the agent's goal was something involving increasingly long-range simultaneous coordination, before the discovery of relativity -- and if the goal were really unchangeable, then it would bar or at least complicate the agent's updating to a new, truer ontology.

Comment by PaulK on why assume AGIs will optimize for fixed goals? · 2022-06-10T03:13:13.008Z · LW · GW

I've been thinking along the same lines, very glad you've articulated all this!

Comment by PaulK on Frame Control · 2021-11-30T02:38:53.469Z · LW · GW

The way I understand the intent vs. effect thing is that the person doing "frame control" will often contain multitudes: an unconscious, hidden side that's driving the frame control, and then the more conscious side that may not be very aware of it, and would certainly disclaim any such intent.

Comment by PaulK on An Intuitive Guide to Garrabrant Induction · 2021-06-07T19:11:18.609Z · LW · GW

Oh, nevermind then

Comment by PaulK on An Intuitive Guide to Garrabrant Induction · 2021-06-06T20:13:50.292Z · LW · GW

Small typo: you have two sections numbered [7.2]

Comment by PaulK on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-04T19:55:52.204Z · LW · GW

(I assume that by "gears-level models" you mean a combination of reasoning about actors' concrete capabilities; and game-theory-style models of interaction where we can reach concrete conclusions? If so,)

I would turn this around, and say instead that "gears-level models" alone tend to not be that great for understanding how power works. 

The problem is that power is partly recursive. For example, A may have power by virtue of being able to get B to do things for it, but B's willingness also depends on A's power. All actors, in parallel, are looking around, trying to understand the landscape of power and possibility, and making decisions based on their understanding, changing that landscape in turn. The resulting dynamics can just be incredibly complicated. Abstractions can come to have something almost like causal power, like a rumor starting a stampede.

We have formal tools for thinking about these kinds of things, like common knowledge, and game-theoretic equilibria. But my impression is that they're pretty far from being able to describe most important power dynamics in the world.

Comment by PaulK on Decoupling deliberation from competition · 2021-05-27T03:49:11.206Z · LW · GW

Interesting essay!

In your scenario where people deliberate while their AIs handle all the competition on their behalf, you note that persuasion is problematic: this is partly because, with intent-aligned AIs, the system is vulnerable to persuasion in that "what the operator intends" can itself become a target of attack during conflict.

Here is another related issue. In a sufficiently weird or complex situation, "what the operator intends" may not be well-defined -- the operator may not know it, and the AI may not be able to infer it with confidence. In this case, clarifying what the human really wants seems to require more deliberation, which is what we were trying to screen off in the first place!

Furthermore, it seems to me that unbounded competition tends to continually spiral out, encompassing more and more stuff, and getting weirder and more complex: there are the usual arms race dynamics. There are anti-inductive dynamics around catching your opponent by surprise by acting outside their ontology. And there is also just the march of technology, which in your scenario hasn't stopped, and which keeps creating new possibilities and new dimensions for us to grapple with around what we really want. (I'm using state-run social media disinformation campaigns as an intuition pump here.)

So in your scenario, I just imagine the human operators getting overwhelmed pretty quickly, unable to keep from being swept up in conflict. That is, unless we put some pretty strong limits on it.

Comment by PaulK on Time & Memory · 2021-05-20T18:47:16.677Z · LW · GW

> The next time you are making a complicated argument, if you can, try and watch yourself recalling bits and pieces at a time. To me, it feels viscerally like I have the whole argument in mind, but when I look closely, it's obviously not the case. I'm just boldly going on and putting faith in my memory system to provide the next pieces when I need them. And usually it works out.

Yes! And, I would offer an additional, alternative way of phrasing this: "you" actually do have the whole argument in mind, but it's a higher-level "you", a slower but more inclusive one, corresponding to a higher level of memory caching.

> (When it doesn't, there's this whole failure mode where people continue viscerally feeling like they can make the argument, even though they don't have the pieces; and I think this is where a lot of bad reasoning comes from.)

The problem here ^ then becomes a problem of maintaining appropriate relationships among the different self-layers.

Comment by PaulK on Gradations of Inner Alignment Obstacles · 2021-04-22T01:23:50.977Z · LW · GW

I disagree that mesa optimization requires explicit representation of values. Consider an RL-type system that (1) learns strategies that work well on its training data, and then (2) generalizes to new strategies that in some sense fit well or are parsimonious with respect to its existing strategies. Strategies need not be explicitly represented. Nonetheless, it's possible for those initially learned strategies to implicitly bake in what we could call foundational goals or values that the system never updates away from.

For another angle, consider that value-directed thought can be obfuscated. A single central value could be transformed into a cloud of interlocking heuristics that manage to implement essentially the same logic. (This might make it more difficult to generalize that value, but not impossible.) This is a common strategy in humans, in situations where they want to avoid being seen as holding certain values, but still reap the benefits of effectively acting according to those values.
 

Comment by PaulK on Grokking illusionism · 2021-01-07T06:57:54.677Z · LW · GW

(tl;dr: I think a lot of this is about one-way (read-only) vs. two-way communication)

As a long-term meditator and someone who takes contents of phenomenal consciousness as quite "real" in their own way, I enjoyed this post -- it helped me clarify some of my disagreements with these ideas, and to just feel out this conceptual-argumentative landscape.

I want to draw out something about "access consciousness" that you didn't mention explicitly, but that I see latent in both your account (correct me if I'm wrong) and the SEP's discussion of it (ctrl-F for "access consciousness"). Which is: an assumed one-way flow of information. Like, an element of access consciousness carries information, which is made available to the rest of the system; but there isn't necessarily any flow back to that element. 

I believe to the contrary (personal speculation) that all channels in the mind are essentially two-way. For example, say we're walking around at night, and we see a patch of grey against the black of the darkness ahead. That information is indeed made available to the rest of the system, and we ask ourselves: "could it be a wild animal?". But where does that question go? I would say it's addressed to the bit of consciousness that carried the patch of grey. This starts a process of the question percolating down the visual processing hierarchy till it reaches a point where it can be answered -- "no, see that curve there, it's just the moonlight catching a branch". (In reality the question might kick off lots of other processes too, which I'm ignoring here.)

Anyway, the point is that there is a natural back and forth between higher-level consciousness, which deals in summaries and can relate disparate considerations, and lower-level e.g. sensory consciousness, which deals more in details. And I think this back-and-forth doesn't fit well in the "access consciousness" picture.

More generally, in terms of architectural design for a mind, we want whatever process carries a piece of information to also be able to act as a locus of processing for that information. In the same way, if a CEO is getting briefed on some complex issue by a topic expert, it's much more efficient if they can ask questions, propose plans and get feedback, and keep the expert as a go-to person for that issue, rather than just hear a report.

I think "acting as an addressable locus of processing" accounts for at least a lot of the nature of "phenomenal consciousness" as opposed to "access consciousness".

Comment by PaulK on Search versus design · 2020-08-19T02:02:48.816Z · LW · GW

Also, on your description of designs factorizing into parts, maybe you already know this, but I wanted to highlight that often "factorization", even when neat, isn't just a straightforward decomposition into separate parts. For example, say you're designing a distributed system. You might have a kind of "vertical" decomposition into roles like leader and follower. But then also a "horizontal" decomposition into different kinds of data that get shared in different ways. The logic of roles and kinds of data might then interact, so that the algorithm is really conceptually two-dimensional.
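
To make the "conceptually two-dimensional" point concrete, here's a schematic sketch -- all the names are invented, it's not any particular real system -- where behavior is indexed jointly by role and by kind of data, so neither axis alone factors the logic:

```python
from typing import Callable, Dict, Tuple

Role = str       # "leader" or "follower"
DataKind = str   # "config", "client_write", or "heartbeat"

# The behavior lives in the grid cells: neither the role row nor the data-kind
# column alone determines what happens.
HANDLERS: Dict[Tuple[Role, DataKind], Callable[[object], str]] = {
    ("leader",   "config"):       lambda p: f"broadcast config {p} to followers",
    ("follower", "config"):       lambda p: f"apply config {p} from leader",
    ("leader",   "client_write"): lambda p: f"append {p} to log, then replicate",
    ("follower", "client_write"): lambda p: f"forward write {p} to leader",
    ("leader",   "heartbeat"):    lambda p: "record follower liveness",
    ("follower", "heartbeat"):    lambda p: "reset election timer",
}

def handle(role: Role, kind: DataKind, payload) -> str:
    return HANDLERS[(role, kind)](payload)

print(handle("follower", "client_write", {"key": "x", "value": 1}))
```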

(These kinds of issues make cognition harder to factorize)

Comment by PaulK on Search versus design · 2020-08-19T01:48:58.461Z · LW · GW

Thanks for the thought-provoking post, Alex.

Thinking about how exactly design stories help create trust, I came upon what might be a useful distinction: whether the design is good according to the considerations known to the designer, vs. whether all relevant considerations are present. A good design story lets us check both of these. The first being false means the designer just did a bad job, or perhaps is hiding something. The second being false means there are actually just considerations the designer didn't know about -- for example because they live implicit in some other human's head -- and spelling things out in a story lets us recognize that, and correct it.

The latter use of stories lets you catch honest mistakes around issues that are unknown unknowns to you, but knowns for someone else. And when I think intuitively about trusting an AI -- or another human for that matter -- this is a big part of what I care about: beyond them being competent, and not actively deceiving me, I should also trust that they'll communicate with me enough to fill in all the blind spots they might have about me and the things I care about.

Comment by PaulK on Degrees of Freedom · 2019-04-02T23:25:24.904Z · LW · GW

On the first, more philosophical part of your post: I think your notion of "freedom-as-arbitrariness" is actually also what allows for "freedom-as-optimization", in the following way.

Suppose I have an abstract set of choices. These can be instantiated in a concrete situation, which then carries its own set of considerations. When I go to do my optimizing in a given concrete situation, the more constrained or partisan my choice is in the abstract, the more difficult is my total optimization. Conversely, the freer, the more arbitrary the choice is in the abstract, the less constrained my optimization is in any concrete situation, and the better I can do.

For example, if I were hiring a programmer for a project, then (all else equal) I'd rather have someone who knew a variety of technologies and wasn't too strongly attached to any, so that they could simply use whatever the situation called for.

You could state this as a system design principle: if you're designing a subsystem that's going to be doing something, but you don't really know what yet, optimize the subsystem for being able to potentially do anything (arbitrariness).

I feel there's much more to say along these lines about systems being well-factored (the pattern of concrete-abstract, as above, is a kind of factorization (as in lambda abstraction)), but I'm having trouble putting it into words at the moment.

Comment by PaulK on Why do Contemplative Practitioners Make so Many Metaphysical Claims? · 2019-01-03T05:38:40.934Z · LW · GW

Cool. I've had one brief, spontaneous experience, while circling, of that sort of concept -> vision 'synaesthesia': seeing dark halos around people, that I think represented their anxiety and desire to avoid talking about certain things.

But I'd never imagined working deliberately with vision in that way.

Comment by PaulK on Why do Contemplative Practitioners Make so Many Metaphysical Claims? · 2019-01-02T07:37:23.134Z · LW · GW

So is this a fair summary?

Contemplative practitioners sometimes have great psyche-refactoring experiences, "insights". But, when interpreting & integrating them, they fail to keep a strong enough epistemic distinction between their experience and the ultimate reality it arises from. And then they make crazy inferences about the nature of that ultimate reality.

Comment by PaulK on Why do Contemplative Practitioners Make so Many Metaphysical Claims? · 2019-01-02T07:12:54.317Z · LW · GW
> When this happens with parts of the network that are involved with the visual system, for instance, the visual field can actually dissolve into a bunch of vibrations temporarily as you refactor parts of the network related to extremely low level things like edge or motion detection (this is also where 'auras' come from imo)

Wow, I've never heard of this, and it sounds really interesting. Would you care to elaborate, on what kind of refactoring is going on, and what the resulting 'auras' are / mean?

Comment by PaulK on Player vs. Character: A Two-Level Model of Ethics · 2018-12-16T09:08:11.282Z · LW · GW

You can get into some weird, loopy situations when people reflect enough to lift up the floorboards, infer some "player-level" motivations, and then go around talking or thinking about them at the "character level". Especially if they're lacking in tact or social sophistication. I remember as a kid being so confused about charitable giving -- because, doesn't everyone know that giving is basically just a way of trying to make yourself look good? And doesn't everyone know that that's Wrong? So shouldn't everyone just be doing charity anonymously or something?

Luckily, complex societies develop ways for handling different, potentially contradictory levels of meaning with grace and tact; and nobody listens too much to overly sincere children.

Comment by PaulK on On Rationalist Solstice and Epistemic Caution · 2018-12-06T07:57:23.115Z · LW · GW

Yeah, I think costly signalling is definitely part of it. I think there are really several different things going on in the birthday example. One, the friend knows that you decided to spend the evening with them, so they can infer that you want to perform friendship, and/or anticipate having a good time with them, enough to make you decide that. This is the costly signalling part. But then there's also the stuff that actually happens at the party: talking, laughing together, etc. I think this is what actually accounts for most of the "feeling closer". (Or perhaps these two effects act on different levels of "feeling closer".)

Anyway this is maybe getting unnecessarily analytical.

Comment by PaulK on On Rationalist Solstice and Epistemic Caution · 2018-12-06T00:56:24.604Z · LW · GW
> A ritual is about making a sacrifice to imbue a moment with symbolic power, and using that power to transform yourself.

I'm really curious where you're getting the sacrifice part from! Or how important you think it is. Because my experience with rituals doesn't generally include sacrificing anything; and the bits of sociology I've read about ritual (mostly Randall Collins' book Interaction Ritual Chains) don't mention it much. It does resonate with perhaps a western-magical perspective?

Comment by PaulK on Conversational Cultures: Combat vs Nurture (V2) · 2018-11-11T03:28:55.147Z · LW · GW

Great essay!

Another aspect of this divide is about articulability. In a nurturing context, it's possible to bring something up before you can articulate it clearly, and even elicit help articulating it.

For example, "Something about <the proposal we're discussing> strikes me as contradictory -- like it's somehow not taking into account <X>?". And then the other person and I collaborate to figure out if and what exactly that contradiction is.

Or more informally, "There's something about this that feels uncomfortable to me". This can be very useful to express even when I can't say exactly what it is that I'm uncomfortable with, IF my conversation partner respects that, and doesn't dismiss what I'm saying because it's not precise enough.

In a combative context, on the other hand, this seems like a kind of interaction you just can't have (I may be wrong, I don't have much experience in them). Because there, inarticulateness just reads as your arguments being weak. And you don't want to run the risk of putting half-baked ideas out there and having them swatted down. So your only real choices are to figure out how to articulate things, by yourself, on the fly, or remain silent.

And that's too bad, because the edge of what can be articulated is IME the most interesting place to be.

(Gendlin's Focusing is an extreme example of being at the edge of what can be articulated, and in the paired version you have one person whose job is basically to be a nurturing & supportive presence.)