Shapes of Mind and Pluralism in Alignment
post by adamShimi · 2022-08-13T10:01:42.102Z · LW · GW · 2 comments
This post is part of the work done at Conjecture.
This post has been written for the first Refine [AF · GW] blog post day, at the end of a week of readings, discussions, and exercises about epistemology for doing good conceptual research.
I have recently presented [AF · GW] my model behind the Refine incubator that I'm running. Yet in the two weeks since that post was published, multiple discussions have helped me make legible an aspect of my intuitions that I didn't discuss there: the notion of different "shapes of mind".
There are two points to this intuition:
- Different people will have different "shapes of mind" — ways of revealing hidden bits of evidence from the world;
- And alignment is the kind of hard problem where the bits of evidence are dispersed, such that no single trick is enough.
I've given my current best model of the different forms of pluralism and when to use them in another recent post [AF · GW]. What I want to explore here is the first point: this notion of shape of mind. For that, let's recall the geometric model of bits of evidence I introduced in Levels of Pluralism [AF · GW].
- We have a high-dimensional space with objects in it. The space is the problem and the objects are bits of evidence.
- Because we suck at high-dimensional geometry, we use frames/perspectives that reduce the dimensionality and highlight some aspects of the space. These are operationalizations.
- There are clusters of bits of evidence in the space (whether they are rich or poor). These clusters are veins of evidence.
Here the shapes of mind are favored operationalizations — that is, the favored low-dimensional compression of the high-dimensional space where the bits of evidence lie. More precisely, a shape of mind is a cluster of "close" such transforms.
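To make the metaphor a bit more concrete, here is a minimal toy sketch in Python: the same set of high-dimensional points contains a cluster that is plainly visible under one low-dimensional projection and invisible under another, just as a vein of evidence is visible to one shape of mind and not to another. The dimensions, numbers, and crude clusteredness score are arbitrary choices for illustration, not part of the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# 300 "bits of evidence" in a 4-dimensional problem space.
# A vein of evidence is only separable along dimensions 2 and 3.
n = 300
points = rng.normal(0.0, 1.0, size=(n, 4))
points[: n // 2, 2:4] += 4.0  # shift half the points along dims 2-3

def clusteredness(projected):
    """Crude score: distance between the two halves' means, relative to
    their internal spread (higher = clusters more visible)."""
    a, b = projected[: n // 2], projected[n // 2 :]
    separation = np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
    spread = (a.std() + b.std()) / 2
    return separation / spread

# Two "shapes of mind": projections onto different pairs of dimensions.
view_1 = points[:, [0, 1]]  # a frame that misses the structure
view_2 = points[:, [2, 3]]  # a frame that reveals the vein of evidence

print(f"clusteredness seen through dims 0,1: {clusteredness(view_1):.2f}")
print(f"clusteredness seen through dims 2,3: {clusteredness(view_2):.2f}")
```

Running it, the projection onto dimensions 2 and 3 separates the two groups cleanly, while the projection onto dimensions 0 and 1 looks like undifferentiated noise.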
What makes someone have a given shape of mind?
- (Education) One of the most obvious influences I've observed on how people tackle problems comes from their background. For an alignment example, John tackles problems like a statistical physicist whereas Paul tackles problems like a theoretical computer scientist, leading to very different perspectives: True Names [AF · GW] vs Building-Breaker [AF · GW].
- (Knowledge) What you know influences your shape of mind, since you can see and link more things. But when I talk about knowledge here, I mean the kind of deep knowledge that framing exercises [? · GW] are supposed to provide.[1]
- (Past Life) This one feels easily missed by most people; after all, why should what you did in your past (especially in your personal life) influence your scientific research? Because it clearly does. A notable example is how easy it becomes to see hidden assumptions about how the world is supposed to work when you don't come from a background where those assumptions make any sense.
One thing this handle makes clear is the difference, in my model, between what programs like Refine [AF · GW], SERI MATS, and PIBBSS respectively aim at:
- Refine is looking for different shapes of mind than the ones currently at work in conceptual alignment, and aims at empowering them to contribute productively.
- MATS (according to my current model) is mainly looking for shapes of mind close to archetypal ones (the mentors), and focuses on making them fit for alignment research by helping them approach that initial archetype (while still maintaining enough diversity for productive disagreement).
- PIBBSS is looking for new shapes of mind, but ones that are visibly relevant and useful from the perspective of existing object-level shapes of mind. That is, PIBBSS starts with current conceptual alignment researchers and the shapes of mind they feel they might get something out of in their own research.
I'm excited to finally be in a field with all three.
[1] Thus we can see framing exercises as a way of shaping your mind to see the hidden bits of evidence that you want to access.
2 comments
comment by MSRayne · 2022-08-13T12:18:09.234Z · LW(p) · GW(p)
Are there any shapes of mind you think don't have much to offer alignment, or will have unusual hindrances making it more difficult?
↑ comment by Nicholas / Heather Kross (NicholasKross) · 2023-08-29T05:07:08.648Z · LW(p) · GW(p)
I, too, am interested in this question.
(One note that may be of use: I think the incentives for "cultivating more/better researchers in a preparadigmatic field" lean towards "don't discourage even less-promising researchers, because they could luck out and suddenly be good/useful to alignment in an unexpected way". Like how investors encourage startup founders because they bet on a flock of them, not necessarily because any particular founder's best bet is to found a startup. This isn't necessarily bad, it just puts the incentives into perspective.)