Conversation about paradigms, intellectual progress, social consensus, and AI
post by Ruby, RobertM (T3t) · 2023-09-05T21:30:17.498Z · LW · GW · 6 comments
I have a number of thoughts here that I haven't gotten around to writing up, and sharing via a public conversation seems like a decent way to try.
A few opening thoughts:
- People talk about the field of AI Alignment being "pre-paradigmatic". I don't have a sense that people share precise models behind this claim, or at least I don't know what other people mean when they say it. Most people don't seem to have read Kuhn, for example.
- I used to think the goal was to "become paradigmatic". I now think a better goal is to become the kind of field that can go through cycles of paradigm creation, paradigm crisis, and then recreation of a new paradigm.
- I think there are ways to increase our ability to do that, and I'd like to have people thinking more about them.
Happy to launch off in any direction that seems interesting or productive to you.
Comments sorted by top scores.
comment by Chris_Leong · 2023-09-09T05:54:07.365Z · LW(p) · GW(p)
I'm very pleased to see that the LessWrong team is thinking about these kinds of topics.
I just wanted to add a few more thoughts on this topic myself.
I suspect that one important aspect of creating a new paradigm is characterising the previous paradigm and its underlying assumptions. Often once these assumptions are stated out loud, it becomes clearer where they might break down.
Another important aspect of allowing a new paradigm to form is having a space where it can form. This can often be quite difficult as many people may be mostly happy with the existing spaces that work within the paradigm or at least not unhappy enough to want to join something new.
There's also the problem that people who disagree with the paradigm might want to take it in all kinds of different directions, preventing any one direction from building critical mass. When an existing paradigm has many possible issues you could focus on, there's something of an art in carving off an area that contains a group of sufficiently important and compelling differences, yet has enough coherence that you can explain the value of what you're doing to other people without confusing them.
↑ comment by Randomized, Controlled (BossSleepy) · 2023-09-06T20:59:14.733Z · LW(p) · GW(p)
The Stanford Encyclopedia of Philosophy gives:
According to Kuhn the development of a science is not uniform but has alternating ‘normal’ and ‘revolutionary’ (or ‘extraordinary’) phases. The revolutionary phases are not merely periods of accelerated progress, but differ qualitatively from normal science. Normal science does resemble the standard cumulative picture of scientific progress, on the surface at least. Kuhn describes normal science as ‘puzzle-solving’ (1962/1970a, 35–42). While this term suggests that normal science is not dramatic, its main purpose is to convey the idea that like someone doing a crossword puzzle or a chess problem or a jigsaw, the puzzle-solver expects to have a reasonable chance of solving the puzzle, that his doing so will depend mainly on his own ability, and that the puzzle itself and its methods of solution will have a high degree of familiarity. A puzzle-solver is not entering completely uncharted territory... Revolutionary science, however, is not cumulative in that, according to Kuhn, scientific revolutions involve a revision to existing scientific belief or practice (1962/1970a, 92). Not all the achievements of the preceding period of normal science are preserved in a revolution, and indeed a later period of science may find itself without an explanation for a phenomenon that in an earlier period was held to be successfully explained...
↑ comment by Randomized, Controlled (BossSleepy) · 2023-09-06T21:11:45.541Z · LW(p) · GW(p)
Kuhn’s view is that during normal science scientists neither test nor seek to confirm the guiding theories of their disciplinary matrix. Nor do they regard anomalous results as falsifying those theories. (It is only speculative puzzle-solutions that can be falsified in a Popperian fashion during normal science (1970b, 19).) Rather, anomalies are ignored or explained away if at all possible. It is only the accumulation of particularly troublesome anomalies that poses a serious problem for the existing disciplinary matrix. A particularly troublesome anomaly is one that undermines the practice of normal science. For example, an anomaly might reveal inadequacies in some commonly used piece of equipment, perhaps by casting doubt on the underlying theory. If much of normal science relies upon this piece of equipment, normal science will find it difficult to continue with confidence until this anomaly is addressed. A widespread failure in such confidence Kuhn calls a ‘crisis’
↑ comment by Randomized, Controlled (BossSleepy) · 2023-09-07T17:30:36.249Z · LW(p) · GW(p)
Under this view, perhaps a certain set of interpretability techniques might emerge under a paradigm that makes certain assumptions (e.g., that ML kernels are "mostly" linear, that systems are "mostly" stateless, that exotic hacks of the underlying hardware aren't in play, etc). If a series of anomalies were to accumulate that couldn't be explained within this matrix, you might expect to see a new paradigm needed.
comment by Algon · 2023-09-06T00:40:03.652Z · LW(p) · GW(p)
how do people come to agree that "X is a good researcher"[?]
[...]
Pointing in the same direction involves a lot of agreeing on "this is progress" vs "this is not progress". (There is also an object-level of "and whatever you agree is progress points in the direction of reality itself")
To what extent do alignment researchers agree on who is a good researcher and on what counts as progress? I'd guess there's a fair amount of disagreement there, even amongst researchers who agree the problem is hard, e.g. Eliezer vs Paul vs Steven. And I can think of relatively few cases of progress on alignment in my view, let alone anyone else's (TurnTrout's work on power-seeking/instrumental convergence, and Stuart's work on value indifference, in case you're wondering). Likewise for what the hard parts of the problem are. That said, I'm not confident there'll be that much disagreement in practice. My reasoning is that a lot of disagreements look strong but aren't really: say, whether some probability is 0.1 or 0.9 isn't that big a difference.
EXPERIMENT: To test whether consensus on progress points in the direction of reality, check which N-year-old results are most commonly considered progress now, and see how much researchers thought those results constituted progress M years ago. Of course you'd have to use proxy measures in most cases, e.g. karma and citations.
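One way to operationalize this experiment: compute a rank correlation between the early proxy measure (e.g. karma at time of posting) and the current consensus judgment for the same set of results. A high correlation would suggest early consensus tracked what later looked like real progress. Below is a minimal sketch under stated assumptions; the data, the variable names (`early_karma`, `current_consensus`), and the choice of Spearman rank correlation are all illustrative, not from the post, and the simple closed-form formula assumes no tied ranks:

```python
def rank(xs):
    """Assign ranks 1..n to values (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for position, i in enumerate(order):
        ranks[i] = position + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation via the no-ties formula:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: karma shortly after posting vs. a current
# "how much progress was this?" rating for the same four results.
early_karma = [120, 45, 80, 10]
current_consensus = [9, 5, 3, 2]

print(spearman(early_karma, current_consensus))  # → 0.8
```

In a real version, `current_consensus` might come from a survey of researchers and `early_karma` from archived snapshots, and you'd want a tie-aware correlation (e.g. `scipy.stats.spearmanr`) for larger datasets.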
↑ comment by Chris_Leong · 2023-09-07T02:24:21.478Z · LW(p) · GW(p)
Agree. It seems unlikely that the initial paradigm will get everything correct. It’s important to be able to tentatively set down some principles, explore their consequences, then after a while step back and discuss whether the field is headed in the right direction.