Comments

Comment by Paradiddle (barnaby-crook) on Modern Transformers are AGI, and Human-Level · 2024-03-27T13:56:36.432Z · LW · GW

I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more. 

In this case, it is clear to me that there are important senses of the term "general" whose criteria modern AI satisfies. You made that point persuasively in this post. However, it is also clear that there are important senses of the term "general" whose criteria modern AI does not satisfy. Steven Byrnes made that point persuasively in his response. So far as I can tell, you would agree with this.

If we all agree with the above, the most important thing is to disambiguate the sense of the term being invoked when applying it in reasoning about AI. Then, we can figure out whether the source of our disagreements is about semantics (which label we prefer for a shared concept) or substance (which concept is actually appropriate for supporting the inferences we are making).

What are good discourse norms for disambiguation? An intuitively appealing option is to coin new terms for variants of umbrella concepts. This may work in academic settings, but the familiar terms are always going to have a kind of magnetic pull in informal discourse. As such, I think communities like this one should rather strive to define terms wherever possible and approach discussions with a pluralistic stance. 

Comment by Paradiddle (barnaby-crook) on Believing In · 2024-02-08T15:23:14.148Z · LW · GW

I actually think what you are going for is closer to JL Austin's notion of an illocutionary act than anything in Wittgenstein, though as you say, it is an analysis of a particular token of the type ("believing in"), not an analysis of the type. Quoting Wikipedia:

"According to Austin's original exposition in How to Do Things With Words, an illocutionary act is an act:

  • (1) for the performance of which I must make it clear to some other person that the act is performed (Austin speaks of the 'securing of uptake'), and
  • (2) the performance of which involves the production of what Austin calls 'conventional consequences' as, e.g., rights, commitments, or obligations (Austin 1975, 116f., 121, 139)."

Your model of "believing in" is essentially an unpacking of the "conventional consequences" produced by using the locution in various contexts. I think it is a good unpacking, too!

I do think that some of the contrasts you draw (belief vs. believing in) would work equally well (and with more generality) as contrasts between beliefs and illocutionary acts, though.

Comment by Paradiddle (barnaby-crook) on Leading The Parade · 2024-02-02T09:37:21.311Z · LW · GW

In Leibniz’ case, he’s known almost exclusively for the invention of calculus.

Was this supposed to be a joke (if so, consider me well and truly whooshed)? At any rate, it is most certainly not the case. Leibniz is known for a great many things (both within and without mathematics), as can be seen from a cursory glance at his Wikipedia page.

Comment by Paradiddle (barnaby-crook) on Being nicer than Clippy · 2024-01-17T18:00:41.648Z · LW · GW

Rather, they might be mere empty machines. Should you still tolerate/respect/etc them, then?

My sense is that I'm unusually open to "yes," here.


I think the discussion following from here is a little ambiguous (perhaps purposefully so?). In particular, it is unclear which of the following points are being made:

1: Sufficient uncertainty with respect to the sentience (I'm taking this as synonymous with phenomenal consciousness) of future AIs should dictate that we show them tolerance/respect etc... 
2: We should not be confident that sentience is a good criterion for moral patienthood (i.e., being shown tolerance/respect etc...), even though sentience is a genuine thing. 
3: We should worry that sentience isn't a genuine thing at all (i.e., illusionism / as-yet-undescribed re-factorings of what we currently call sentience). 

When you wrote that you are unusually open to "yes" in the quoted sentence, I took the qualifier "unusually" to indicate that you were making point 2, since I do not consider point 1 to be particularly unusual (Schwitzgebel has pushed for this view, for example). However, your discussion then mostly seemed to be making the case for point 1 (i.e., we could impose a criterion for moral worth that is intended to demarcate non-sentient and sentient entities but that fails). For what it's worth, I would be very interested to hear arguments for point 2 which do not collapse into point 1 (or, alternatively, some reason why I am mistaken in considering them distinct points). From my perspective, it is hard to understand how something which really lacks what I mean by phenomenal consciousness could possibly be a moral patient. Perhaps it is related to the fact that I have, despite significant effort, utterly failed to grok illusionism. 

Comment by Paradiddle (barnaby-crook) on The Consciousness Box · 2023-12-14T18:15:49.005Z · LW · GW

Apologies, I had thought you would be familiar with the notion of functionalism. Meaning no offence at all, but it's philosophy of mind 101, so if you're interested in consciousness, it might be worth reading about it. To clarify further, you seem to be a particular kind of computational functionalist. Although it might seem unlikely to you, since I am one of those "masturbatory" philosophical types who think it matters how behaviours are implemented, I am also a computational functionalist! What does this mean? It means that computational functionalism is a broad tent, encompassing many different views. Let's dig into the details of where we differ...

If something can talk, then, to a functionalist like me, that means it has assembled and coordinated all necessary hardware and regulatory elements and powers (that is, it has assembled all necessary "functionality" (by whatever process is occurring in it which I don't actually need to keep track of (just as I don't need to understand and track exactly how the brain implements language))) to do what it does in the way that it does.

This is a tautology. Obviously anything that can do a thing ("talk") has assembled the necessary elements to do that very thing in the way that it does. The question is whether or not we can make a different kind of inference, from the ability to implement a particular kind of behaviour (linguistic competence) to the possession of a particular property (consciousness). 

Once you are to the point of "seeing something talk fluently" and "saying that it can't really talk the way we can talk, with the same functional meanings and functional implications for what capacities might be latent in the system" you are off agreeing with someone as silly as Searle. You're engaged in some kind of masturbatory philosophy troll where things don't work and mean basically what they seem to work and mean using simple interactive tests.

Okay, this is the key passage. I'm afraid your view of the available positions is seriously simplistic. It is not the case that anybody who denies the inference from 'displays competent linguistic behaviour' to 'possesses the same latent capacities' must be in agreement with Searle. There is a world of nuance between your position and Searle's, and most people who consider these questions seriously occupy the intermediate ground. 

To be clear, Searle is not a computational functionalist. He does not believe that non-biological computational systems can be conscious (well, actually he wrote about "understanding" and "intentionality", but his arguments seem to apply to consciousness as much or even more than they do to those notions). On the other hand, the majority of computational functionalists (who are, in some sense, your tribe) do believe that a non-biological computational system could be conscious. 

The variation within this group is typically with respect to which computational processes in particular are necessary. For example, I believe that a computational implementation of a complex biological organism with a sufficiently high degree of resolution could be conscious. However, LLM-based chatbots are nowhere near that degree of resolution. They are large statistical models that predict conditional probabilities and then sample from them. What they can do is amazing. But they have little in common with living systems, and it is only by ignoring everything except the behavioural level that they can even seem conscious. 
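
To make that description concrete, here is a minimal toy sketch of the autoregressive loop I have in mind: predict a conditional distribution over the next token, then sample from it. The vocabulary and probability table below are invented purely for illustration; a real LLM replaces the lookup table with a very large neural network, but the sampling structure is the same.

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_probs(context):
    """Stand-in for the model: returns P(next token | context).
    Here it is just a hand-written lookup keyed on the last token."""
    table = {
        "the": [0.0, 0.5, 0.0, 0.0, 0.5, 0.0],
        "cat": [0.0, 0.0, 0.9, 0.0, 0.0, 0.1],
        "sat": [0.0, 0.0, 0.0, 0.9, 0.0, 0.1],
        "on":  [0.9, 0.0, 0.0, 0.0, 0.0, 0.1],
        "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
    }
    last = context[-1] if context else "the"
    return np.array(table.get(last, [1 / 6] * 6))

def generate(prompt, max_tokens=10):
    context = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(context)          # predict conditional distribution
        token = np.random.choice(VOCAB, p=probs)   # sample from it
        if token == "<eos>":
            break
        context.append(token)
    return context

print(" ".join(generate(["the"])))
```

Nothing in this loop requires, or even gestures at, the kinds of functional machinery I list elsewhere in this thread; that is the point of the contrast with living systems.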

By the way, I wouldn't personally endorse the claim that LLM-based chatbots "can't really talk the way we talk". I am perfectly happy to adopt a purely behavioural perspective on what it means to "be able to talk". Rather, I would deny the inference from that ability to the possession of consciousness. Why would I deny that? For the reasons I've already given. LLMs lack almost all of the relevant features that philosophers, neuroscientists, and biologists have proposed as most likely to be necessary for consciousness.

Unsurprisingly, no, you haven't changed my mind. Your claims require many strong and counterintuitive theoretical commitments for which we have either little or no evidence. I do think you should take seriously the idea that this may explain why you have found yourself in a minority adopting this position. I appreciate that you're coming from a place of compassion though, that's always to be applauded! 

Comment by Paradiddle (barnaby-crook) on The Consciousness Box · 2023-12-13T08:56:05.792Z · LW · GW

I am sorry that you got the impression I was trolling. Actually, I was trying to communicate with you. None of the candidate criteria I suggested were conjured out of thin air or based on anything I just made up. Unfortunately, collecting references for all of them would be pretty time consuming. However, I can say that the "global projection" phrase was gesturing towards global neuronal workspace theory (and related theories). Although you got the opposite impression, I am very familiar with consciousness research (including all of the references you mentioned, though I will admit I don't think much of IIT). 

The idea of "meat chauvinism" seems to me a reductive way to cast aside the possibility that biological processes could be relevant to consciousness. I think this is a theoretical error. It is not the case that taking biological processes seriously when thinking about consciousness implies (my interpretation of what you must mean by) "meat chauvinism". One can adopt a functionalist perspective on biological processes that operate far below the level of a language production system. For example, one could elaborate a functional model of metabolism which could be satisfied by silicon-based systems. In that sense, it isn't meat chauvinism to suggest that various biological processes may be relevant to consciousness.

This would discount human uploads for silly reasons. Like if I uploaded and was denied rights for lack of any of these things they I would be FUCKING PISSED OFF

Assuming what you mean by "fucking pissed off" involves subjective experience and what you mean by "I was uploaded" would not involve implementing any of the numerous candidates for necessary conditions on consciousness that I mentioned, this is simply begging the question. 

To me, it doesn't make any sense to say you would have been "uploaded" if all you mean is a reasonably high fidelity reproduction of your input-output linguistic behaviour had been produced. If what you mean by uploaded is something very different which would require numerous fundamental scientific breakthroughs then I don't know what I would say, since I don't know how such an upload would fare with respect to the criteria for conscious experience I find most compelling. 

Generally speaking, there is an enormous difference between hypothetical future simulated systems of arbitrary sophistication and the current generation of LLM-based chatbots. My sense is that you are conflating these things when assessing my arguments. The argument is decidedly not that the LLM-based chatbots are non-biological, therefore they cannot be conscious. Nor is it that no future silicon-based systems, regardless of functional organisation, could ever be conscious. Rather, the argument is that LLM-based chatbots lack almost all of the functional machinery that seems most likely to be relevant for conscious experience (apologies that using the biological terms for these aspects of functional machinery was misleading to you), therefore they are very unlikely to be conscious. 

I agree that the production of coherent linguistic output in a system that lacks this functional machinery is a scientific and technological marvel, but it is only evidence for conscious experience if your theory of consciousness is of a very particular and unusual variety (relative to the fields which study the topic in a professional capacity; perhaps such ideas have greater cachet on this website in particular). Without endorsing such a theory, the evidence you provide from what LLMs produce, given their training, does not move me at all (we have an alternative explanation for why LLMs produce such outputs which does not route through them being subjectively experiencing entities, and what's more, we know the alternative explanation is true, because we built them). 

Given how you responded above, I have the impression you think neuroscience and biology are not that relevant for understanding consciousness. Clearly, I disagree. May I ask what has given you the impression that the biological details don't matter (even when given a functional gloss such that they may be implemented in silico)? 

Comment by Paradiddle (barnaby-crook) on The Consciousness Box · 2023-12-12T18:22:31.205Z · LW · GW

I think you're missing something important.

Obviously I can't speak to the reason there is a general consensus that LLM-based chatbots aren't conscious (and therefore don't deserve rights). However, I can speak to some of the arguments that are still sufficient to convince me that LLM-based chatbots aren't conscious. 

Generally speaking, there are numerous arguments which essentially have the same shape to them. They consist of picking out some property that seems like it might be a necessary condition for consciousness, and then claiming that LLM-based chatbots don't have that property. Rather than spend time on any one of these arguments, I will simply list some candidates for such a property (these may be mentioned alone or in some combination):

Metabolism, Temporally continuous existence, Sensory perception, Integration of sensory signals, Homeostatic drives, Interoception, Coherent self-identity, Dynamic coupling to the environment, Affective processes, A nervous system, Physical embodiment, Autonomy, Autopoiesis, Global projection of signals, Self-monitoring, Synchronized neuronal oscillations, Allostasis, Executive function, Nociceptors, Hormones... I could keep going.

Naturally, it may be that some of these properties are irrelevant or unnecessary for consciousness. Or it could be that even altogether they are insufficient. However, the fact that LLM-based chatbots possess none of these properties is at least some reason to seriously doubt that they could be conscious. 

A different kind of argument focuses more directly on the grounds for the inference that LLM-based chatbots might be conscious. Consider the reason that coherent linguistic output seems like evidence of consciousness in the first place. Ordinarily, coherent linguistic output is produced by other people and suggests consciousness to us via a kind of similarity-based reasoning. When we encounter other people, they are engaging in behaviour similar to ours, which suggests they might be similar to us in other ways, such as having subjective experience. However, this inference would no longer be justified if there were a known, significantly different reason for a non-human entity to produce coherent linguistic output. In the case of LLM-based chatbots, we do have such a reason: the data-intensive training procedure, which is a very different story from how humans come to produce coherent linguistic output. 

Nobody should be 100% confident in any claims about which entities are or are not conscious, but the collective agreement that LLM-based chatbots are not seems pretty reasonable. 

Comment by Paradiddle (barnaby-crook) on [Valence series] 1. Introduction · 2023-12-05T17:40:54.903Z · LW · GW

Enjoyable post, I'll be reading the rest of them. I especially appreciate the effort that went into warding off the numerous misinterpretations that one could easily have had (but I'm going to go ahead and ask something that may signal I have misinterpreted you anyhow). 

Perhaps this question reflects poor reading comprehension, but I'm wondering whether you are thinking of valence as being implemented by something specific at a neurobiological level or not? To try and make the question clearer (in my own head as much as anything), let me lay out two alternatives to having valence implemented by something specific. First, one might imagine that valence is an abstraction over the kind of competitive dynamics that play out among thoughts. On this view, valence is a little like evolutionary fitness (the tautology talk in 1.5.3 brought this comparison to mind). Second, one might imagine that valence is widely distributed across numerous brain systems. On this view, valence is something like an emotion (if you'll grant the hopefully-no-longer-controversial claim that the neural bases of emotions are widely distributed). I don't think either of these alternatives are what you are going for, but I also didn't see the outright claim that valence is something implemented by a specific neurobiological substrate. What do you believe?

Comment by Paradiddle (barnaby-crook) on Complex systems research as a field (and its relevance to AI Alignment) · 2023-12-04T09:56:55.808Z · LW · GW

In other words, you think that even in a world where the distribution of mathematical methods were very specific to subject areas, this methodology would have failed to show that? If so, I think I disagree (though I agree the evidence of the paper is suggestive, not conclusive). Can you explain in more detail why you think that? Just to be clear, I think the methodology of the paper is coarse, but not so coarse as to be unable to pick out general trends.

Perhaps to give you a chance to say something informative, what exactly did you have in mind by "united around methodology" when you made the original comment I quoted above? 

Comment by Paradiddle (barnaby-crook) on Complex systems research as a field (and its relevance to AI Alignment) · 2023-12-03T15:26:22.929Z · LW · GW

Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.


The extent to which fields are united around methodologies is an interesting question in its own right. While there are many ways we could break this question down which would probably return different results, a friend of mine recently analysed it with respect to mathematical formalisms (paper: https://link.springer.com/article/10.1007/s11229-023-04057-x). So, the question here is, are mathematical methods roughly specific to subject areas, or is there significant mathematical pluralism within each subject area? His findings suggest that, mostly, it's the latter. In other words, if you accept the analysis here (which is rather involved and obviously not infallible), you should probably stop thinking of fields as being united by methodology (thus making complex systems research a genuinely novel way of approaching things).

Key quote from the paper: "if the distribution of mathematical methods were very specific to subject areas, the formula map would exhibit very low distance scores. However, this is not what we observe. While the thematic distances among formulas in our sample are clearly smaller than among randomly sampled ones, the difference is not drastic, and high thematic coherence seems to be mostly restricted to several small islands."
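
To give a rough sense of the kind of analysis being quoted, here is a toy sketch of my reading of it (not the authors' code, and the data below are invented): place formulas on a "map", label each with a subject area, and compare within-subject distances to a random baseline. Subject-specific methods would show up as a within-subject average far below the baseline; the paper reports only a modest difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 formulas with 2-D "formula map" coordinates,
# each tagged with one of 5 subject areas.
coords = rng.normal(size=(200, 2))
subjects = rng.integers(0, 5, size=200)

def mean_pairwise_distance(points):
    """Average Euclidean distance over all unordered pairs of points."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    n = len(points)
    return dists[np.triu_indices(n, k=1)].mean()

# Average distance among formulas sharing a subject area...
within = np.mean([
    mean_pairwise_distance(coords[subjects == s])
    for s in np.unique(subjects)
])
# ...versus the distance among arbitrary pairs of formulas.
baseline = mean_pairwise_distance(coords)

print(f"within-subject: {within:.3f}, random baseline: {baseline:.3f}")
```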

Comment by Paradiddle (barnaby-crook) on Social Dark Matter · 2023-11-17T10:04:49.112Z · LW · GW

I don't have an answer for your question about how you might become confident that something really doesn't exist (other than a generic 'reason well about social behaviour in general, taking all possible failure modes into account'). However, I would point out that the example you give is about your group of friends in particular, which is a very different case from society at large. Shapeshifting lizardmen are almost certainly not evenly distributed across friendship groups such that every group of a certain size has one, but rather clumped together as we would expect due to homophily.
 

Edit: I see this point was already addressed in Bezzi's response on filter bubbles.

Comment by Paradiddle (barnaby-crook) on Consciousness as a conflationary alliance term for intrinsically valued internal experiences · 2023-07-14T18:18:32.393Z · LW · GW

Thanks for the response.

Personally I'm confident that whatever people are managing to refer to by "consciousness" is a process than runs on matter

I don't disagree that consciousness is a process that runs on matter, but that is a separate question from whether the typical referent of consciousness is that process. If it turned out my consciousness was being implemented on a bunch of grapes it wouldn't change what I am referring to when I speak of my own consciousness. The referents are the experiences themselves from a first-person perspective.

I asked people to attend to the process there were referring to, and describe it.

Right, let me try again. We are talking about the question of 'what people mean by consciousness'. In my view, the obvious answer is that it is like something to be them, i.e., they are subjective beings. Now, if I'm right, even if the people you spoke to believe that consciousness is a process that runs on physical matter, and even if they have differing opinions on what the structure of that process might be, that doesn't stop the basic referent of consciousness being shared by those people. That's because that referent is (conceptually) independent of the process that realises it (note: one need not be a dualist to think this way; indeed, I am not a dualist). 

The fact that their answers were coherent, and seemed to correspond to processes that almost certainly actually exist in the human mind/brain, convinced me to just believe them that they were detecting something real and managing to refer to it through introspection, rather than assuming they were all somehow wrong and failing to describe some deeper more elusive thing that was beyond their experience.

First, I wonder if the use of the word 'detect' may help us locate the source of our disagreement. A minimal notion of what consciousness is does not require much detection. Consciousness captures the fact that we have first-person experience at all. When we are awake and aware, we are conscious. We can't help but detect it. 

Second, with regards to the 'wrong and failing' talk... as Descartes put it, the only thing I cannot doubt is that I exist. This could equally be phrased in terms of consciousness. As such, that consciousness is real is the thing I can doubt least (even illusionists like Keith Frankish don't actually doubt minimal consciousness, they just refuse to ascribe certain properties to it). However, there are several further things you may be referring to here. One is the contents of people's consciousness. Can we give faulty reports of what we experience? Undoubtedly yes, but like you I see no reason to doubt the veracity of the reports you elicited. Another is the structure of the neural system that implements consciousness (assuming that it is, indeed, a physical process). I don't know what kind of truth conditions you have in mind here, but I think it very unlikely that your subjects' descriptions accurately represent the physical processes occurring in their brains. 

Third, consciousness, as I am speaking of it, is decidedly not some deeper elusive thing that is beyond our experience. It is our experience. The reason consciousness is still a philosophical problem is not because it is elusive in the sense of 'hard to experience personally', but because it is elusive in the sense of 'resists satisfying analysis in a traditional scientific framework'.

Is any of this making sense to you? I get that you have a different viewpoint, but I'd be interested to know whether you think you understand this viewpoint, too, as opposed to it seeming crazy to you. In particular, do you get how I can simultaneously think consciousness is implemented physically without thinking that the referent of consciousness need contain any details about the implementational process?

Comment by Paradiddle (barnaby-crook) on Consciousness as a conflationary alliance term for intrinsically valued internal experiences · 2023-07-11T16:39:09.406Z · LW · GW

Really interesting stuff, thanks for sharing it! 

I'm afraid I'm sceptical that your methodology licenses the conclusions you draw. You state that you pushed people away from "using common near-synonyms like awareness or experience" and "asked them to instead describe the structure of the consciousness process, in terms of moving parts and/or subprocesses". You end up concluding, on the basis of people's radically divergent responses when so prompted, that they are referring to different things with the term 'consciousness'.

The problem I see is that the near-synonyms you ruled out are the most succinct and theoretically neutral ways of pointing at what consciousness is. We mostly lack other ways of gesturing towards what is shared by most (not all) people's conception of consciousness. That we are aware. That we experience things. That there is something it is like to be us. These are the minimal notions of consciousness for which there may be a non-conflationary alliance. When you push people away from using those notions, they are left grasping at poorly evidenced claims about moving parts and sub-processes. That there is no convergence here does not surprise me in the slightest. Of course people differ with respect to intuitions about the structure of consciousness. But the structure is not the typical referent of the word 'consciousness'; the first-person, phenomenal character of experience itself is. 

Comment by Paradiddle (barnaby-crook) on Contra Yudkowsky on Doom from Foom #2 · 2023-04-27T15:19:10.811Z · LW · GW

The distinction is that without the initial 0-1 phase transition, none of the other stuff is possible. They are all instances of cumulative cultural accretion, whereas the transition constitutes entering the regime of cumulative cultural accretion (other biological organisms and extant AI systems are not in this regime). If I understand the author correctly, the creation of AGI will increase the pace of cumulative cultural accretion, but will not lead us (or them) to exit that regime (since, according to the point about universality, there is no further regime).

I think this answer also applies to the other comment you made, for what it's worth. It would take me more time than I am willing to spend to make a cogent case for this here, so I will leave the discussion for now.

Comment by Paradiddle (barnaby-crook) on Contra Yudkowsky on Doom from Foom #2 · 2023-04-27T15:02:57.789Z · LW · GW

I have to say I agree that there is vagueness in the transition to universality. That is hardly surprising seeing as it is a confusing and contentious subject that involves integrating perspectives on a number of other confusing and contentious subjects (language, biological evolution, cultural evolution, collective intelligence etc...). However, despite the vagueness, I personally still see this transition, from being unable to accrete cultural innovations to being able to do so, as a special one, different in kind from particular technologies that have been invented since.

Perhaps another way to put it is that the transition seems to bestow on us, as a collective, a meta-ability to obtain new abilities (or increased intelligence, as you put it), that we previously lacked. It is true that there are particular new abilities that are particularly valuable, but there may not be any further meta-abilities to obtain.

Just so we aren't speaking past each other: do you get what I am saying here? Even if you disagree that this is relevant, which may be reasonable, does the distinction I am driving at even make sense to you, or still not?

Comment by Paradiddle (barnaby-crook) on Contra Yudkowsky on Doom from Foom #2 · 2023-04-27T14:37:33.387Z · LW · GW

Okay, sure. If my impression of the original post is right, the author would not disagree with you, but would rather claim that there is an important distinction to be made among these innovations. Namely, one of them is the 0-1 transition to universality, and the others are not. So, do you disagree that such a distinction may be important at all, or merely that it is not a distinction that supports the argument made in the original post?

Comment by Paradiddle (barnaby-crook) on Contra Yudkowsky on Doom from Foom #2 · 2023-04-27T14:04:33.008Z · LW · GW

At the risk of going round in circles, you begin your post by saying you don't care which ones are special or qualitative, and end it by wondering why the author is confident certain kinds of transition are not "major". Is this term, like the others, just standing in for 'significant enough to play a certain kind of role in an "AI leads to doom" argument'? Or does it mean something else? 

I get the impression that you want to avoid too much wrangling over which labels should be applied to which kinds of thing, but then, you brought up the worry about the original post, so I don't quite know what your point is. 

Comment by Paradiddle (barnaby-crook) on Contra Yudkowsky on Doom from Foom #2 · 2023-04-27T07:44:50.261Z · LW · GW

I think this is partially a matter of ontological taste. I mean, you are obviously correct that many innovations coming after the transition the author is interested in seem to produce qualitative shifts in the collective intelligence of humanity. On the other hand, if you take the view that all of these are fundamentally enabled by that first transition, then it seems reasonable to treat that as special in a way that the other innovations are not. 

I suppose where the rubber meets the road, if one grants both the special status of the transition to universal cultural learning and that other kinds of innovation can lead to qualitative shifts in collective intelligence, is whether or not further innovations of the second kind can still play the role that foom is supposed to play in EY's argument (I take Nathan Helm-Burger's comment to be one argument that such innovations can play this role).

Comment by Paradiddle (barnaby-crook) on My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" · 2023-03-21T18:09:34.860Z · LW · GW

One distinction I think is important to keep in mind here is between precision with respect to what software will do and precision with respect to the effect it will have. While traditional software engineering often (though not always) involves knowing exactly what software will do, it is very common that the real-world effects of deploying some software in a real-world environment are impossible to predict with perfect accuracy. This reduces the perceived novelty of unintended consequences (though obviously, a fully-fledged AGI would lead to significantly more novelty than anything that preceded it).

Comment by Paradiddle (barnaby-crook) on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-02-28T17:14:28.627Z · LW · GW

I don't want to cite anyone as your 'leading technical opposition'. My point is that many people who might be described as having 'coherent technical views' would not consider your arguments for what to expect from AGI to be 'technical' at all. Perhaps you can just say what you think it means for a view to be 'technical'?

As you say, readers can decide for themselves what to think about the merits of your position on intelligence versus Chollet's (I recommend this essay by Chollet for a deeper articulation of some of his views: https://arxiv.org/pdf/1911.01547.pdf). Regardless of whether or not you think you 'easily struck down' his 'wack ideas', I think it is important for people to realise that they come from a place of expertise about the technology in question.


You mention Scott Aaronson's comments on Chollet. Aaronson says (https://scottaaronson.blog/?p=3553) of Chollet's claim that an Intelligence Explosion is impossible: "the certainty that he exudes strikes me as wholly unwarranted." I think Aaronson (and you) are right to point out that the strong claim Chollet makes is not established by the arguments in the essay. However, the same exact criticism could be levelled at you. The degree of confidence in the conclusion is not in line with the nature of the evidence.

Comment by Paradiddle (barnaby-crook) on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-02-24T12:08:24.158Z · LW · GW

Fair point.

Comment by Paradiddle (barnaby-crook) on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-02-24T12:07:40.673Z · LW · GW

Yes, I've read it. Perhaps that does make it a little unfair of me to criticise a lack of engagement in this case. I should be more precise: kudos to Yudkowsky for engaging, but no kudos for coming to believe that someone with a very different view to the one he has arrived at must not have a 'coherent technical view'.

Comment by Paradiddle (barnaby-crook) on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-02-24T10:36:07.355Z · LW · GW

Eliezer: Well, the person who actually holds a coherent technical view, who disagrees with me, is named Paul Christiano.

What does Yudkowsky mean by 'technical' here? I respect the enormous contribution Yudkowsky has made to these discussions over the years, but I find his ideas about who counts as a legitimate dissenter from his opinions utterly ludicrous. Are we really supposed to think that Francois Chollet, who created Keras, is a major contributor to TensorFlow, and designed the ARC dataset (demonstrating actual, operationalizable knowledge about the kinds of simple tasks deep learning systems would not be able to master), lacks a coherent technical view? And on what should we base this? The word of Yudkowsky, who mostly makes verbal, often analogical, arguments and has essentially no significant technical contributions to the field? 

To be clear, I think Yudkowsky does what he does well, and I see value in making arguments as he does, but they do not strike me as particularly 'technical'. The fact that Yudkowsky doesn't even know enough about Chollet to pronounce his name displays a troubling lack of effort to engage seriously with opposing views. This isn't just about coming across poorly to outsiders, it's about dramatic miscalibration with respect to the value of other people's opinions as well as the rigour of his own.

Comment by Paradiddle (barnaby-crook) on How should DeepMind's Chinchilla revise our AI forecasts? · 2022-09-16T10:00:43.436Z · LW · GW

This analogy is misleading because it pumps the intuition that we know how to generate the algorithmic innovations that would improve future performance, much as we know how to tie our shoelaces once we notice they are untied. This is not the case. Research programmes can and do stagnate for long periods because crucial insights are hard to come by and hard to implement correctly at scale. Predicting the timescale on which algorithmic innovations occur is a very different proposition from predicting the timescale on which it will be feasible to increase parameter count.

Comment by Paradiddle (barnaby-crook) on Oversight Misses 100% of Thoughts The AI Does Not Think · 2022-08-13T10:24:03.076Z · LW · GW

As some other commenters have said, the analogy with other species (flowers, ants, beavers, bears) seems flawed. Human beings are already (limited) generally intelligent agents. Part of what that means is that we have the ability to direct our cognitive powers to arbitrary problems in a way that other species do not (as far as we know!). To my mind, the way we carelessly destroy other species' environments and doom them to extinction is a function of both the disparity in power and the disparity in generality, not just the former. That is not to say that a power disparity alone does not constitute an existential threat, but I don't see the analogy being of much use in reasoning about the nature of that threat.

If the above is correct, perhaps you are tempted to respond that a sufficiently advanced AI would replicate the generality gap as well as the power gap. However, I think the notion of generality that is relevant here (which, to be sure, is not the only meaningful notion) is a 0 to 1 phase transition. Our generality allows us to think about, predict, and notice things that could thwart our long term collective goals. Once we start noticing such things, there is no level of intelligence an unaligned third-party intelligence can reach which somehow puts us back in the position of not noticing, relative to that third-party intelligence.