Ramble on STUFF: intelligence, simulation, AI, doom, default mode, the usual

post by Bill Benzon (bill-benzon) · 2023-08-26T15:49:47.781Z

Contents

  The discourse of intelligence
  Bostrom’s simulation hypothesis seems very thin to me
  Performance and competence in AI test-taking
  Super-intelligence
  Why’d Time print Yudkowsky’s essay?
  Why doom, after all? (a search for the real?)
  Confabulation & default mode
  What is REALITY anyhow?

Cross-posted from New Savanna. I've published a number of these rambles over at New Savanna. They're a way for me to think about what's on my mind and consist of a variety of quick and crude remarks on various topics. Epistemic status: I'm still thinking about it.

x x x x x 

Time for another one of these. Lots on my mind. Difficult to sort it out. So, some quick hits just to get them all into the same space.

The discourse of intelligence

As far as I can tell, the notion of intelligence that’s instantiated in such phrases as “intelligent life” or “artificial intelligence” is relatively new, dating back to the late 19th and early 20th centuries with the emergence of tests to estimate general cognitive capacity, that is, intelligence, as in “intelligence quotient.” This conception is pervasive in the modern world. It pervades the educational system and our discourse on race and public policy, including eugenics. AI seems to be a cross between this conception of intelligence and computation, both the idea of computation and its realization in digital computers.

How does this discourse show up in fiction? I’m thinking of such characters as Sherlock Holmes or, much more recently, Adrian Monk. These characters are noted for their intelligence, but are otherwise odd, and in Monk’s case, extremely odd.

Bostrom’s simulation hypothesis seems very thin to me

The more I think about it, the less clear it is just what he means by simulation. He focuses on human experience. But what about animals? They are part of human experience. Are animals just empty shells, existing only as appearances for humans? But what about their interactions with one another? What drives their actions? Are they (quasi) autonomous agents, or are their actions entirely subordinate to the mechanisms which present appearances to humans? What happens when animals or humans reproduce? Do we have simulated DNA, fertilization, embryonic development? That seems rather granular and would impose a heavy computational burden. But if the simulation doesn’t do that, then how are the characteristics of the offspring related to those of its parents? What mechanism is used for that? Lots of problems.

Other posts on the simulation argument.

Performance and competence in AI test-taking

In The Seven Deadly Sins of Predicting the Future of AI (https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/), Rodney Brooks has introduced a distinction between competence and performance into his ongoing discussion of AI. It is common to benchmark large language models on standardized tests given to humans, such as advanced placement (AP) tests or professional qualification tests (e.g., bar exams). Those are expressions of performance. What underlying competencies can we validly infer from those performances? In particular, are they similar to the competencies we infer about humans from those tests? Given that both a human and an LLM score at a certain level on the AP physics exam, is the LLM’s underlying competence in physics comparable to the human’s?

Super-intelligence

It’s a vague notion, no? In chess, computers have had the advantage over humans for a decade and a half, but that’s not quite what we mean by superintelligence (SI), is it? Too specialized. When talking of SI we mean general intelligence, no? LLMs such as GPT-4 and Claude-2 have a much broader range of knowledge than any human. Super-intelligence? In a way ‘yes,’ but probably no, not really.

Geoffrey Hinton has imagined an intelligence that exceeds human intelligence by as much as human intelligence exceeds a frog’s. This trope is common enough, and easy enough to cough up. But what does it mean? There is a sense in which a contemporary Harvard graduate is a superintelligence with respect to a well-educated person of the 17th century or, for that matter, ancient Greece or Rome. That Harvard graduate understands things which would be unintelligible to those earlier people – Mark Twain used this (kind of) disparity as a theme for a book, A Connecticut Yankee in King Arthur’s Court.

Note that, no matter how a frog is raised and ‘educated,’ it will never have the mental capacities of a human. But if you took an infant from the 17th century and time-traveled it into the contemporary world, it could become a Harvard graduate. When talking of a future computational superintelligence, which of these disparities (frog vs. human, 17th C. vs. now) are we talking about? Maybe both?

I fear this could go on and on.

Why’d Time print Yudkowsky’s essay?

I was shocked when Time Magazine published Eliezer Yudkowsky’s essay at the end of March. Why? Yudkowsky’s vision of AI Doom strikes me as being mostly a creature of science fiction rather than being a plausible hypothesis about the future of AI. I didn’t think Time was in the business of publishing science fiction.

So, I was wrong about something. I’ve got to revise some priors, as the Bayesians say. But which ones, and how? For the moment I see no reason to revise my thoughts about AI existential risk. Which implies that I was wrong about Time, which I regard as a bastion of middle-brow opinion.

But, really, just what does extinction-via-AI mean to that audience? Of course, people fear job loss; that’s been around since, like, forever. And now we’ve got crazy behavior from LLMs. My guess is that AI Doom is continuous with and an intensification of those fears. Nothing special.

Why doom, after all? (a search for the real?)

Why is AI doom such a compelling idea to some people, particularly those in high tech? What problem does it solve for them? Well, since it is an idea about the future, and pretty much the whole future at that, let’s posit that that’s the problem it solves: How do we think about the future? Posit the emergence of (a godlike) superintelligence that is so powerful that it can exert almost total control over human life. Further posit that, for whatever reason, it decides that humankind is expendable. However, if – and it’s a BIG IF – we can exert control over this superintelligence (‘alignment’), then WE ARE IN CONTROL of the future.

Problem solved.

But why not just assume that we will be able to control this thing, as Yann LeCun seems to? Because that doesn't give us much guidance about what to do. If we can control this thing, whatever it is, then the future is wide open. Anything is possible. That’s no way to focus our efforts. But if it IS out to get us, that gives a pretty strong focus for our activity.

Do I believe this? How would I know? I just made it up.

Confabulation & default mode

One well-known problem for LLMs is that they tend to make things up. Well, it’s struck me that confabulation is the default mode for human mentation as well. What passes through your mind when you aren’t doing anything in particular, when you’re daydreaming? Some of those fleeting thoughts may be anchored in reality, but many will not be; many will be confabulation of one kind or another.

Neuroscientists have been examining brain activation during that activity. They’ve identified a distinct configuration of brain structures that supports it. This has been called the default mode network.

Hmmmm....

What is REALITY anyhow?

Indeed. More and more, reality is seeming rather fluid. As David Hays and I remarked in “A Note on Why Natural Selection Leads to Complexity,” reality is not perceived, it is enacted – in a universe of great, perhaps unbounded, complexity.
