Whence Your Abstractions?
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-20T01:07:46.000Z
Reply to: Abstraction, Not Analogy
Robin asks:
Eliezer, have I completely failed to communicate here? You have previously said nothing is similar enough to this new event for analogy to be useful, so all we have is "causal modeling" (though you haven't explained what you mean by this in this context). This post is a reply saying, no, there are more ways of using abstractions; analogy and causal modeling are two particular ways to reason via abstractions, but there are many other ways.
Well... it shouldn't be surprising if you've communicated less than you thought. Two people, both of whom know that disagreement is not allowed, have a persistent disagreement. It doesn't excuse anything, but - wouldn't it be more surprising if their disagreement rested on intuitions that were easy to convey in words, and points readily dragged into the light?
I didn't think from the beginning that I was succeeding in communicating. Analogizing Doug Engelbart's mouse to a self-improving AI is for me such a flabbergasting notion - indicating such completely different ways of thinking about the problem - that I am trying to step back and find the differing sources of our differing intuitions.
(Is that such an odd thing to do, if we're really following down the path of not agreeing to disagree?)
"Abstraction", for me, is a word that means a partitioning of possibility - a boundary around possible things, events, patterns. They are in no sense neutral; they act as signposts saying "lump these things together for predictive purposes". To use the word "singularity" as ranging over human brains, farming, industry, and self-improving AI, is very nearly to finish your thesis right there.
I wouldn't be surprised to find that, in a real AI, 80% of the actual computing crunch goes into drawing the right boundaries to make the actual reasoning possible. The question "Where do abstractions come from?" cannot be taken for granted.
Boundaries are drawn by appealing to other boundaries. To draw the boundary "human" around things that wear clothes and speak language and have a certain shape, you must have previously noticed the boundaries around clothing and language. And your visual cortex already has a (damned sophisticated) system for categorizing visual scenes into shapes, and the shapes into categories.
It's very much worth distinguishing between boundaries drawn by noticing a set of similarities, and boundaries drawn by reasoning about causal interactions.
There's a big difference between saying "I predict that Socrates, like other humans I've observed, will fall into the class of 'things that die when drinking hemlock'" and saying "I predict that Socrates, whose biochemistry I've observed to have this-and-such characteristics, will have his neuromuscular junction disrupted by the coniine in the hemlock - even though I've never seen that happen, I've seen lots of organic molecules and I know how they behave."
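(If you like, here is a minimal sketch in Python of those two kinds of prediction. The data and the "mechanism" are toy stand-ins, invented purely for illustration, not a claim about how any real reasoner works.)

```python
# Toy contrast between two ways of predicting "Socrates drinks hemlock".
# All names and "mechanisms" here are made-up stand-ins for illustration.

# 1. Similarity-based boundary: lump Socrates in with previously observed
#    humans who drank hemlock, and echo whatever happened to them.
observed_humans = [
    {"name": "Meletus", "drank_hemlock": True,  "died": True},
    {"name": "Crito",   "drank_hemlock": False, "died": False},
]

def predict_by_similarity(case, observations):
    similar = [o for o in observations
               if o["drank_hemlock"] == case["drank_hemlock"]]
    return any(o["died"] for o in similar)

# 2. Causal boundary: reason from a (toy) mechanism - coniine blocks the
#    neuromuscular junction, and an organism whose neuromuscular junctions
#    are blocked stops breathing - with no prior hemlock observations at all.
def predict_by_mechanism(case):
    coniine_in_blood = case["drank_hemlock"]
    junction_blocked = coniine_in_blood      # toy causal link
    breathing_stops = junction_blocked       # toy causal link
    return breathing_stops

socrates = {"name": "Socrates", "drank_hemlock": True}
print(predict_by_similarity(socrates, observed_humans))  # True - "others like him died"
print(predict_by_mechanism(socrates))                    # True - "the mechanism kills him"
```

The first function can only echo a boundary it was handed; the second draws the boundary itself, from the causal story.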
But above all - ask where the abstraction comes from!
To see that a hammer is not good to hold high in a lightning storm, we draw on pre-existing knowledge that you're not supposed to hold electrically conductive things at high altitudes - this is a pre-drawn boundary, found by us in books; probably originally learned from experience and then further explained by theory. We just test the hammer to see if it fits in a pre-existing boundary, that is, a boundary we drew before we ever thought about the hammer.
To evaluate the cost of carrying a hammer in a tool kit, you probably visualized the process of putting the hammer in the kit, and the process of carrying it. Its mass determines the strain on your arm muscles. Its volume and shape - not just "volume", as you can see as soon as that is pointed out - determine the difficulty of fitting it into the kit. You said "volume and mass" but that was an approximation, and as soon as I say "volume and mass and shape" you say, "Oh, of course that's what I meant" - based on a causal visualization of trying to fit some weirdly shaped object into a toolkit, or, e.g., a thin ten-foot pin of low volume and high annoyance. So you're redrawing the boundary based on a causal visualization which shows that other characteristics can be relevant to the consequence you care about.
None of your examples involves drawing new conclusions about the hammer by analogizing it to other things rather than directly assessing its characteristics in their own right, so the hammer isn't all that good an example when it comes to making predictions about self-improving AI by putting it into a group of similar things that includes farming or industry.
But drawing that particular boundary would already rest on causal reasoning that tells you which abstraction to use. Very much an Inside View, and a Weak Inside View, even if you try to go with an Outside View after that.
Using an "abstraction" that covers such massively different things, will often be met by a differing intuition that makes a different abstraction, based on a different causal visualization behind the scenes. That's what you want to drag into the light - not just say, "Well, I expect this Singularity to resemble past Singularities."
Robin said:
I am of course open to different ways to conceive of "the previous major singularities". I have previously tried to conceive of them in terms of sudden growth speedups.
Is that the root source for your abstraction - "things that do sudden growth speedups"? I mean... is that really what you want to go with here?
6 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Robin_Hanson2 · 2008-11-20T03:07:45.000Z
Everything is new to us at some point; we are always trying to make sense of new things by using the abstractions we have collected from trying to understand all the old things.
We are always trying to use our best abstractions to directly assess new things' characteristics in their own right. Even when we use analogies, that is the goal.
I said the abstractions I rely on most here come from the economic growth literature. They are not just some arbitrary list of prior events.
comment by derekz2 · 2008-11-20T03:23:16.000Z
This public exercise, where two smart people hunt for the roots of their disagreement on a complex issue, has the potential to be the coolest thing I've seen yet on this blog. I wonder if it's actually possible -- there are so many ways that it could go wrong, humans being what they are. I really appreciate the two of you giving it a try.
Not that you asked for people to take shots from the cheap seats, but Eliezer, your ending question: "I mean... is that really what you want to go with here?" comes across (to me at least) as an unnecessarily belligerent way to proceed, likely to lead the process in unfortunate directions. It's not an argument to win, with points to score.
comment by Aron · 2008-11-20T04:31:14.000Z
The Socrates paragraph stands out to me. It doesn't seem sporting to downplay one approach in comparison to another by creating two scenarios, with one being what a five-year-old might say and the other being what a college grad (or someone smart enough to go to college) might say. Can that point be illustrated without giving such an unbalanced appearance?
The problem of course (to the discussion and to the above example) is: how much do you think you know about the underlying mechanics of what you are analyzing?
comment by Robin_Hanson2 · 2008-11-20T15:10:29.000Z
To elaborate, as I understand it a distinctive feature of your scenario is a sudden growth speedup, due to an expanded growth feedback channel. This is the growth of an overall capability of a total, mostly autonomous system whose capacity is mainly determined by its "knowledge", broadly understood. The economic growth literature has many useful abstractions for understanding such scenarios. These abstractions have been vetted over decades by thousands of researchers trying to use them to understand other systems "like" this, at least in terms of these abstractions.
comment by r.s · 2008-11-21T05:57:25.000Z
Let's stop using hammers and Socrates and talk about what real tools you both are using to harness your intuitions. Whence the confidence, Eliezer? Whence the doubt, Robin?
I get the feeling you are getting closer in the last comment, Robin, but I still can't get through that dense block of text to get a feel for what you're getting at.
Kudos to both of you for standing honorably by such a heated and potentially enlightening discussion.