It "wanted" ...

post by jmh · 2020-02-15T20:52:07.094Z · 3 comments

This is a question post.


I've seen a few posts using that construction when the "It" has no capacity to want at all. They have prompted a question about my own approach to thinking about things.

First, I am not critiquing the posts or otherwise suggesting a problem with them, though that could be inferred, so I want to put that disclaimer up front. I do think there is a place for the use of such a rhetorical device. I also think there is a place for expressing what might be incomplete thoughts. Moreover, I don't think the specific word is the concern either -- this is not about computers, or viruses, or even perhaps plants "wanting" something.

The question is: should we generally pause when we find ourselves using that type of rhetoric, and ask whether we are perhaps trying to work from an incorrect or seriously deficient map of the territory we're trying to traverse?

In other words, should we use that as a marker to come back to and try to express the conditions or functional structure more precisely, or at least to confirm it really was a harmless linguistic shortcut for getting the idea across (and that the idea is not dependent on the rhetoric)?

Answers

answer by David Gross (David_Gross) · 2020-02-15T23:57:25.037Z

This is perhaps tangential to your point, or pedantic, but "want" was not always merely synonymous with "desire" in English. "Desire" implies something with conscious awareness of a lack, wishing to have that lack redressed. "Want" can mean simply the lack, without consciousness of it: e.g. "this watch wants a seconds hand," or "as he wanted education, his wits were poor." This way of using the word seems to have been dropping off in recent decades, but may explain some of the examples you have seen.

comment by jmh · 2020-02-16T15:21:12.792Z

Very true, and I could be one of those who misses such a distinction at times (I think I have on this very site). However, the cases I was thinking of are not like that, and in such cases the quotes would never be used, unless one was actually quoting.

answer by Andrew Jacob Sauer · 2020-02-16T08:43:27.524Z

Perhaps in many cases, "X wants Y" means that X will do or bring about Y unless it is prevented by something external. In some cases X is an unconscious optimization procedure, which therefore "wants" the thing it is optimizing. In other cases X is the output of some optimization procedure, as in the case of a program that "wants" to complete its task or a microorganism that "wants" to reproduce. But optimization is not always involved, as illustrated by "high-pressure gas wants to expand".
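As a minimal sketch of that first reading, consider a hypothetical hill-climbing routine (the name and parameters here are purely illustrative, not taken from any of the posts in question): the procedure "wants" the peak of f only in the sense that it will keep moving toward it unless something external, here a step budget, stops it.

# A toy hill-climbing optimizer: it "wants" the maximum of f in the sense
# that it keeps moving toward it unless something external stops it.
def hill_climb(f, x, step=0.1, max_steps=1000):
    for _ in range(max_steps):          # external constraint: a step budget
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):             # local peak reached; the "want" is satisfied
            return x
        x = best
    return x                            # stopped by the step budget instead

f = lambda x: -(x - 3) ** 2             # f peaks at x = 3
print(hill_climb(f, x=0.0))             # ~3.0, unless max_steps prevents it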

comment by jmh · 2020-02-16T15:41:03.763Z

I get what you are saying, and as such it may well be harmless. However, it's a bit odd to say the light wants to stay on, so I have to toggle the switch to prevent it from staying on.

Yes, that is true, but in reality it is the electrons that "want" to flow to ground, and they will do so through the light bulb, producing the illumination as they flow. So if we don't know much about electricity, saying "the light wants" may lead to a lot of troubleshooting of the bulb when the circuit breaker has actually been thrown.

And that is part of my musing here: how often might we simplify, abstract, or rely on metaphor when we lack more specific knowledge?

3 comments


comment by Pattern · 2020-02-16T00:19:01.988Z

Can you give some examples?

comment by jmh · 2020-02-16T15:19:19.634Z

A recent post used the example of a self-driving car that "wants" (perhaps) to get the passengers to their destination safely.

In one case the "want" was clearly a metaphor; the author explicitly stated the car cannot want. However, there was a failure mode such that even if the car did want passenger safety, the car was going to fail. The problem was the training used.

Then there was the case where the car AI could have such a desire, but didn't. However, it did act that way for a while and then would fail (this seemed more of an alignment problem than an AI-type one, so...), since it was only an external constraint that forced the AI to comply with the imposed want rather than its own wants.

In the first case we need the scare quotes to indicate a non-factual working assumption. In the second case we don't need the quotes (even if the situation is still only theory).

Now, like I said, I'm not critiquing the OP and I liked reading it. However, in the first case we might say that framing the story as "the car wants" hides a bit. Yes, we still got to the view that the failure was introduced in the training, but we never got to why that might have occurred.

So what if we dig in deeper? Perhaps the designers were also thinking along the same metaphor, that the car "wants" (though it actually lacks that capacity), so they never actually wanted safety while designing the car, the AI, or the training -- they mostly assumed the car AI would accomplish that.

If we move the "want" to the human designers, scenario one starts looking a little more like scenario two, and we will find solutions not within the car per se but within the thinking and choices of the designers (or the organization where the designers work). But if we just accept that "want" was a useful rhetorical shortcut, do we ever start looking very far outside the car?

comment by jmh · 2020-02-16T15:44:46.126Z

Here is another: https://www.lesswrong.com/posts/u9Azdu6Z7zFAhd4rK/bayesian-evolving-to-extinction?commentId=h5esN3hwNo9EEpQjH

The comment there seems to be a response that reflects the question I'm raising here.