Posts

Rationalism and social rationalism 2023-03-10T23:20:54.513Z
Republishing an old essay in light of current news on Bing's AI: "Regarding Blake Lemoine's claim that LaMDA is 'sentient', he might be right (sorta), but perhaps not for the reasons he thinks" 2023-02-17T03:27:19.387Z
ChatGPT understands language 2023-01-27T07:14:42.790Z
The AI Control Problem in a wider intellectual context 2023-01-13T00:28:16.410Z
Verbal parity: What is it and how to measure it? + an edited version of "Against John Searle, Gary Marcus, the Chinese Room thought experiment and its world" 2022-12-31T03:46:59.867Z
Language models are nearly AGIs but we don't notice it because we keep shifting the bar 2022-12-30T05:15:15.625Z
Against John Searle, Gary Marcus, the Chinese Room thought experiment and its world 2022-12-29T03:26:12.485Z
Regarding Blake Lemoine's claim that LaMDA is 'sentient', he might be right (sorta), but perhaps not for the reasons he thinks 2022-12-28T01:55:40.565Z
Recent advances in Natural Language Processing—Some Woolly speculations (2019 essay on semantics and language models) 2022-12-27T02:11:36.960Z
Theodicy and the simulation hypothesis, or: The problem of simulator evil 2022-12-26T18:55:15.872Z

Comments

Comment by philosophybear on Six economics misconceptions of mine which I've resolved over the last few years · 2023-03-18T09:53:24.083Z · LW · GW

The monopsony approach to the labor market says monopsonies are the rule, not the exception: a company doesn't actually have to be the only buyer of labor power in its region to hold monopsony power.
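To make that concrete, here is a minimal sketch of textbook monopsony wage-setting; the labor supply curve and the marginal revenue product figure below are invented purely for illustration:

```python
# Minimal sketch of textbook monopsony wage-setting (illustrative numbers only).
# A firm facing an upward-sloping labor supply curve w(L) = a + b*L pays a wage
# below the competitive level even though it isn't the sole employer.

a, b = 10.0, 0.5   # hypothetical labor supply: wage needed to attract L workers
mrp = 30.0         # hypothetical marginal revenue product of labor (constant)

# Competitive benchmark: hire until the wage equals the MRP.
L_comp = (mrp - a) / b
w_comp = a + b * L_comp            # equals mrp

# Monopsonist: hire until the marginal cost of labor equals the MRP.
# Total cost = w(L)*L = a*L + b*L**2, so marginal cost = a + 2*b*L.
L_mono = (mrp - a) / (2 * b)
w_mono = a + b * L_mono            # wage actually paid: below MRP

print(f"competitive: L={L_comp:.0f}, w={w_comp:.0f}")  # L=40, w=30
print(f"monopsony:   L={L_mono:.0f}, w={w_mono:.0f}")  # L=20, w=20
```

The wedge between the wage and the marginal revenue product comes from nothing more than the upward-sloping supply curve the firm faces, which is why monopsony power doesn't require being literally the only buyer.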

Comment by philosophybear on Rationalism and social rationalism · 2023-03-17T12:41:26.361Z · LW · GW

I added this to the blog post to explain why I don't think your objection goes through:

"[Edit: To respond to an objection that was made on another forum to this blog- advocate for in the context of this section does not necessarily mean the claim is true. If the public thinks the likelihood of X is 1%, and your own assessment, not factoring in the weight of others’ judgments, is 30%, you shouldn’t lie and say you think it’s true. Advocacy just means making a case for it, which doesn’t require lying about your own probability assessment.]"

Comment by philosophybear on The idea that ChatGPT is simply “predicting” the next word is, at best, misleading · 2023-02-21T04:17:22.694Z · LW · GW

Here's an analogy. AlphaGo had a value network which assessed the value of any given board position. It was separate from its Monte Carlo tree search, which explicitly planned into the future. However, it seems probable that, in some sense, in assessing the value of the board, AlphaGo was implicitly evaluating the future possibilities of the position. Is that the kind of evaluation you're suggesting is happening? "Explicitly" ChatGPT only looks one word ahead, but "implicitly" it is considering those options in light of the future directions in which the text could develop?
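For concreteness, here is a toy sketch of that contrast; `next_probs` is a made-up stand-in for a model's next-token distribution, not anything from ChatGPT itself:

```python
# Toy contrast between one-step decoding and explicit lookahead.
# Greedy decoding commits to one token at a time; beam search explicitly
# scores multi-token futures. The question is whether a model asked only
# for one token can encode such futures *implicitly* in its probabilities.

import heapq
import math

def next_probs(prefix):
    # Hypothetical stand-in for model(prefix) -> {token: probability}.
    table = {
        (): {"the": 0.6, "a": 0.4},
        ("the",): {"cat": 0.5, "dog": 0.5},
        ("a",): {"storm": 0.9, "cat": 0.1},
    }
    return table.get(tuple(prefix), {"<eos>": 1.0})

def greedy(steps=2):
    # Commit to the single most likely next token at every step.
    prefix = []
    for _ in range(steps):
        probs = next_probs(prefix)
        prefix.append(max(probs, key=probs.get))
    return prefix

def beam_search(steps=2, width=2):
    # Explicitly score multi-token continuations; keep the best `width`.
    beams = [(0.0, [])]  # (cumulative negative log-probability, prefix)
    for _ in range(steps):
        candidates = [
            (score - math.log(p), prefix + [tok])
            for score, prefix in beams
            for tok, p in next_probs(prefix).items()
        ]
        beams = heapq.nsmallest(width, candidates)
    return beams[0][1]

print(greedy())       # ['the', 'cat']: locally best first word
print(beam_search())  # ['a', 'storm']: lookahead prefers a different start
```

Here the greedy decoder picks the locally best first word, while the lookahead search finds that the best two-word sequence begins differently; the analogy is that a well-trained single-step predictor might fold that kind of lookahead into its one-word probabilities, much as AlphaGo's value network may implicitly encode the futures its search would have explored.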

Comment by philosophybear on The AI Control Problem in a wider intellectual context · 2023-01-13T04:20:03.433Z · LW · GW

Thank you, I will have a read. At first glance, this reminds me of the phenomenon of reference magnetism, often discussed in the philosophy of language. I suspect a good account of natural abstractions will involve the concept of reference magnetism in some way, although teasing out the exact relationship between the two concepts might take a while.

Comment by philosophybear on Theodicy and the simulation hypothesis, or: The problem of simulator evil · 2022-12-28T00:49:53.561Z · LW · GW

I see your point now, but I think this just reflects the current state of our knowledge. We haven't yet grasped that we are implicitly creating, if not minds, then things a bit mind-like, every time we order an artificial intelligence to play a particular character.

When this knowledge becomes widespread, we'll have to confront the reality of what we do every time we hit run. And then we'll be back at the problem of theodicy, the God being the being that presses play, and the question being: is pressing play consistent with their being good people?* If I ask GPT-3 to tell a story about Elon Musk, is that compatible with my being a good person?

* (in the case of GPT-3, probably yes, because the models created are so simple as to lack ethical status, so pressing play doesn't reflect poorly on the simulation requester. For more sophisticated models, the problem gets thornier.)

Comment by philosophybear on Theodicy and the simulation hypothesis, or: The problem of simulator evil · 2022-12-27T00:23:08.534Z · LW · GW

Certainly, it is possible, but I see little to guarantee our descendants won't create simulations that are like the world we live in now.

  1. Our descendants may well not regard sims as having the same rights as persons.
  2. Even if they do, if even a small number of rogue beings (or nations, etc.) conducted such simulations, unethical as they may be, it is possible that simulated people would soon outnumber real people, especially at critical junctures in history (e.g., right before the discovery of AGI); a toy calculation follows below.
  3. The essay gives at least two ethical reasons which, in my view at least, may offer enough good to outweigh the suffering, such that even a person who cared deeply about sims might still sanction the existence of a world in which they suffer to achieve those aims.

So, given those factors, we may be in a simulation, and given that, I think an interesting question is: "Is our being in a simulation compatible with our simulators being good people?"
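For what it's worth, point 2 can be made quantitative with a toy calculation; the figures below are invented purely for illustration, and the indifference reasoning is only a rough stand-in for the full simulation argument:

```python
# Back-of-the-envelope for point 2 above; every number is invented purely
# for illustration. Suppose each simulation recreates the whole critical
# era; a naive indifference principle over copies of the era then gives:

rogue_actors = 100            # hypothetical rogue individuals or nations
sims_per_actor = 1_000_000    # hypothetical runs each performs of the era

n_copies = rogue_actors * sims_per_actor   # simulated copies of the era
p_sim = n_copies / (n_copies + 1)          # one real era among all copies
print(f"P(this era is simulated) ~ {p_sim:.8f}")  # ~0.99999999
```

On this naive accounting the odds favor simulation whenever more than one copy of the era is ever run, which is why even a small number of rogue simulators would be enough.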

Comment by philosophybear on Theodicy and the simulation hypothesis, or: The problem of simulator evil · 2022-12-27T00:16:41.604Z · LW · GW

I have to disagree here. I strongly suspect that GPT, when it, say, pretends to be a certain character, is running a rough-and-ready approximate simulation of that character's mental state and its interacting components (various beliefs, desires, etc.). I have previously discussed this in an essay, which I will be posting soon.