A chess game against GPT-4

post by Rafael Harth (sil-ver) · 2023-03-16T14:05:17.559Z · LW · GW · 23 comments

I just bought a subscription to access GPT-4 and played the following chess game against it, with me playing White. (No particular agenda; I was just curious how good it is.)

At this point (move 31), GPT-4 suggested Kxc4, which is not legal, and when I asked it to correct itself, it suggested Kd5 and Kb6, which aren't legal either (the only legal move here is Kc6).

Stuff I noticed:

(PGN: 1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. Nb5 $2 Na6 $9 7. e3 c6 $6 8. Nc3 Nc7 9. Rc1 $6 b6 10. Qb3 Ba6 11. Qa4 $6 Qd7 $4 12. Bxc7 $1 Qxc7 13. Qxa6 dxc4 14. Qxc4 Rac8 15. Bd3 c5 16. O-O cxd4 17. Qxc7 Rxc7 18. Nxd4 Rd8 19. Rfd1 e5 20. Nf5 Bb4 21. Ng3 Rcd7 22. Bb5 Rxd1+ 23. Rxd1 Rxd1+ 24. Nxd1 Kf8 25. Nc3 Ke7 26. a3 Bxc3 27. bxc3 Kd6 28. Kf1 Kc5 29. c4 a6 $6 30. Bxa6 Ne4 31. Nxe4+)
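
To double-check claims like the one at move 31, the PGN can be replayed mechanically. A minimal sketch, assuming the python-chess library:

import io
import chess.pgn

# The game from the post, verbatim (NAG tokens like $2 are valid PGN).
PGN = (
    "1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. Nb5 $2 Na6 $9 "
    "7. e3 c6 $6 8. Nc3 Nc7 9. Rc1 $6 b6 10. Qb3 Ba6 11. Qa4 $6 Qd7 $4 "
    "12. Bxc7 $1 Qxc7 13. Qxa6 dxc4 14. Qxc4 Rac8 15. Bd3 c5 16. O-O cxd4 "
    "17. Qxc7 Rxc7 18. Nxd4 Rd8 19. Rfd1 e5 20. Nf5 Bb4 21. Ng3 Rcd7 "
    "22. Bb5 Rxd1+ 23. Rxd1 Rxd1+ 24. Nxd1 Kf8 25. Nc3 Ke7 26. a3 Bxc3 "
    "27. bxc3 Kd6 28. Kf1 Kc5 29. c4 a6 $6 30. Bxa6 Ne4 31. Nxe4+"
)

game = chess.pgn.read_game(io.StringIO(PGN))
board = game.board()
for move in game.mainline_moves():
    board.push(move)

# After 31. Nxe4+ it is Black to move; per the post, Kc6 is the only legal reply.
print([board.san(m) for m in board.legal_moves])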

23 comments

comment by Matt Vogel (matt-vogel) · 2023-03-16T19:52:05.591Z · LW(p) · GW(p)

I'm very eager to see how it performs once we can use visual inputs and show it the board state visually after each move.

Replies from: yitz
comment by Yitz (yitz) · 2023-03-19T05:36:36.816Z · LW(p) · GW(p)

If I get early access to the visual model, I will definitely try this

comment by Erik Jenner (ejenner) · 2023-03-16T18:01:24.723Z · LW(p) · GW(p)

It explained both my moves and its own every time; those explanations went wrong earlier.

Note that at least for ChatGPT (3.5), telling it not to explain anything and only output moves apparently helps [LW(p) · GW(p)]. (It can play legal moves for longer that way.) So that might be worth trying if you want better performance. Of course, giving it the board state after each move could also help, but that might require trying a couple of different formats.
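
For illustration, a moves-only prompt in that spirit might look something like this (my own wording, not a tested prompt from the linked comment):

We are playing chess. Respond with only your next move in standard algebraic notation - no explanation, no commentary. You are Black.
1. d4 Nf6 2. c4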

Replies from: localdeity
comment by localdeity · 2023-03-16T20:28:17.179Z · LW(p) · GW(p)

To describe the current board state, something like this seems reasonable.

comment by Max Hager (max-hager) · 2023-03-31T09:44:26.992Z · LW(p) · GW(p)

I've created an interface for playing against LLM-powered chess agents. Here is the link: https://llmchess.org/

comment by Hailey Collet (hailey-collet) · 2023-03-20T23:53:23.835Z · LW(p) · GW(p)

I had it play hundreds of games against Stockfish, mostly at the lowest skill level, using the API. After a lot of experimentation, I settled on giving it a fresh slate every prompt. The prompt basically told it that it was playing chess, what color it was, and the PGN (it did not do as well with the FEN, or with both in either order). If it made an invalid move, I added a list of the invalid moves it had attempted to the next prompt(s) for that turn. After a few tries I had it forfeit the game.

I had a system set up to rate it, but it wasn't able to complete nearly enough games. As described, it finished maybe 1 in 40. I added a list of all legal moves on the second and third attempts for a turn. It was then able to complete about 1 in 10 and won about half of them. Counting the forfeits and calling this a legal strategy, that's something like a 550 rating, iirc? But it's MUCH worse in the late middlegame and endgame, even with the fresh slate every turn. Until that point - including well past any opening book it could possibly have "lossless in its database" (not how it works) - it plays much better, subjectively 1300-1400.
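
For the curious, here is a rough sketch of the kind of loop described above, assuming python-chess with a local Stockfish binary and the OpenAI chat-completions API that was current at the time; the prompt wording, retry limit, and Stockfish path are placeholders of mine, not the actual setup:

import chess
import chess.engine
import openai  # legacy 0.x SDK, current when this thread was written

MAX_TRIES = 3
engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")  # placeholder path
engine.configure({"Skill Level": 0})  # Stockfish's lowest skill level

board = chess.Board()
while not board.is_game_over():
    if board.turn == chess.WHITE:  # the model plays White in this sketch
        rejected = []
        for attempt in range(MAX_TRIES):
            # Fresh slate every prompt: no chat history, just the PGN so far.
            prompt = ("You are playing chess as White. Reply with only your "
                      "next move in SAN.\nGame so far: "
                      + (chess.Board().variation_san(board.move_stack) or "(game start)"))
            if rejected:
                prompt += "\nInvalid moves you already tried: " + ", ".join(rejected)
                # On retries, also list every legal move, as described above.
                prompt += "\nLegal moves: " + ", ".join(board.san(m) for m in board.legal_moves)
            reply = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                temperature=0,
                messages=[{"role": "user", "content": prompt}],
            )["choices"][0]["message"]["content"].strip()
            try:
                board.push_san(reply)  # raises ValueError on an illegal/unparseable move
                break
            except ValueError:
                rejected.append(reply)
        else:
            break  # model forfeits after too many invalid attempts
    else:
        board.push(engine.play(board, chess.engine.Limit(time=0.1)).move)

engine.quit()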

Replies from: aron-gohr
comment by GoteNoSente (aron-gohr) · 2023-03-26T11:15:50.365Z · LW(p) · GW(p)

That is odd. I certainly had a much, much higher completion rate than 1 in 40; in fact, I had no games that I had to abandon with my prompt. However, I played manually, and played well enough that it mostly did not survive beyond move 30 (although my collection has a blindfold game that went beyond move 50), and checked at every turn that it reproduced the game history correctly, reprompting if that was not the case. Also, for GPT-3.5 I supplied it with the narrative fiction that it could access Stockfish. Mentioning Stockfish might push it towards more precise play.

Trying again today, ChatGPT 3.5 using the standard chat interface did, however, seem to have a propensity to list only White's moves in its PGN output, which is not encouraging.

For exact reproducibility, I have added a game played via the API at temperature zero to my collection and given exact information on model, prompt, and temperature in the PGN:

https://lichess.org/study/ymmMxzbj/SyefzR3j

If your scripts allow testing this prompt, I'd be interested in seeing what completion rate / approximate rating relative to some low Stockfish level is achieved by gpt-3.5-turbo.

comment by wunan · 2023-03-16T19:29:11.317Z · LW(p) · GW(p)

Did you and GPT-4 only output the moves, or did you also output the board state after each turn?

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-03-16T20:24:14.314Z · LW(p) · GW(p)

Moves only

comment by Super AGI (super-agi) · 2023-05-09T19:42:00.219Z · LW(p) · GW(p)

Here's an example of how to play chess with GPT-4, using only text in https://chat.openai.com ...

Me: Can you play chess? If so, start a chess board here using:
- a letter for each player.. e.g. K = King, Q = Queen, P = Pawn, etc
- I will be capital letters, your players can be lower case letters

GPT4: As a text-based AI, I am not able to display a full chess board or play chess with you in real time. However, I can help you understand chess strategies, explain the rules, and discuss famous games. Let me know if you have any questions or need assistance with chess concepts.

Me: Can you draw a chess board from text? e.g. 
- a letter for each player.. e.g. K = King, Q = Queen, P = Pawn, etc
- I will be capital letters, your players can be lower case letters

GPT4: Sure! I can display the initial position of a chess board using text representation. Here is a standard chess board at the beginning of a game:

R N B Q K B N R
P P P P P P P P
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
p p p p p p p p
r n b q k b n r

Me: Ok, thanks I'll go first.  Move my pawn on e4 to d4.

GPT4: I apologize for any confusion, but it seems there is a mistake in your move description. The pawn on e4 is not yet on the board. The initial position of the pawns is on the second rank. To move a pawn to e4, you can move the pawn from e2 to e4. Here is the updated board after your move:

R N B Q K B N R
P P P P . P P P
. . . . . . . .
. . . . P . . .
. . . . . . . .
. . . . . . . .
p p p p p p p p
r n b q k b n r

Now it's my turn. I'll move my pawn from e7 to e5:

R N B Q K B N R
P P P P . P P P
. . . . . . . .
. . . . P . . .
. . . . . p . .
. . . . . . . .
p p p p . p p p
r n b q k b n r

Me: This is good. Can you add the place position labels to the outside of the board?

GPT4: Certainly! Here's the updated chessboard with rank and file labels after your move (e2 to e4) and my move (e7 to e5):

8 r n b q k b n r
7 p p p p . p p p
6 . . . . . . . .
5 . . . . . p . .
4 . . . . P . . .
3 . . . . . . . .
2 P P P P . P P P
1 R N B Q K B N R
  a b c d e f g h

Now it's your turn.

Me: Huh, you flipped the board so now I'm on the bottom?

GPT4: Apologies for the confusion. I misunderstood your original intent. Here's the corrected board with your pieces at the top and the rank and file labels:

8 r n b q k b n r
7 p p p p . p p p
6 . . . . . . . .
5 . . . . . p . .
4 . . . . P . . .
3 . . . . . . . .
2 P P P P . P P P
1 R N B Q K B N R
  a b c d e f g h

Now it's your turn.

Me: Ok my pawn on e4 takes your pawn on f5.

GPT4: After your pawn on e4 captures my pawn on f5, the board looks like this:

8 r n b q k b n r
7 p p p p . p p p
6 . . . . . . . .
5 . . . . . P . .
4 . . . . . . . .
3 . . . . . . . .
2 P P P P . P P P
1 R N B Q K B N R
  a b c d e f g h
  
Now it's my turn. I'll move my knight from g8 to f6:

8 r n b q k b . r
7 p p p p . p p p
6 . . . . . n . .
5 . . . . . P . .
4 . . . . . . . .
3 . . . . . . . .
2 P P P P . P P P
1 R N B Q K B N R
  a b c d e f g h
  
Me: Ok, move my pawn on f2 to f4

etc, etc...
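
Incidentally, the text diagrams above match the plain-text rendering used by the python-chess library (lowercase for Black on top, uppercase for White on the bottom, dots for empty squares). So a minimal scaffold - my sketch, not something from this thread - could print a guaranteed-correct board after every move instead of relying on the model to track the position:

import chess

board = chess.Board()
board.push_san("e4")  # White's opening move from the transcript above
print(board)          # prints the same dotted grid, Black (lowercase) on top
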
Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-05-17T07:41:42.006Z · LW(p) · GW(p)

does it play better / make legal moves for longer this way?

comment by reallyeli · 2023-03-19T07:49:33.069Z · LW(p) · GW(p)

Caleb Parikh and I were curious about GPT-4's internal models of chess as a result of this post, so we asked it some questions about the state partway through this game:

The following is a partial chess transcript in PGN. Please describe all legal moves that could come next. 1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. Nb5 $2 Na6 $9

It replied:

The current position after 1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. Nb5 Na6 is as follows:

8 r b q r p b p
7 p p b n p p p
6 n . . p . n .
5 . N . . p . .
4 . . P P . B .
3 . . . . N . .
2 P P . . . P P
1 R . B Q K . R
 a b c d e f g h

(and explained that lowercase letters were black pieces and uppercase letters were white pieces, which I didn't know; I don't play chess).

This... is not an accurate picture of the game board (what are all those pawns doing on Black's back row?). We also asked it for a list of legal moves that White could make next, and it described some accurately but some inaccurately (e.g., it listed one as a capture even though it wasn't).
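
(For comparison, the true position and the exhaustive list of legal moves after 6... Na6 are easy to compute; the snippet below is my own sketch using python-chess, not part of the experiment described above.)

import io
import chess.pgn

PREFIX = "1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. Nb5 $2 Na6 $9"
game = chess.pgn.read_game(io.StringIO(PREFIX))
board = game.board()
for move in game.mainline_moves():
    board.push(move)

print(board)  # the real eight-column board, White to move
print(sorted(board.san(m) for m in board.legal_moves))  # exhaustive legal-move list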

Replies from: sil-ver, reallyeli
comment by Rafael Harth (sil-ver) · 2023-03-19T08:04:41.374Z · LW(p) · GW(p)

This is pretty funny because the supposed board state has only 7 columns. Yet it's also much better than random. A lot of the pieces are correct... that is, if you count from the left (real board state is here).

Also, I've never heard of using upper and lowercase to differentiate white and black, I think GPT-4 just made that up. (edit: or not; see reply.)

Extra twist: I just asked a new GPT-4 instance whether any chess notation differentiates lower and upper case, and it told me algebraic notation does, but that's the standard notation, and it doesn't. The Wikipedia article also says nothing about it. Very odd.

Replies from: lechmazur, reallyeli
comment by Lech Mazur (lechmazur) · 2023-03-19T10:30:11.943Z · LW(p) · GW(p)

Also, I've never heard of using upper and lowercase to differentiate white and black, I think GPT-4 just made that up.

No, this is common. E.g. https://github.com/niklasf/python-chess

comment by reallyeli · 2023-03-19T08:14:06.083Z · LW(p) · GW(p)

This is pretty funny because the supposed board state has only 7 columns

Hah, I didn't even notice that.

Also, I've never heard of using upper and lowercase to differentiate white and black, I think GPT-4 just made that up.

XD

comment by reallyeli · 2023-03-19T08:24:45.194Z · LW(p) · GW(p)

On a retry, it didn't decide to summarize the board, and it successfully listed a bunch of legal moves for White. Although I asked for all legal moves, the list wasn't exhaustive; upon prompting about this, it apologized and listed a few more moves, some of which were legal and some of which were illegal, still not exhaustive.

comment by Mitchell_Porter · 2023-03-16T14:59:54.970Z · LW(p) · GW(p)

it still started playing illegal moves eventually

I was just talking with Bing about how quickly transformer AI might surpass human intelligence, and it was a sensible conversation until it hallucinated a nonexistent study in which GPT-4 was tested on 100 scenarios and dilemmas and performed badly. 

What these interactions have in common is that GPT-4 does well for a while, then goes off the rails. It makes me curious about the probability of going wrong - is there a constant risk per unit time, or does the risk per unit time actually increase with the length of the interaction, and if so, why?

Replies from: martin-kristiansen
comment by ws27b (martin-kristiansen) · 2023-03-16T16:44:04.673Z · LW(p) · GW(p)

The probability of going wrong increases as the novelty of the situation increases. As a chess game is played, the probability that the game is completely novel - literally never played before - increases. Even more so at the amateur level. If a grandmaster played GPT-3/4, it would go much longer without going off the rails, simply because the first twenty-something moves have likely been played many times before and been directly trained on.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-03-16T20:28:16.990Z · LW(p) · GW(p)

Right, though 20 moves until a new game is very rare afaik (assuming the regular way of counting, where 1 move means one from both sides). But 15 is commonplace. According to chess.com (which I think only includes top games, though I'm not sure), this one was new from move 6 by White.

comment by alexanderherbertkurz · 2023-03-21T01:04:37.363Z · LW(p) · GW(p)

How did you prompt GPT-4?

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2023-03-21T07:57:07.434Z · LW(p) · GW(p)

If you mean how I accessed it at all, I used the official channel from OpenAI: https://chat.openai.com/chat

If you have a premium account ($20/month), you can switch to GPT-4 after starting a new chat.

comment by GoteNoSente (aron-gohr) · 2023-03-17T03:59:10.220Z · LW(p) · GW(p)

I think with the right prompting, it is around 1400 Elo, at least against strong opponents. Note, however, that this is based on a small sample; on the flip side, all my test games (against myself and three relatively weak computer opponents, the strongest of which was at fairly strong club-player level) are in a lichess study linked from here:

https://www.lesswrong.com/posts/pckLdSgYWJ38NBFf8/gpt-4?commentId=TaaAtoM4ahkfc37dR [LW(p) · GW(p)]

The prompting used is heavily inspired by Bucky's comments from the Sydney-and-chess thread. I haven't optimised it for GPT-4 in any way.

I also tested whether GPT-4 can play a game taking queen odds against an opponent that is strong compared to most humans (Leela Chess Zero at a few nodes per move). It can: GPT-4 won. However, I haven't documented that game.

It is much weaker at commenting than at playing under these conditions. However, it does know when its position is very bad, as I have seen it resign at a late but reasonable point when I worked the possibility of resigning into the setup prompt.

comment by Yair Halberstadt (yair-halberstadt) · 2023-03-16T16:19:57.621Z · LW(p) · GW(p)

I wonder, if you were to take GPT-4 and train it in self-play, how good it would get, and how quickly...