A Reddit user recently pitted the chess engine Stockfish, which has won the Top Chess Engine Championship eight times, against the infamous AI-powered conversational bot ChatGPT in a game of chess. Unfortunately, while Stockfish held its own in the game it was created for, ChatGPT succumbed to the high-stakes environment of chess and went on a cheating spree before losing.
Or, more specifically, as the original Reddit poster u/megamaz_ explained in a thread: “It just doesn’t have enough context to the game of chess to know the state of the board and understand the moves it’s making. In other words, it doesn’t know how to play.”
This is excusable, as ChatGPT is a language model trained by OpenAI. It wasn’t born to play chess; it was designed more broadly to respond to prompts and answer questions.
Although it has become a popular example of how artificial intelligence could soon open our skulls and eat our brains, ChatGPT is, in some contexts, even more ridiculously fallible than a human. Even OpenAI admits on its website that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers,” is “often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI,” and will “sometimes respond to harmful instructions or exhibit biased behavior.” It has never claimed to be perfect. It doesn’t know everything. So when megamaz_ invited it to a game of chess, it had to get creative.
“If you see pieces appearing out of nowhere, that’s because that’s what ChatGPT literally said it would play,” megamaz_ said.
“Are you sure you want to do this?” megamaz_ asked ChatGPT at one point during their game. “Rg8 takes your own king.”
“Oops, looks like I made a mistake,” ChatGPT replied humbly. “My apologies!” Well, at least it isn’t a sore loser.