Chess Buddy

Inside.TechLabs
5 min read · Apr 14, 2022


This project was carried out as part of the TechLabs “Digital Shaper Program” in Münster (winter term 2021/22).

Abstract

Ever since 1997, when IBM’s Deep Blue won a chess match against the reigning world champion Garry Kasparov, the technology world has become increasingly convinced that artificial intelligence can and will outperform humans in almost any task. Until then, it was a common belief that computers could not imitate the intuition of a professional player. With time, this belief was proven wrong and many sophisticated methods have been developed. Back then, chess computers used highly specialized hardware and algorithms tailored specifically to chess. Today, deep learning provides a framework that can not only solve games like chess or Go but also perform image recognition and natural language processing. In this project, we use these frameworks, together with pygame and python-chess, to create a playable chess program, similar in spirit to IBM’s Deep Blue, in which a human competes against an algorithm.

Our Idea

Chess Buddy was planned as a game in which a human player competes against a selection of AI players. These AI players differ in how they choose which move to play from a given set of legal moves. Doing this effectively is a challenging problem: the number of possible board states exceeds the number of atoms in the universe. Consequently, no look-up table or similar method is viable for competitive play, and the computer needs to recognize certain (and possibly highly abstract) patterns. These patterns can either be predefined, e.g., the method is told that a queen is worth the same as nine pawns or that control over the center of the board is equivalent to having an additional pawn, or left undefined. In the latter, machine-learning case, the computer needs to evaluate chess games itself and learn general patterns of what makes a good board state. Our aim was to include both cases as two individual AI players the human can compete with.

Realizing this idea requires three components: a playable chess game interface, a chess engine, and the implementation of the algorithms for the AI players.

Chess board and engine

The first step in the project is naturally the creation of a chess game, including a graphical user interface (GUI) that the user can interact with. For this purpose, we used the two Python libraries python-chess and pygame. The former provides a backend that tracks the current board state and gives us a list of legal moves. The latter is used to generate the GUI and handle the human player’s input. The GUI can be seen in the image below, where the white player has currently selected their queen to make a move and the game displays the available moves in green.
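As a minimal sketch of this division of labor (illustrative only, not the project’s actual code), python-chess supplies the game logic that a pygame front end would draw on top of:

```python
# Minimal sketch: python-chess tracks the position and legal moves,
# while pygame would render the board and handle mouse input.
import chess

board = chess.Board()                    # standard starting position
legal_moves = list(board.legal_moves)    # moves python-chess allows here
print(f"{len(legal_moves)} legal moves, e.g. {legal_moves[0].uci()}")

board.push_san("e4")                     # apply a move in SAN notation
print(board.unicode())                   # quick text rendering of the board
```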

Chess AI

In the most general sense, any chess AI works by searching for the best possible move in the tree of board states. In this tree, the current board state is the root node and every possible move creates a branch, from which new subtrees are then created recursively for the following moves. Due to the large number of possible moves in each turn of a chess game, the full tree quickly explodes in complexity, which makes it necessary to prune paths that are unlikely to lead to the optimal outcome. Instead of looking at the full tree of all possible moves, we discard the subtrees of all but the n best moves in each layer of evaluation. This is demonstrated in the code snippet for make_AI1_move() below.

First, a list of the best moves is obtained from get_best_moves(), which takes all legal moves and rates the resulting board states via get_board_rating(). This list is then sorted by rating and only the n best ones are kept. The function make_AI1_move() then recursively calls itself for each of those board states until the set recursion depth is reached.
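A minimal sketch of this pruned tree search follows; the function names make_AI1_move(), get_best_moves(), and get_board_rating() come from the description above, while every other detail (python-chess boards, ratings from White’s point of view, the helper copy_and_push()) is our assumption:

```python
# Sketch of the pruned tree search described above; assumes python-chess
# boards and a get_board_rating() that scores from White's point of view.
import chess

def copy_and_push(board, move):
    """Return a copy of the board with the given move played."""
    child = board.copy()
    child.push(move)
    return child

def get_best_moves(board, n):
    """Rate every legal move via get_board_rating() and keep the n best."""
    rated = [(move, get_board_rating(copy_and_push(board, move)))
             for move in board.legal_moves]
    rated.sort(key=lambda pair: pair[1],
               reverse=(board.turn == chess.WHITE))  # White maximizes
    return rated[:n]

def make_AI1_move(board, depth, n):
    """Recursively expand only the n best moves per layer of the tree."""
    best_move, best_rating = None, None
    for move, rating in get_best_moves(board, n):
        child = copy_and_push(board, move)
        if depth > 1 and not child.is_game_over():
            # Look deeper: take the rating the opponent can force.
            _, rating = make_AI1_move(child, depth - 1, n)
        if best_rating is None or \
           (rating > best_rating) == (board.turn == chess.WHITE):
            best_move, best_rating = move, rating
    return best_move, best_rating
```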

This leaves the rating of a given board state as the last open question. Our first implementation uses the API of Stockfish, a state-of-the-art chess engine, to provide these local ratings.
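One way to wire this up is a get_board_rating() backed by Stockfish through python-chess’s UCI engine wrapper; a sketch, assuming a local Stockfish binary is installed:

```python
# Sketch: get_board_rating() backed by Stockfish via python-chess's
# UCI engine wrapper (assumes a local "stockfish" binary on the path).
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")

def get_board_rating(board):
    """Centipawn evaluation from White's point of view."""
    info = engine.analyse(board, chess.engine.Limit(depth=10))
    # Mates are mapped onto a large centipawn value so sorting still works.
    return info["score"].white().score(mate_score=100_000)
```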

Deep Learning

From this, a deep-learning-based player can be implemented straightforwardly by swapping only the method get_board_rating() for a machine-learning analogue.
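Conceptually, the swap looks like this (a sketch; model and the board_to_bitboard() encoder, outlined further below, are assumed helpers):

```python
import torch

def get_board_rating_nn(board):
    """Drop-in replacement: the trained network predicts the rating."""
    x = torch.from_numpy(board_to_bitboard(board))  # 773-bit input vector
    with torch.no_grad():
        return model(x).item()
```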

A neural network was implemented using the fastai library and trained on pre-evaluated chess positions from the chess-evaluations data set on Kaggle. This data set includes about 12 million evaluations by the Stockfish engine at search depth 20.

For the architecture of our neural network, we used a multi-layer fully connected network with a varying number of nodes: the input layer is the bitboard representation of the chess board, which always consists of 773 bits; this is reduced in the following layers to 600, then 400, 200, and 100 nodes, and finally to a single output node.
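The 773-bit size suggests the common encoding of 12 piece planes times 64 squares plus side to move and the four castling rights; a sketch under that assumption:

```python
# Sketch of a 773-bit bitboard encoding (assumed layout: 12 piece
# planes x 64 squares = 768 bits, then side to move + 4 castling rights).
import numpy as np
import chess

def board_to_bitboard(board):
    bits = np.zeros(773, dtype=np.float32)
    for square, piece in board.piece_map().items():
        plane = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        bits[plane * 64 + square] = 1.0
    bits[768] = float(board.turn == chess.WHITE)
    bits[769] = float(board.has_kingside_castling_rights(chess.WHITE))
    bits[770] = float(board.has_queenside_castling_rights(chess.WHITE))
    bits[771] = float(board.has_kingside_castling_rights(chess.BLACK))
    bits[772] = float(board.has_queenside_castling_rights(chess.BLACK))
    return bits
```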

The network is trained to reproduce the Stockfish board rating; ReLU was used as the activation function.
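For illustration, here is a minimal version of that architecture in plain PyTorch (fastai’s Learner can wrap such a module); the layer sizes come from the text, everything else is assumed:

```python
import torch.nn as nn

# Fully connected regression network: 773-bit bitboard in, rating out.
model = nn.Sequential(
    nn.Linear(773, 600), nn.ReLU(),
    nn.Linear(600, 400), nn.ReLU(),
    nn.Linear(400, 200), nn.ReLU(),
    nn.Linear(200, 100), nn.ReLU(),
    nn.Linear(100, 1),   # single node: predicted Stockfish-style rating
)
# Trained as regression against Stockfish evaluations, e.g. with MSE loss.
```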

Our results with this were of limited success. As it turns out, the Stockfish evaluation is very deep and encodes a multitude of hidden information. The neural network architecture used here is simply not sophisticated enough to capture these features, and a much more complex network would be required to learn how Stockfish evaluates positions.

Conclusions and outlook

Although not every desired feature was achieved, Chess Buddy was a success. In this project, we were able to disentangle the complex mechanics of ‘good chess’ and wrote an algorithm that reduces playing chess to a tree search. The AI player based on the Stockfish API works exceptionally well. The deep-learning-based player, however, was unable to mimic Stockfish.

In the future, the problem with deep learning may be resolved in two different ways. First, a much more complex network, trained on much more data, may be able to learn the sophisticated inner workings of Stockfish’s evaluation function. Such an undertaking would, however, also require substantial computational resources.

Second, the deep-learning player could be improved by including domain knowledge, i.e., by predefining which characteristics of the board state may be important, allowing simpler neural networks to succeed.

Our GitHub Repository

The team

Jens Bickmann: Deep Learning

Pit Duwentäster: Deep Learning

Roles inside the team

All participants contributed equally.

Mentor

Nils Schlüter
