- Overview
- Software Installation
- Data Collection and Model Training
- Usage of Main Program
- Technical Details
- Acknowledgements
- Contact
## Overview

Lobsterpincer Spectator (named after the "Lobster Pincer mate") is a chessboard processor that gives players feedback in real time. There are three versions of the Lobsterpincer Spectator: the Windows standalone version, the Raspberry Pi standalone version, and the combined Windows and Raspberry Pi version. This repository contains the Windows standalone version, which is the most compact of the three in terms of hardware (no hardware configuration is required) and has the following features:
- register each move in less than 6 seconds with manual chessboard detection (on an Intel Core i5-8250U)
- register each move in less than 8 seconds with automatic chessboard detection (on an Intel Core i5-8250U)
- alert the players (via speaker) at critical moments of the game
- inform the players (via OpenCV window) of the evaluation of the current position
- show the players (via OpenCV window) the move played in the previous position
## Software Installation

The only dependencies of "ChessPieceModelTraining" are `numpy` and `Pillow`, which are automatically installed during the installation procedure for "LobsterpincerSpectatorForWin" presented below.
The installation procedure (for "LobsterpincerSpectatorForWin") below has been tested and is fully functional on Windows 11.
First, install Python 3.11 from the Microsoft Store.
Then make sure your `pip` is up to date by running the following command in Windows PowerShell:

```
pip install --upgrade pip
```
If you see a warning about some directory not being on PATH, add that directory to the PATH environment variable and restart the computer to resolve it.
In order to successfully install `tensorflow`, you need to first enable long paths. To do so, open another PowerShell window as administrator and run the following command:

```
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force
```
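If you want to confirm that the setting took effect, the registry value can be read back with Python's standard `winreg` module (a minimal sketch, not part of the repository):

```python
import winreg

# Read HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled
with winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
) as key:
    value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")

print("Long paths enabled" if value == 1 else "Long paths NOT enabled")
```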
Now you can install all the relevant packages by running the following commands in Windows PowerShell:
```
pip install numpy
pip install opencv-python
pip install chess
pip install scipy
pip install pygame
pip install Pillow
pip install tensorflow
pip install onnxruntime
pip install matplotlib
pip install pyclipper
pip install scikit-learn
```
(Alternatively, you may run `pip install -r requirements.txt` to install all the relevant packages.)
Finally, in order to successfully import `tensorflow`, you also need to install a Microsoft Visual C++ Redistributable package from Microsoft's download page. Since Windows 11 is 64-bit only, you can simply download and install the X64 version.
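Once everything is installed, a quick way to sanity-check the environment is to import the key packages and print their versions (a minimal check script, not part of the repository):

```python
# Quick sanity check: import the main dependencies and print their versions.
import cv2
import chess
import numpy
import onnxruntime
import tensorflow

for name, module in [
    ("numpy", numpy),
    ("opencv-python", cv2),
    ("chess", chess),
    ("onnxruntime", onnxruntime),
    ("tensorflow", tensorflow),
]:
    print(f"{name}: {module.__version__}")
```

If all five imports succeed (the `tensorflow` import is the one most likely to fail if long paths or the Visual C++ Redistributable are missing), the installation is complete.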
The "SqueezeNet1p1_all_last.onnx" chess-piece model (in "LobsterpincerSpectatorForWin/livechess2fen/selected_models") provided in this repository was obtained by transfer learning based on 508 images of this specific chessboard under various lighting conditions. If you have a different chessboard, you should follow the procedure below to collect your own data and obtain your own model.
First, collect labeled image data using "capture_and_label_img.py" (in "LobsterpincerSpectatorForWin/lpspectator"):
1. You will need an app on your phone that turns your phone into an IP camera. For Android, you can use IP Webcam. Make sure your phone and the computer (that will run "capture_and_label_img.py") are on the same Wi-Fi network, open the app, and edit the `IMAGE_SOURCE` variable in "capture_and_label_img.py" accordingly (see the sketch after this list). You will also need some kind of physical structure (such as a phone holder) to hold the phone.

2. Paste the PGN of the game to be played (during data collection) into "game_to_be_played.pgn" (in "LobsterpincerSpectatorForWin").

3. Run "capture_and_label_img.py" from the "LobsterpincerSpectatorForWin" directory (NOT from the "LobsterpincerSpectatorForWin/lpspectator" directory) to collect image data.

4. Cut everything in the "Captured Images" folder (in "LobsterpincerSpectatorForWin") and paste it into a subfolder of "ChessPieceModelTraining/BoardSlicer/images/chessboards" (NOT directly into "ChessPieceModelTraining/BoardSlicer/images/chessboards").

5. Repeat steps 2-4 until you have a sufficient number (e.g., hundreds) of labeled images under various lighting conditions.
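For reference, grabbing frames from a phone IP camera with OpenCV looks roughly like this (a minimal sketch; the URL is a made-up example — IP Webcam shows the actual address in the app, typically with a `/video` suffix for the stream):

```python
import cv2

# Hypothetical IP-camera URL; replace it with the address your app displays.
IMAGE_SOURCE = "http://192.168.1.42:8080/video"

cap = cv2.VideoCapture(IMAGE_SOURCE)
if not cap.isOpened():
    raise RuntimeError("Could not connect to the IP camera")

ret, frame = cap.read()  # grab a single frame
if ret:
    cv2.imwrite("test_frame.jpg", frame)  # save it to verify the connection
cap.release()
```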
Next, process the data and obtain the trained model as follows:
1. Run "board_slicer.py" and copy (or cut) all the output in the "ChessPieceModelTraining/BoardSlicer/images/tiles" folder into the "ChessPieceModelTraining/DataSplitter/data/full" folder.

2. Run "data_splitter.py" to randomize and split the data. The next two steps are optional (but somewhat recommended):

   - Delete the "ChessPieceModelTraining/DataSplitter/data/full" folder (to reduce the size of the "ChessPieceModelTraining/DataSplitter/data" folder and thus reduce the time it takes to upload the data to Google Colab later).

   - Discard a significant amount of the empty-square data in "ChessPieceModelTraining/DataSplitter/data/train/_" and "ChessPieceModelTraining/DataSplitter/data/validation/_" (such that, for example, the amount of remaining empty-square data is comparable to that of the white-pawn or black-pawn data); a sketch of this balancing step appears after this list.

3. Compress the "ChessPieceModelTraining/DataSplitter/data" folder into a "data.zip" ZIP file (in the "ChessPieceModelTraining/DataSplitter" folder).

4. Open "SqueezeNet1p1_model_training.ipynb" (in "ChessPieceModelTraining/ModelTrainer") with Google Colab, enable GPU on Google Colab, and upload the "data.zip" (in "ChessPieceModelTraining/DataSplitter") and "models.zip" (in "ChessPieceModelTraining/ModelTrainer") files to Google Colab.

5. Run the entire "SqueezeNet1p1_model_training.ipynb" notebook to perform transfer learning (this should take at least a couple of hours; exactly how long depends on how much image data you collected in the first place).

6. Download "SqueezeNet1p1_all_last.onnx" (and, optionally, "SqueezeNet1p1_all_last.h5") from Google Colab (in the "models" folder) to the "LobsterpincerSpectatorForWin/livechess2fen/selected_models" folder.
The following video walks through the entire data-collection-and-model-training procedure. Only 5 images under the same lighting condition are collected in this demo in order to keep the video brief; in practice, you want to collect hundreds of images under various lighting conditions. Also, even though "LobsterpincerSpectatorForRPi" and a Raspberry Pi are used for data collection in this demo, the procedure is very much the same for "LobsterpincerSpectatorForWin" and a Windows computer.
## Usage of Main Program

To use the main program, "lobsterpincer_spectator.py" (in "LobsterpincerSpectatorForWin"):
1. Make sure your phone and Windows computer are on the same Wi-Fi network.

2. Open the app on your phone (that turns your phone into an IP camera), mount the phone on some kind of physical structure, and edit the `IMAGE_SOURCE` variable in "capture_and_label_img.py" (see step 1 of the data-collection procedure above).

3. Edit the `FULL_FEN_OF_STARTING_POSITION`, `A1_POS`, and `BOARD_CORNERS` variables in "lobsterpincer_spectator.py" (feel free to edit other variables as well, but these three are generally the most relevant to the user); an example configuration appears after this list.

4. Run "lobsterpincer_spectator.py" from the "LobsterpincerSpectatorForWin" directory and tune the slider values.

5. Play the game against your opponent (the game you play has nothing to do with the "LobsterpincerSpectatorForWin/game_to_be_played.pgn" file, by the way, which is only relevant to data collection). At any point during the game, feel free to press 'p' to pause the program, press 'r' to resume the program, or press 'q' to quit the program.

6. After the game, feel free to use "saved_game.pgn" (in "LobsterpincerSpectatorForWin") for postgame analysis.
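For concreteness, a configuration for a game starting from the standard position might look like the following (a hedged sketch: the three variable names come from step 3, but the particular values, and the exact format of `A1_POS`, are illustrative assumptions):

```python
# Illustrative values for the three user-facing variables in
# "lobsterpincer_spectator.py" (formats are assumptions, not the
# repository's documented ones):

# FEN of the position on the physical board when the program starts
FULL_FEN_OF_STARTING_POSITION = (
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
)

# Where the a1 square appears in the captured image (format assumed)
A1_POS = "bottom left"

# Pixel coordinates of the four board corners, or None for automatic
# (neural-network-based) chessboard detection
BOARD_CORNERS = [[0, 0], [1199, 0], [1199, 1199], [0, 1199]]
```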
The video in the Overview section demos the case where `BOARD_CORNERS` is set to `[[0, 0], [1199, 0], [1199, 1199], [0, 1199]]`. In this case, manual (predetermined) chessboard detection is used, which accelerates the move-registration process (each move takes at most 6 seconds to register on an Intel Core i5-8250U). If `BOARD_CORNERS` is set to `None`, automatic (neural-network-based) chessboard detection is used, and each move takes at most 8 seconds to register on an Intel Core i5-8250U.
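With manual detection, the four corner coordinates can be used directly to rectify the board image, roughly as follows (a minimal OpenCV sketch, not the program's actual code; it assumes the board is warped to a 1200x1200 image, consistent with the corner values above):

```python
import cv2
import numpy as np

# The four predetermined board corners, ordered to match the target corners
BOARD_CORNERS = [[0, 0], [1199, 0], [1199, 1199], [0, 1199]]

def rectify_board(frame: np.ndarray) -> np.ndarray:
    """Warp the chessboard region of `frame` to a 1200x1200 top-down view."""
    src = np.array(BOARD_CORNERS, dtype=np.float32)
    dst = np.array(
        [[0, 0], [1199, 0], [1199, 1199], [0, 1199]], dtype=np.float32
    )
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, matrix, (1200, 1200))
```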
## Technical Details

The figure below shows a high-level diagram of the signal-processing workflow:
There are a few things to note:
- The Windows computer is responsible for all the heavy computation.

- The chess-piece model discussed in the Data Collection and Model Training section above is responsible for move detection.

- After each move is registered (i.e., validated), a sound effect is played. There are sound effects for making "regular" moves, capturing, castling, promoting, checking, and checkmating. These are the same sound effects that you would hear in an online game on chess.com.

- Engine evaluation is accomplished with Stockfish 16.1 at depth 17, which corresponds to an Elo rating of about 2695.

- A critical moment is defined as one where at least one of the following two conditions is satisfied:

    - The best move forces a checkmate (against the opponent) whereas the second-best move does not.

    - Neither the best move nor the second-best move forces checkmate, but the best move is significantly better than the second-best move (a floating-point evaluation difference of 2 or more), and the position would not be completely winning (a position is considered completely winning if its floating-point evaluation is at least 2) for the player if they played the second-best move.

  The precise definition can be found in the `is_critical_moment()` function in "evaluate_position.py" (in "LobsterpincerSpectatorForWin/lpspectator"); a sketch of this logic appears after this list.
- Besides detecting critical moments, the program also detects Harry the h-pawn and the Lobster Pincer mate. When a player pushes Harry the h-pawn into (or further into) the opponent's territory (but Harry has not yet promoted into a queen) and the player pushing the h-pawn is not losing (a position is considered losing if its floating-point evaluation is at most -2), the "Look at Harry! Come on, Harry!" audio is played. When the Lobster Pincer mate happens, a special piece of audio is played as well.
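The critical-moment rule maps naturally onto a two-line (`multipv=2`) engine query. The sketch below is not the repository's `is_critical_moment()` function; it is a hedged reconstruction of the two stated conditions using the `python-chess` engine API, with the Stockfish path left as a placeholder:

```python
import chess
import chess.engine

def is_critical(board: chess.Board, engine: chess.engine.SimpleEngine) -> bool:
    """Hedged reconstruction of the two critical-moment conditions above."""
    infos = engine.analyse(board, chess.engine.Limit(depth=17), multipv=2)
    if len(infos) < 2:
        return False  # fewer than two legal moves, so nothing to compare
    best = infos[0]["score"].relative
    second = infos[1]["score"].relative

    # Condition 1: the best move forces mate against the opponent
    # while the second-best move does not.
    if best.is_mate() and best.mate() > 0 and not second.is_mate():
        return True

    # Condition 2: neither move forces mate, the best move is at least
    # 2 pawns better, and playing the second-best move would not leave
    # a completely winning (>= +2) position anyway.
    if not best.is_mate() and not second.is_mate():
        best_eval = best.score() / 100  # centipawns -> pawns
        second_eval = second.score() / 100
        return best_eval - second_eval >= 2 and second_eval < 2
    return False

# Usage (the engine path is a placeholder; point it at your Stockfish binary):
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
print(is_critical(chess.Board(), engine))
engine.quit()
```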
## Acknowledgements

I give special thanks to David Mallasén Quintana. This project was made possible by his work on LiveChess2FEN, which provided me with the foundation for chess-piece identification. The "models.zip" file (in "ChessPieceModelTraining/ModelTrainer") came directly from the LiveChess2FEN repository, and the "SqueezeNet1p1_model_training.ipynb" notebook (in "ChessPieceModelTraining/ModelTrainer") was written largely based on the work in the "cpmodels" folder of that repository.
I also thank Linmiao Xu for his chessboard-recognizer project, which helped me develop the "ChessPieceModelTraining/BoardSlicer" program.
Finally, I thank Simon Williams and Daniel Naroditsky for creating the entertaining YouTube videos that I used to create the audio files. They also inspired and helped me to become a much stronger chess player than I would be without them.
## Contact

If you find this repository to be useful (but please use my work responsibly; use it in friendly practice games instead of tournament games!), or if you have any feedback, please do not hesitate to reach out to me at davidlxl@umich.edu.