Tic Tac Toe AI 🤖

A dynamic AI with a 0% loss rate against humans

AI (Player O) wins against human (Player X)

📖 Table of Contents

  • 💡 Motivation
  • ☁️ Overview
  • 🛠️ Getting Started Guide
  • 🎮 Minimax Algorithm
  • ❌ Alpha-beta Pruning

💡 Motivation

As an avid chess player (with an Elo rating of 1100), I developed this project to explore game theory and to serve as a starting point for understanding the algorithms behind chess engines. After three years of playing chess, I wondered how the underlying algorithms of bots are adjusted to create varying difficulty levels. Chess engines are complex and use techniques such as neural networks, but a much simpler element of these engines is the minimax algorithm. As I learned more about minimax, new questions came up about how developers factor it into difficulty settings. Do the engineers decrease the depth of the game tree, limit the branches searched for each position, choose the nth best-ranked move, or some combination of these? To explore these questions, I applied the minimax algorithm to tic tac toe.

☁️ Overview

In this project, the AI uses the minimax algorithm with alpha-beta pruning to make the best next move. I implemented the AI with an adjustable search depth to speed up decision-making on larger board sizes and to allow for comparing the strengths of AIs searching to varying depths.
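The adjustable depth described above can be sketched as a cutoff parameter in the minimax recursion. The following is a minimal, illustrative version; the `Node` class and its score field are placeholders of my own, not classes from this repository:

```java
import java.util.ArrayList;
import java.util.List;

public class DepthLimited {
    // A toy game-tree node: a score plus optional children. Illustrative
    // only; the repository's actual Board/AI classes are not reproduced.
    static class Node {
        int score;                              // static evaluation at a cutoff or leaf
        List<Node> children = new ArrayList<>();
        Node(int score) { this.score = score; }
    }

    // Depth-limited minimax: stop when depth runs out or the node is a
    // leaf, and fall back to the node's static evaluation.
    static int minimax(Node n, int depth, boolean maximizing) {
        if (depth == 0 || n.children.isEmpty()) return n.score;
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node child : n.children) {
            int v = minimax(child, depth - 1, !maximizing);
            best = maximizing ? Math.max(best, v) : Math.min(best, v);
        }
        return best;
    }
}
```

Calling `minimax(root, 2, true)` searches two plies ahead; lowering the depth trades playing strength for speed on larger boards.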

File Structure

│
├── console implementation
│   ├── test
│   │   ├── AITests.java
│   │   ├── BoardTests.java
│   │   ├── HumanTests.java
│   │   ├── TicTacToeTests.java
│   │
│   ├── tictactoe
│   │   ├── AI.java
│   │   ├── Board.java
│   │   ├── Driver.java
│   │   ├── GameStatus.java
│   │   ├── Human.java
│   │   ├── Mark.java
│   │   ├── Player.java
│   │   ├── TicTacToe.java

🛠️ Getting Started Guide

Note: You will need the Java Development Kit (JDK) installed on your local computer. If you need to install one, you can do so at Oracle Java SE.

Run the command below in the terminal to copy the code from the repository to your local computer.

git clone https://github.com/ubangura/Tic-Tac-Toe-AI.git

🎮 Minimax Algorithm

The minimax algorithm selects the optimal move in any turn-based, zero-sum game

In programming a game AI, we ideally want to look ahead and evaluate all possible next moves. However, this is not always realistic due to space and time constraints. For example, in chess, there are between 10^40 and 10^120 final game states the computer would need to evaluate. So, we look ahead as far as possible. We can visualize possible next moves as a game tree where the root node is the current game state, and all its successors represent possible moves.


Game Tree Features:

  • Branching factor (b) → from each node we can derive b successor nodes
  • Depth (d) → the longest path from the root to a leaf node
  • Leaf node → final game state for a branch
  • b^d → maximum number of static evaluations

In this game tree, the branching factor and depth of the tree are both 2, so the AI would perform 2^2 = 4 final game state evaluations.


In the minimax algorithm, there are two players: the minimizing player and the maximizing player. The minimizer wants to minimize the static evaluation (score) propagated to the root node. The maximizer wants to maximize it.

Here, the root node represents the turn of the maximizer. The next level of the tree is the minimizer's turn, which leads to four leaf nodes evaluated as 2, 7, 1, and 8. With the evaluations determined, we work our way back up the tree. The minimizer will choose the min of 2 and 7 in the left subtree and the min of 1 and 8 in the right subtree. Then the maximizer will choose the max of 2 and 1. With the tree finished, the maximizer now knows to follow the branch that leads to a static evaluation of 2.
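The walkthrough above can be expressed directly in code. This is a standalone sketch over the example tree's leaf values; the class and method names are illustrative, not taken from the repository's `AI.java`:

```java
// Minimal minimax sketch for the example tree above: a maximizer at the
// root, a minimizer one level down, and four leaf evaluations.
public class MinimaxExample {
    // Leaf evaluations of the depth-2 tree: {{2, 7}, {1, 8}}
    static final int[][] LEAVES = {{2, 7}, {1, 8}};

    // Maximizer at the root: pick the larger of the minimizer's choices.
    static int rootValue() {
        int best = Integer.MIN_VALUE;
        for (int[] subtree : LEAVES) {
            best = Math.max(best, minValue(subtree));
        }
        return best;
    }

    // Minimizer one level down: pick the smallest leaf in its subtree.
    static int minValue(int[] leaves) {
        int worst = Integer.MAX_VALUE;
        for (int leaf : leaves) {
            worst = Math.min(worst, leaf);
        }
        return worst;
    }

    public static void main(String[] args) {
        System.out.println(rootValue()); // the maximizer's guaranteed score: 2
    }
}
```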

Note: The minimax algorithm assumes the minimizer will play perfectly. In other words, the minimizer will always choose the move leading to the worst outcome for the maximizer.

It's important to note that the branching factor in tic tac toe won't stay constant throughout the game; it decreases by 1 with each move made. While b^d overshoots the number of static evaluations in tic tac toe, it still provides an upper bound for the algorithm. In addition, we can shortcut the minimax process using alpha-beta pruning.
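For a 3x3 board this gap is easy to compute: taking b = d = 9 from the opening position, b^d allows roughly 387 million evaluations, while the shrinking branching factor means at most 9 × 8 × 7 × … × 1 = 9! move sequences actually exist. A quick sketch (the class and method names are my own):

```java
// Compares the b^d upper bound with the actual number of move sequences
// on a 3x3 board, where the branching factor drops by 1 each ply.
public class GameTreeBound {
    // n! — the number of distinct orders to fill n cells.
    static long factorial(int n) {
        long f = 1;
        for (int i = 2; i <= n; i++) f *= i;
        return f;
    }

    // b^d — the bound assuming a constant branching factor.
    static long power(long b, int d) {
        long p = 1;
        for (int i = 0; i < d; i++) p *= b;
        return p;
    }

    public static void main(String[] args) {
        int cells = 9;
        System.out.println(power(cells, cells)); // b^d bound: 387420489
        System.out.println(factorial(cells));    // actual sequences: 362880
    }
}
```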

❌ Alpha-beta Pruning

Alpha-beta pruning is an extension to the minimax algorithm that skips the evaluation of branches in the game tree which won't affect the outcome

Take the completed game tree above; we will simulate alpha-beta pruning on it.

First, we evaluate one leaf node as 2. Because the other leaf node has not yet been evaluated, we know only that the minimizer will score at most a 2 in the left subtree.

Next, we evaluate the other leaf node as 7, and the minimizer will choose the move leading to an evaluation of 2. At this moment, the maximizer is guaranteed a score of at least 2 because, regardless of the right subtree's evaluation, the maximizer will always choose the larger of the two subtrees.

Finally, we evaluate another leaf node as 1. Now, the minimizer will score at most a 1 in the right subtree. From the perspective of the maximizing player, evaluating any other leaf nodes down this branch is irrelevant: we already know the best score from this branch, at most 1, is less than the score of 2 we get from moving down the left subtree. At this point in the minimax algorithm, alpha-beta pruning tells us that we can ignore the last leaf node on the right and save computing time.

Alpha-beta pruning is not guaranteed to occur. If we had scored that node as 3 instead of 1, we would still need to evaluate the last leaf node, because the right subtree could then score anywhere from the last leaf's value up to 3. If, say, the last leaf evaluated to 4, the minimizer would choose 3, and the right subtree would become the move the maximizer wants to play.
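The pruning walkthrough above can be sketched in code by threading alpha (the maximizer's best guaranteed score so far) into the minimizer and counting leaf evaluations, which shows the final leaf (8) is never scored. As before, the names are illustrative, not this repository's:

```java
// Alpha-beta sketch over the same example tree {{2, 7}, {1, 8}}.
// A counter records how many leaves actually get evaluated.
public class AlphaBetaExample {
    static final int[][] LEAVES = {{2, 7}, {1, 8}};
    static int evaluations = 0;

    static int rootValue() {
        int alpha = Integer.MIN_VALUE; // best score the maximizer has so far
        for (int[] subtree : LEAVES) {
            alpha = Math.max(alpha, minValue(subtree, alpha));
        }
        return alpha;
    }

    static int minValue(int[] leaves, int alpha) {
        int worst = Integer.MAX_VALUE;
        for (int leaf : leaves) {
            evaluations++;
            worst = Math.min(worst, leaf);
            if (worst <= alpha) break; // prune: maximizer already has better
        }
        return worst;
    }

    public static void main(String[] args) {
        System.out.println(rootValue());  // 2
        System.out.println(evaluations);  // 3 leaves scored, not 4
    }
}
```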


