
Multi-agent Surveillance

Cops 'n Robbers but with AI

Explore the docs »

View Demo . Report Bug . Request Feature


Table Of Contents

  • About The Project
  • Built With
  • Getting Started
  • Roadmap
  • Contributing
  • Authors
  • Acknowledgements

About The Project

This project was developed as an assignment for the Department of Data Science & Knowledge Engineering at Maastricht University.

Multi-agent Surveillance is a cops-and-robbers style game (Guards & Intruders) played entirely by automated agents.

Here the Intruders try to steal an object, while the Guards try to prevent that object from being stolen. The Guards do not know which object is being targeted, and neither side knows the full state of the game, so the agents have to explore in order to reach their goals.

Some additional game elements such as indirect communication, teleports and shadow visibility are also included.

Built With

Built with stress and sleepless nights

Getting Started

Prerequisites

Before running the application, make sure you have Java 17 and Gradle installed.

Installation

  1. Clone or download the repository: `git clone git@github.com:nscharrenberg/Multi-agent-Surveillance.git`

  2. Run `gradle build` to install all required libraries and build the app

  3. Run `gradle run` to compile and run the game

To run experiments, perform steps 1 and 2 above, followed by:

  4. Run `gradle terminal` to simulate a game and record important data
  5. Run `gradle dataManager` to get analytical data from the terminal simulation

Note: You can change the map by placing it in test/resources/maps/{your_map_here} and updating the path in MAP_PATH of com.nscharrenberg.um.multiagentsurveillance.headless.repositories.GameRepository. Make sure to also redo step 2 of the steps above afterwards.
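
A minimal sketch of the kind of change this note describes, assuming MAP_PATH is a string constant (the actual declaration in GameRepository may differ):

```java
// Sketch only - not the actual GameRepository source; it just illustrates
// pointing MAP_PATH at the map you added under test/resources/maps/.
public class GameRepository {
    // Replace {your_map_here} with your map file name.
    private static final String MAP_PATH = "test/resources/maps/{your_map_here}";
}
```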

Note: Experiment screens have to be changed in com.nscharrenberg.um.multiagentsurveillance.gui.dataGUI.DataHelper. The X and Y values can be changed to any of the properties mentioned in the comments of that file.
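
As a purely illustrative sketch: the real field declarations and the list of valid property names live in the comments of DataHelper itself, and the property names used below are hypothetical placeholders.

```java
// Sketch only - not the actual DataHelper source. Pick any property that the
// comments in the real file list as valid; the names below are hypothetical.
public class DataHelper {
    public static String X = "timeStep";        // hypothetical property for the X axis
    public static String Y = "explorationRate"; // hypothetical property for the Y axis
}
```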

Note: Statistical Analysis Experiment data can be generated from com.nscharrenberg.um.multiagentsurveillance.gui.training.simulation.RLApp

Note: Change which agents are used via the intruderType and guardType fields in com.nscharrenberg.um.multiagentsurveillance.headless.repositories.PlayerRepository (see the sketch after the list below).

You can choose between:

  • RandomAgent.class
  • YamauchiAgent.class
  • EvaderAgent.class (Intruder)
  • PursuerAgent.class (Guard)
  • SBOAgent.class
  • RLAgent.class (Guard)
  • DQN_Agent.class (Intruder)
  • DeepQN_Agent.class (Intruder)
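
For example, a minimal sketch of the relevant fields: the names intruderType and guardType come from the note above, while the Class<?> field type and the particular agent choices are assumptions for illustration.

```java
// Sketch only - not the actual PlayerRepository source. Assumes the agent
// classes listed above are imported; swap in any of them here.
public class PlayerRepository {
    private Class<?> intruderType = DQN_Agent.class; // Intruder-side agent
    private Class<?> guardType = RLAgent.class;      // Guard-side agent
}
```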

Roadmap

See the open issues for a list of proposed features (and known issues).

Known Issues

  • It takes very long for 5 agents to complete the exam map (±80 minutes); multi-threading might fix this
  • File Importing uses hardcoded file path
  • Menu isn't correctly working
  • Experiments require you to change parameters through the code
  • Frontier sometimes gives Concurrency Exception (Temporary workaround is to rerun the game a few times)
  • Agent doesn't work properly with Basic Vision
  • Orthogonal Agent throws an NPE on an empty stack

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  • If you have suggestions for adding or removing features, feel free to open an issue to discuss it, or directly create a pull request after editing the README.md file with the necessary changes.
  • Please make sure you check your spelling and grammar.
  • Create an individual PR for each suggestion.
  • Please also read through the Code Of Conduct before posting your first idea.

Creating A Pull Request

  1. Clone the project
  2. Create an Issue (if it does not exist yet)
  3. Create a new branch from dev (`git checkout -b feature/AmazingFeature`)
  4. Commit your changes (`git commit -m "Add some AmazingFeature"`)
  5. Open a Pull Request from your branch to the master branch
  6. Wait for approval from the reviewer
  7. Either address the given feedback or, once approved, merge the PR.

Authors

Acknowledgements