OCAtari v0.1.0
Inspired by the work of Anand et al., we present OCAtari, an improved, extended, and object-centric version of their AtariARI project.
OCAtari provides extended object-centric state representations for over 40 Atari environments, and counting.
Installation
Install OCAtari via pip:

```
pip install ocatari
pip install "gymnasium[atari, accept-rom-license]"
```

Alternatively, clone this repo and run:

```
python setup.py install
```

or, if you want to modify the code:

```
python setup.py develop
```
Features
- Vision Extraction Mode (VEM): returns a list of the objects currently on the screen, with their x, y, width, height, and R, G, B values, based on handwritten rules applied to the visual representation.
- RAM Extraction Mode (REM): uses the object values stored in the RAM to detect the objects currently on the screen.
- Gym wrapper: We provide an easy-to-use wrapper for gymnasium, making the adaptation of existing code as simple as possible.
- Producing your own dataset: OCAtari can be used to generate datasets consisting of a representation of the current state in the form of an RGB array and a list of all objects within the state.
- Scripts to look inside OCAtari: We provide many scripts that help with implementing, extending, testing, and using OCAtari. Most are used to reverse-engineer the RAM state, e.g., by searching for correlations between RAM cells and object properties.
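The correlation search mentioned in the last point can be sketched as follows. This is an illustrative stand-alone example, not OCAtari's actual implementation: it looks for RAM cells whose values track an object's on-screen position, using synthetic data in place of recorded (RAM state, object position) pairs.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def find_correlated_cells(ram_states, positions, threshold=0.9):
    """Return indices of RAM cells whose values track the object position."""
    n_cells = len(ram_states[0])
    hits = []
    for i in range(n_cells):
        cell_values = [ram[i] for ram in ram_states]
        if abs(pearson(cell_values, positions)) >= threshold:
            hits.append(i)
    return hits

# Synthetic example: cell 2 mirrors the object's position, the rest are constant.
positions = [10, 20, 30, 40, 50]
ram_states = [[7, 3, p, 99] for p in positions]
print(find_correlated_cells(ram_states, positions))  # → [2]
```

In practice, OCAtari's scripts run such searches over RAM states recorded while playing, which is how the per-game extraction rules were reverse-engineered.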
Models and additional Information
- We do not provide any trained models yet. However, to test our framework, we recommend starting with the following models: Atari-Agents.
- You can also train your own models, e.g., using CleanRL.
Usage
A basic example of how to use our OCAtari environments:

```python
from ocatari.core import OCAtari
import random

env = OCAtari("Pong", mode="ram", hud=True, render_mode="rgb_array")
observation, info = env.reset()
action = random.randint(0, env.nb_actions - 1)  # pick a random valid action
obs, reward, terminated, truncated, info = env.step(action)
```
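The object-centric state described under Features is a list of objects with a category, position, size, and color. The sketch below shows how such a list can be consumed; the `Obj` class here is a simplified stand-in for illustration, not OCAtari's actual object class.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    """Simplified stand-in for an OCAtari object (illustrative only)."""
    category: str
    x: int
    y: int
    w: int
    h: int
    rgb: tuple

def center(o):
    """Pixel-space center of an object's bounding box."""
    return (o.x + o.w / 2, o.y + o.h / 2)

# A state as OCAtari might report it for Pong (values are made up).
objects = [
    Obj("Player", 140, 100, 4, 16, (92, 186, 92)),
    Obj("Ball", 80, 60, 2, 4, (236, 236, 236)),
]
ball = next(o for o in objects if o.category == "Ball")
print(center(ball))  # → (81.0, 62.0)
```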