MindColor is a simple GAN that takes brain waves as input and attempts to recreate the image the person was looking at while their mind was being recorded. This is my final project for the class "Machine Learning Advanced Models".
Brainwaves
- brainwaves were collected with a Flowtime meditation headband;
- 5 different types of waves were recorded: alpha, beta, gamma, delta, theta
Images
- images were generated with Python and matplotlib (a minimal sketch follows this list);
- four categories of images: red, blue, green, and yellow squares.
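Since the dataset images are just solid colored squares, they are easy to recreate. Below is a minimal sketch, not the exact script used in the project; the file names, the saved image size, and the use of plt.imsave are my assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical recreation of the dataset images: one solid 28x28 RGB square
# per color category. File names are assumptions.
COLORS = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "blue":   (0.0, 0.0, 1.0),
    "yellow": (1.0, 1.0, 0.0),
}

for name, rgb in COLORS.items():
    img = np.full((28, 28, 3), rgb, dtype=np.float32)  # every pixel gets the same color
    plt.imsave(f"{name}.png", img)                      # write the square to disk as a PNG
```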
In detail, I recorded my brain activity while staring at each image for about 15 minutes. The headband collects a reading every 0.6 seconds, so each color ends up with roughly 1,500 data points.
Two datasets were created (a small sketch of their expected shapes follows this list):
- one for the generator, made of brainwaves
- one for the discriminator, made of images
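A minimal sketch of how the two datasets might be loaded. The file names and the NumPy storage format are assumptions; only the shapes follow from the description below.

```python
import numpy as np

# Assumed file names and storage format; only the shapes come from the text.
brainwaves = np.load("brainwaves.npy")  # generator dataset:     (N, 5) - one value per wave type
images = np.load("images.npy")          # discriminator dataset: (M, 28, 28, 3) - RGB squares

assert brainwaves.shape[1] == 5
assert images.shape[1:] == (28, 28, 3)
```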
A simple GAN was implemented. The generator's input is an array of 5 floats (one data point for each kind of brainwave) and its output is a 28x28x3 RGB image. The discriminator's input is the generated image and its output is a "judgement": a single float that can be positive or negative. Cross-entropy is used as the loss function for both the generator and the discriminator.
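Below is a minimal PyTorch sketch of this setup. The framework, layer sizes, optimizers, and learning rates are assumptions; only the 5-float input, the 28x28x3 output, the single-float judgement, and the cross-entropy losses come from the description above.

```python
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps 5 brainwave values (alpha, beta, gamma, delta, theta) to a 28x28x3 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28 * 3), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, waves):  # waves: (batch, 5)
        return self.net(waves).view(-1, 28, 28, 3)


class Discriminator(nn.Module):
    """Maps a 28x28x3 image to a single unbounded score (the "judgement")."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28 * 3, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # positive -> judged real, negative -> judged fake
        )

    def forward(self, images):  # images: (batch, 28, 28, 3)
        return self.net(images)


bce = nn.BCEWithLogitsLoss()  # cross-entropy on the raw score, shared by both losses
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)


def train_step(waves, real_images):
    """One GAN update: first the discriminator on real vs. generated, then the generator."""
    fake_images = gen(waves)

    # Discriminator: push real images towards 1 and generated images towards 0.
    opt_d.zero_grad()
    d_loss = bce(disc(real_images), torch.ones(real_images.size(0), 1)) \
           + bce(disc(fake_images.detach()), torch.zeros(waves.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on generated images.
    opt_g.zero_grad()
    g_loss = bce(disc(fake_images), torch.ones(waves.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```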
(Figures: training loss and the images generated during training; test loss and the images generated at test time.)
Possible improvements:
- a bigger input matrix for the generator (e.g. 50x5 instead of 1x5); this means more data has to be collected
- predicting the color of the image instead of generating it; this shifts the task to classification and lightens the model (a few output values instead of 28x28x3)
- normalization of the input data (!); a minimal sketch follows this list
- a different model
- more epochs (in the thousands)
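As a sketch of the normalization point above: assuming the brainwave readings are stored as a NumPy array of shape (N, 5), each channel could be z-scored before being fed to the generator.

```python
import numpy as np


def normalize(waves: np.ndarray) -> np.ndarray:
    """Z-score each brainwave channel (alpha, beta, gamma, delta, theta)
    so all generator inputs share a comparable scale."""
    mean = waves.mean(axis=0, keepdims=True)
    std = waves.std(axis=0, keepdims=True) + 1e-8  # avoid division by zero
    return (waves - mean) / std
```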