
Documentation


Config Options

TRAIN_DATA & TEST_DATA

TBD

BATCH_SIZE::Int

Defines the number of training samples used in each training iteration.

If it equals the total number of training samples, the model is trained on all of the data in every iteration.

A BATCH_SIZE smaller than the number of samples may make training less effective per iteration and epoch, but it also speeds up training and decreases memory usage, which is especially useful when using a GPU.

Lower the BATCH_SIZE if you get Out Of Memory errors.

How batches are formed also depends on SHUFFLE and PARTIAL; see the sketch after PARTIAL below.

SHUFFLE::Bool

If true, the data is shuffled anew every epoch before being divided into batches.

PARTIAL::Bool

If false, an error is raised whenever samples are left over after dividing the data into batches. If true, a final partial batch (smaller than BATCH_SIZE) is accepted.
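These three options correspond naturally to the keyword arguments of Flux's DataLoader. A minimal sketch of how they could be wired up, assuming the data is held in a feature matrix X and a label matrix Y (both names, and the array shapes, are illustrative only):

using Flux

X = rand(Float32, NUM_CHANNELS, 1000) # illustrative recordings
Y = rand(Bool, 1, 1000)               # illustrative labels

loader = Flux.DataLoader((X, Y);
    batchsize = BATCH_SIZE, # samples used per iteration
    shuffle = SHUFFLE,      # reshuffle the data every epoch
    partial = PARTIAL)      # whether a smaller final batch is kept

for (x, y) in loader # one pass over the loader is one epoch
    # train on the batch (x, y) here
end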

NUM_CHANNELS::Int

The number of EEG channels that were used for recording the EEG data. If you change this parameter, you NEED to create a new model, since the model structure depends on it.
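A minimal sketch of why the model depends on it, assuming a first layer with one input per channel (the layer types and sizes are illustrative, not the project's actual architecture):

using Flux

# The width of the input layer is tied to NUM_CHANNELS, so a model
# built for one channel count cannot consume data with another.
model = Chain(
    Dense(NUM_CHANNELS => 64, relu),
    Dense(64 => 1, sigmoid))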

MAX_FREQUENCY::Int

The maximum frequency recorded, in Hz.

USE_CUDA::Bool

If you have an Nvidia GPU and want to use it for training, set this to true. This is recommended, as it is a lot faster; on my setup, about 3 times faster.
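A minimal sketch of how this flag is typically honored with Flux (hypothetical usage, not necessarily the project's exact code):

using Flux, CUDA

# Pick the device once, then move the model and every batch to it.
device = USE_CUDA ? Flux.gpu : Flux.cpu
model = device(model)
x, y = device(x), device(y)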

LEARNING_RATE::Float64

Scales how strongly the gradients are applied at each update step. A bigger LEARNING_RATE can speed up training and help escape plateaus, but risks overshooting minima; a smaller one trains more slowly but more stably.
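For plain gradient descent, the update each step is simply the gradient scaled by the learning rate (here θ stands for a parameter array and ∇θ for its gradient, both illustrative names):

# One optimization step: move against the gradient, scaled by LEARNING_RATE.
θ .= θ .- LEARNING_RATE .* ∇θ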

MODEL()

TBD

OPTIMIZER::Flux.Optimise.AbstractOptimiser

The optimizer used for adjusting the model to the data; it needs to be an optimizer instance. There are options predefined by Flux; you can find them in the Flux documentation.
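For example, to use the predefined Adam or Descent optimizers, both of which take the learning rate as their first argument:

using Flux

OPTIMIZER = Flux.Optimise.Adam(LEARNING_RATE) # adaptive moment estimation
# or plain gradient descent:
# OPTIMIZER = Flux.Optimise.Descent(LEARNING_RATE)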

LOSS::Function

The loss function used for training. You can define your own function, with the first input being the model estimate, the second one the actual output, and the returned value the loss.

Example:

myloss(ŷ, y) = sum((y .- ŷ) .^ 2) / length(y) # mean squared error
LOSS = myloss
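Flux also predefines common losses with the same (ŷ, y) argument order; the example above is roughly equivalent to:

LOSS = Flux.Losses.mse # Flux's predefined mean squared error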