Dann Object
When you create a neural network, you need to specify the size of the input & output layers.
const nn = new Dann(2,2);
- arch
This value represents the architecture of the model in the form of an array.
- lr
This defines the learning rate of the model. This value is set to 0.001 by default.
- epoch
This value starts out empty. It is meant for you to increment whenever you have completed one epoch; this serves as a way to save the number of epochs along with the weights in the dannData.json file.
- loss
This is the most recent loss value of the model. If the model has never been trained before, this value will be set to 0.
The feedForward function feeds data through the model to obtain an output.
- input
Takes an array of inputs to feed forward through the network.
- options (optional)
An object including specific properties.
Property | Type | Function |
---|---|---|
log | Boolean | If set to true, it will log a report in the console. |
table | Boolean | If the 'log' option is set to true, setting this value to true will print the arrays of this function in tables. |
mode (for development) | String | When GPU support is implemented, specifying the string 'gpu' as opposed to 'cpu' will run the function on a kernel. This functionality is not yet implemented. |
- returns
An array of outputs.
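To illustrate what a feed-forward pass computes, here is a minimal sketch of one fully connected layer with a sigmoid activation; `layerForward` is a hypothetical helper written for this example, not part of the dannjs API:

```javascript
// Hypothetical sketch of one fully connected layer pass (not Dann's internals).
const sigmoid = x => 1 / (1 + Math.exp(-x));

function layerForward(inputs, weights, biases) {
  // weights[j][i] connects input i to neuron j
  return weights.map((row, j) => {
    const sum = row.reduce((acc, w, i) => acc + w * inputs[i], biases[j]);
    return sigmoid(sum);
  });
}
```

With zero weights and biases, every neuron outputs sigmoid(0) = 0.5, which is why an untrained network's outputs tend to cluster around the middle of the activation's range.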
The backpropagate function trains the model's weights against a single input/target pair.
- input
Takes an array of inputs.
- target
Takes an array of desired outputs.
- options (optional)
An object including specific properties.
Property | Type | Function |
---|---|---|
log | Boolean | If set to true, it will log a report in the console. |
table | Boolean | If the 'log' option is set to true, setting this value to true will print the arrays of this function in tables. |
saveLoss | Boolean | Whether or not to save the losses in the neural network object. After a lot of training, carrying loss data in the neural network object gets heavy, which is why it is set to false by default. |
mode (for development) | String | When GPU support is implemented, specifying the string 'gpu' as opposed to 'cpu' will run the function on a kernel. This functionality is not yet implemented. |
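As a rough illustration of what one backpropagation step does, here is a minimal sketch of a single gradient-descent update for one sigmoid neuron under MSE loss; `trainStep` is a hypothetical helper for this example and not Dann's actual implementation:

```javascript
// Hypothetical sketch of one gradient-descent step for a single sigmoid
// neuron with MSE loss; illustrative only, not Dann's internal code.
const sigmoid = x => 1 / (1 + Math.exp(-x));

function trainStep(weights, bias, input, target, lr) {
  // forward pass: weighted sum, then activation
  const z = input.reduce((acc, x, i) => acc + x * weights[i], bias);
  const out = sigmoid(z);
  // gradient of the loss w.r.t. z: (out - target) * sigmoid'(z)
  const dz = (out - target) * out * (1 - out);
  // move weights and bias against the gradient
  const newWeights = weights.map((w, i) => w - lr * dz * input[i]);
  return { weights: newWeights, bias: bias - lr * dz, output: out };
}
```

Calling this repeatedly over the training pairs, as the XOR loop at the end of this page does with nn.backpropagate, gradually reduces the loss.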
This function mutates each weight randomly. This is for Neuroevolution tasks.
- range
A random value between -range and range is added to each affected weight.
- probability
The probability of a weight being affected by a random mutation. Ranging from 0 to 1. Setting this value to 1 would mutate all the model's weights.
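The mutation rule described above can be sketched on a plain array of weights; `mutateRandom` here is a hypothetical stand-alone helper, assuming a uniform random offset in [-range, range], not the Dann method itself:

```javascript
// Hypothetical sketch of the random-mutation rule on a plain weight array.
// Assumes a uniform offset in [-range, range]; not Dann's internals.
function mutateRandom(weights, range, probability) {
  return weights.map(w => {
    // each weight has a `probability` chance of being mutated
    if (Math.random() < probability) {
      // add a random value in [-range, range) to the weight
      return w + (Math.random() * 2 - 1) * range;
    }
    return w;
  });
}
```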
This function mutates the weights. This is for Neuroevolution tasks.
- percent
Each weight is multiplied by percent, and the result is added to that weight.
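Reading the rule above as w += w * percent, it can be sketched as follows; `mutatePercent` is a hypothetical helper written for this example, and that reading of the rule is an assumption:

```javascript
// Hypothetical sketch: scale-and-add mutation, assuming the rule is
// w += w * percent (an interpretation, not Dann's internal code).
function mutatePercent(weights, percent) {
  return weights.map(w => w + w * percent);
}
```

Unlike the random mutation above, this shifts every weight deterministically in proportion to its current value.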
Adds one hidden layer.
- size
Layer size, the number of neurons in the layer.
- activation
Takes a string of the activation function's name. If left empty, the activation function will be set to 'sigmoid' by default. See available activation functions here.
Creates the weights. This function should be called after all the hidden layers have been added. The optional parameters determine the range in which the starting weights are going to be set randomly. If no arguments are specified, weights are going to be set between -1 and 1.
- min (optional)
The minimum range value.
- max (optional)
The maximum range value.
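The initialization described above can be sketched as drawing each starting weight from the given range; `randomWeight` is a hypothetical helper, and the uniform distribution is an assumption based on the description:

```javascript
// Hypothetical sketch: draw one starting weight uniformly from [min, max),
// defaulting to [-1, 1) as described above. Not Dann's internal code.
function randomWeight(min = -1, max = 1) {
  return Math.random() * (max - min) + min;
}
```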
Sets the activation function of the output layer.
- activation
Takes a string of the activation function's name. If this function is not called, the activation function will be set to 'sigmoid' by default. See available activation functions here.
Sets the loss function of the model.
- lossfunc
Takes a string of the loss function's name. If this function is not called, the loss function will be set to 'mse' by default. See available loss functions here.
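For reference, 'mse' stands for mean squared error: the average of the squared differences between outputs and targets. A minimal sketch of the formula (illustrative, not Dann's internal code):

```javascript
// Mean squared error: average of squared output-vs-target differences.
function mse(outputs, targets) {
  let sum = 0;
  for (let i = 0; i < outputs.length; i++) {
    const e = targets[i] - outputs[i];
    sum += e * e;
  }
  return sum / outputs.length;
}
```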
Displays information about the model in the console.
- options (optional)
An object including specific properties.
Property | Type | Function |
---|---|---|
details | Boolean | If set to true, the function will log more advanced details about the model. |
decimals | Integer | The number of decimals the logged data is going to have. It is set to 3 by default. |
table | Boolean | Whether or not to print the matrices as tables rather than Matrix object logs. |
gradients | Boolean | If set to true, the function will log the gradients of the model. |
biases | Boolean | If set to true, the function will log the biases of the model. |
weights | Boolean | If set to true, the function will log the weights of the model. |
struct | Boolean | If set to true, the function will log the structure of the model. |
errors | Boolean | If set to true, the function will log the errors of the model. |
misc | Boolean | If set to true, the function will log the loss of the model, the learning rate of the model and the loss function (the learning rate could also be logged as console.log(Dann.lr)). |
Saves a name.json file containing information about the network and its current state. When the function is called, a local file dialog is opened by the browser.
Saves a JSON file containing information about the network and its current state in ./savedDanns/name/modelData.json. It also saves the losses in ./savedDanns/name/losses.csv if the Dann.recordLoss object property is set to true.
When this function is called, an input tag requesting a file appears on screen. When clicked, it opens a local file dialog. Once the appropriate file is selected, the Dann data is loaded automatically. The filename argument is not required for this version since the browser dialog takes care of it.
ex:
const nn = new Dann();
//opens a DOM file selector
nn.load('nn',function(err) {
if (err) {
console.log('Error loading the Dann model');
} else {
console.log('Successfully loaded the Dann model');
nn.log();
}
});
Loads a previously saved JSON file from ./savedDanns/. If the network's architecture is not the same, it is going to overwrite the Dann object.
Here is a neural network solving XOR:
const Dann = require('dannjs').dann; //nodejs only
// XOR neural network
const nn = new Dann(2,1);
nn.addHiddenLayer(4,'tanH');
nn.outputActivation('sigmoid');
nn.makeWeights();
nn.lr = 0.1;
// feeding data to the model before training
nn.feedForward([0,0],{log:true});
// training the model
const epoch = 10000;
for (let e = 0; e < epoch; e++) {
nn.backpropagate([0,0],[0]);
nn.backpropagate([1,1],[0]);
nn.backpropagate([0,1],[1]);
nn.backpropagate([1,0],[1]);
}
// feeding data to the model after training
nn.feedForward([0,0],{log:true});