The Neuroplytorch approach seems to have issues with small window sizes (tested with windows between 2 and 5).
The issue seems to be that the system fails to train the perception layer for some of the MNIST digits, which in turn prevents it from detecting the complex events correctly.
I'm not sure what the cause of this is. It may be that the LSTM layer cannot deal with smaller window sizes for some reason, but I don't see how that would happen.
Another option worth experimenting with is the number of training cases generated for the reasoning layer, to evaluate whether overfitting it prevents the perception layer from training correctly. The tests where the system performed poorly were run with Neuroplex generating 100,000 training cases. This worked well for windows of 10 and 15, but not for windows between 2 and 5. A likely explanation is that, with the 10 MNIST digit classes, there are 10^window possible combinations, so 100,000 is a small fraction of the space for windows of 10 and 15, but equals the whole space at a window of 5 and significantly exceeds it for windows of 2 to 4.
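To make the coverage argument concrete, here is a minimal sketch that compares the 100,000 generated training cases against the 10^window possible digit-sequence windows for each tested window size. The figures are the ones stated in this issue; the case generator itself is Neuroplex's and is not reproduced here.

```python
# Compare generated reasoning-layer cases vs. the size of the window space.
# 100,000 and the window sizes below are taken from this issue.

TRAIN_CASES = 100_000
NUM_CLASSES = 10  # MNIST digit classes

for window in (2, 3, 4, 5, 10, 15):
    total = NUM_CLASSES ** window        # 10^window possible windows
    ratio = TRAIN_CASES / total          # generated cases per possible window
    note = "covers/exceeds the space" if ratio >= 1 else "sparse coverage"
    print(f"window={window:2d}: {total:>16,} combinations, "
          f"cases/space = {ratio:.4g} ({note})")
```

For a window of 2 the ratio is 1000 (every combination seen ~1000 times on average, a plausible overfitting regime), while for a window of 10 it is 10^-5, i.e. the reasoning layer sees only a tiny fraction of the space.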
As a result, this may be causing the reasoning layer to overfit. Intuitively this does not seem like a big issue, but it may be preventing the reasoning layer from training correctly.
TL;DR: It would be interesting to see whether changing the number of training cases for the reasoning layer at small window sizes affects the performance of the reasoning and perception layers, as well as the system overall.
Of course, this assumes that the same issue occurs in Neuroplytorch as in Neuroplex. I don't see why it would not, but it's worth checking before running the experiments.