| Model | Dataset | EvaluationSet | meanAveragePrecisionAtK | precisionAtK | ndcgAtK | recallAtK | RMSE | MAE |
|---|---|---|---|---|---|---|---|---|
| Baseline (basic) | ml-latest-small | val | 0.1674 | 0.2657 | 0.3071 | 0.2146 | 2.8901 | 2.6847 |
| Baseline (basic) | ml-latest-small | test | 0.1332 | 0.2497 | 0.3097 | 0.2513 | 2.7168 | 2.5017 |
| Baseline (enhanced) | ml-latest-small | val | 0.1162 | 0.2306 | 0.2797 | 0.1875 | 0.9102 | 0.7173 |
| Baseline (enhanced) | ml-latest-small | test | 0.1053 | 0.2125 | 0.2581 | 0.2077 | 0.9265 | 0.7279 |
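As a reference for the two error columns, a minimal sketch of how RMSE and MAE are computed over predicted ratings (the toy values below are illustrative, not taken from the experiments):

```python
import math

# Toy example of the rating-error metrics reported above.
actual = [4.0, 3.0, 5.0, 2.0]
predicted = [3.5, 3.0, 4.0, 3.0]

errors = [p - a for p, a in zip(predicted, actual)]

# MAE: mean of absolute errors; RMSE: root of mean squared errors.
mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
```

RMSE penalizes large individual errors more heavily than MAE, which is why the two columns can diverge even on the same predictions.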
We tried the following hyperparameters:
regularizationParams = [0.01, 0.05, 0.1, 0.2]
latentRanks = [10, 50, 100, 150]
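The grid above yields 16 candidate configurations. A minimal sketch of how the search can be enumerated (the mapping of these values onto Spark ALS's `regParam` and `rank` parameters is an assumption on our part, and the training call is only indicated in a comment):

```python
from itertools import product

# Hyperparameter grid from the search described above.
regularization_params = [0.01, 0.05, 0.1, 0.2]
latent_ranks = [10, 50, 100, 150]

# Each (reg, rank) pair would be trained and scored on the validation
# split, e.g. ALS(regParam=reg, rank=rank) in Spark MLlib (assumed).
grid = list(product(regularization_params, latent_ranks))
print(len(grid))  # 16 configurations
```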
| data | map@K | precision@K | ndcg@K | recall@K | RMSE | MAE |
|---|---|---|---|---|---|---|
| small | 0.0039 | 0.0454 | 0.0413 | 0.0163 | 0.9631 | 0.7525 |
| large | 0.0021 | 0.0338 | 0.0253 | 0.0099 | 0.861 | 0.6797 |
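For the ndcg@K column, a minimal sketch of the computation with binary relevance (an assumption here; the report does not state the gain function used):

```python
import math

# Relevance of the top-K recommendations, in ranked order (toy data).
ranked_relevance = [1, 0, 1, 0, 0]
K = 5

# DCG discounts each gain by log2 of its (1-based) rank + 1.
dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevance[:K]))

# IDCG is the DCG of the ideal ordering (all relevant items first).
ideal = sorted(ranked_relevance, reverse=True)
idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:K]))

ndcg = dcg / idcg
```

Unlike precision@K, ndcg@K rewards placing relevant items near the top of the list, not just anywhere within the top K.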
| Dataset | evalset | precision@K | auc | recall@K | reciprocal rank |
|---|---|---|---|---|---|
| small | train | 0.3366 | 0.9979 | 0.4642 | 0.7111 |
| small | test | 0.0448 | 0.8943 | 0.2395 | 0.1609 |
| large | train | 0.3918 | 0.9617 | 0.0616 | 0.82 |
| large | test | 0.0311 | 0.8538 | 0.0311 | 0.278 |
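For the ranking columns above, a minimal sketch of per-user precision@K and reciprocal rank (toy data, not drawn from the experiments; the tables report these averaged over users):

```python
# Top-K recommendations for one user, in ranked order (toy item ids).
recommended = [3, 1, 7, 5, 2]
relevant = {1, 5}  # held-out items the user actually interacted with
K = 5

# precision@K: fraction of the top-K recommendations that are relevant.
precision_at_k = len([i for i in recommended[:K] if i in relevant]) / K

# Reciprocal rank: 1 / (1-based position of the first relevant item),
# or 0.0 if no relevant item appears in the list.
reciprocal_rank = 0.0
for pos, item in enumerate(recommended, start=1):
    if item in relevant:
        reciprocal_rank = 1.0 / pos
        break
```

The large train-to-test drop in precision@K and reciprocal rank in the table above is the usual signature of a model fitting its training interactions much better than unseen ones.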