Running the sample with

python dlrm_s_pytorch.py --mini-batch-size=2 --data-size=6 --debug-mode

outputs:
Using CPU...
model arch:
mlp top arch 3 layers, with input to output dimensions:
[8 4 2 1]
# of interactions
8
mlp bot arch 2 layers, with input to output dimensions:
[4 3 2]
# of features (sparse and dense)
4
dense feature size
4
sparse feature size
2
# of embeddings (= # of sparse features) 3, with dimensions 2x:
[4 3 2]
data (inputs and targets):
mini-batch: 0
tensor([[0.6965, 0.2861, 0.2269, 0.5513], [0.7195, 0.4231, 0.9808, 0.6848]])
tensor([[1, 2], [1, 1], [1, 1]], dtype=torch.int32)
[tensor([1, 0, 1]), tensor([0, 1]), tensor([1, 0])]
tensor([[0.3618], [0.2283]])
mini-batch: 1
tensor([[0.2937, 0.6310, 0.0921, 0.4337], [0.4309, 0.4937, 0.4258, 0.3123]])
tensor([[1, 2], [1, 2], [1, 1]], dtype=torch.int32)
[tensor([3, 0, 2]), tensor([1, 1, 2]), tensor([1, 1])]
tensor([[0.6031], [0.5451]])
mini-batch: 2
tensor([[0.3428, 0.3041, 0.4170, 0.6813], [0.8755, 0.5104, 0.6693, 0.5859]])
tensor([[2, 1], [1, 2], [1, 1]], dtype=torch.int32)
[tensor([2, 3, 2]), tensor([0, 0, 2]), tensor([1, 1])]
tensor([[0.5568], [0.1590]])
initial parameters (weights and bias):
[[-0.34693  0.19553] [-0.18123  0.19197] [ 0.05438 -0.11105] [ 0.42513  0.34167]]
[[-0.16466 -0.52702] [-0.22543 -0.11757] [ 0.23667  0.57199]]
[[-0.20377  0.3713 ] [ 0.13177  0.27111]]
[[-0.16825 -0.58044 -0.39152 -0.64812] [ 1.11561  0.0879   0.61481 -0.67743] [ 0.09677  0.62959 -0.17907  0.55115]]
[-0.62618 -0.7872   0.21905]
[[-0.23981  0.40607 -1.25093] [ 0.45048  1.64331 -0.01557]]
[0.02414 0.12696]
[[-0.76015  0.17397 -0.65541 -0.1746   0.5074  -0.30015  0.20463  0.41345] [ 0.1138  -0.55969 -0.13573  0.79993 -0.82672 -0.11259 -0.2254   0.04929] [ 0.30546  0.65675 -0.11032  0.33164  0.20402  0.19365 -0.23022 -0.40715] [-0.44909 -0.30881  0.13133  0.31066  0.13206 -0.22411  0.73728  0.62007]]
[-0.177   -0.41172  0.06511  0.63365]
[[ 0.19212  0.32132 -0.12244  0.26343] [ 0.89174 -0.13837  0.08274  0.14654]]
[ 0.20062 -0.99836]
[[-1.53246 -0.83254]]
[0.16794]
time/loss/accuracy (if enabled):
zsh: segmentation fault  python dlrm_s_pytorch.py --mini-batch-size=2 --data-size=6 --debug-mode
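The log does not indicate where the crash happens. A minimal sketch of how the crash point could be localized, assuming dlrm_s_pytorch.py sits in the current working directory, is to re-run it under Python's standard-library faulthandler, which dumps the Python-level traceback when the interpreter receives SIGSEGV:

import faulthandler
import runpy
import sys

# Dump the Python traceback if the interpreter crashes (e.g. on SIGSEGV).
faulthandler.enable()

# Reproduce the failing command line from above.
sys.argv = [
    "dlrm_s_pytorch.py",
    "--mini-batch-size=2",
    "--data-size=6",
    "--debug-mode",
]
runpy.run_path("dlrm_s_pytorch.py", run_name="__main__")

The same effect is available without a wrapper by running: python -X faulthandler dlrm_s_pytorch.py --mini-batch-size=2 --data-size=6 --debug-mode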