This repository was archived by the owner on Aug 3, 2021. It is now read-only.
Merged
13 changes: 13 additions & 0 deletions Pipfile
@@ -0,0 +1,13 @@
+[[source]]
+name = "pypi"
+url = "https://pypi.org/simple"
+verify_ssl = true
+
+[dev-packages]
+
+[packages]
+torch = "*"
+numpy = "*"
+
+[requires]
+python_version = "3.6"
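With the Pipfile above in place, the environment can be reproduced with Pipenv. A minimal sketch (assumes Pipenv is available; the exact torch wheel that gets resolved depends on your platform, and the `run.py` invocation is illustrative):

```shell
# Install Pipenv if it is not already present (one common route)
pip install --user pipenv

# From the repository root: resolve torch and numpy per the Pipfile
# and record exact versions in Pipfile.lock
pipenv install

# Run the training script inside the managed virtualenv
pipenv run python run.py --help
```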
70 changes: 70 additions & 0 deletions Pipfile.lock

Some generated files are not rendered by default.

3 changes: 2 additions & 1 deletion README.md
@@ -8,9 +8,10 @@ The model is based on deep AutoEncoders.

## Requirements
* Python 3.6
-* [Pytorch](http://pytorch.org/)
+* [Pytorch](http://pytorch.org/): `pipenv install`
* CUDA (recommended version >= 8.0)


## Training using mixed precision with Tensor Cores
* You would need NVIDIA Volta-based GPU
* Checkout [mixed precision branch](https://github.com/NVIDIA/DeepRecommender/tree/mp_branch)
8 changes: 4 additions & 4 deletions run.py
@@ -71,8 +71,8 @@ def do_eval(encoder, evaluation_data_layer):
targets = Variable(eval.cuda().to_dense() if use_gpu else eval.to_dense())
outputs = encoder(inputs)
loss, num_ratings = model.MSEloss(outputs, targets)
-total_epoch_loss += loss.data[0]
-denom += num_ratings.data[0]
+total_epoch_loss += loss.item()
+denom += num_ratings.item()
return sqrt(total_epoch_loss / denom)

def log_var_and_grad_summaries(logger, layers, global_step, prefix, log_histograms=False):
@@ -195,7 +195,7 @@ def main():
loss.backward()
optimizer.step()
global_step += 1
-t_loss += loss.data[0]
+t_loss += loss.item()
t_loss_denom += 1

if i % args.summary_frequency == 0:
@@ -209,7 +209,7 @@ def main():
log_var_and_grad_summaries(logger, rencoder.decode_w, global_step, "Decode_W")
log_var_and_grad_summaries(logger, rencoder.decode_b, global_step, "Decode_b")

-total_epoch_loss += loss.data[0]
+total_epoch_loss += loss.item()
denom += 1

#if args.aug_step > 0 and i % args.aug_step == 0 and i > 0:
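The recurring change in this PR — replacing `loss.data[0]` with `loss.item()` — tracks the PyTorch 0.4 API: reductions like the MSE loss now return a 0-dimensional tensor, which can no longer be indexed, and `.item()` is the sanctioned way to extract the Python scalar. A minimal sketch of the before/after behavior (the variable names mirror the training loop but are otherwise illustrative):

```python
import torch

# A 0-dim tensor, like the value returned by a reduction (e.g. an MSE loss).
loss = torch.tensor(0.25)

# Pre-0.4 code read the scalar as loss.data[0]; on PyTorch >= 0.4,
# indexing a 0-dim tensor raises IndexError instead.
try:
    _ = loss.data[0]
except IndexError:
    pass  # expected on modern PyTorch

# The replacement: .item() returns a plain Python float, detached
# from the autograd graph, so accumulating it does not retain history.
total_epoch_loss = 0.0
total_epoch_loss += loss.item()
assert isinstance(total_epoch_loss, float)
```

Besides fixing the crash, `.item()` avoids a subtle memory issue: accumulating the tensor itself (rather than a Python float) would keep each iteration's computation graph alive.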
8 changes: 4 additions & 4 deletions test/test_model.py
@@ -31,7 +31,7 @@ def test_CPU(self):
loss = loss / num_ratings
loss.backward()
optimizer.step()
-print('[%d, %5d] loss: %.7f' % (epoch, i, loss.data[0]))
+print('[%d, %5d] loss: %.7f' % (epoch, i, loss.item()))

def test_GPU(self):
print("iRecAutoEncoderTest Test on GPU started")
@@ -56,7 +56,7 @@ def test_GPU(self):
loss = loss / num_ratings
loss.backward()
optimizer.step()
-total_epoch_loss += loss.data[0]
+total_epoch_loss += loss.item()
denom += 1
print("Total epoch {} loss: {}".format(epoch, total_epoch_loss/denom))

@@ -81,7 +81,7 @@ def test_CPU(self):
loss = loss / num_ratings
loss.backward()
optimizer.step()
-print('[%d, %5d] loss: %.7f' % (epoch, i, loss.data[0]))
+print('[%d, %5d] loss: %.7f' % (epoch, i, loss.item()))
if i == 5: # too much compute for CPU
break

@@ -108,7 +108,7 @@ def test_GPU(self):
loss = loss / num_ratings
loss.backward()
optimizer.step()
-total_epoch_loss += loss.data[0]
+total_epoch_loss += loss.item()
denom += 1
print("Total epoch {} loss: {}".format(epoch, total_epoch_loss / denom))
