
Commit

Move to cuda unconditionally so pp-only run works
wconstab committed Feb 9, 2024
1 parent e1b61c3 commit b382345
Showing 1 changed file with 2 additions and 0 deletions.
torchtrain/parallelisms/parallelize_llama.py
@@ -185,4 +185,6 @@ def parallelize_llama(model, world_mesh, parallel_dims, args):

 rank0_log("Applied FSDP to the model...")

+# redundant if FSDP is used, but ensure the model is on device consistently regardless of which parallelisms were used
+model.cuda()
 return model
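
For context, here is a minimal sketch of the device-placement logic this change addresses; it is not the repository's code. The `parallelize` function, the `dp_enabled` flag, and the CPU-only guard are illustrative assumptions. In the actual commit, model.cuda() is simply called unconditionally after the (possibly skipped) FSDP wrapping, so a pipeline-parallel-only run still ends up on the GPU.

import torch
import torch.nn as nn


def parallelize(model: nn.Module, dp_enabled: bool) -> nn.Module:
    # Sketch only: in parallelize_llama, FSDP wrapping happens when data
    # parallelism is enabled and already places parameters on the GPU,
    # which is why the move below is redundant in that case.
    if dp_enabled:
        pass  # FSDP wrapping would occur here in the real code path
    # Unconditional move: with pipeline parallelism only (no FSDP), the
    # model would otherwise remain on CPU.
    if torch.cuda.is_available():  # guard only so this sketch runs without a GPU
        model.cuda()
    return model


if __name__ == "__main__":
    m = parallelize(nn.Linear(8, 8), dp_enabled=False)
    print(next(m.parameters()).device)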
