This repository has been archived by the owner on Sep 30, 2023. It is now read-only.

Commit

recomment the cuda preprocessor check
ravenscroftj committed Aug 26, 2023
1 parent 215a69b commit a00de2a
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/gptj.cpp
@@ -566,7 +566,7 @@ bool GPTJModel::load_model(std::string fname) {



-//#if defined(GGML_USE_CLBLAST) || defined(GGML_USE_CUBLAS)
+#if defined(GGML_USE_CLBLAST) || defined(GGML_USE_CUBLAS)

if(config.n_gpu_layers > 0){
size_t vram_total = 0;
@@ -603,7 +603,7 @@ bool GPTJModel::load_model(std::string fname) {
spdlog::info("{}: [GPU] total VRAM used: {} MB\n", __func__, vram_total / 1024 / 1024);
}

-//#endif // defined(GGML_USE_CLBLAST) || defined(GGML_USE_CUBLAS)
+#endif // defined(GGML_USE_CLBLAST) || defined(GGML_USE_CUBLAS)

return true;
}

