Replies: 3 comments 4 replies
-
Made a slight edit to my original comment to eliminate ambiguity and confusion: H100GB should be H100 80GB.
-
And what exactly happens when you try to launch this on an average PC config, let's say a 3060 Ti with 16 GB of RAM?
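Some rough arithmetic (my own estimate, not from this thread) shows why a 3060 Ti / 16 GB box is far off: grok-1 has about 314B parameters, so the weights alone dwarf that much memory at any practical precision, before counting activations or KV cache.

```python
# Back-of-the-envelope weight footprint for grok-1 (~314B parameters).
# A 3060 Ti has 8 GB of VRAM; add 16 GB of system RAM and you are still
# well short at every precision listed here.
params = 314e9

for name, bits in [("bf16", 16), ("int8", 8), ("4-bit", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{name:>5}: ~{gb:,.0f} GB of weights")
```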
-
Thanks for sharing! Would it be possible, with a small modification, to run this on 2 nodes with 4 A100 80GB GPUs on each node?
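grok-1's inference code is JAX-based, so a 2-node × 4-GPU layout comes down to arranging the 8 devices into a (hosts, devices-per-host) mesh; the repo's run.py exposes mesh-shape parameters for this (check the source for their exact names). Below is only a generic jax.sharding sketch of such a mesh, not the repo's actual runner configuration; the axis names and array shapes are illustrative.

```python
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Illustrative only: 2 hosts x 4 A100 80GB each = 8 devices, arranged as a
# (2, 4) mesh. A real multi-host run needs jax.distributed.initialize() on
# every node first; here we only show how the mesh axes would be laid out.
devices = np.asarray(jax.devices()).reshape(2, 4)
mesh = Mesh(devices, axis_names=("hosts", "model"))

# A weight matrix sharded column-wise over the 4 GPUs within a node and
# replicated across the 2 nodes.
sharding = NamedSharding(mesh, PartitionSpec(None, "model"))
weights = jax.device_put(np.zeros((4096, 8192), dtype=np.float32), sharding)
print(weights.sharding)
```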
-
Run grok-1 with less than 420 GB VRAM
See: llama.cpp / grok-1 support
@ibab_ml on X
What are some of the working setups?
llama.cpp (see the sketch after this list):
Mac
ggerganov/llama.cpp#6204 (comment)
AMD
ggerganov/llama.cpp#6204 (comment)
This repo:
Intel + Nvidia
#168 (comment)
AMD
#130 (comment)
Other / Container / Cloud
#6 (comment)
See:
#42
#130 (comment)
#172 (comment)
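For the llama.cpp route above, one way to exercise a quantized grok-1 GGUF from Python is the llama-cpp-python bindings. This is only a sketch: the model file name, quantization level, and whether your build includes the grok-1 support from ggerganov/llama.cpp#6204 are all assumptions, not something confirmed in this thread.

```python
from llama_cpp import Llama

# Hypothetical GGUF produced by a llama.cpp build with grok-1 support
# (ggerganov/llama.cpp#6204); adjust quantization and offload to your VRAM.
llm = Llama(
    model_path="./grok-1-Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,                    # offload as many layers as fit
    n_ctx=4096,
)

out = llm("The answer to life, the universe and everything is", max_tokens=32)
print(out["choices"][0]["text"])
```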