This repository has been archived by the owner on Dec 6, 2023. It is now read-only.

Commit

update: build version
biswaroop1547 committed Jul 28, 2023
1 parent c7de6f3 commit 3c8fc57
Showing 2 changed files with 5 additions and 3 deletions.
2 changes: 1 addition & 1 deletion in cht-llama-v2/build.sh

@@ -1,6 +1,6 @@
 #!/bin/bash
 set -e
-export VERSION=1.0.0
+export VERSION=1.0.1

 IMAGE=ghcr.io/premai-io/chat-llama-2-7b-gpu
 docker buildx build ${@:1} \
6 changes: 4 additions & 2 deletions in cht-llama-v2/models.py

@@ -32,7 +32,6 @@ def embeddings(cls, text) -> None:
         pass

     @abstractmethod
-    @staticmethod
     def stitch_prompt(messages: list) -> str:
         pass

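For context on the first hunk: stacking `@staticmethod` underneath `@abstractmethod` does not yield a working abstract static method; when both are wanted, `@staticmethod` must be the outermost decorator, and dropping it (as this commit does) also resolves the issue. A minimal sketch of the abstract-method pattern, assuming a hypothetical base class and subclass (only `stitch_prompt` appears in the diff):

```python
from abc import ABC, abstractmethod

class BasePromptModel(ABC):
    # Hypothetical base class; the real models.py defines its own hierarchy.
    @abstractmethod
    def stitch_prompt(self, messages: list) -> str:
        ...

class LlamaPrompt(BasePromptModel):
    def stitch_prompt(self, messages: list) -> str:
        # Join role/content pairs into a single prompt string.
        return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

prompt = LlamaPrompt().stitch_prompt([{"role": "user", "content": "Hi"}])
```

Instantiating `BasePromptModel` directly raises `TypeError`, since `stitch_prompt` is abstract.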
@@ -68,7 +67,10 @@ def generate(
             do_sample=kwargs.get("do_sample", True),
             stop_sequence=stop[0] if stop else None,
             stopping_criteria=cls.stopping_criteria(stop, prompt, cls.tokenizer),
-        )[0]["generated_text"].rstrip(stop[0] if stop else "")
+        )[0]["generated_text"]
+        .rstrip(stop[0] if stop else "")
+        .rsplit(".", 1)[0]
+        .strip()
         ]

     @classmethod
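The effect of the new chained calls in the second hunk can be sketched in isolation (`postprocess` is a hypothetical helper; the diff inlines this chain directly in `generate`):

```python
def postprocess(generated_text: str, stop=None) -> str:
    # Mirrors the chain added in the diff:
    # 1. strip trailing stop-sequence characters (note: str.rstrip treats its
    #    argument as a set of characters to remove, not a literal suffix),
    # 2. cut everything after the last "." to drop an unfinished sentence,
    # 3. trim surrounding whitespace.
    text = generated_text.rstrip(stop[0] if stop else "")
    return text.rsplit(".", 1)[0].strip()
```

Note that `rsplit(".", 1)[0]` also removes the final period itself, and that the `rstrip` character-set behavior means any trailing characters from the stop string are stripped, not just the exact sequence.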
