I took a look at the code, and the following function should do the trick:
def split_prompt(text, split_length):
    """Split `text` into numbered parts, each wrapped in instructions for the model."""
    if split_length <= 0:
        raise ValueError("Max length must be greater than 0.")

    # Ceiling division: number of chunks needed to cover the whole text.
    num_parts = -(-len(text) // split_length)
    file_data = []

    for i in range(num_parts):
        start = i * split_length
        end = min((i + 1) * split_length, len(text))

        if i == num_parts - 1:
            # Last part: tell the model it can start processing the full request.
            content = f'[START PART {i + 1}/{num_parts}]\n' + text[start:end] + f'\n[END PART {i + 1}/{num_parts}]'
            content += '\nALL PARTS SENT. Now you can continue processing the request.'
        else:
            # Intermediate parts: ask the model to acknowledge and wait for the rest.
            content = f'Do not answer yet. This is just another part of the text I want to send you. Just receive and acknowledge as "Part {i + 1}/{num_parts} received" and wait for the next part.\n[START PART {i + 1}/{num_parts}]\n' + text[start:end] + f'\n[END PART {i + 1}/{num_parts}]'
            content += f'\nRemember not to answer yet. Just acknowledge you received this part with the message "Part {i + 1}/{num_parts} received" and wait for the next part.'

        file_data.append({
            'name': f'split_{str(i + 1).zfill(3)}_of_{str(num_parts).zfill(3)}.txt',
            'content': content
        })

    return file_data
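For reference, a quick usage sketch (the file name and chunk size here are just placeholders, not anything from this repo):

```python
# Hypothetical usage: split a long prompt read from disk into ~15k-character parts
# and write each part to its own file.
with open("long_prompt.txt", "r", encoding="utf-8") as f:
    long_prompt = f.read()

parts = split_prompt(long_prompt, split_length=15000)
for part in parts:
    with open(part["name"], "w", encoding="utf-8") as f:
        f.write(part["content"])
```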
Are there any plans to find a way to use this with the OpenAI API, rather than the ChatGPT frontend?
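Not speaking for the maintainers, but as a rough sketch the same split parts could be fed through the OpenAI API by sending them as successive messages in one conversation (assuming the official `openai` Python package, v1.x, with `OPENAI_API_KEY` set; the model name and `long_prompt` from the example above are placeholders):

```python
# Hypothetical sketch: send each part as a user message in a single conversation,
# then read the model's answer after the final part has been sent.
from openai import OpenAI

client = OpenAI()
messages = []

for part in split_prompt(long_prompt, split_length=15000):
    messages.append({"role": "user", "content": part["content"]})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})

print(reply)  # the answer produced after the last part
```

With the API you arguably don't need the "acknowledge and wait" wrapper at all, since you control the full message history and could just concatenate the parts yourself (subject to the model's context limit); the loop above simply mirrors what the frontend workflow does.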