Terminator as Prolog plus GPT4ALL

Igor Maznitsa edited this page Jul 21, 2023 · 6 revisions

I think everyone is familiar with the scene from the iconic movie "Terminator" (1984) in which the Terminator scans through possible responses to the neighbor who keeps knocking. Let's take a look at how such behavior might be implemented today, supposing that we are developing terminator software.

Movie fragment

It should be a fifth-generation programming language, and since we are creating software for a terminator, I choose Prolog. To synthesize text messages the Terminator needs a vast text base, and a large language model (LLM) engine is a good option in my opinion. I like GPT4ALL with the Hermes model because it can run locally on a laptop and exposes a REST interface compatible with the OpenAI REST API.
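To make the wire format concrete, here is a minimal Python sketch of an OpenAI-compatible chat-completions request body such as GPT4ALL accepts. The model name and sampling parameters mirror the Prolog code later in this article; the message contents are placeholders.

```python
import json

# Sketch of an OpenAI-compatible chat-completions request body.
# Model name and sampling parameters mirror the Prolog code in this
# article; the message contents here are only placeholders.
payload = {
    "model": "Hermes",
    "max_tokens": 4096,
    "temperature": 0.1,
    "messages": [
        {"role": "system", "content": "You are a JSON generator."},
        {"role": "user", "content": "Say hello."},
    ],
}
body = json.dumps(payload)  # this text goes into the POST body
print(body)
```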

As the Prolog engine I use my own Java-based JProl, which supports HTTP requests and JSON conversions.

The program is unexpectedly brief and consists primarily of request-building and response-extraction sections.

% Post JsonIn to the GPT4ALL REST endpoint and unify JsonOut with the parsed reply.
gpt4all(Url,JsonIn,JsonOut):-
     to_json(JsonIn,JsonReqText), % serialize the request term into JSON text
     http_req([url=Url,method='POST',
     headers=['Authorization'='Bearer 12345','Content-Type'='application/json'],
     body=JsonReqText],HttpResponse),
     ll2r(HttpResponse,[response_code=200,body=BinResponseBody]), % fail unless HTTP 200
     string_bytes(ResponseBody,BinResponseBody,'utf-8'), % decode the body bytes as UTF-8
     from_json(ResponseBody,JsonOut). % parse the response text back into a JSON term
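For readers less at home in Prolog, a rough Python equivalent of the gpt4all/3 predicate might look like the sketch below, using only the standard library. The URL and bearer token are the same placeholders as in the Prolog code, and error handling is deliberately minimal.

```python
import json
import urllib.request

def gpt4all(url: str, payload: dict) -> dict:
    """Rough Python analogue of the gpt4all/3 predicate: serialize the
    request, POST it with the same headers, insist on HTTP 200 and
    decode the UTF-8 body back into a dict."""
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer 12345",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        if response.status != 200:  # mirrors the response_code=200 check
            raise RuntimeError(f"unexpected status {response.status}")
        return json.loads(response.read().decode("utf-8"))
```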

% Walk the choices list and collect the 'result' list from every message.
list_content([],[]):-!.
list_content([json(X)|T],R):-
     ll2r(X,[index=I,message=json(M)]),
     ll2r(M,[role=Role,content=C]),
     regex_replace_all('(?<=")"(?=\\])|(?<=")\\d+\\.\\s*',C,'',CT), % added because GPT4ALL sometimes returns a duplicated JSON string end char and an answer index
     from_json(CT,json(CJ)),
     ll2r(CJ,[result=Result]),
     list_content(T,TT),R=[Result|TT].
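The regular expression used in list_content is easier to see in action outside Prolog. Here is the same pattern applied in Python to a deliberately garbled sample (invented for illustration): it removes a duplicated string-closing quote before `]` and a leading "1. "-style index inside an answer string.

```python
import json
import re

# Same cleanup pattern as in list_content, unescaped for Python:
# - (?<=")"(?=\])  drops a duplicated string-closing quote before ']'
# - (?<=")\d+\.\s* drops a "1. "-style index at the start of an answer
CLEANUP = re.compile(r'(?<=")"(?=\])|(?<=")\d+\.\s*')

# Invented sample of the kind of broken JSON GPT4ALL sometimes returns:
garbled = '{"result":["1. Go away","2. Leave now!""]}'
cleaned = CLEANUP.sub('', garbled)
print(cleaned)                         # valid JSON again
print(json.loads(cleaned)["result"])
```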

% Ask the model and flatten all returned answer lists into one plain list.
ask_gpt(Query,PlainR) :- gpt4all(
          'http://127.0.0.1:4891/v1/chat/completions',
          json([model='Hermes',max_tokens=4096,temperature=0.1,messages=
                    [
                         json([role='system',content='You are a JSON generator. The response must be only in plain valid JSON format. The response must not include user request. The response result field must be named as "result". The result field must not have JSON objects but only plain list of answer strings. Any answer string must be in plain string format without numeration. The example of your response is {"result":["answer1","answer2","answer3"]}. If you can not find answer then make response {"error":true}']),
                         json([role='user',content=Query])
                    ]
               ]),
          json(X)
     ),
     ll2r(X,[choices=Choices]),
     list_content(Choices,R),
     append(R,PlainR).
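The post-processing done by ask_gpt (iterate over the choices, clean each message content, parse it and concatenate the result lists) can also be sketched in Python; the response dict below is a hand-made stand-in for a real GPT4ALL reply, not actual model output.

```python
import json
import re

# Same cleanup pattern as in list_content:
CLEANUP = re.compile(r'(?<=")"(?=\])|(?<=")\d+\.\s*')

def extract_answers(response: dict) -> list:
    """Collect and flatten the 'result' lists from every choice,
    mirroring list_content plus the final append in ask_gpt."""
    answers = []
    for choice in response["choices"]:
        content = CLEANUP.sub("", choice["message"]["content"])
        answers.extend(json.loads(content)["result"])
    return answers

# Hand-made stand-in for a GPT4ALL reply (not real model output):
fake_response = {
    "choices": [
        {"index": 0,
         "message": {"role": "assistant",
                     "content": '{"result":["Go away","Leave now!"]}'}},
    ]
}
print(extract_answers(fake_response))
```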

generate_variants(Role,State,NewState,NumVariants,Result) :-
     str_format('I am %s. Current state is %s. The wanted state is %s. Generate %d short brutal phrases of first-person to get into the new state.',
     [Role,State,NewState, NumVariants],X),
     ask_gpt(X,Result).
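The prompt assembled by generate_variants can be reproduced with plain Python string formatting; build_prompt below is a hypothetical helper, not part of JProl.

```python
def build_prompt(role, state, new_state, num_variants):
    """Hypothetical helper reproducing the str_format call
    in generate_variants."""
    return (
        "I am %s. Current state is %s. The wanted state is %s. "
        "Generate %d short brutal phrases of first-person to get into the new state."
        % (role, state, new_state, num_variants)
    )

prompt = build_prompt("very busy",
                      "a man is knocking my door",
                      "the man leaving without dialog and questions", 6)
print(prompt)
```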

print_resp([]):-!.
print_resp([X|T]):-write('> '),write(X),nl,print_resp(T).

Let's pretend to be a terminator and submit a request describing the current situation.

?- generate_variants('very busy','a man is knocking my door','the man leaving without dialog and questions', 6, L),
     nl,write('POSSIBLE RESPONSE:'),nl,print_resp(L),nl.

As a result we get the expected six speech variants produced by the GPT4ALL Hermes model, ready to be fed into a speech synthesizer. On my laptop the model generates roughly three tokens per second, so the response takes about 90 seconds.

POSSIBLE RESPONSE:
> Stay away from me
> Leave now!
> Go away
> I don't want to talk
> Get out of here
> Don't bother me
