api/curl used to start OpenAI chat access to LLM


Thomas McGuire

Sep 21, 2025, 11:54:31 PM
to fo...@jsoftware.com
Since Bill Lam was so nice in putting together the api/curl FFI for me, I thought I would share what I was trying to do with it.

Using openai-c (which I found on GitHub: https://github.com/LunaStev/openai-c) as a template, and reusing almost all of Bill’s getinmemory.ijs test code (GitHub: https://github.com/jsoftware/api_curl), I only needed to add a few lines to handle the curl headers and the JSON request body.
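For context, those few extra lines attach two HTTP headers (Content-Type and Authorization) and POST a JSON body in the Chat Completions shape. The request structure built in the code below serializes to roughly this (the prompt and model values here are just examples):

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Tell me something interesting about space"}
  ],
  "temperature": 0.7
}
```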

So here is the code for the start of an OpenAI API. I used an object-oriented design since most of the examples in Python use an object-oriented approach:

coclass 'OpenAI'

require 'convert/pjson'
load 'api/curl'
coinsert 'jcurl'

NULL =: <0

NB. create: y is the endpoint URL; empty y defaults to OpenAI's chat endpoint
create =: 3 : 0
if. 0 = #y do.
  url =: 'https://api.openai.com/v1/chat/completions'
else.
  url =: y
end.
curl_global_init <CURL_GLOBAL_ALL
data_1 =: ''  NB. buffer that accumulates the response body
)

destroy =: 3 : 0
curl_global_cleanup ''
codestroy''
)

NB. callback invoked by curl as each chunk of the response arrives
cdcallback=: 3 : 0
y=. 15!:17''
if. 4=#y do. writedata y end.
)

NB. append a received chunk to the buffer data_<userp>
writedata=: 3 : 0
'data size nmemb userp'=. y
rsize=. size*nmemb
name=. 'data_',":userp
(name)=: name~, memr data,0,rsize,2
rsize
)

NB. chat with model
chatWithModel =: 3 : 0
'prompt model' =. y

NB. set callback with local scope variable
f=. [: 15!:13 (IFWIN#'+') , ' x' $~ +:@>:

hcurl =. curl_easy_init''
if. hcurl = 0 do. echo 'curl init failed' return. end.

NB. build the HTTP header list
headers =: <0
headers =: curl_slist_append (0);'Content-Type: application/json'
headers =: curl_slist_append headers;'Authorization: Bearer 1'  NB. put your API key after Bearer

NB. build the JSON request body
messages =. <('role';'user'),:'content';prompt
rt =. ('model';model),('messages';<messages),:'temperature';0.7
tjson =: enc_pjson_ rt

res =. curl_easy_setopt_str hcurl;CURLOPT_URL;setopt_variadic, <url
if. res~:CURLE_OK do. echo memr 0 _1,~ curl_easy_strerror <res end.
res =. curl_easy_setopt_str hcurl;CURLOPT_POSTFIELDS;setopt_variadic,<tjson
if. res~:CURLE_OK do. echo memr 0 _1,~ curl_easy_strerror <res end.
res =. curl_easy_setopt_ptr hcurl;CURLOPT_HTTPHEADER;setopt_variadic,<<headers
if. res~:CURLE_OK do. echo memr 0 _1,~ curl_easy_strerror <res end.
res=. curl_easy_setopt hcurl; CURLOPT_FOLLOWLOCATION; setopt_variadic, <1
if. res~:CURLE_OK do. echo memr 0 _1,~ curl_easy_strerror <res end.
res=. curl_easy_setopt hcurl; CURLOPT_WRITEFUNCTION; setopt_variadic, <(f 4)
if. res~:CURLE_OK do. echo memr 0 _1,~ curl_easy_strerror <res end.
res=. curl_easy_setopt hcurl; CURLOPT_WRITEDATA; setopt_variadic, <1
if. res~:CURLE_OK do. echo memr 0 _1,~ curl_easy_strerror <res end.
res=. curl_easy_perform <hcurl
if. res~:CURLE_OK do. echo memr 0 _1,~ curl_easy_strerror <res end.

curl_slist_free_all <headers
curl_easy_cleanup <hcurl

NB. decode JSON response
parsed =: dec_pjson_ data_1
echo parsed
)

NB. chat api call
NB. usage:
NB.   chat__<instance_name> '<prompt_string>';'<model_name>'
chat =: 3 : 0
data_1 =: ''  NB. reset buffer so repeated calls do not accumulate responses
chatWithModel y
data_1
)



With that in place you can connect directly to ChatGPT if you have an account. You will need to get an API key and edit the ‘headers’ portion of the code appropriately. My interest was to run a local LLM on my MacBook Pro. Since I ultimately want to experiment with the Qwen Next model, I chose a smaller Qwen model that would fit on my laptop. So first I used the DrDeek repo of the "deepseek-coder-33b-instruct" model (find it here: https://huggingface.co/DrDeek/deepseek-coder-33b-instruct-Q4_K_M-GGUF). To run it I used llama.cpp’s llama-server command, straight from the example on Hugging Face. This presumes you have llama.cpp up and running on your system.

In a terminal window I ran:
llama-server --hf-repo DrDeek/deepseek-coder-33b-instruct-Q4_K_M-GGUF --hf-file deepseek-coder-33b-instruct-q4_k_m.gguf -c 2048

You need a fast internet connection because this will download a 19.9 GB model if it is not already on your system. llama.cpp can be found here: https://github.com/ggml-org/llama.cpp
If you have a Mac you can use Homebrew (brew install llama.cpp) to install it without having to build and compile it yourself.

Usage of the above code, assuming you have saved it as jopenai.ijs in the temp directory of your J user directory:


load '~temp/jopenai.ijs'

mchat =: 'http://127.0.0.1:8080/v1/chat/completions' conew 'OpenAI'
chat__mchat 'Tell me something interesting about space';'deepseek-coder-33b-instruct'


That code will ultimately return a big boxed structure of the JSON returned from the LLM. There is one big problem I don’t understand: when I first start up both the llama-server and this program, the first access is an error. If I repeat the command, it then works without a problem.
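To pull just the assistant's reply out of that boxed structure, something like the following sketch may help. It assumes dec_pjson_ (from convert/pjson) renders a JSON object as an n-by-2 boxed table of key/value pairs and a JSON array as a boxed list; inspect your own parsed output first, and note that the helper names (getkey, extractReply) are mine, not part of any addon.

```j
require 'convert/pjson'

NB. value for key x in a decoded JSON object table y
NB. (assumes y is an n x 2 boxed table: keys in column 0, values in column 1)
getkey =: 4 : '> {: , (({."1 y) i. <x) { y'

NB. raw JSON text -> content string of the first choice
extractReply =: 3 : 0
obj =. dec_pjson_ y
choices =. 'choices' getkey obj     NB. boxed list of choice objects
msg =. 'message' getkey > {. choices
'content' getkey msg
)
```

If the key is absent, the index lookup falls off the end of the table, so for real use you would want to test the result of i. before indexing.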

Why bother with all of this? Well, now that I can access an LLM programmatically from J and receive fairly regular JSON responses, we should be able to use J for “agentic” programming: just have J interpret the JSON responses and perform the actions the LLM says should be done. I think J is ultimately more secure for this type of work, partly because I can isolate J in the Jconsole, or better yet create an application that runs under JHS in user space in a particular user development account. That would put usage behind a username and password and confine it to a single user account.
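As a toy illustration of that last point (the action names here are entirely hypothetical, and a real agent would decode them from the JSON reply), J's select. makes it easy to dispatch only over a whitelist of actions you have decided to allow:

```j
NB. Hypothetical sketch: run only whitelisted actions named by the LLM.
NB. y: action name ; argument string (as extracted from the model's reply)
doAction =: 3 : 0
'name arg' =. y
select. name
case. 'read' do. fread arg                    NB. read a file and return its text
case. 'echo' do. arg                          NB. just return the argument
case. do. 'refused unknown action: ', name    NB. anything else is refused
end.
)
```

Anything the model asks for that is not in the whitelist falls through to the refusal branch, which is the security property you want for agentic use.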

I don’t know if anyone has tried claude-code, but I tried it briefly and was shocked at how much access it had to my user account. I removed it after running it once because I was afraid of possible security problems.

Tom McGuire


bill lam

Sep 22, 2025, 12:50:38 AM
to fo...@jsoftware.com
I didn't try because my MacBook M1 is limited both in RAM and SSD space.

My previous experience with Bearer authentication is that it needs to POST the credentials in JSON format and then receive a token from the server. The token is used in subsequent queries of the session. How did you get the token?

Thomas McGuire

Sep 22, 2025, 1:03:54 AM
to fo...@jsoftware.com
A local LLM ignores the API key, so I don’t need one. I kept it in as a placeholder for trying to connect with ChatGPT.

Tom

Thomas McGuire

Sep 22, 2025, 2:04:57 AM
to fo...@jsoftware.com
So I’m not sure about the token. However, after giving my ChatGPT account some money and setting up a payment method for the API access, I was able to do the following:

Created an API key at ChatGPT: sk-proj-. . .

Added that to the header code in the chatWithModel verb:

headers =: <0
headers =: curl_slist_append (0);'Content-Type: application/json'
headers =: curl_slist_append headers;'Authorization: Bearer ','sk-proj-. . .' NB. your own API key here


Now after reloading the script:


mchat =: 'https://api.openai.com/v1/chat/completions' conew 'OpenAI'

chat__mchat 'Tell me something interesting about space';'gpt-4o'

{
  "id": "chatcmpl-CITpI9fvRtGVSvGayHcbjEj5OANFT",
  "object": "chat.completion",
  "created": 1758520592,
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Certainly! One fascinating aspect of space is the existence of \"rogue planets,\" which are planets that do not orbit a star. Unlike the planets in our solar system, these rogue planets drift through the galaxy independently. Scientists...
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }


chat__mchat 'Can you name the first 8 planets?';'gpt-4o'

{
  "id": "chatcmpl-CITqfuJ1blOaEGhXhWblc1tHtRm7V",
  "object": "chat.completion",
  "created": 1758520677,
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The first eight planets in our solar system, starting from the one closest to the Sun, are:\n\n1. Mercury\n2. Venus\n3. Earth\n4. Mars\n5. Jupiter\n6. Saturn\n7. Uranus\n8. Neptune",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }


I can make subsequent chat requests with no additional setup. I bolded the two chat requests above and provided a partial printout of the JSON returned.


You do need a pay-as-you-go account or a Pro account so that the API keys have access to the chat interface. I print out a lot of stuff with the current script because I am still trying to figure out how to parse the JSON in boxed form with J.


Tom



