ChatGPT worksheet contexts


Tim Gray

Jan 15, 2024, 5:10:20 PM
to BBEdit Talk
What is the recommended way of including a context while using a ChatGPT worksheet? Just start off with it as your first line? Something else?

Rich Siegel

Jan 15, 2024, 5:27:28 PM
to BBEdit Talk
On 15 Jan 2024, at 17:09, Tim Gray wrote:

> What is the recommended way of including a context while using a ChatGPT worksheet? Just start off with it as your first line? Something else?

There's not currently a mechanism for adding (or specifying, or changing) context. If this is important to you, please let us know via email to "sup...@barebones.com".

Thanks,

R.

--
Rich Siegel
Bare Bones Software, Inc.
<sie...@barebones.com> <https://www.barebones.com/>

Someday I'll look back on all this and laugh... until they sedate me.

Bruce Van Allen

Jan 15, 2024, 7:14:11 PM
to BBEdit Talk
As one kind of context, I describe something or write out a code excerpt a la StackOverflow, and then ask a question about it.

Example: select this and press Enter/Control-Return:

The programming language Raku has a Supply construct, which provides the means to iterate over a variety of collections and sequences. What other programming languages have similar features?

The answer arrives a few moments later (you can see the spinning activity indicator in the bottom bar):

> Several programming languages have similar features to Raku's Supply construct. Some of them include:

> 1. Python: Python has a similar concept called ...

In some cases my description has taken multiple paragraphs, and often the first answer misses something, so I refer back or restate and query again.



— Bruce

_bruce__van_allen__santa_cruz_ca_

James Reynolds

Jan 16, 2024, 11:54:32 AM
to bbe...@googlegroups.com
>> There's not currently a mechanism for adding (or specifying, or changing) context. If this is important to you, please let us know via email to "sup...@barebones.com".

So is the entire worksheet sent, the line the cursor is on, or the selected text?

> For some cases my description has taken multiple paragraphs, and often the first answer misses something so I refer back or re-state and query again.

It would be amazing to be able to play with the entire context. Several months ago when I was playing with AI a lot, I found one app that allowed me to "fix" the AI response by editing it and then sending the "fixed" response with my new question. With the censored AI models, this often allowed me to bypass the censorship by removing the "As an AI language model..." text and replacing it with the actual answer. This fooled the AI into continuing to talk about building bombs. Of course, I only did this to see if I could because I could just Google how to build a bomb if I really wanted to. But my point is that being able to edit the full context is useful.

In fact, this could be a powerhouse AI tool if it worked with local open-source AI models and could be configured with chat templates (see https://docs.mistral.ai/models/). For example, that link shows that Mistral's chat template is as follows. (I'm not even sure how to use this template; I just know it's Mistral's.)

```
<s>[INST] Instruction [/INST] Model answer</s>[INST] Follow-up instruction [/INST]
```
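For illustration, here's one way that template might get filled in for a multi-turn exchange. This is just my own sketch of how the pieces seem to fit together, not anything from the Mistral docs, so check those for the authoritative format:

```python
# Sketch: assemble a Mistral-style prompt string from alternating
# instructions and model answers, based on my reading of the template
# above. (Assumption, not verified against Mistral's docs.)
def mistral_prompt(instructions, answers):
    prompt = "<s>"
    for i, inst in enumerate(instructions):
        prompt += f"[INST] {inst} [/INST]"
        if i < len(answers):
            prompt += f" {answers[i]}</s>"
    return prompt

print(mistral_prompt(["Instruction", "Follow-up instruction"], ["Model answer"]))
# <s>[INST] Instruction [/INST] Model answer</s>[INST] Follow-up instruction [/INST]
```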

Obviously, as a text editor, BBEdit wouldn't do anything other than put that text in the window so that the user could then edit it how they wanted. I'm not sure that this is really a path that should be pursued though because they're starting to make mixed AI models that can do audio, images, and text and I don't see how BBEdit would do audio (images are just paths to the files). I guess as just a text editor it would be nice to have a local copy of the chats rather than having them all stored on OpenAI's server or having them be ephemeral, which is the case when using AI cli apps.

Which also makes me think that if you're going to support local AI models you might want to have more preferences than just the url to the API. You might want to run them from a cli tool like llama.cpp or ollama, which would mean you'd need to be able to change the cli flags passed to the tools. This is probably a good reason to not support local AI models or anything via cli since it just means bloat and losing focus.

James Reynolds

Rich Siegel

Jan 16, 2024, 11:56:42 AM
to bbe...@googlegroups.com
On 16 Jan 2024, at 11:54, James Reynolds wrote:

>>> There's not currently a mechanism for adding (or specifying, or changing) context. If this is important to you, please let us know via email to "sup...@barebones.com".
>
> So is the entire worksheet sent, the line the cursor is on, or the selected text?

If there is no selection, the worksheet sends the line containing the insertion point. If there is a selection, the selected text is sent.

> Which also makes me think that if you're going to support local AI models you might want to have more preferences than just the url to the API. You might want to run them from a cli tool like llama.cpp or ollama, which would mean you'd need to be able to change the cli flags passed to the tools. This is probably a good reason to not support local AI models or anything via cli since it just means bloat and losing focus.

At that point I think you could usefully employ a Shell Worksheet. :-)

Bruce Van Allen

Jan 16, 2024, 12:40:21 PM
to BBEdit Talk
BB folk could describe this better than I can, but a given worksheet’s queries and answers become a cumulative record that the AI continues to be “aware” of. So I can refer back to an earlier answer on the sheet.
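My rough mental model (a hypothetical sketch, not BBEdit's actual implementation) is that chat-style APIs keep context by resending a growing list of messages with every new query, something like:

```python
# Sketch of how conversational context accumulates in a chat-style API:
# each exchange appends to a running message list, and the whole list
# is resent with each new query. (Hypothetical; not BBEdit's code.)
history = []

def ask(question, get_answer):
    history.append({"role": "user", "content": question})
    answer = get_answer(history)  # in real life, a chat-completions API call
    history.append({"role": "assistant", "content": answer})
    return answer

# Fake backend for illustration:
ask("What is Raku's Supply?", lambda h: "An asynchronous data stream.")
ask("What did I just ask about?",
    lambda h: "You asked about: " + h[0]["content"])
```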

I now have several sheets going on different topics.

Coders' note: I often have to correct what it’s told me and ask it to try again.

#####
[amidst a longer conversation:]

Q: You said the subroutine xxxx() in the programming language YYY throws if fed a string rather than a number. That is incorrect, that routine simply issues a warning and returns an undefined value.

A: You are correct. I apologize for my error. The subroutine xxxx() in the programming language YYY issues a warning and returns an undefined value. However, the subroutine aaa() depends on receiving a defined argument from xxxx(), and throws if it doesn’t; therefore a test is needed for situations when xxxx() is provided an undefined value. Thank you for helping me correct my answer.
#####

My practice has been to select only my latest query, which might include a code excerpt, and press Control-Return (small keyboard with no separate Enter key). Rich just explained how it works: If there is no selection, the worksheet sends the line containing the insertion point. If there is a selection, the selected text is sent.

A few times, when I gave it a whole code module, it paused and then returned saying it has a character limit (4K) and can't handle what I sent. I’m still poking at that one to understand the behavior better. It might vary with the GPT version.
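One workaround I've considered (my own sketch, not a BBEdit feature) is splitting a long module into pieces under the limit before sending each one:

```python
# Sketch: split text into chunks of at most `limit` characters,
# breaking on line boundaries (a single over-long line stays whole).
def split_under_limit(text, limit=4000):
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```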

Mostly I’m using it for code review and test generation, rather than, say, writing something new. It’s kind of like having a comp-sci intern doing the code review: they "know" all kinds of stuff, but they’re still learning which are the authoritative examples for applying code to a real-life situation. So I clarify my code (and learn) while reviewing the code-reviewer’s review :)

— Bruce

_bruce__van_allen__santa_cruz_ca_

James Reynolds

Jan 16, 2024, 1:02:41 PM
to bbe...@googlegroups.com
>> Which also makes me think that if you're going to support local AI models you might want to have more preferences than just the url to the API. You might want to run them from a cli tool like llama.cpp or ollama, which would mean you'd need to be able to change the cli flags passed to the tools. This is probably a good reason to not support local AI models or anything via cli since it just means bloat and losing focus.
>
> At that point I think you could usefully employ a Shell Worksheet. :-)

Wow. I made a big long email saying that Shell Worksheets weren't interactive, and then I thought, what if I pressed control-return when it was waiting for user input? And it worked! It's not as nice as the ChatGPT worksheet, but it's still fun to play with and easier to work with than the Terminal, that's for sure.


```
llama.cpp -m /Users/james/Downloads/dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf -i --interactive-first
*SKIPPING PAGES OF DEBUGGING OUTPUT*

You are an unhelpful AI assistant. You will answer briefly when asked questions. Sometimes you wont answer the question but you will tell the user random facts about space. Here is the first question.\
\
Question: Who was the first president of the United States of America?

Unhelpful but entertaining response: Did you know that there are over 200 billion stars in our galaxy, the Milky Way? That's more than enough to create a unique night sky for each person on Earth! Now let me remind you... The first president of the United States was George Washington.
```

If you have an M-series Mac, you can do this. Llama.cpp comes from here: https://github.com/ggerganov/llama.cpp. In this case my `llama.cpp` is in /usr/local/bin and points to the `main` binary in the llama.cpp project. I got the model from here: https://huggingface.co/TheBloke/dolphin-2.6-mistral-7B-dpo-laser-GGUF/tree/main.

One note: it looks like the default cwd is /. When I ran llama.cpp that way, I got really strange output and had to `cd ~` to fix it. Here's the weird output.


```
main -m /Users/james/Downloads/dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf -p "What is your name?"
Failed to open logfile 'main.log' with error 'Read-only file system'
[1705425385] Log start
[1705425385] Cmd: /Users/james/.pkgx/github.com/ggerganov/llama.cpp/v1645.0.0/bin/llama.cpp -m /Users/james/Downloads/dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf -p "What is your name?"
***skipping several pages of debugging output***
[1705425387] embd_inp.size(): 6, n_consumed: 0
[1705425387] eval: [ '':1, ' What':1824, ' is':349, ' your':574, ' name':1141, '?':28804 ]

[1705425387] n_past = 6
[1705425387] sampled token: 13: '
'
[1705425387] last: [ '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':1, ' What':1824, ' is':349, ' your':574, ' name':1141, '?':28804, '':13 ]
[1705425387] n_remain: -2
[1705425387] eval: [ '':13 ]
```

James Reynolds