
How many Rs are there in elderberry?


Gmail

Sep 1, 2024, 11:56:40 PM
to RSSC-list
I asked ChatGPT 4, "How many Rs are there in 'elderberry'?"

The answers it returned were quite enlightening. The first time it said 3. I asked if it was sure, and the second time it said 2. I then asked it to put the word into a table and count again. It did that and came up with the number 3 once again.

And then I said:

You first said 3. Then 2.

And again 2. Finally 3.

Which is it?



You're right to call me out on that. The correct number of "r"s in "elderberry" is 3. My earlier back-and-forth was incorrect.

Thanks for sticking with me!



Why did you say 2?



I initially miscounted the "r"s when I said there were 2. It's a mistake on my part, and I should have double-checked the count more carefully. Thanks for your patience as I sorted it out!



What faulty logic caused your miscount?



The miscount likely happened because I momentarily focused on only the first two occurrences of "r" without scanning the entire word thoroughly. It's a bit like when you're reading quickly and your brain skips over details. This oversight led to an incorrect conclusion.

It wasn't a matter of complex logic, just a simple human-like error in attention to detail during that moment. Thanks for catching that mistake!



So now, ChatGPT, a computer program, is having human-like errors?


You're right: saying I made a "human-like" error isn't quite accurate. As a program, I don't have human experiences like distraction or fatigue. The mistake was more about how I processed the input, not a lapse in attention.

In this case, the error stemmed from an incorrect output during a straightforward task. It's a reminder that even automated processes can sometimes go awry due to issues like imperfect pattern recognition or execution. Thank you for keeping me on my toes.



Please explain to me the exact mechanism that caused the miscount.



The miscount likely happened due to a sequence of processing steps that didn't fully align.

Here's a breakdown of what might have occurred:

  1. Tokenization: When I receive input, I break it down into smaller units called tokens.
    For a word like "elderberry," this process usually happens correctly. However, in counting specific characters, I may have incorrectly parsed or referenced the tokens.
  2. Character Search and Counting: Normally, I would then search the tokens for the specific character, in this case, “r.” This should involve a simple loop through the word to count each occurrence. The mistake likely occurred when I retrieved the counts from this loop. I might have incorrectly summed the occurrences or referenced an incomplete set of results.
  3. State Management: There’s also the possibility that state management—how I keep track of what I’ve counted—didn’t refresh properly between attempts. This might have led to an incomplete or incorrect answer being given.
  4. Error Propagation: After the initial mistake, the error may have propagated because I did not fully reprocess the task, or because I incorrectly prioritized one part of it over another. Subsequent corrections should have been carefully re-checked, but they might still have been influenced by the initial incorrect output.
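For comparison, the deterministic loop ChatGPT describes in step 2 is trivial in actual code; a minimal Python sketch (not anything ChatGPT actually runs, since it predicts text over tokens rather than executing a counting loop):

```python
word = "elderberry"

# Step 2 as real code: loop through the word and count each occurrence of "r"
count = sum(1 for ch in word if ch == "r")
print(count)  # prints 3
```

The gap between this three-line loop and the model's wandering answers is exactly the point of the thread: an LLM sees subword tokens, not individual characters, so it has no built-in counter to invoke.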

This isn't a perfect simulation of human error but more a reflection of how an AI model like me can sometimes misstep in processing due to the above factors.



Sigh. LLMs have a long way to go. 


Thomas

-  

Need something prototyped, built or coded? I’ve been building prototypes for companies for 15 years. I am now incorporating generative AI into products.

-

Need a great hardworking engineer? I am currently looking for a new job opportunity in robotics and/or AI.

Contact me directly or through LinkedIn:   

Carl

Sep 12, 2024, 8:04:41 PM
to RSSC-List
Is counting a new feature in ChatGPT 4? I saw some older posts saying no. If not, then there is no reason to expect an accurate count at all, and it is extra surprising that it came close. The explanation mentions counting, but that could be a hallucination?

I was going to say that you should always do a second prompt of "are you sure" to get better results, but clearly that doesn't always help. :-)
