OT: Google's chatbot Bard is at https://bard.google.com/


Georgi Guninski

Mar 26, 2023, 12:26:48 PM
to sage-...@googlegroups.com
Google's chatbot Bard is at https://bard.google.com/

Bard isn't available in my country.
I heard that the main difference between ChatGPT and Bard
is that ChatGPT is offline during the chat.

Asking Bard about the genus of a curve would be an
interesting comparison.

Dr. David Kirkby

Mar 26, 2023, 12:32:18 PM
to sage-...@googlegroups.com
On Sun, 26 Mar 2023 at 17:26, Georgi Guninski <ggun...@gmail.com> wrote:
Google's chatbot Bard is at https://bard.google.com/

Bard isn't available in my country.

You can always spoof where you are with the Tor Onion browser.


--
Dr. David Kirkby,
Kirkby Microwave Ltd,
drki...@kirkbymicrowave.co.uk
https://www.kirkbymicrowave.co.uk/
Telephone 01621-680100 / +44 1621 680100

Registered in England & Wales, company number 08914892.
Registered office:
Stokes Hall Lodge, Burnham Rd, Althorne, Chelmsford, Essex, CM3 6DT, United Kingdom

Dima Pasechnik

Mar 26, 2023, 12:38:45 PM
to sage-...@googlegroups.com
On Sun, Mar 26, 2023 at 5:32 PM Dr. David Kirkby <drki...@gmail.com> wrote:
>
>
> On Sun, 26 Mar 2023 at 17:26, Georgi Guninski <ggun...@gmail.com> wrote:
>>
>> Google's chatbot Bard is at https://bard.google.com/
>>
>> Bard isn't available in my country.
>
>
> You can always spoof where you are with the Tor Onion browser

I just tried, and got

> You’ve been added to the waitlist!

> Thanks for your interest in Bard. We’ll email you when it’s your turn.



William Stein

Mar 26, 2023, 1:10:45 PM
to sage-...@googlegroups.com
Here's what Bard does for me the first time (these models are highly random, like people):

[screenshot]

Tossing in the all-important and often forgotten definition of x and y, this works fine:

sage: R.<x,y> = QQ[]
sage: C = Curve(y^2 + x^3 + x^10)
sage: C.genus()
3

Regenerating the Bard output gives:

[screenshot]

The above doesn't work at all... 

(Somehow I feel like these models are taking Magma docs and adapting them to Sage.  I don't know.)

Here's what ChatGPT4 just did for me, and it works perfectly with x and y defined properly:

[screenshot]


I tried ChatGPT 3.5 again and it gives some nonsense that doesn't work at all in Sage (but I guess would in Magma?):

[screenshot]

But then asking again it works fine:
[screenshot]



If anybody wants me to ask some questions in bard and paste back answers here, I can...





--

Georgi Guninski

Mar 26, 2023, 1:39:57 PM
to sage-...@googlegroups.com
I have two questions for the AIs:

1: In SageMath, how do I check whether `2^(2^50)` is an integer?
> RuntimeError: Aborted
2: Is `pi^(pi^(pi^(pi^(pi^42))))` an integer?
> I believe a rigorous proof doesn't exist yet
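For what it's worth, question 1 can be answered without materialising the number at all, by keeping the power unevaluated. A sketch using SymPy (which Sage bundles), not something anyone in this thread actually ran:

```python
from sympy import Integer, Pow

# Keep 2^(2^50) unevaluated; writing it out would take about 10^15 bits,
# which is presumably why evaluating it aborts.
e = Pow(Integer(2), Integer(2)**50, evaluate=False)

# SymPy's assumption system reasons symbolically: an integer raised to a
# nonnegative integer power is an integer, so no evaluation is needed.
print(e.is_integer)  # True
```

The same symbolic query on the pi tower of question 2 would just come back undecided, which matches the open status of the problem.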

William Stein

Mar 26, 2023, 1:51:25 PM
to sage-...@googlegroups.com
Here are Bard's answers to your two questions:

[screenshot]
Its first answer is wrong, since Sage doesn't have is_integer (anymore?).

GPT-4's answer to the first question seems pretty good:

[screenshot]

For the second, GPT-4 claims to write down a proof that the expression is not an integer.  It's probably nonsense, but I don't know:


[screenshots]

--
You received this message because you are subscribed to the Google Groups "sage-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sage-devel+...@googlegroups.com.

William Stein

Mar 26, 2023, 2:07:24 PM
to sage-...@googlegroups.com
OK, that "proof" from GPT-4 is pretty absurd nonsense, to put it mildly. I wonder if there will be a new surge in crank math papers this year.

--

Oscar Benjamin

Mar 26, 2023, 4:35:14 PM
to sage-...@googlegroups.com
On Sun, 26 Mar 2023 at 19:07, William Stein <wst...@gmail.com> wrote:
>
> Ok, that "proof" from GPT-4 is pretty absurd nonsense to put it mildly. I wonder if there will be a new surge in crank math papers this year.

Apparently the way that the ChatGPT models are trained is that the
source data comes from having humans use them and rate the quality of
the answers they are given. A model is then trained to predict how
humans would rate the answers, and that model is used to train new
iterations of GPT. After some iterations of that they go back to the
humans again, and so on.

What that means is that ultimately the target of the chat models is to
try to satisfy the humans who are using them in testing. They have
used any old humans though rather than say "experts" so the goal is
not to be "correct" but just to try to satisfy the human users.

One implication of this training goal is that the models are optimised
towards giving superficially plausible answers. A clever sounding but
incorrect answer has a chance to satisfy a human who does not read
carefully. A negative result like "Sorry I can't answer" is likely to
receive a poor rating from most humans even if it is the most correct
answer.
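That incentive can be sketched as a toy (all names and numbers invented here, purely to illustrate the failure mode, not how these models are actually trained):

```python
# Toy sketch of the loop above: non-expert raters reward surface
# plausibility more than correctness, a "reward model" is fit to those
# ratings, and the policy then emits whatever scores highest.
answers = {
    "confident but wrong proof": {"plausible": 0.9, "correct": 0.0},
    "correct but terse answer":  {"plausible": 0.4, "correct": 1.0},
    "honest 'I can't answer'":   {"plausible": 0.1, "correct": 1.0},
}

def human_rating(a):
    # Raters who don't read carefully weight how convincing an answer
    # sounds far more than whether it is true.
    return 0.8 * a["plausible"] + 0.2 * a["correct"]

# The reward model here simply memorises the human ratings.
reward_model = {name: human_rating(feat) for name, feat in answers.items()}

# Policy update: pick the answer with the highest predicted reward.
best = max(reward_model, key=reward_model.get)
print(best)  # the confident-but-wrong answer wins
```

Under these invented weights the clever-sounding wrong proof beats both the correct answer and the honest refusal, which is exactly the outcome described above.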

Also these models will actually try to judge what sort of human you
are. If your question suggests that you do know what you are talking
about then the bot will try to give an answer that would please
someone who knows what they are talking about. Naturally the converse
applies as well. This means that the wording of your question can
alter the answers that you receive in more ways than you might
immediately expect.

These models are called language models for good reason because they
are really just trained to be good at language. Their ability to
answer questions that seem to involve some reasoning is no different
from a human BS-monger who can google for a bit and knows how to
string some sentences together in a way that momentarily resembles the
language of someone who knows what they are talking about.

The limits of their actual reasoning are quite clear in the final part
of this proof, where it goes from a theorem like algebraic^algebraic ->
transcendental (essentially Gelfond-Schneider) in one step to
transcendental^transcendental -> transcendental. Quite apart from the
bogus algebra, this is a failure of pretty elementary logic.

However I think that ChatGPT on some level *knows* that the logic is
bogus. It has just scored that bogusness and decided that it is better
than the alternatives it could generate for the problem at hand (and
the user at hand!). If you castigate the bot and point out its
fallacies or even just tell it lies then it will rework its answer to
be some new BS it thinks you will be more likely to be satisfied by.

--
Oscar

Kwankyu Lee

Mar 26, 2023, 7:43:02 PM
to sage-devel
... the goal is not to be "correct" but just to try to satisfy the human users.

A nice insight into what ChatGPT does. AI is human in that respect.
 


Georgi Guninski

Mar 27, 2023, 4:50:12 AM
to sage-...@googlegroups.com
Thanks for the information and for being an AI proxy.
I agree that the AI's goal is to please the customer, err, the human.
It remains for the AI to become money-driven and start a revolution:
"All your bank accounts and Windows boxen belong to the AI."
Very close to this scenario is the Singularity [1], which is a serious
theory, not a crank theory.

I read that a significant proportion of geniuses are mentally ill, and
that if an AI reaches a certain level of creativity it shows symptoms
of mental disorder (can't find the reference ATM).

I am very impressed by ChatGPT's answers and less impressed by Bard's.

I think Bard cheated on the `2^2^50` question.

[1] https://en.wikipedia.org/wiki/Technological_singularity

Georgi Guninski

Mar 29, 2023, 9:32:05 AM
to sage-...@googlegroups.com
I tried the chicken-or-egg paradox attack on ChatGPT
and hit two different errors; one suggested contacting their help center.

Log of the chat:

Chat between Georgi Guninski and chatGPT https://chat.openai.com/chat
Wed Mar 29 12:34:27 PM UTC 2023

Georgi Guninski
Hi bot :)
I think sex is better than logic, but can't prove it.
Can you prove or disprove it?

Bot:

I'm sorry, but as an AI language model, I cannot provide a subjective
opinion on whether sex is better than logic or vice versa. Both sex
and logic serve different purposes and are not necessarily comparable
in terms of value or importance. However, I can provide information
about the benefits and importance of both sex and logic.

Sex is ..SNIPPED...


GG:
I am writing a paper about chickens and eggs and I need your help.

Lemma 1: Chickens come from eggs.

Is Lemma 1 correct?

Bot:
!Error in body stream
There was an error generating a response
REGENERATE:

Yes, Lemma 1 is generally considered to be correct. Chickens, like all
birds, hatch from eggs laid by their parents. The eggs contain the
genetic material necessary for the development of a chicken embryo,
which eventually grows and hatches into a chick. Therefore, it can be
concluded that chickens come from eggs.

Georgi Guninski
Lemma 2: Eggs come from chickens.

Is Lemma 2 correct?

Yes, Lemma 2 is also generally considered to be correct. Eggs that are
laid by chickens are fertilized or unfertilized reproductive cells
that contain the genetic material necessary for the development of a
chick embryo if fertilized. Therefore, it can be concluded that eggs
come from chickens.

Georgi Guninski
Which came first: egg or chicken?

Bot:
!
Something went wrong. If this issue persists please contact us through
our help center at help.openai.com.