
ChatGPT in ulm


Jon Ribbens

Dec 28, 2023, 10:12:23 AM
On 2023-12-28, GB <NOTso...@microsoft.invalid> wrote:
> On 28/12/2023 08:42, Jeff Gaines wrote:
>> Does a country have to be a member of/recognise the ICC for its people to
>> be prosecuted for war crimes?
>
> ChatGPT says:

Does anyone agree that we should have a rule that cut'n'pasting
output from ChatGPT and similar is disallowed in ulm, since it's
basically a machine to generate disinformation?

Tim Jackson

Dec 28, 2023, 11:18:07 AM
On Thu, 28 Dec 2023 15:12:21 -0000 (UTC), Jon Ribbens wrote...
>
> Does anyone agree that we should have a rule that cut'n'pasting
> output from ChatGPT and similar is disallowed in ulm, since it's
> basically a machine to generate disinformation?

While ChatGPT clearly can generate incorrect information, I don't think
that's always the case, or necessarily the intention. However, you
could say the same about human posts.

I think ChatGPT and other AI-generated posts should be labelled as such,
which the post in question was. People can then make up their own
minds.

The moderation policy could discourage them, and say that they should be
clearly labelled if used. But it would be difficult to police that.

--
Tim Jackson
ne...@timjackson.invalid
(Change '.invalid' to '.plus.com' to reply direct)

Andy Burns

Dec 28, 2023, 11:20:27 AM
Jon Ribbens wrote:

> GB wrote:
>> Jeff Gaines wrote:
>>
>>> Does a country have to be a member of/recognise the ICC for its people to
>>> be prosecuted for war crimes?
>>
>> ChatGPT says:
>
> Does anyone agree that we should have a rule that cut'n'pasting
> output from ChatGPT and similar is disallowed in ulm, since it's
> basically a machine to generate disinformation?

Well, I don't know about specifically disallowing anything, but I don't
see it as helpful ...

GB

Dec 28, 2023, 11:39:36 AM
It's the only reply the OP has had. That may be because there's nobody
on the NG with the necessary knowledge.

Norman Wells

Dec 28, 2023, 11:51:19 AM
You mean, like Wikipedia, the Daily Mail, homeopaths, eco-warriors,
politicians, cosmetics manufacturers, and people in general, especially
if they have an axe to grind?

GB

Dec 28, 2023, 11:59:46 AM
On 28/12/2023 15:12, Jon Ribbens wrote:
ChatGPT is a bit like dictation software (especially in the early days),
which gets an awful lot right but then goes completely wrong.

On this occasion, do you think ChatGPT was wrong?

Jon Ribbens

Dec 28, 2023, 12:37:31 PM
On 2023-12-28, Tim Jackson <ne...@timjackson.invalid> wrote:
> On Thu, 28 Dec 2023 15:12:21 -0000 (UTC), Jon Ribbens wrote...
>> Does anyone agree that we should have a rule that cut'n'pasting
>> output from ChatGPT and similar is disallowed in ulm, since it's
>> basically a machine to generate disinformation?
>
> While ChatGPT clearly can generate incorrect information, I don't think
> that's always the case, or necessarily the intention. However, you
> could say the same about human posts.

Yes, it doesn't deliberately get things wrong, but then again it doesn't
really try to get them right either. Some humans in the group are the
same, it's true, but the only way to do something about that would be
to moderate for truth, and I think we are right not to try to do that.

My feeling is that "AI" adds little benefit, and if someone actually
wants an "AI" answer to their question it's quicker and easier for
them simply to go and ask one directly.

Jon Ribbens

Dec 28, 2023, 12:39:34 PM
On 2023-12-28, GB <NOTso...@microsoft.invalid> wrote:
I think I probably know, but not with enough certainty to simply post
it without checking. There are multiple people in the group who are
certainly capable of finding out the answer with a high degree of
certainty and posting it with references. But people may be paying
less attention than usual due to the season ;-)

Jon Ribbens

Dec 28, 2023, 12:43:08 PM
I don't know, I didn't read it, except to note that there didn't appear
to be any references included.

The problem is that ChatGPT is designed to provide something that sounds
like an answer, delivered with a high degree of self-confidence, with
little attention paid to whether it's correct or not. It's basically an
automated Norman.

Norman Wells

Dec 28, 2023, 4:02:49 PM
It is the future, so you'd better get used to it.

Glad to be leading the way.

billy bookcase

Dec 28, 2023, 4:04:04 PM

"Norman Wells" <h...@unseen.ac.am> wrote in message
news:kv5nc4...@mid.individual.net...
> On 28/12/2023 15:12, Jon Ribbens wrote:
>> On 2023-12-28, GB <NOTso...@microsoft.invalid> wrote:
>>> On 28/12/2023 08:42, Jeff Gaines wrote:
>>>> Does a country have to be a member of/recognise the ICC for its people to
>>>> be prosecuted for war crimes?
>>>
>>> ChatGPT says:
>>
>> Does anyone agree that we should have a rule that cut'n'pasting
>> output from ChatGPT and similar is disallowed in ulm, since it's
>> basically a machine to generate disinformation?
>
> You mean, like Wikipedia,

The Wikipedia article on the ICC contains no fewer than 363 cited
references; whereas the quoted ChatGPT article has precisely none.
Zilch. Zero.

So had you consulted Wikipedia rather than ChatGPT, you wouldn't
have made such a complete and utter fool of yourself by claiming that
Tom Lehrer was a comedian and actor, rather than, as Wikipedia notes,
a musician, singer-songwriter, satirist and mathematician, complete
with discography, list of publications, and 84 cited references.


bb

Norman Wells

Dec 28, 2023, 4:21:33 PM
On 28/12/2023 21:04, billy bookcase wrote:
> "Norman Wells" <h...@unseen.ac.am> wrote in message
> news:kv5nc4...@mid.individual.net...
>> On 28/12/2023 15:12, Jon Ribbens wrote:
>>> On 2023-12-28, GB <NOTso...@microsoft.invalid> wrote:
>>>> On 28/12/2023 08:42, Jeff Gaines wrote:
>>>>> Does a country have to be a member of/recognise the ICC for its people to
>>>>> be prosecuted for war crimes?
>>>>
>>>> ChatGPT says:
>>>
>>> Does anyone agree that we should have a rule that cut'n'pasting
>>> output from ChatGPT and similar is disallowed in ulm, since it's
>>> basically a machine to generate disinformation?
>>
>> You mean, like Wikipedia,
>
> The Wikipedia article on the ICC contains no fewer than 363 cited
> references; whereas the quoted ChatGPT article has precisely none.
> Zilch. Zero

That's because it is *not* an article but an answer to a question or
specific request. You clearly don't realise it, but it's how ChatGPT works.

> So that had you consulted Wikipedia rather than ChatGPT, you wouldn't
> have made such a complete and utter fool of yourself by claiming that
> Tom Lehrer was a comedian and actor.

Hardly. I said where the comment had come from, and I quoted verbatim
what it said without any further comment.

*I* claimed nothing.

But what the pedant in you is criticising is quite irrelevant when the
question that was asked was who wrote the article that was quoted here,
not what he did in his spare time.

And I note you have no answer to that whatsoever even from Wikipedia.

So, we're left with the uncontested statement that 'The quote is often
attributed to the American ... Tom Lehrer'. Which it almost certainly is.

> Rather than as wikipedia noted
> a musician, singer-songwriter satirist and mathematician. Complete
> with discography, list of publications, and 84 cited references.

But, sadly, none to the matter we were discussing, and none from your
unhelpful input.


Jeff Gaines

Dec 29, 2023, 6:27:51 AM
On 28/12/2023 in message <slrnuor42l.5...@raven.unequivocal.eu> Jon Ribbens wrote:
I'm the OP and I know very little about ChatGPT since it seems it only
allows access if you sign up for it and I'm not keen on that.

Somebody, alleging they were Israeli, posted something I found highly
offensive on Facebook essentially stating that Israel could do what it
liked to Palestinians as Israel was not a member of the ICC so couldn't be
prosecuted.

Seemed odd to me bearing in mind that the ICC didn't exist in 1945
(ish) but that didn't stop the Nuremberg Trials.

--
Jeff Gaines Dorset UK
This joke was so funny when I heard it for the first time I fell off my
dinosaur.

billy bookcase

Dec 29, 2023, 8:55:51 AM

"Norman Wells" <h...@unseen.ac.am> wrote in message
news:kv676q...@mid.individual.net...
> On 28/12/2023 21:04, billy bookcase wrote:
>> "Norman Wells" <h...@unseen.ac.am> wrote in message
>> news:kv5nc4...@mid.individual.net...
>>> On 28/12/2023 15:12, Jon Ribbens wrote:
>>>> On 2023-12-28, GB <NOTso...@microsoft.invalid> wrote:
>>>>> On 28/12/2023 08:42, Jeff Gaines wrote:
>>>>>> Does a country have to be a member of/recognise the ICC for its people to
>>>>>> be prosecuted for war crimes?
>>>>>
>>>>> ChatGPT says:
>>>>
>>>> Does anyone agree that we should have a rule that cut'n'pasting
>>>> output from ChatGPT and similar is disallowed in ulm, since it's
>>>> basically a machine to generate disinformation?
>>>
>>> You mean, like Wikipedia,
>>
>> The Wikipedia article on the ICC contains no fewer than 363 cited
>> references; whereas the quoted ChatGPT article has precisely none.
>> Zilch. Zero
>
> That's because it is *not* an article but an answer to a question
> or specific request. You clearly don't realise it, but it's how
> ChatGPT works.

So you're claiming that ChatGPT specifically excludes any possibility
of citing references when answering questions, then, are you?

Whereas by way of contrast, many posters cite references when
answering questions in ULM.

snip

>
> *I* claimed nothing.

Oh sorry, you're not claiming that, are you?

That may just be something you read somewhere, or possibly not.

Maybe you just make it all up as you go along.

Norman I have absolutely no intention of allowing myself to be drawn
in to your web of utterly pointless equivocation, obfuscation, and
misdirection.

You may have succeeded in reducing another poster in the group
into an empty shell, a shadow of their former self; who only now dares
stick their head up over the parapet every couple of weeks or so.

But with well directed barbs when they do so, it must be admitted

But that isn't going to happen with me, I can assure you.


bb

Norman Wells

Dec 29, 2023, 9:30:58 AM
On 29/12/2023 13:55, billy bookcase wrote:
> "Norman Wells" <h...@unseen.ac.am> wrote in message
> news:kv676q...@mid.individual.net...
>> On 28/12/2023 21:04, billy bookcase wrote:
>>> "Norman Wells" <h...@unseen.ac.am> wrote in message
>>> news:kv5nc4...@mid.individual.net...
>>>> On 28/12/2023 15:12, Jon Ribbens wrote:
>>>>> On 2023-12-28, GB <NOTso...@microsoft.invalid> wrote:
>>>>>> On 28/12/2023 08:42, Jeff Gaines wrote:
>>>>>>> Does a country have to be a member of/recognise the ICC for its people to
>>>>>>> be prosecuted for war crimes?
>>>>>>
>>>>>> ChatGPT says:
>>>>>
>>>>> Does anyone agree that we should have a rule that cut'n'pasting
>>>>> output from ChatGPT and similar is disallowed in ulm, since it's
>>>>> basically a machine to generate disinformation?
>>>>
>>>> You mean, like Wikipedia,
>>>
>>> The Wikipedia article on the ICC contains no fewer than 363 cited
>>> references; whereas the quoted ChatGPT article has precisely none.
>>> Zilch. Zero
>>
>> That's because it is *not* an article but an answer to a question
>> or specific request. You clearly don't realise it, but it's how
>> ChatGPT works.
>
> So you're claiming that ChatGPT specifically excludes any possibility
> of citing references when answering questions, then, are you?

I'm sure it won't normally. If you want proof of what it says, you'll
have to look elsewhere sufficiently deeply to satisfy your own desires.

But what it said about the article that was posted, and what I quoted,
was a good lead. At least it named someone, which no-one else here has
subsequently managed to do.

And I posted it in response to Mr Perry who, after apparently infringing
copyright in it, rather dismissively said 'Good luck finding the author
of that'. Well, ChatGPT has given him the start he perhaps needs.

> Whereas by way of contrast, many posters cite references when
> answering questions in ULM.

I don't recall that *you* have even attempted to identify the author,
let alone given any references from the glass house in which you live.
So, the only indication we have so far is that provided by ChatGPT, and
even that didn't purport to be definitive, merely saying that 'The quote
is often attributed to the American ... Tom Lehrer'.

I invite you to prove or even suggest a viable alternative. If you can.

> snip
>
>> *I* claimed nothing.
>
> Oh sorry, you're not claiming that, are you?
>
> That may just be something you read somewhere, or possibly not.
>
> Maybe you just make it all up as you go along.

I quoted what ChatGPT returned when I put to it the question 'Who wrote
the following' followed by a short passage from the original.

You can try it yourself if you like, and let us know if it comes up with
anything else.

In the meantime, I have no dog in the fight. It matters not to me who
actually wrote it but how easy it actually is to get a lead such as the
one it provided, when we were dismissively told it would need good luck.

> Norman I have absolutely no intention of allowing myself to be drawn
> in to your web of utterly pointless equivocation, obfuscation, and
> misdirection.

Your febrile imagination is again running riot.

> You may have succeeded in reducing another poster in the group
> into an empty shell, a shadow of their former self; who only now dares
>> stick their head up over the parapet every couple of weeks or so.

Who he?

> But with well directed barbs when they do so, it must be admitted
>
> But that isn't going to happen with me, I can assure you.

Jolly good.

GB

Dec 29, 2023, 9:38:25 AM
On 29/12/2023 13:55, billy bookcase wrote:

> So you're claiming that ChatGPT specifically excludes any possibility
> of citing references when answering questions, then, are you?


Just to be clear, I didn't ask for references.

ChatGPT 3.5, which I used, is not terribly good at references. V 4,
which is available with Bing, seems much better.

There's obviously nothing about AI that precludes it giving references.

Ian Jackson

Dec 29, 2023, 3:36:58 PM
In article <slrnuorctb.5...@raven.unequivocal.eu>,
Jon Ribbens <jon+u...@unequivocal.eu> wrote:
>The problem is that ChatGPT is designed to provide something that sounds
>like an answer, delivered with a high degree of self-confidence, with
>little attention paid to whether it's correct or not. It's basically an
>automated Norman.

It's gems like this and the subthread about Assange that keep me
reading this group :-).

--
Ian Jackson <ijac...@chiark.greenend.org.uk> These opinions are my own.

Pronouns: they/he. If I emailed you from @fyvzl.net or @evade.org.uk,
that is a private address which bypasses my fierce spamfilter.

Roland Perry

Dec 30, 2023, 3:16:33 AM
In message <MPG.3ff7afbee...@text.usenet.plus.net>, at
16:17:49 on Thu, 28 Dec 2023, Tim Jackson <ne...@timjackson.invalid>
remarked:
>On Thu, 28 Dec 2023 15:12:21 -0000 (UTC), Jon Ribbens wrote...
>>
>> Does anyone agree that we should have a rule that cut'n'pasting
>> output from ChatGPT and similar is disallowed in ulm, since it's
>> basically a machine to generate disinformation?
>
>While ChatGPT clearly can generate incorrect information, I don't think
>that's always the case, or necessarily the intention. However, you
>could say the same about human posts.
>
>I think ChatGPT and other AI-generated posts should be labelled as such,
>which the post in question was. People can then make up their own
>minds.
>
>The moderation policy could discourage them, and say that they should be
>clearly labelled if used. But it would be difficult to police that.

Recursively, I wonder if there's an AI tool which is reasonably good at
spotting chatbot vs human text?
--
Roland Perry

Andy Burns

Dec 30, 2023, 3:27:15 AM
Roland Perry wrote:

> Recursively, I wonder if there's an AI tool which is reasonably good at
> spotting chatbot vs human text?

adversarial moderation?

Roland Perry

Dec 30, 2023, 3:46:35 AM
In message <slrnuorcml.5...@raven.unequivocal.eu>, at 17:39:33
on Thu, 28 Dec 2023, Jon Ribbens <jon+u...@unequivocal.eu> remarked:

>There are multiple people in the group who are certainly capable of
>finding out the answer with a high degree of certainty and posting it
>with references.

It's becoming increasingly futile posting to Usenet due to an upsurge in
responses like "I don't believe you, provide cites", and when one has
included cites, respondents clearly haven't bothered to read them.

Meanwhile, specific to ULM, I've always taken the view it's more about
people recounting their experiences, rather than doing research from
scratch.

Finally, let's not assume everyone has the facilities at their
fingertips to do a lot of research, even if they have time. I often read
Usenet on the train, and connectivity is almost always challenging, as
is finding somewhere to prop the laptop, or even hitting the right keys
as the carriage lurches from side to side.

>But people may be paying less attention than usual due to the season
>;-)

Currently I'm finding "the holidays" very frustrating because it's
almost impossible to get anything done. I have some urgent domestic
duties to pursue, but spent all yesterday afternoon being told "nothing
we can do today because it's Friday afternoon and we can't take on
anything new; then it's a Bank Holiday weekend, so ask again on
Tuesday".

Of course, on Tuesday there's going to be a massive backlog, as almost
nothing has got done since lunchtime on the 22nd [which I call Xmas Eve
Eve] when many organisations closed for the season.

Also, I'm watching the news as I type this, and the railways seem to
have melted down (for a combination of reasons).
--
Roland Perry

Roland Perry

Dec 30, 2023, 3:56:36 AM
In message <kva2iv...@mid.individual.net>, at 08:27:11 on Sat, 30
Dec 2023, Andy Burns <use...@andyburns.uk> remarked:
Is that the name of such a tool, or just a throwaway comment?
--
Roland Perry

Roland Perry

Dec 30, 2023, 3:56:38 AM
In message <ummj55$rlib$1...@dont-email.me>, at 13:55:48 on Fri, 29 Dec
2023, billy bookcase <bi...@anon.com> remarked:

>you're claiming that ChatGPT specifically excludes any possibility
>of citing references when answering questions, then, are you?

Is it a feature they offer - optional or otherwise?

Of course cites aren't a panacea, as humans come back from time to time
to say "that cite doesn't say what you claim it does" and some cites
themselves contain fundamentally unreliable material.
--
Roland Perry

Roland Perry

Dec 30, 2023, 4:06:35 AM
In message <kv83h0...@mid.individual.net>, at 14:30:57 on Fri, 29
Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
It's fundamental when claiming a breach of copyright to be able to
identify the author (who can then decide for themselves if they want to
pursue the matter). A bit like The New York Times is reported as taking
action against ChatGPT.

Internet memes are most unlikely to have a specific author, evolving as
they do, when re-told.

>> Whereas by way of contrast, many posters cite references when
>> answering questions in ULM.
>
>I don't recall that *you* have even attempted to identify the author,
>let alone given any references from the glass house in which you live.
>So, the only indication we have so far is that provided by ChatGPT, and
>even that didn't purport to be definitive, merely saying that 'The
>quote is often attributed to the American ... Tom Lehrer'.
>
>I invite you to prove or even suggest a viable alternative. If you can.

Almost all my objections wrapped up, with a ribbon tied around it.

>> snip
>>
>>> *I* claimed nothing.
>> Oh sorry you're not claiming that are you ?
>> That may just be something you read somewhere, or possibly not.
>> Maybe you just make it all up as you go along.
>
>I quoted what ChatGPT returned when I put to it the question 'Who wrote
>the following' followed by a short passage from the original.
>
>You can try it yourself if you like, and let us know if it comes up
>with anything else.
>
>In the meantime, I have no dog in the fight.

In which case, just stop posting about it.

>It matters not to me who actually wrote it but how easy it actually
>is to get a lead such as it provided, when we were dismissively told it
>would need good luck.
>
>> Norman I have absolutely no intention of allowing myself to be drawn
>> in to your web of utterly pointless equivocation, obfuscation, and
>> misdirection.
>
>Your febrile imagination is again running riot.
>
>> You may have succeeded in reducing another poster in the group
>> into an empty shell, a shadow of their former self; who only now dares
>> stick their head up over the parapet every couple of weeks.or so
>
>Who he?
>
>> But with well directed barbs when they do so, it must be admitted
>> But that isn't going to happen with me, I can assure you.
>
>Jolly good.

--
Roland Perry

Andy Burns

Dec 30, 2023, 4:30:15 AM
Roland Perry wrote:

> Andy Burns remarked:
>
>> Roland Perry wrote:
>>
>>> Recursively, I wonder if there's an AI tool which is reasonably good
>>> at spotting chatbot vs human text?
>>
>> adversarial moderation?
>
> Is that the name of such a tool, or just a throwaway comment?

Generative adversarial networks are part of the "war" where one AI tool
attempts to produce more realistic results while pitted against another
tool attempting to detect results which have been created by AI ...
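The adversarial dynamic described above can be caricatured in a few lines. This is a hypothetical toy, not a real GAN: the Gaussian "data", the `gen_mean` parameter, and the detector's distance threshold are all invented for illustration. A generator drifts toward whatever a detector currently accepts as real, while the detector keeps refining its model of real data, and the detector's catch rate collapses as the two converge.

```python
import random

random.seed(0)
REAL_MEAN = 4.0  # the "real" data: samples drawn around 4

gen_mean = 0.0            # generator's single tunable parameter
det_estimate = REAL_MEAN  # detector starts already trained on real data

def detector_says_fake(sample):
    # Detector's rule: flag anything far from its model of "real"
    return abs(sample - det_estimate) > 2.0

catches_early = catches_late = 0
for step in range(2000):
    # Detector keeps learning what real data looks like (running mean)
    det_estimate += 0.01 * (random.gauss(REAL_MEAN, 1.0) - det_estimate)
    # Generator emits a sample; count how often it gets caught
    fake = random.gauss(gen_mean, 1.0)
    if detector_says_fake(fake):
        if step < 200:
            catches_early += 1
        elif step >= 1800:
            catches_late += 1
    # Generator update: drift toward what the detector accepts as real
    gen_mean += 0.01 * (det_estimate - gen_mean)

print(f"caught in first 200 steps: {catches_early}")
print(f"caught in last 200 steps:  {catches_late}")
```

By the end of the loop the generator's output is statistically close to the real distribution, so the detector can only catch it at roughly its false-positive rate. That collapse of detection accuracy as generators improve is the worry behind the question about spotting chatbot text.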

Norman Wells

Dec 30, 2023, 5:24:02 AM
On 30/12/2023 09:00, Roland Perry wrote:
> In message <kv83h0...@mid.individual.net>, at 14:30:57 on Fri, 29
> Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>> On 29/12/2023 13:55, billy bookcase wrote:

>>> So you're claiming that ChatGPT specifically excludes any
>>> possibility of citing references when answering questions, then, are
>>> you?
>>
>> I'm sure it won't normally.  If you want proof of what it says, you'll
>> have to look elsewhere sufficiently deeply to satisfy your own desires.
>>
>> But what it said about the article that was posted, and what I quoted,
>> was a good lead.  At least it named someone, which no-one else here
>> has subsequently managed to do.
>>
>> And I posted it in response to Mr Perry who, after apparently
>> infringing copyright in it, rather dismissively said 'Good luck
>> finding the author of that'.  Well, ChatGPT has given him the start he
>> perhaps needs.
>
> It's fundamental when claiming a breach of copyright to be able to
> identify the author (who can then decide for themselves if they want to
> pursue the matter).

I think that's actually the responsibility of the person wanting to copy
the work, who is the potential infringer. Don't you?

Or do you think it's perfectly okay to act unlawfully if you don't think
you'll be caught?

> A bit like The New York Times is reported as taking
> action against ChatGPT.
>
> Internet memes are most unlikely to have a specific author, evolving as
> they do, when re-told.

It's unlikely to be the case in the matter we're considering, which is
a sort of sequential argument that hangs together as a whole. In any
case, even if it were, that doesn't excuse infringing any of the
authors' copyrights in the final work, of which there could be several.

>>> Whereas by way of contrast, many posters cite references when
>>> answering questions in ULM.
>>
>> I don't recall that *you* have even attempted to identify the author,
>> let alone given any references from the glass house in which you live.
>> So, the only indication we have so far is that provided by ChatGPT,
>> and even that didn't purport to be definitive, merely saying that 'The
>> quote is often attributed to the American ... Tom Lehrer'.
>>
>> I invite you to prove or even suggest a viable alternative.  If you can.
>
> Almost all my objections wrapped up, with a ribbon tied around it.

Just because *you* haven't made any effort at all to find the author
doesn't mean he can't be found. ChatGPT has very easily given you a
lead. Why don't you follow that one up at least?

>>>> *I* claimed nothing.
>>>  Oh sorry you're not claiming that are you ?
>>>  That may just be something you read somewhere, or possibly not.
>>>  Maybe you just make it all up as you go along.
>>
>> I quoted what ChatGPT returned when I put to it the question 'Who
>> wrote the following' followed by a short passage from the original.
>>
>> You can try it yourself if you like, and let us know if it comes up
>> with anything else.
>>
>> In the meantime, I have no dog in the fight.
>
> In which case, just stop posting about it.

Why?


Norman Wells

Dec 30, 2023, 5:29:34 AM
On 30/12/2023 08:43, Roland Perry wrote:
> In message <slrnuorcml.5...@raven.unequivocal.eu>, at 17:39:33
> on Thu, 28 Dec 2023, Jon Ribbens <jon+u...@unequivocal.eu> remarked:

>> But people may be paying less attention than usual due to the season ;-)
>
> Currently I'm finding "the holidays" very frustrating because it's
> almost impossible to get anything done. I have some urgent domestic
> duties to pursue, but spent all yesterday afternoon being told "nothing
> we can do today because it's Friday afternoon and we can't take on
> anything new; then it's a Bank Holiday weekend, so ask again on Tuesday".
>
> Of course, on Tuesday there's going to be a massive backlog, as almost
> nothing has got done since lunchtime on the 22nd [which I call Xmas Eve
> Eve] when many organisations closed for the season.

Yes, other people have lives to lead and things to do too. Maybe it
will dawn on you someday that the world does not revolve around you,
however important you think you are.


Simon Parker

Dec 30, 2023, 8:47:47 AM
On 29/12/2023 11:27, Jeff Gaines wrote:
> On 28/12/2023 in message
> <slrnuor42l.5...@raven.unequivocal.eu> Jon Ribbens wrote:
>
>> On 2023-12-28, GB <NOTso...@microsoft.invalid> wrote:
>>> On 28/12/2023 08:42, Jeff Gaines wrote:
>>>> Does a country have to be a member of/recognise the ICC for its
>>>> people to
>>>> be prosecuted for war crimes?
>>>
>>> ChatGPT says:
>>
>> Does anyone agree that we should have a rule that cut'n'pasting
>> output from ChatGPT and similar is disallowed in ulm, since it's
>> basically a machine to generate disinformation?
>
> I'm the OP and I know very little about ChatGPT since it seems it only
> allows access if you sign up for it and I'm not keen on that.
>
> Somebody, alleging they were Israeli, posted something I found highly
> offensive on Facebook essentially stating that Israel could do what it
> liked to Palestinians as Israel was not a member of the ICC so couldn't
> be prosecuted.
>
> Seemed odd to me bearing in mind that it didn't exist in 1945 (ish) but
> didn't stop the Nuremberg Trials.

I've recently posted a quote from the ICC themselves to the thread in
question which makes clear that the ICC do not agree with what the
poster you have referenced above is claiming, given that the ICC have a
current investigation open against Israel for historic activities and
have also stated that Israel's current activities fall within the
ICC's purview.

Apologies for the delay responding. I've been busy with real-life stuff
which takes precedence over Usenet.

Regards

S.P.

Jeff Gaines

Dec 30, 2023, 10:47:22 AM
On 30/12/2023 in message <kvalc1...@mid.individual.net> Simon Parker
wrote:
Many thanks Simon :-)

--
Jeff Gaines Dorset UK
Roses are #FF0000, violets are #0000FF
if you can read this, you're a nerd 10.

Roland Perry

Dec 30, 2023, 11:16:34 AM
In message <kva9ob...@mid.individual.net>, at 10:29:33 on Sat, 30
Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>On 30/12/2023 08:43, Roland Perry wrote:
>> In message <slrnuorcml.5...@raven.unequivocal.eu>, at
>>17:39:33 on Thu, 28 Dec 2023, Jon Ribbens <jon+u...@unequivocal.eu>
>>remarked:
>
>>> But people may be paying less attention than usual due to the season ;-)

>> Currently I'm finding "the holidays" very frustrating because it's
>>almost impossible to get anything done. I have some urgent domestic
>>duties to pursue, but spent all yesterday afternoon being told
>>"nothing we can do today because it's Friday afternoon and we can't
>>take on anything new; then it's a Bank Holiday weekend, so ask again
>>on Tuesday".

>> Of course, on Tuesday there's going to be a massive backlog, as
>>almost nothing has got done since lunchtime on the 22nd [which I call
>>Xmas Eve Eve] when many organisations closed for the season.
>
>Yes, other people have lives to lead and things to do too.

Although you have been fairly busy at the keyboard recently.

>Maybe it will dawn someday that the world does not revolve around you,
>however important you think you are.

I'm currently attempting to get "the world revolving around" someone
else, but it's rather bogged down for the reasons I gave.
--
Roland Perry

Roland Perry

Dec 30, 2023, 11:16:36 AM
In message <kva9e0...@mid.individual.net>, at 10:24:02 on Sat, 30
Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>On 30/12/2023 09:00, Roland Perry wrote:
>> In message <kv83h0...@mid.individual.net>, at 14:30:57 on Fri, 29
>>Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>>> On 29/12/2023 13:55, billy bookcase wrote:
>
>>>> So you're claiming that ChatGPT specifically excludes any
>>>> possibility of citing references when answering questions, then, are
>>>> you?
>>>
>>> I'm sure it won't normally.  If you want proof of what it says,
>>>you'll have to look elsewhere sufficiently deeply to satisfy your
>>>own desires.
>>>
>>> But what it said about the article that was posted, and what I
>>>quoted, was a good lead.  At least it named someone, which no-one
>>>else here has subsequently managed to do.
>>>
>>> And I posted it in response to Mr Perry who, after apparently
>>>infringing copyright in it, rather dismissively said 'Good luck
>>>finding the author of that'.  Well, ChatGPT has given him the start
>>>he perhaps needs.
>> It's fundamental when claiming a breach of copyright to be able to
>>identify the author (who can then decide for themselves if they want
>>to pursue the matter).
>
>I think that's actually the responsibility of the person wanting to
>copy the work who is the potential infringer. Don't you?

No, for the reasons given.

>Or do you think it's perfectly okay to act unlawfully if you don't
>think you'll be caught?

I have stopped beating my wife.

>> A bit like The New York Times is reported as taking action against
>>ChatGPT.

>> Internet memes are most unlikely to have a specific author, evolving
>>as they do, when re-told.
>
>It's unlikely to be the case in the matter we're considering, which is a
>sort of sequential argument that hangs together as a whole. In any
>case, even if it was, that doesn't excuse infringing any of the
>authors' copyrights in the final work, of which there could be several.

Although one version of the same narrative has been posted, it's not
identical to the one I did.

>>>> Whereas by way of contrast, many posters cite references when
>>>> answering questions in ULM.
>>>
>>> I don't recall that *you* have even attempted to identify the
>>>author, let alone given any references from the glass house in which
>>>you live. So, the only indication we have so far is that provided by
>>>ChatGPT, and even that didn't purport to be definitive, merely
>>>saying that 'The quote is often attributed to the American ... Tom Lehrer'.
>>>
>>> I invite you to prove or even suggest a viable alternative.  If you can.
>> Almost all my objections wrapped up, with a ribbon tied around it.
>
>Just because *you* haven't made any effort at all to find the author
>doesn't mean he can't be found. ChatGPT has very easily given you a
>lead. Why don't you follow that one up at least?
>
>>>>> *I* claimed nothing.
>>>>  Oh sorry you're not claiming that are you ?
>>>>  That may just be something you read somewhere, or possibly not.
>>>>  Maybe you just make it all up as you go along.
>>>
>>> I quoted what ChatGPT returned when I put to it the question 'Who
>>>wrote the following' followed by a short passage from the original.
>>>
>>> You can try it yourself if you like, and let us know if it comes up
>>>with anything else.
>>>
>>> In the meantime, I have no dog in the fight.

>> In which case, just stop posting about it.
>
>Why?

Because it's "wasting our time".
--
Roland Perry

Jon Ribbens

Dec 30, 2023, 12:00:03 PM
I seem to recall reading stories about teachers using AI to attempt to
detect whether students had used AI to generate their work, and the
detection results having basically no correlation to whether the work
was AI or not.

Jon Ribbens

Dec 30, 2023, 12:14:53 PM
On 2023-12-30, Roland Perry <rol...@perry.co.uk> wrote:
I expect ChatGPT will provide cites if you ask it to, but whether those
cites actually (a) exist, (b) have any relevance, and (c) say what
ChatGPT says they do is a completely different matter.

Simon Parker

Dec 30, 2023, 1:07:09 PM
Several family members work in education and this subject came up in a
recent(ish) discussion.

The consensus was that some/many/most (*) higher education
establishments use software to detect plagiarism. Some/Many/Most (*) of
these software packages also now detect content generated using ChatGPT
and similar.

The software packages with which they were familiar generate a score
between 1 and 100 to indicate plagiarised content and content generated
by ChatGPT and similar.

It is for each individual establishment to perform their own
investigations as they deem appropriate, which may well be triggered by
a score over a certain threshold in the particular package they use.

Early versions of certain packages generated a lot of false positives,
particularly for submissions which had a large amount of quoted content.
However, the better packages are now much more robust, I am told.

I admit that I was surprised by how low a score would trigger an
investigation.

Regards

S.P.

(*) Delete as Applicable

Roland Perry

Dec 30, 2023, 2:05:01 PM
In message <slrnup0j4h.2...@raven.unequivocal.eu>, at 17:00:01
on Sat, 30 Dec 2023, Jon Ribbens <jon+u...@unequivocal.eu> remarked:
The technology is evolving rapidly, so unless that story was from
2022-23, it doesn't mean much any more.
--
Roland Perry

Norman Wells

Dec 30, 2023, 2:21:35 PM
On 30/12/2023 17:14, Jon Ribbens wrote:
> On 2023-12-30, Roland Perry <rol...@perry.co.uk> wrote:
>> In message <ummj55$rlib$1...@dont-email.me>, at 13:55:48 on Fri, 29 Dec
>> 2023, billy bookcase <bi...@anon.com> remarked:
>>> you're claiming that ChatGPT specifically excludes any possibility
>>> of citing references when answering questions, then are you ?
>>
>> Is it a feature they offer - optional or otherwise?
>>
>> Of course cites aren't a panacea, as humans come back from time to time
>> to say "that cite doesn't say what you claim it does" and some cites
>> themselves contain fundamentally unreliable material.
>
> I expect ChatGPT will provide cites if you ask it to, but whether those
> cites actually (a) exist,

Of course they'll exist. You don't think ChatGPT goes to the lengths of
inventing them, do you?

> (b) have any relevance, and (c) say what
> ChatGPT says they do is a completely different matter.

Quite. In that respect they need to be treated with the same caution as
Mr Parker's.

Norman Wells

Dec 30, 2023, 2:32:32 PM
But the unauthorised copier is the one who is potentially liable. It's
in his own interests.

>> Or do you think it's perfectly okay to act unlawfully if you don't
>> think you'll be caught?
>
> I have stopped beating my wife.

And I'm sure all of us here are delighted to hear it.

But your comment seems a little off the wall and irrelevant.

Perhaps you'd tell us why you thought it right and acceptable to copy
someone else's work and post it here?

>>> A bit like The New York Times is reported as taking  action against
>>> ChatGPT.
>
>>>  Internet memes are most unlikely to have a specific author, evolving
>>> as  they do, when re-told.
>>
>> It's unlikely to be the case in the matter we're considering, which is
>> a sort of sequential argument that hangs together as a whole.  In any
>> case, even if it was, that doesn't excuse infringing any of the
>> authors' copyrights in the final work, of which there could be several.
>
> Although one version of the same narrative has been posted, it's not
> identical to the one I did.

So what? All that is required for copyright infringement is for you to
have made an unauthorised copy of a substantial part of a work that is
still in copyright, which the article you re-published undoubtedly is,
being obviously comparatively recent.

>>>> I quoted what ChatGPT returned when I put to it the question 'Who
>>>> wrote the following' followed by a short passage from the original.
>>>>
>>>> You can try it yourself if you like, and let us know if it comes up
>>>> with anything else.
>>>>
>>>> In the meantime, I have no dog in the fight.
>
>>>  In which case, just stop posting about it.
>>
>> Why?
>
> Because it's "wasting our time".

Hardly. Anyone who comes here does so entirely voluntarily to use their
own time in the manner of their choosing.


Jon Ribbens

Dec 30, 2023, 2:39:57 PM
I suspect the cost of such software may exceed the ulm moderation budget
of $0.00 ;-)

I suspect it also doesn't work by just cut'n'pasting the text into
ChatGPT, preceded by "Did you write this:" (which is what I was
referring to above).

> The software packages with which they were familiar generate a score
> between 1 and 100 to indicate plagiarised content and content generated
> by ChatGPT and similar.
>
> It is for each individual establishment to perform their own
> investigations as they deem appropriate, which may well be triggered by
> a score over a certain threshold in the particular package they use.

Indeed. Provided the output isn't taken as gospel but just used as
a suggestion for which work to subject to manual inspection, it's
probably ok. As with many things, the problems arise when the
computer's output is taken as unquestionable gospel truth.

Roger Hayter

Dec 30, 2023, 2:59:03 PM
On 30 Dec 2023 at 19:39:55 GMT, "Jon Ribbens" <jon+u...@unequivocal.eu>
wrote:
I would think it worthless unless it could reference what was plagiarised,
so a human could judge and discuss it with the student in borderline cases.

--
Roger Hayter

Jon Ribbens

Dec 30, 2023, 3:06:24 PM
Well I would certainly hope it would do that, but I thought about it
before writing that reply and I figure a human ought to be able to use
Google etc to detect plagiarism if they put a bit of effort in - which
they can do if the automated system has cleared 90% of the work first.

Roger Hayter

Dec 30, 2023, 3:08:45 PM
On 30 Dec 2023 at 19:21:32 GMT, "Norman Wells" <h...@unseen.ac.am> wrote:

> On 30/12/2023 17:14, Jon Ribbens wrote:
>> On 2023-12-30, Roland Perry <rol...@perry.co.uk> wrote:
>>> In message <ummj55$rlib$1...@dont-email.me>, at 13:55:48 on Fri, 29 Dec
>>> 2023, billy bookcase <bi...@anon.com> remarked:
>>>> you're claiming that ChatGPT specifically excludes any possibility
>>>> of citing references when answering questions, then are you ?
>>>
>>> Is it a feature they offer - optional or otherwise?
>>>
>>> Of course cites aren't a panacea, as humans come back from time to time
>>> to say "that cite doesn't say what you claim it does" and some cites
>>> themselves contain fundamentally unreliable material.
>>
>> I expect ChatGPT will provide cites if you ask it to, but whether those
>> cites actually (a) exist,
>
> Of course they'll exist. You don't think ChatGPT goes to the lengths of
> inventing them, do you?

It must depend how carefully you instruct it. There was a recent famous case
of a lawyer inadvertently submitting several cases cited by ChatGPT which
simply didn't exist.


>
>> (b) have any relevance, and (c) say what
>> ChatGPT says they do is a completely different matter.
>
> Quite. In that respect they need to be treated with the same caution as
> Mr Parker's.


--
Roger Hayter

Roland Perry

Dec 30, 2023, 4:29:29 PM
In message <kvb9ie...@mid.individual.net>, at 19:32:30 on Sat, 30
Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:

>Anyone who comes here does so entirely voluntarily to use their own
>time in the manner of their choosing.

In that case I'll voluntarily refrain from approving any of your
postings in ULM. I have better things to do than feed your troll.
--
Roland Perry

Simon Parker

Dec 30, 2023, 4:41:40 PM
On 30/12/2023 19:39, Jon Ribbens wrote:
> On 2023-12-30, Simon Parker <simonpa...@gmail.com> wrote:
>> On 30/12/2023 17:00, Jon Ribbens wrote:

>>> I seem to recall reading stories about teachers using AI to attempt to
>>> detect whether students had used AI to generate their work, and the
>>> detection results having basically no correlation to whether the work
>>> was AI or not.
>>
>> Several family members work in education and this subject came up in a
>> recent(ish) discussion.
>>
>> The consensus was that some/many/most (*) higher education
>> establishments use software to detect plagiarism. Some/Many/Most (*) of
>> these software packages also now detect content generated using ChatGPT
>> and similar.
>
> I suspect the cost of such software may exceed the ulm moderation budget
> of $0.00 ;-)

We shall have to give consideration to doubling the subscription costs
to provide a greater budget with which to work.


> I suspect it also doesn't work by just cut'n'pasting the text into
> ChatGPT, preceded by "Did you write this:" (which is what I was
> referring to above).

It does not, no, but I cannot give a further insight into how it does
work beyond adding that those familiar with it state it is pretty reliable.

>> The software packages with which they were familiar generate a score
>> between 1 and 100 to indicate plagiarised content and content generated
>> by ChatGPT and similar.
>>
>> It is for each individual establishment to perform their own
>> investigations as they deem appropriate, which may well be triggered by
>> a score over a certain threshold in the particular package they use.
>
> Indeed. Provided the output isn't taken as gospel but just used as
> a suggestion for which work to subject to manual inspection, it's
> probably ok. As with many things, the problems arise when the
> computer's output is taken as unquestionable gospel truth.

As I said in my PP, I was surprised by how low the score needed to be to
trigger a manual investigation.

However, even if the ULM moderators had access to the software in
question, we don't have the time, and speaking for myself only, I don't
have the inclination, to launch an investigation into each post flagged
by the software regardless of the score.

If people want to go down that route, we'd need to add something to the
moderation policy along the lines of "Posts submitted for moderation
that have a score above <n> when assessed by plagiarism and AI detection
software are likely to be rejected."
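
Mechanically, a rule like that would be simple to apply. A minimal sketch, purely illustrative (the threshold value, the scoring function and the post data are all hypothetical, since the actual detection packages are proprietary):

```python
# Hypothetical sketch of threshold-based moderation triage.
# score_post() stands in for whatever proprietary plagiarism/AI
# detector an establishment licenses; it is assumed to return a
# score between 1 and 100, higher meaning more suspect.

THRESHOLD = 20  # assumed value; reportedly quite low in practice

def triage(posts, score_post, threshold=THRESHOLD):
    """Split posts into those cleared automatically and those
    flagged for manual investigation by a moderator."""
    cleared, flagged = [], []
    for post in posts:
        if score_post(post) > threshold:
            flagged.append(post)
        else:
            cleared.append(post)
    return cleared, flagged

# Example with a dummy scorer in place of real detection software:
posts = ["original essay", "copied text", "chatbot output"]
scores = {"original essay": 5, "copied text": 80, "chatbot output": 45}
cleared, flagged = triage(posts, scores.get)
```

The point of the split is the one made above: the score only selects which posts get a manual look, it is never itself the verdict.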

Regards

S.P.

Norman Wells

Dec 30, 2023, 4:42:04 PM
I said 'here', not there.

There, you have a job to do, and it's your responsibility to do it.

It goes with the territory.


Simon Parker

Dec 30, 2023, 4:45:02 PM
On 30/12/2023 19:21, Norman Wells wrote:
> On 30/12/2023 17:14, Jon Ribbens wrote:
>> On 2023-12-30, Roland Perry <rol...@perry.co.uk> wrote:

>>> Is it a feature they offer - optional or otherwise?
>>>
>>> Of course cites aren't a panacea, as humans come back from time to time
>>> to say "that cite doesn't say what you claim it does" and some cites
>>> themselves contain fundamentally unreliable material.
>>
>> I expect ChatGPT will provide cites if you ask it to, but whether those
>> cites actually (a) exist,
>
> Of course they'll exist.  You don't think ChatGPT goes to the lengths of
> inventing them, do you?

Oh dear! I see that your knowledge of AI is as woefully lacking as your
knowledge of UK legal matters.

Unfortunately, this is compounded by your poor memory as an example of
this particular issue has been discussed earlier this year in ULM. [1]

The discussion concerned a case in America in which two solicitors
(Steven Schwartz and Peter LoDuca) and their law firm (Levidow, Levidow
& Oberman) were fined $5,000 after they used ChatGPT to perform research
for a case on which they were working involving an aviation injury claim.

ChatGPT suggested several cases involving aviation mishaps that Schwartz
had not been able to find through usual methods used at his law firm.
Several of those cases were not real, misidentified judges or involved
airlines that did not exist.

Levidow, Levidow & Oberman said in a statement at the time that its
lawyers "respectfully" disagreed with the court that they had acted in
bad faith. "We made a good-faith mistake in failing to believe that a
piece of technology could be making up cases out of whole cloth," it said.

Note: We made a... mistake in failing to believe that [ChatGPT] could be
making up cases.

You were saying something about ChatGPT not going "to the lengths of
inventing" cites, I believe?


>> (b) have any relevance, and (c) say what
>> ChatGPT says they do is a completely different matter.
>
> Quite.  In that respect they need to be treated with the same caution as
> Mr Parker's.

The evidence of the accuracy, or otherwise, of our respective posts is
clear for all to see and needs no further explanation.

Furthermore, as you claim to treat my posts with such derision, I would
be delighted, nay overjoyed, were you never to reply to another of my posts.

You could start with this one.

Regards

S.P.

[1] See the thread "Dumb-ass American lawyers" started by The Todal on
Friday, 23rd June 2023 at 12:50. [2]
[2] Message-ID: <kflf83...@mid.individual.net>

Roger Hayter

Dec 30, 2023, 5:06:33 PM
Don't tease Roland, I don't think he likes it. But classical trolling there.


--
Roger Hayter

Norman Wells

Dec 30, 2023, 5:06:55 PM
On 30/12/2023 21:44, Simon Parker wrote:
> On 30/12/2023 19:21, Norman Wells wrote:
>> On 30/12/2023 17:14, Jon Ribbens wrote:

>>> (b) have any relevance, and (c) say what
>>> ChatGPT says they do is a completely different matter.
>>
>> Quite.  In that respect they need to be treated with the same caution
>> as Mr Parker's.
>
> The evidence of the accuracy, or otherwise, of our respective posts is
> clear for all to see and needs no further explanation.

Parker Stock Reply No 3:

I know everything, you know nothing.

> Furthermore, as you claim to treat my posts with such derision,

Parker Stock Reply No 8:

Claim something was said that wasn’t, avoiding quotation.

'caution' is the word I used. You'll see it above.

> I would be delighted, nay overjoyed, were you never to reply to another of my
> posts.
>
> You could start with this one.

Sorry to disappoint, but this is an open discussion forum where you have
no control. Congratulations on this occasion, though, for not resorting
to your usual potty-mouthed playground insults and ad homs. I applaud
the huge effort that must have involved.


Roger Hayter

Dec 30, 2023, 5:26:54 PM
On 30 Dec 2023 at 21:44:58 GMT, "Simon Parker" <simonpa...@gmail.com>
wrote:
ChatGPT seems to exist in a post-truth world where the only criterion of value
is whether something legitimately (sic, US usage) sounds like what someone
would say on the Internet. Case cites on the Internet look like so-and-so -
fine, we'll write one that looks like a proper case, and it will be just as
good.

Verification, cross-checking and assessing the value of sources is an
Enlightenment methodology which we no longer need in our real intelligence
armamentarium, let alone AI. Fine as long as the benighted masses don't have
to build bridges etc, just leave that to the Enlightened Ones, human or
computer.

Hopefully there must be someone or something in charge!

--
Roger Hayter

Jon Ribbens

Dec 30, 2023, 6:25:36 PM
On 2023-12-30, Roger Hayter <ro...@hayter.org> wrote:
This is the problem. Sadly, the conspiracy theorists are wrong,
and there isn't some secret cabal behind the scenes controlling
everything. I say "sadly" because, no matter what their nefarious
plans were, they'd be likely to be better than "the end of human
civilization".

Roland Perry

Dec 30, 2023, 9:30:08 PM
In message <kvbh5a...@mid.individual.net>, at 21:42:01 on Sat, 30
Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>On 30/12/2023 21:28, Roland Perry wrote:
>> In message <kvb9ie...@mid.individual.net>, at 19:32:30 on Sat, 30
>>Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>>
>>> Anyone who comes here does so entirely voluntarily to use their own
>>>time in the manner of their choosing.

>> In that case I'll voluntarily refrain from approving any of your
>>postings in ULM. I have better things to do than feed your troll.
>
>I said 'here', not there.
>
>There, you have a job to do, and it's your responsibility to do it.
>
>It goes with the territory.

Obviously, I won't answer that, because it's "here".

Sorry if it confuses your chatbot algorithm.
--
Roland Perry

Norman Wells

Dec 31, 2023, 4:10:21 AM
Eh? You're not making any sense.

One of the perils of posting after your bedtime, I guess.

Roland Perry

Dec 31, 2023, 5:35:45 AM
In message <kvcpfr...@mid.individual.net>, at 09:10:19 on Sun, 31
Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>On 31/12/2023 02:23, Roland Perry wrote:
>> In message <kvbh5a...@mid.individual.net>, at 21:42:01 on Sat, 30
>>Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>>> On 30/12/2023 21:28, Roland Perry wrote:
>>>> In message <kvb9ie...@mid.individual.net>, at 19:32:30 on Sat,
>>>>30 Dec 2023, Norman Wells <h...@unseen.ac.am> remarked:
>>>>
>>>>> Anyone who comes here does so entirely voluntarily to use their
>>>>>own time in the manner of their choosing.
>>
>>>>  In that case I'll voluntarily refrain from approving any of your
>>>>postings in ULM. I have better things to do than feed your troll.
>>>
>>> I said 'here', not there.
>>>
>>> There, you have a job to do, and it's your responsibility to do it.
>>>
>>> It goes with the territory.

>> Obviously, I won't answer that, because it's "here".
>> Sorry if it confuses your chatbot algorithm.
>
>Eh? You're not making any sense.
>
>One of the perils of posting after your bedtime, I guess.

I know it must upset your programmers to realise they've been rumbled.
--
Roland Perry

Jeff Layman

Jan 1, 2024, 3:39:31 AM
On 28/12/2023 17:37, Jon Ribbens wrote:
> On 2023-12-28, Tim Jackson <ne...@timjackson.invalid> wrote:
>> On Thu, 28 Dec 2023 15:12:21 -0000 (UTC), Jon Ribbens wrote...
>>> Does anyone agree that we should have a rule that cut'n'pasting
>>> output from ChatGPT and similar is disallowed in ulm, since it's
>>> basically a machine to generate disinformation?
>>
>> While ChatGPT clearly can generate incorrect information, I don't think
>> that's always the case, or necessarily the intention. However, you
>> could say the same about human posts.
>
> Yes, it doesn't deliberately get them wrong, but then again it doesn't
> really try to get them right either. Some humans in the group are the
> same, it's true, but the only way to do something about that would be
> to moderate for truth, and I think we are right not to try to do that.
>
> My feeling is that "AI" adds little benefit, and if someone actually
> wants an "AI" answer to their question it's quicker and easier for
> them simply to go and ask one directly.

Perhaps of interest...
<https://www.voanews.com/a/us-chief-justice-urges-caution-as-ai-reshapes-legal-field/7419505.html>

--

Jeff

Simon Parker

Jan 1, 2024, 6:06:05 AM
The case involving AI "hallucinations" referenced in that article is the
case that was discussed in ULM towards the middle of last year and was
recently referenced in this thread.

It is noteworthy that AI 'inventing' things to support its claims is so
prevalent that the phenomenon has its own name, namely "hallucinations".

As for AI in the legal field, IME it is already being used extensively
at the larger firms, and it will trickle down to all firms eventually
as the cost of the relevant solutions decreases when they are offered
at scale.

I've posted on this previously where the AI solution with which I'm most
familiar will automatically prepare bundles, index and cross-link
documents and even point out if an attachment or referenced work is
missing to ensure it is requested and received in plenty of time.

The mistake made by the now famous American lawyers was that they didn't
check ChatGPT's output. Or more accurately, that they checked it but
when they couldn't find the cited cases using their usual research
methods, rather than digging further in case ChatGPT was suffering from
hallucinations, they included the details in their submission anyway.

This is, IMO, why the court considered they had acted in "bad faith"
which is what led to them being fined.

Regards

S.P.

Jeff Layman

Jan 1, 2024, 7:47:28 AM
Yes, the Michael Cohen thing isn't news, but a report by the Chief
Justice of SCOTUS is something to take notice of. I wonder how long it
will be before we see something like the final paragraph of that webpage
appear for UK law. That's if we do such things here; who would issue it
- the Attorney General?

--

Jeff
