A different ChatGPT question


Tom Gally

Mar 1, 2023, 7:26:40 AM
to hon...@googlegroups.com
From the discussions here and elsewhere, I gather that translators are divided about whether ChatGPT and other large language models might pose a threat to their livelihood or not. While I am on the pessimistic side, I also admit that—since I am no longer freelancing, though I still do translation occasionally—I am not the best person to judge.

So let me ask a different question: If a young person with the appropriate language skills and temperament to be a translator came to you for advice about making translation a career, what would you tell them? Would you encourage or discourage them? Would the advice you give someone now be the same or different from the advice you would have given a similar person ten or twenty years ago?

Here’s my answer:

I would tell that young person that, when I was freelancing twenty or thirty years ago, translation meant reading a source text closely, understanding its meaning well, and writing a mostly parallel text in the target language that reflected that meaning closely. I would also tell them that, while understanding the source language was important, being able to write well in the target language was probably even more important. And I would tell them that those skills, especially when combined with specialized knowledge, were fairly rare and that someone with those skills and knowledge could make a good living as a translator.

But, I would continue, now I’m not so sure. While pre-ChatGPT machine translation was taking work away from some human translators, I didn’t think it would ever be good enough to compete with people with deep understanding of textual meaning and excellent writing skills. But now that we see that large language models can seem to grasp “meaning” in some sense and can apply that grasp to translation, and now that the technology is advancing increasingly rapidly, I doubt the long-term prospects of a career based on translation skills as I knew them.

I would instead advise the person that, if they wanted to make a career out of their hard-earned language skills, they should look for work in a field that involves supporting international communication and cooperation among people more broadly, interactively, and creatively rather than focusing on translating texts.

I would be very interested in reading others’ thoughts.

Tom Gally
Yokohama, Japan

Dan Lucas

Mar 1, 2023, 8:09:16 AM
to Tom Gally, Honyaku E<>J translation list
We have almost no hard data, so this is mainly speculation on my part, but for what it's worth...

As one of my lecturers pointed out to me in September 1987 during the first week of my four-year undergrad course at SOAS, "120 million people use the Japanese language more effectively than you do, so you need to be able to offer something else as well". With MT displacing mediocre translators, that observation is as relevant today as it was then.

The people who generally seem to be doing okay in this profession, irrespective of language pair, appear to be those with some business acumen who can also effectively leverage some kind of subject-matter expertise. Of course, if you become too specialised, especially in an industry that is in decline, then you'll also have challenges.

I would argue that any young person should focus on acquiring a solid combination of transferable skills on the one hand and highly technical (= difficult to acquire) domain-specific knowledge in an area of interest on the other. Linguistic skills should act as facilitators for their occupation rather than constituting its raison d'être. If they want to get into translation, they can do that 10 or 15 years after graduation, using what they have learned in the interim.

This has been my stance since I joined the industry full-time in 2015, and I honestly believe I would have made the same argument thirty years ago.

It may be that my approach is unusual. I have studied languages - I used predominantly Welsh at primary school, did English lit and French A-levels at secondary, read Japanese at London, and studied Mandarin Chinese through the medium of Japanese in Tokyo - but I don't consider myself a linguist. I'm kind of indifferent to languages except as tools to get things done.

My second (possibly third?) career in translation is born of necessity, not aspiration. To me, what makes translation bearable is the subject matter, not the process. I have difficulty in understanding people who rhapsodise about translation as an end in itself. I find it boring.

Regards,
Dan Lucas

Geoff

Mar 1, 2023, 9:28:59 AM
to hon...@googlegroups.com

Replying to:

From: Tom Gally

Subject: A different ChatGPT question

([Tom] discusses the division among translators on whether large language models like ChatGPT pose a threat to their profession and advises young people with language skills to consider fields that involve supporting international communication and cooperation rather than focusing on translating texts.)
(Summarized by ChatGPT)

I have two perspectives.

One is from inside a niche translation company.

This is my personal opinion and does not reflect the views of anyone else. But my impression of the environment inside my translation company is of a traditional Japanese company structured on a top-down, PDCA style of management. After four years of consideration, MT has not yet been adopted, because prior translations have been more useful than an unreliable automatic translation that takes nearly as long to check as to create.

Now, however, it seems likely that we are entering an age where translations can be produced automatically by providing instructions regarding resources, terminology, and so on; those translations can be checked automatically; and those checks can be audited automatically. It's turtles all the way down, until an unreasonable cost is reached.

I can only express my opinion when asked, so I don't have much control over how my company will respond. I personally think the top-down PDCA style will have to be replaced with something closer to the bottom-up QPMI (question, passion, mission, innovation) style advocated by Yukihiro Maru, the CEO of Leave a Nest. AI services will be applicable to all business processes, and I am not sure who will end up being the leaders of change. I hope that the core mission remains providing AI to make everyone's work easier.

Based on that philosophy of AI making everyone's work easier, I see it as up to each individual to evaluate their daily tasks and introduce ways to automate them. Inevitably, some people's daily tasks will be completely taken over by automation. A top-down PDCA style of management will find that what it manages becomes the automated processes themselves. The remaining work will need to be dynamic and flexible.

I am not sure how long this process will take. My guess is between three and six years before we have a completely different view of work.

My other perspective is that of a translator of fiction.

My attitude toward translation is evolving, but I think there are two polar missions. The first is to reflect the original, while the second is to make the content as digestible and entertaining as possible for a wide readership. I'm on book five of a popular series, and I feel a little pressure to keep the content fresh and enjoyable. I find that ChatGPT is at a level where I can bounce ideas off it. I can ask questions about the characters, or change the setting to a Western one and get some dialog scenarios. I can give it the first part of a sentence and the last part and ask for expressions to join them. It is not faster, because I actually spend longer on each sentence, exploring all the possibilities I can think of.

One thing my two jobs have in common is my twenty years of familiarity with conversion between the two languages. I'm not sure what a career path would look like for someone starting now. One major point about translation is that the creativity is in the process rather than the result. A pure translation, regardless of its style, is one that does not bear the character of the translator. The creativity in the process is working out how to be invisible.

At the same time, there needs to be an appreciation for simplicity of language and for the order in which experience occurs. As the order of the reveal differs between Japanese and English, you need to consider when it is important to maintain the same order of reveal and when it is better to follow the orthodox order of the target language.

In summary, I think today's translators have a skill that will remain relevant, though I am not sure exactly how that skill can be taught. I think AI will encroach even on technical writing. The education we have received up to now is not really ideal for the type of work we will be doing, and all this learning how to do things a machine can do is pretty demoralizing. Being able to work out objectives, adapting to new environments, drawing people together for common missions: those are the skills we (everyone, not just translators) will need to embrace. And there are still many unknowns in my mind, the biggest being whether someone who can understand both languages will be needed at all.

Geoffrey Trousselot

christopher blakeslee

Mar 1, 2023, 11:51:20 AM
to hon...@googlegroups.com
Just a brief comment on this portion of Tom's post, since I have been neck deep in studying this recently:

On Wed, Mar 1, 2023 at 8:26 PM Tom Gally <tomg...@gmail.com> wrote:

> But now that we see that large language models can seem to grasp “meaning” in some sense and can apply that grasp to translation, and now that the technology is advancing increasingly rapidly, I doubt the long-term prospects of a career based on translation skills as I knew them.


It is not the case that the large language models can grasp meaning or understand text, not really in any sense. It's just a complex set of algorithms based on big data. The data set is so large and the training so extensive that it can seem like it understands. But it doesn't. The software engineers haven't come up with a way to give language models common sense. AI depends on deduction and induction. It doesn't do abduction. So the mistakes in judgement can be horrendous, hence the warnings on all of these sites not to depend on the results you get. 
That being said, humans can translate topics they don't fully understand and often still do a good job of it, using their powers of abduction and common sense. Machines can also translate without understanding (indeed, that is the only way they can translate), relying on brute force based on what they have been pre-trained to do rather than on abductive inference. There are plenty of sentences, paragraphs, and even whole documents where the inability to truly understand doesn't matter. That is the low-hanging fruit that humans can spit out as fast as they can type or speak, but AI can do much faster.

So just as statistical MT did before it, neural MT will probably vacuum up most of the easy stuff, leaving the humans to wrestle with the hard stuff. The question is who captures the productivity margin, who triages out the easy from the hard. Can translators take control of their own personalized AI bot at greatly increased productivity, allowing them to cut their rate in half and still double their hourly income? Or will it be like the current world of CAT tools and TMs, where agencies peel off all the strong matches for themselves and/or pay a greatly reduced rate for matches? I think it will be a mix of both. But which large fraction of us gets left holding the bag?
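[Editor's note] The arithmetic behind that "cut the rate in half and still double the hourly rate" scenario is worth making explicit: it requires a fourfold productivity gain. A back-of-the-envelope sketch, with purely illustrative numbers that do not come from the thread:

```python
# Purely illustrative numbers; nothing here comes from the thread.
old_rate_cents = 12      # per-word rate, in cents
old_speed = 500          # words translated per hour
old_hourly = old_rate_cents * old_speed      # 6000 cents = $60/hour

new_rate_cents = old_rate_cents // 2         # per-word rate cut in half
new_speed = old_speed * 4                    # AI-assisted throughput: 4x
new_hourly = new_rate_cents * new_speed      # 12000 cents = $120/hour

assert new_hourly == 2 * old_hourly          # half the rate, double the income
print(f"${old_hourly / 100:.0f}/hour -> ${new_hourly / 100:.0f}/hour")
# prints: $60/hour -> $120/hour
```

Halving the rate while doubling income only works if the AI assist quadruples throughput; any smaller gain, and the translator is subsidizing the discount.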

BTB

Mar 2, 2023, 4:28:38 PM
to Honyaku E<>J translation list
I view large language model generative AI (ChatGPT) as a potential game-changer for translation in general. 

For low-risk (i.e., some mistakes are acceptable), low-value (e.g., customer service chats, transcreation?) translation, it is a threat to the livelihoods of translators. 

For high-risk, high-value translation, I do not think ChatGPT is a threat to the livelihoods of translators. With that said, I think how translators are paid might change, from a per-word basis to a per-hour basis if the nature of translation work shifts toward quality assurance (more like editing/checking). 

How this new technology is adopted will be important. I think it will initially be incorporated into commercial products available today, like Trados and Phrase (formerly Memsource), as a plug-in. Ideally, I think ChatGPT could become an excellent tool that works invisibly in the background, like autocomplete in Gmail (and some versions of Microsoft Word). I can imagine ChatGPT (or some derivative specialized for translation) looking at the sentence you are translating, referring to what you have written so far in the target language, and 'guessing' the next word/phrase you are likely to type. This would be the most productive and satisfying implementation of the technology for a translator, in my opinion, because the translator feels like they are actually translating the text, not unproductively checking some machine-generated output. Stream-of-consciousness assistive tech!
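[Editor's note] As a rough illustration of the autocomplete idea described above, here is a minimal sketch of how such a tool might prompt a large language model for the next few words. Everything here (the function name, the prompt wording, the example sentences) is hypothetical; the actual call to a chat-completion API is deliberately left out:

```python
def build_suggestion_prompt(source_ja: str, target_so_far: str) -> str:
    """Build a prompt asking an LLM to continue a partial translation.

    Hypothetical sketch of the 'autocomplete for translators' idea:
    the model sees the source sentence and what the translator has
    typed so far, and is asked for only the next few words, which the
    translator can accept or ignore.
    """
    return (
        "You are assisting a Japanese-to-English translator.\n"
        f"Source sentence: {source_ja}\n"
        f"Translation so far: {target_so_far}\n"
        "Suggest only the next few words of the translation, "
        "continuing naturally from the text so far."
    )

# Example: the translator has typed "I am" for 吾輩は猫である。
prompt = build_suggestion_prompt("吾輩は猫である。", "I am")
print(prompt)
```

The prompt string would then be sent to whatever LLM backend the CAT tool is wired to; the key design point is that the model only ever proposes a short continuation, so the translator stays in control of the sentence.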

Brian Boland

christopher blakeslee

Mar 2, 2023, 11:01:00 PM
to hon...@googlegroups.com
On Fri, Mar 3, 2023 at 5:28 AM BTB <btbin...@gmail.com> wrote:

> I view large language model generative AI (ChatGPT) as a potential game-changer for translation in general.

I absolutely agree with this, since it does not need to understand the text to provide something useful.

> For low-risk (i.e., some mistakes are acceptable), low-value (e.g., customer service chats, transcreation?) translation, it is a threat to the livelihoods of translators.

Certainly that is true, just as MT did before it. There have long been multiple threats to the low end, including from MT and from human translators in emerging markets; this adds yet another, and it probably also raises the bar at which "low-end" is set.

> For high-risk, high-value translation, I do not think ChatGPT is a threat to the livelihoods of translators. With that said, I think how translators are paid might change, from a per-word basis to a per-hour basis if the nature of translation work shifts toward quality assurance (more like editing/checking).

Yes, and that will depend on the sentence below.

> How this new technology is adopted will be important.

A major barrier to implementation is confidentiality, given that some clients will not want their documents, especially unpublished ones, fed into a massive publicly available model. It will indeed be interesting to see how all this plays out. It is a rapidly evolving space that could be a threat or an income enhancer, depending on your skill set and your particular corner of the translation market.

OpenAI already offers access to several of its large language models through an API, but I don't have any clients who would accept my using them, as currently configured, on their documents, so I haven't experimented. I'm not sure how much software updating it would take for the API plug-in channel that most CAT tools provide for MT engines to also access a generative AI. I'm sure many of the CAT developers are working on it now...

Chris Blakeslee

--
You received this message because you are subscribed to the Google Groups "Honyaku E<>J translation list" group.
To unsubscribe from this group and stop receiving emails from it, send an email to honyaku+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/honyaku/c05da480-a2e4-422f-8476-f4c8495c98a2n%40googlegroups.com.

BTB

Mar 5, 2023, 2:55:49 PM
to Honyaku E<>J translation list
I agree that one deal-breaker is that highly sensitive/confidential material cannot (should not) be translated using ChatGPT, based on OpenAI's current data usage policies as of March 1. When using the API, OpenAI will not use data submitted by customers to train OpenAI models or improve OpenAI's service offering. Customers can request an exemption from OpenAI's 30-day data retention policy for checking for abusive content, but otherwise "A limited number of authorized OpenAI employees, as well as specialized third-party contractors that are subject to confidentiality and security obligations, can access this data solely to investigate and verify suspected abuse." It does not sound like the data is all that secure. 

An on-premise solution does not exist (yet). 

-Brian Boland

Warren Smith

Mar 5, 2023, 4:06:34 PM
to hon...@googlegroups.com

I am becoming less and less enamored with ChatGPT. As a source of FACTUAL information, it is very, very unreliable. (Being right 90% of the time is far more dangerous than never being right.)

 

As some of you might know, I am preparing to leave the world of translation to become a patent agent. (My translation income has been falling while the demand for patent agents is rising, so it is time for me to make the switch.) As I study for the patent bar, I have had many questions. At first I thought that ChatGPT would be like having a teacher in the room, and I was thrilled when it gave me thorough answers to questions like, "What is the difference between a request for continued examination (RCE) and a continuing prosecution application (CPA)?"

 

It is only now, with my notes incredibly muddled with incorrect information, that I realize that many, many of ChatGPT's answers are WRONG.

 

W

BJ Beauchamp

Mar 5, 2023, 5:16:40 PM
to hon...@googlegroups.com
I figured people would take its answers with a healthy dose of salt, kinda like how your teacher warned you not to trust everything you read on the internet as factual. I figured the writing would have been on the wall when it started getting simple math equations wrong (order of operations be damned, apparently). Although part of me wonders if you can break it by asking it to solve for pi... 

When I asked it to come up with a home-brew beer recipe, the results were quite hilarious. It's scary and amusing because two or three breweries have used it to generate something new to put on their menus, and it's proven popular(ish?). Although I question whether anything was tweaked from the initial text vomit it provided; I guess that'll be a trade secret. 

The interesting thing about the project is that it seems to have gotten "stupider" once the paid version became available. I've tested both versions, and it seems like you got better results back when you first had trouble accessing it. Plus, it's also amusing how asking it to rewrite paragraphs causes it to randomly change names and insert subject matter that wasn't present in the source text. 

-- BJ 




Jon Johanning

Mar 5, 2023, 5:34:34 PM
to Honyaku E<>J translation list
I think OpenAI's inspiration to toss this kind of thing out for folks to play with, just so people could have some fun and the company could see what needed to be fixed, was about as sensible as it would have been for the US government to let anybody who wanted to mess around with the A-bomb for a while before it seriously used it to shock Japan into surrendering (which was a bad enough idea to begin with). It seems to have done quite a bit of damage already.

I've been trying out MacGPT a little in my spare time. When I asked it simple factual questions (like explaining MT, quantum theory, and general relativity in simple terms), it was pretty accurate, if very superficial. (As a source of information, ChatGPT has been compared to an average fifth-grader at best, I think.) But then I asked it for information about my wife, and it returned a sort of simple book blurb that combined a few accurate facts with a load of BS. It cut 10 years off her age (I guess it was being polite), said she went to several universities that were very wrong, gave her awards from several writers' associations that unfortunately have not honored her, and said she was born in Philly (???), although it did get the fact that she now lives in Philly right. It mentioned a few books she did write, but others were really weird guesses. I would grade it D- on this test.

Anyone using this sort of thing to do any serious work would be asking for a lot of trouble, methinks. I don't find it amusing at all, unlike a lot of other people; I'll just sit back and see if the AI geniuses can work out its kinks before I look at it again.

Jon Johanning

Jon Johanning

Mar 5, 2023, 5:42:11 PM
to Honyaku E<>J translation list
I just don't get why it is that so many people keep marveling at how these bots "understand" language so well. They don't understand a blicken' thing; they just string words together statistically.
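[Editor's note] For readers curious what "stringing words together statistically" means in the smallest possible case, here is a toy bigram model, a sketch for illustration only; real LLMs are vastly larger and use learned neural representations rather than raw counts, but generation is still one next-word choice at a time:

```python
import random
from collections import defaultdict

# A toy bigram model: the smallest possible version of "stringing words
# together statistically". The corpus and everything else here is made up.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Record which words follow which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Extend `start` by repeatedly sampling an observed successor word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # dead end: no observed successor
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 5))
```

No meaning is represented anywhere: the model only knows which word has followed which, yet its output is locally plausible English, which is the point of Jon's objection.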

Jon Johanning

Rene

Mar 6, 2023, 10:19:16 AM
to hon...@googlegroups.com
On Mon, Mar 6, 2023 at 7:42 AM Jon Johanning <jjoha...@igc.org> wrote:
> I just don't get why it is that so many people keep marveling at how these bots "understand" language so well. They don't understand a blicken' thing; they just string words together statistically.

And how is that different from how human brains work? I know we all think we are "conscious", but how exactly do you define that? A new-born baby has a human brain, but without content. How is that fundamentally different from a big artificial neural network? I wonder if you have a clear definition.

Regards
Rene von Rentzell

Jon Johanning

Mar 6, 2023, 10:55:19 AM
to hon...@googlegroups.com
Hi Rene,

I can’t define “conscious” exactly, and neither can anyone else at this point. However, all the AI experts frequently point out that the nervous systems of humans and other animals are much more complex than computer “neurons” chemically and biologically. The latter were constructed as attempts to replicate the biological kind, but resemble biological nervous systems only very distantly.

Best,
Jon Johanning


Herman

Mar 6, 2023, 3:08:12 PM
to hon...@googlegroups.com
Assuming for argument's sake that the artificial neural network works
exactly the same as a human brain, the artificial neural network would
then be equivalent to a human brain which has received input of
information solely from a corpus of texts, and thus would differ from a
typical human brain in that the latter also acquires an extensive range
of non-textual data and incorporates that into its "understanding" or
"consciousness", whereas the neural network does not acquire such
information.

Herman Kahn

Rene

Mar 6, 2023, 4:35:47 PM
to hon...@googlegroups.com
> I can’t define “conscious” exactly, and neither can anyone else at this point. However, all the AI experts frequently point out that the nervous systems of humans and other animals are much more complex than computer “neurons” chemically and biologically. The latter were constructed as attempts to replicate the biological kind, but resemble biological nervous systems only very distantly.

Well, "complexity" is a matter of degree; it is not a qualitative difference. And as for resorting to unnamed "experts" to settle a disagreement, I am sure you remember all those "experts" during the last two years of Covid, who have now been proven wrong on just about all of their expert opinions...

Rene

Jon Johanning

Mar 6, 2023, 4:41:28 PM
to Honyaku E<>J translation list
All you need to do is study a little biology and computer science (and I happen to be of the quirky opinion that there are indeed people who are quite knowledgeable -- experts -- in both those fields) to see that there is a great qualitative difference. On the one hand, you have tissues of organisms; on the other, silicon chips and related hardware.

Jon Johanning

Tom Gally

Mar 7, 2023, 12:34:11 AM
to hon...@googlegroups.com
An interesting article appeared a few days ago with interviews with people at OpenAI:


They seem to have intended ChatGPT to be, at this stage, a chat partner rather than a search engine or a source of reliable information, but many people have assumed it should be one of the latter and have thus been disappointed.

On the question of understanding, in the post that started this thread I tried to hedge heavily: “large language models can seem to grasp ‘meaning’ in some sense.” That apparent understanding of meaning is, in my opinion, already amazing, and for many applications it is good enough. And that apparent understanding will only get better as the models become multimodal, that is, as they incorporate data from video, audio, physically interactive devices, and other realms in addition to text, and as it becomes possible for them to be updated in real time.

In that regard, the following quote from one of the OpenAI developers in the above article was concerning: “I would love to understand better what’s driving all of this—what’s driving the virality [of ChatGPT]. Like, honestly, we don’t understand. We don’t know.”

I knew within a few minutes of trying ChatGPT on December 1 that it was going to have a huge impact, for better or worse. The fact that the developers didn’t anticipate that impact themselves suggests that they will not be conscientious about limiting AI’s many potential harmful impacts in the future.

Tom Gally
Yokohama, Japan


Rene

Mar 7, 2023, 12:43:13 AM
to hon...@googlegroups.com
On Tue, Mar 7, 2023 at 6:41 AM Jon Johanning <jjoha...@igc.org> wrote:
> All you need to do is study a little biology and computer science (and I happen to be of the quirky opinion that there are indeed people who are quite knowledgeable -- experts -- in both those fields) to see that there is a great qualitative difference. On the one hand, you have tissues of organisms; on the other, silicon chips and related hardware.

Exactly! And who says that only tissues of organisms can develop intelligence? This has been the topic of a lot of sci-fi stories and philosophical debate. Do I notice some carbon-based hubris here?

Rene

timl...@aol.com

Mar 7, 2023, 4:28:32 AM
to hon...@googlegroups.com
Just hubris, I think.

Tim Leeney




timl...@aol.com

Mar 7, 2023, 4:33:05 AM
to hon...@googlegroups.com

Herman

Mar 7, 2023, 8:45:33 AM
to hon...@googlegroups.com
And who says that all intelligence that may be developed by who or
whatever develops intelligence is not qualitatively different from the
intelligence that happened to have been developed at a particular place
and time by tissues of certain carbon-based organisms? (Well,
presumably, you do)

Herman Kahn

mik...@gmail.com

Mar 7, 2023, 1:03:22 PM
to hon...@googlegroups.com
はさみと頭は別々ね
 知能かかえて
  走るか廊下
(Scissors and brains are separate things, eh? / Will it run down the corridor / clutching its intelligence?)
--

Rene

Mar 7, 2023, 1:55:53 PM
to hon...@googlegroups.com

> And who says that all intelligence that may be developed by who or whatever develops intelligence is not qualitatively different from the intelligence that happened to have been developed at a particular place and time by tissues of certain carbon-based organisms? (Well, presumably, you do)
> Herman Kahn

I presume nothing about any qualitative differences between an intelligence that happened to have been developed at a particular place and time by carbon-based, silicon-based, or any other entities. But I presume you have such knowledge?

Rene

Herman

unread,
Mar 7, 2023, 2:50:54 PM3/7/23
to hon...@googlegroups.com
On 3/7/23 10:37, Rene wrote:
>
> And who says that all intelligence that may be developed by who or
> whatever develops intelligence is not qualitatively different from
> the intelligence that happened to have been developed at a particular
> place and time by tissues of certain carbon-based organisms? (Well,
> presumably, you do)
> Herman Kahn
>
>
> I presume nothing about any qualitative differences between an
> intelligence that happened to have been developed at a particular place
> and time by carbon, silicone or any other based entities. But I presume
> you have such knowledge?
>

You stated in a previous message that complexity is a matter of degree
and not a qualitative difference, so presumably, you presume(d) that
complexity of a nervous system or other system that (presumably)
implements intelligence, does not lead to a qualitative difference in
the intelligence thereby implemented.

I, on the other hand, believe that complexity is a qualitative difference.

Herman Kahn



Jon Johanning

unread,
Mar 7, 2023, 11:00:52 PM3/7/23
to Honyaku E<>J translation list
I agree with Herman that there is a qualitative difference in this case, at least as far as we can tell. Our nervous systems are tissues in our bodies, which evolved like the rest of us over thousands of years from whatever (the precise origin of life is still to be determined). "Neural networks" constructed by connecting silicon chips, etc., are entirely different stuff. 

Whether the latter can be developed to the point that a "Commander Data" or "C-3PO" could rival or exceed our capabilities is something that will be known in the 24th century, I guess. We're certainly a million miles from that point now.

Jon Johanning

Herman

unread,
Mar 8, 2023, 5:13:38 PM3/8/23
to hon...@googlegroups.com
On 3/7/23 20:00, Jon Johanning wrote:
> I agree with Herman that there is a qualitative difference in this case,
> at least as far as we can tell. Our nervous systems are tissues in our
> bodies, which evolved like the rest of us over thousands of years from
> whatever (the precise origin of life is still to be determined). "Neural
> networks" constructed by connecting silicon chips, etc., are entirely
> different stuff.
>
> Whether the latter can be developed to the point that a "Commander Data"
> or "C-3PO" could rival or exceed our capabilities is something that will
> be known in the 24th century, I guess. We're certainly a million miles
> from that point now.
>

To clarify, I am not arguing that the functions of the human nervous
system cannot be implemented or simulated in silicon because of the
difference in materials.

I was saying that the complexity of the intelligence system, in the sense
of, e.g., how many terms or levels of abstraction there are in a
proposition that the given intelligence can "understand", or how many
different types of inputs it can process simultaneously, is precisely
the sort of thing that makes a qualitative difference to the overall
operation of the given intelligence.

For example, the intelligence of most humans can process propositions of
the type "You think that I think that you are hot", with 3 levels of
abstraction, and may be able to handle four, whereas the intelligence of
most plants and animals, or even human infants, may only be able to
handle the proposition "Hot". Conversely, "You think that I think that
you think that I think that you are hot" (5 levels) is probably already
beyond the capability of most humans, so they (we) cannot think or
understand something like that at all, and thus cannot even conceive of
what the significance of having such a capability would be. But by
analogy with the gap between understanding only "Hot" and understanding
"You think that I think that you are hot" as well, presumably the
significance is something like that.

Herman Kahn
