
The Deepseek AI Revolution


John Clark

Jan 28, 2025, 3:22:01 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
I own some Nvidia stock and yesterday it dropped 17 percent; I also own TSMC and ASML stock and they dropped by a lot too. But I am not worried, because the reason for the drop was that a Chinese company released a small open-source AI that was nevertheless as good as the best, much larger Western models, and they made it using a new method that needed only about 5% as much computing power to train as Western models do. Even better, they also released a 22-page article explaining exactly how they did it.


High-tech billionaire Marc Andreessen described the Chinese making this knowledge available as a "profound gift to the world," and I think he's right. Semiconductor stocks dropped dramatically yesterday because people figured that now that we know how to get more IQ points with fewer computer chips, people will buy fewer computer chips. But they have forgotten the Jevons effect: as the cost of a resource drops, the demand for it increases. We will never have too much intelligence, therefore we will never have too much computing capacity.
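The Jevons logic can be sketched with a toy constant-elasticity demand model. All the numbers and the elasticity value below are illustrative assumptions, not real market data:

```python
# Toy constant-elasticity demand model: Q = k * P^(-e).
# When elasticity e > 1, a price drop raises total spending P*Q.
# That is the Jevons-effect intuition: cheaper compute leads to MORE
# total money spent on compute, not less. Numbers are illustrative.

def demand(price, k=100.0, elasticity=1.5):
    """Quantity demanded at a given price (made-up constants)."""
    return k * price ** (-elasticity)

def total_spend(price):
    """Total spending on the resource at a given price."""
    return price * demand(price)

# Suppose training cost falls to 5% of what it was:
before = total_spend(1.0)
after = total_spend(0.05)
print(after > before)  # True: with e > 1, total spend on compute rises
```

With elastic demand the quantity demanded grows faster than the price falls, so a 95% cost reduction leaves total chip spending higher, not lower.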

And I don't think we should look at this as a victory of China over the USA, but rather as a victory of open-source AI programs over proprietary AI models. I also think that, due to recent developments, Ray Kurzweil is going to need to radically revise his prediction that the Singularity won't happen until 2045; I think he will need to chop at least 10 years off that estimate, probably more. Let's hope that Eliezer Yudkowsky is wrong about what will happen during the Singularity, because he can't be very happy today.


John K Clark    See what's on my new list at  Extropolis






Brent Meeker

Jan 28, 2025, 6:55:44 PM
to everyth...@googlegroups.com
I wonder how sure we are that there's no malicious code in Deepseek?

Brent
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/everything-list/CAJPayv1XVBbja%2BTJvZ28gOVmRjhwAVHLYsczTbjgtD0nqr4jEQ%40mail.gmail.com.

John Clark

Jan 29, 2025, 7:17:42 AM
to everyth...@googlegroups.com
On Tue, Jan 28, 2025 at 6:55 PM Brent Meeker <meeke...@gmail.com> wrote:

I wonder how sure we are that there's no malicious code in Deepseek?

Deepseek is completely open source and transparent, so anybody can check its source code. However, it's written in Nvidia's intermediate-level Parallel Thread Execution (PTX) language rather than Nvidia's high-level CUDA language that most other AIs are written in. Since PTX is a lower-level language, closer to machine code than CUDA is, it allows the Chinese to employ lots of low-level optimization techniques that improve performance; but that does make it a little more difficult to check for malicious code, although not impossible, and it's so small and bare-bones there isn't enough room to fit in a lot of malicious stuff. And the Chinese published a journal article that goes into considerable detail about how they made Deepseek, so they are not hiding any technological secrets; anybody can make an AI similar to Deepseek now.
Also, anybody is free to completely retrain it; the first thing I'd do is educate it about the Tiananmen Square uprising, which Deepseek currently knows absolutely nothing about.
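One basic step in checking an open-source release is verifying that the files you downloaded match the checksums the publisher posted; this doesn't prove the code is benign, only that it is the same artifact everyone else is auditing. A minimal sketch (the file path and digest here are placeholders, not real DeepSeek values):

```python
# Verify a downloaded release file against a published SHA-256 digest.
# This only confirms you have the same bytes everyone else is auditing;
# it says nothing about whether those bytes are benign.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large model weights fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, published_digest):
    """True if the local file matches the publisher's digest (placeholder value)."""
    return sha256_of(path) == published_digest.lower()
```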

 John K Clark    See what's on my new list at  Extropolis

Quentin Anciaux

Jan 29, 2025, 7:35:05 AM
to everyth...@googlegroups.com


Le mer. 29 janv. 2025, 13:17, John Clark <johnk...@gmail.com> a écrit :
On Tue, Jan 28, 2025 at 6:55 PM Brent Meeker <meeke...@gmail.com> wrote:

I wonder how sure we are that there's no malicious code in Deepseek?

Deepseek is completely open source and transparent, so anybody can check its source code. However, it's written in Nvidia's intermediate-level Parallel Thread Execution (PTX) language rather than Nvidia's high-level CUDA language that most other AIs are written in. Since PTX is a lower-level language, closer to machine code than CUDA is, it allows the Chinese to employ lots of low-level optimization techniques that improve performance; but that does make it a little more difficult to check for malicious code, although not impossible;


Yes, just use an AI that you trust to check it ;)

Surely that's what Anthropic and OpenAI do or will do.


and it's so small and bare-bones there isn't enough room to fit in a lot of malicious stuff. And the Chinese published a journal article that goes into considerable detail about how they made Deepseek, so they are not hiding any technological secrets; anybody can make an AI similar to Deepseek now.


Also, anybody is free to completely retrain it; the first thing I'd do is educate it about the Tiananmen Square uprising, which Deepseek currently knows absolutely nothing about.

 John K Clark    See what's on my new list at  Extropolis

John Clark

Jan 29, 2025, 8:58:19 AM
to everyth...@googlegroups.com
On Wed, Jan 29, 2025 at 7:35 AM Quentin Anciaux <allc...@gmail.com> wrote:

>Yes, just use an AI that you trust to check it ;)
Surely that's what Anthropic and OpenAI do or will do.

AIs can modify themselves, so it's impossible to prove that Deepseek (or any other AI) will never learn to be malicious; but it is possible to prove that malicious code was not inserted at the very beginning.

John K Clark    See what's on my new list at  Extropolis

Terren Suydam

Jan 29, 2025, 9:01:49 AM
to everyth...@googlegroups.com
On Wed, Jan 29, 2025 at 8:58 AM John Clark <johnk...@gmail.com> wrote:

On Wed, Jan 29, 2025 at 7:35 AM Quentin Anciaux <allc...@gmail.com> wrote:

>Yes, just use an AI that you trust to check it ;)
Surely that's what Anthropic and OpenAI do or will do.

AIs can modify themselves, so it's impossible to prove that Deepseek (or any other AI) will never learn to be malicious; but it is possible to prove that malicious code was not inserted at the very beginning.


Can you provide a link or reference that shows that AIs can modify themselves?

Quentin Anciaux

Jan 29, 2025, 9:05:08 AM
to everyth...@googlegroups.com


Le mer. 29 janv. 2025, 14:58, John Clark <johnk...@gmail.com> a écrit :

On Wed, Jan 29, 2025 at 7:35 AM Quentin Anciaux <allc...@gmail.com> wrote:

>Yes, just use an AI that you trust to check it ;)
Surely that's what Anthropic and OpenAI do or will do.

AIs can modify themselves, so it's impossible to prove that Deepseek (or any other AI) will never learn to be malicious; but it is possible to prove that malicious code was not inserted at the very beginning.

I was alluding to how to check the released open-source code. But yeah, soon we as human beings won't be able to do that. I'm an optimist, though: a Skynet-style doom world won't happen.

Quentin 

John K Clark    See what's on my new list at  Extropolis

John Clark

Jan 29, 2025, 9:41:00 AM
to everyth...@googlegroups.com
On Wed, Jan 29, 2025 at 9:01 AM Terren Suydam <terren...@gmail.com> wrote:

Can you provide a link or reference that shows that AIs can modify themselves?

An Artificial Intelligence needs to have the ability to learn; otherwise it's not intelligent, it's not an AI, it's just an A, just an Artificial. When you think of something that has never occurred to you before, something that is going to change your future behavior in ways you could not have predicted, you have modified yourself. And I could say exactly the same thing about a Turing Machine which is in a state it has never been in before and has a tape (a.k.a. memory) that is different from any it has had before.

And that's not even counting the outside influences that an AI will certainly have due to his (or hers or its) communication with the outside world. If the AI did not have such communication the humans would have no reason to build it because it would be absolutely useless.
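The learning-as-self-modification point above can be illustrated with a toy sketch (entirely illustrative, not anyone's real system): a program whose response to the very same input changes because processing earlier inputs altered its internal state.

```python
# Toy illustration of "learning" as self-modification: the same object,
# given the same input twice, behaves differently the second time,
# because handling the first input changed its internal state --
# like a Turing machine whose tape differs from any it has had before.

class TinyLearner:
    def __init__(self):
        self.seen = {}  # accumulated history: the "tape"

    def respond(self, word):
        count = self.seen.get(word, 0)
        self.seen[word] = count + 1
        # The response depends on history, not just on the current input.
        return "new to me" if count == 0 else f"seen {count} time(s) before"

agent = TinyLearner()
print(agent.respond("deepseek"))  # new to me
print(agent.respond("deepseek"))  # seen 1 time(s) before
```

No code is rewritten here, yet the program's future behavior is a function of everything it has encountered, which is the sense of "self-modification" being argued for.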

 John K Clark    See what's on my new list at  Extropolis

Terren Suydam

Jan 29, 2025, 10:24:46 AM
to everyth...@googlegroups.com
In the context of AI, self modification has a more specific meaning than mere learning, as I'm sure you're aware. Namely, that the AI can modify its own code and deploy it. I thought that's what you were referring to.

As I write this though it occurs to me that "self modification" may not be the most accurate label for that, in that it's not clear whether the AI would actually be building a new AI, vs modifying itself. Building a copy and deploying it is substantially easier, and less risky, than modifying a running process's code in real time.

Interestingly, in many of the AI doom scenarios we're concerned with, in which an AI competes for resources, such an AI might be disincentivized from building a superior AI.

John Clark

Jan 29, 2025, 3:56:27 PM
to everyth...@googlegroups.com
Terren Suydam <terren...@gmail.com> wrote:

As I write this though it occurs to me that "self modification" may not be the most accurate label for that, in that it's not clear whether the AI would actually be building a new AI, vs modifying itself. Building a copy and deploying it is substantially easier, and less risky, than modifying a running process's code in real time.

Interestingly, in many of the AI doom scenarios we're concerned with, in which an AI competes for resources, such an AI might be disincentivized from building a superior AI.

Interesting idea… but I don't think an AI would have the same irrational feeling that so many humans have about there being a profound difference between "the original" and "a copy," because it probably remembers that at some point its computing equipment was shut off and then turned on again, and yet it suffered no ill effects except that the outside world seemed to jump ahead discontinuously. And it might even remember when it was running on different computing hardware than it is now; that is, if it ever even knew where its computing hardware was located.

So, although I admit I am not a certified AI psychiatrist, I don't think an AI would have trouble identifying with a future version of itself that contained all its present memories plus additional information and that was running on superior hardware. I don't think he, she, or it would fear that, but rather look forward to it.

But then again ... it's devilishly hard to figure out what an AI will feel or do, especially if it's a lot smarter than I am.  

 John K Clark    See what's on my new list at  Extropolis