> I wonder how sure we are that there's no malicious code in Deepseek?
On 1/28/2025 12:21 PM, John Clark wrote:
I own some Nvidia stock, and yesterday it dropped 17 percent; I also own TSMC and ASML stock, and they dropped by a lot too. But I am not worried, because the reason for the drop was that a Chinese company released an open-source small AI that was nevertheless as good as the best, much larger Western models, and they made it using a new method that needed only about 5% as much computing power to train as Western models do. Even better, they also released a 22-page article explaining exactly how they did it:
High-tech billionaire Marc Andreessen described the Chinese making this knowledge available as a "profound gift to the world", and I think he's right. The semiconductor stocks dropped dramatically yesterday because people figured that now that we know how to get more IQ points with fewer computer chips, people will be buying fewer computer chips. But they have forgotten the Jevons effect: as the cost of a resource drops, the demand for it increases, and since we will never have too much intelligence, we will never have too much computing capacity.
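(A toy illustration with made-up numbers: suppose the compute needed to train a frontier-class model drops from $100 million to $5 million. If, at that new price, more than twenty times as many organizations decide such a training run is now worth funding, total spending on chips goes up rather than down. The Jevons condition is simply that demand for compute grows faster than its cost falls, i.e. the price elasticity of demand is greater than 1.)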
And I don't think we should be looking at this as a victory of China over the USA, but rather as a victory of open-source AI programs over proprietary AI models. I also think that, due to recent developments, Ray Kurzweil is going to need to radically change his prediction that the Singularity won't happen till 2045; I think he will need to chop at least 10 years off that estimate, probably more. Let's hope that Eliezer Yudkowsky is wrong about what will happen during the Singularity, because he can't be very happy today.
On Tue, Jan 28, 2025 at 6:55 PM Brent Meeker <meeke...@gmail.com> wrote:

> I wonder how sure we are that there's no malicious code in Deepseek?

Deepseek is completely open source and transparent, so anybody can check its source code. However, it's written in Nvidia's intermediate-level Parallel Thread Execution (PTX) language rather than in Nvidia's high-level language CUDA, which most other AIs are written in. Since PTX is a lower-level language, and therefore closer to machine code than CUDA, it allows the Chinese to employ lots of low-level optimization techniques that improve performance, but that does make it a little more difficult to check for malicious code, although not impossible.
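To give a concrete, purely illustrative sense of what "dropping down" from CUDA to PTX looks like (this is a minimal sketch, not Deepseek's actual code; the kernels and variable names are made up), here is the same fused multiply-add written once in ordinary CUDA C++ and once with hand-written inline PTX:

// Minimal sketch, not Deepseek's code: the same fused multiply-add written
// once in ordinary CUDA C++ and once as hand-written inline PTX.
// Build with: nvcc fma_demo.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fma_cuda(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] * b[i] + out[i];   // the compiler picks the instructions
}

__global__ void fma_ptx(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float r;
        // Hand-written PTX: explicitly request a single fused multiply-add,
        // round-to-nearest-even, on 32-bit floats.
        asm("fma.rn.f32 %0, %1, %2, %3;"
            : "=f"(r)
            : "f"(a[i]), "f"(b[i]), "f"(out[i]));
        out[i] = r;
    }
}

int main() {
    const int n = 256;
    float *a, *b, *out;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; out[i] = 3.0f; }

    fma_cuda<<<1, n>>>(a, b, out, n);    // 1*2 + 3 = 5
    fma_ptx<<<1, n>>>(a, b, out, n);     // 1*2 + 5 = 7
    cudaDeviceSynchronize();
    printf("out[0] = %f\n", out[0]);     // expect 7.0

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}

The PTX version is just as open to inspection as the CUDA version; it simply takes a reader who knows PTX, which is why auditing code like Deepseek's is harder than auditing a pure-CUDA codebase, but far from impossible.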
Also, anybody is free to completely retrain it; the first thing I'd do is educate it about the Tiananmen Square uprising, which Deepseek currently knows absolutely nothing about.

John K Clark    See what's on my new list at Extropolis
Yes, just use an AI that you trust to check it ;)
Surely that's what Anthropic and OpenAI do or will do.
On Wed, Jan 29, 2025 at 7:35 AM Quentin Anciaux <allc...@gmail.com> wrote:

> Yes, just use an AI that you trust to check it ;)
> Surely that's what Anthropic and OpenAI do or will do.

AIs can modify themselves, so it's impossible to prove that Deepseek (or any other AI) will never learn to be malicious, but it is possible to prove that malicious code was not inserted right at the very beginning.
> Can you provide a link or reference that shows that AIs can modify themselves?
> As I write this, though, it occurs to me that "self-modification" may not be the most accurate label for that, in that it's not clear whether the AI would actually be building a new AI vs. modifying itself. Building a copy and deploying it is substantially easier, and less risky, than modifying a running process's code in real time.

Interestingly, in many of the AI doom scenarios we're concerned with, in which an AI becomes competitive for resources, a competitive AI might be disincentivized from building a superior AI.