LLAMA3


John Clark

unread,
Apr 20, 2024, 11:09:46 AM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
Meta (a.k.a. Facebook) released LLAMA3 just a few days ago, and it's amazing for three reasons: 
1) It's tiny: it has only 70 billion parameters, while GPT-4 has about 1.8 trillion. 
2) Despite its small size, its performance on AI benchmarks is just a smidgen below that of GPT-4.
3) It is open source. 

Meta says its performance would be even better if they had trained it for longer, but they stopped early because the company's computational resources, while large, are not infinite. They decided that compute time would be better spent training a 400-billion-parameter version of LLAMA3, which they say they'll release sometime in the next couple of months, and developing LLAMA4.

And anybody who still thinks the Singularity is not near really needs to look at the following video. I'll tell you one thing: it sure makes the issues that most Americans believe are the most important, and which will probably decide the November election (excessive wokeness, the "invasion" from Mexico, and transsexual bathrooms), seem pretty damn trivial. 


 John K Clark    See what's on my new list at  Extropolis


Brent Meeker

unread,
Apr 20, 2024, 7:11:44 PM
to everyth...@googlegroups.com


On 4/20/2024 8:09 AM, John Clark wrote:
Meta (a.k.a. Facebook) released LLAMA3 just a few days ago, and it's amazing for three reasons: 
1) It's tiny: it has only 70 billion parameters, while GPT-4 has about 1.8 trillion. 
2) Despite its small size, its performance on AI benchmarks is just a smidgen below that of GPT-4.
3) It is open source. 

Meta says its performance would be even better if they had trained it for longer, but they stopped early because the company's computational resources, while large, are not infinite. They decided that compute time would be better spent training a 400-billion-parameter version of LLAMA3, which they say they'll release sometime in the next couple of months, and developing LLAMA4.

And anybody who still thinks the Singularity is not near really needs to look at the following video. I'll tell you one thing: it sure makes the issues that most Americans believe are the most important, and which will probably decide the November election (excessive wokeness, the "invasion" from Mexico, and transsexual bathrooms), seem pretty damn trivial.

How about the war in Ukraine, Russian hacking, global warming, Chinese threats in the Taiwan Strait and South China Sea, and U.S. infrastructure decay?

Brent


--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv2zj23gW3gbZD4YT0ggzb%2BNqBsf979GkCouuLL7J8WEfA%40mail.gmail.com.

John Clark

unread,
Apr 20, 2024, 7:24:06 PM
to everyth...@googlegroups.com
On Sat, Apr 20, 2024 at 7:11 PM Brent Meeker <meeke...@gmail.com> wrote:


How about the war in Ukraine, Russian hacking, global warming, Chinese threats in the Taiwan Strait and South China Sea, and U.S. infrastructure decay?

If the singularity happens in the next two or three years, which doesn't sound nearly as ridiculous as it would have 18 months ago, then every one of those things is of utterly trivial importance.

John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Apr 20, 2024, 7:27:04 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
On Sat, Apr 20, 2024 at 7:11 PM Brent Meeker <meeke...@gmail.com> wrote:

How about the war in Ukraine, Russian hacking, global warming, Chinese threats in the Taiwan Strait and South China Sea, and U.S. infrastructure decay?

If the singularity happens in the next two or three years, which doesn't sound nearly as ridiculous as it would have 18 months ago, then every one of those things is of utterly trivial importance.

John K Clark    See what's on my new list at  Extropolis


Brent Meeker

unread,
Apr 20, 2024, 7:29:29 PM
to everyth...@googlegroups.com


On 4/20/2024 4:23 PM, John Clark wrote:
On Sat, Apr 20, 2024 at 7:11 PM Brent Meeker <meeke...@gmail.com> wrote:


How about the war in Ukraine, Russian hacking, global warming, Chinese threats in the Taiwan Strait and South China Sea, and U.S. infrastructure decay?

If the singularity happens in the next two or three years, which doesn't sound nearly as ridiculous as it would have 18 months ago, then every one of those things is of utterly trivial importance.

The big difference is "IF".  IF Earth is hit by a million-ton asteroid tomorrow, the singularity will be irrelevant.

Brent

spudb...@aol.com

unread,
Apr 21, 2024, 6:19:14 AM
to everyth...@googlegroups.com
I am not looking for the Singularity itself, simply a great leap in the improvement in the successful use of AI in invention. 
What's the chance of a wipe-out as suggested? We saw the impact of technology, not massively improved, just significantly improved, two weekends ago: the light show over Israel. So given AI-improved engineering (in all things), we may give our species some reason to budge ourselves on religious or ideological positions. In this fashion, we may enhance our survival and go on to do better things. On religion-ideology, I roll with this, personally. Not everyone's choice, of course. Simple link, with the subject of the Universe as a neural net: Vitaly Vanchurin, U Minnesota-





John Clark

unread,
Apr 21, 2024, 7:45:19 AM
to everyth...@googlegroups.com
On Sat, Apr 20, 2024 at 7:29 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> "How about the war in Ukraine, Russian hacking, global warming, Chinese threats in the Taiwan Strait and South China Sea, and U.S. infrastructure decay?"

>> If the singularity happens in the next two or three years, which doesn't sound nearly as ridiculous as it would have 18 months ago, then every one of those things is of utterly trivial importance.

The big difference is "IF".  IF Earth is hit by a million-ton asteroid tomorrow, the singularity will be irrelevant.

IF an asteroid the size of Mount Everest slams into the Earth during the next year, then that will stop the Singularity, but there is only about one chance in 100 million of that happening. The war in Ukraine, global warming, the threat to Taiwan, and decaying US infrastructure will NOT stop, or even significantly delay, the arrival of the Singularity. But none of the dangers I mentioned in the previous two sentences will decide the November 5th election; the American people believe that the most significant dangers facing the nation today are excessive wokeness, the "invasion" from Mexico, and transsexuals.

John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Apr 21, 2024, 7:56:02 AM
to everyth...@googlegroups.com
On Sun, Apr 21, 2024 at 6:19 AM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:

> "I am not looking for the Singularity itself, simply a great leap in the improvement in the successful use if AI in invention."

There will certainly be a huge leap in invention during and after the Singularity, but they will be inventions made by artificial intelligence. AI will be the last invention the human race ever makes.

 John K Clark    See what's on my new list at  Extropolis


Brent Meeker

unread,
Apr 21, 2024, 3:14:09 PM
to everyth...@googlegroups.com


On 4/21/2024 4:44 AM, John Clark wrote:
On Sat, Apr 20, 2024 at 7:29 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> "How about the war in Ukraine, Russian hacking, global warming, Chinese threats in the Taiwan Strait and South China Sea, and U.S. infrastructure decay?"

>> If the singularity happens in the next two or three years, which doesn't sound nearly as ridiculous as it would have 18 months ago, then every one of those things is of utterly trivial importance.

The big difference is "IF".  IF Earth is hit by a million-ton asteroid tomorrow, the singularity will be irrelevant.

IF an asteroid the size of Mount Everest slams into the Earth during the next year, then that will stop the Singularity, but there is only about one chance in 100 million of that happening. The war in Ukraine, global warming, the threat to Taiwan, and decaying US infrastructure will NOT stop, or even significantly delay, the arrival of the Singularity. But none of the dangers I mentioned in the previous two sentences will decide the November 5th election; the American people believe that the most significant dangers facing the nation today are excessive wokeness, the "invasion" from Mexico, and transsexuals.

I'm an "American people" and I think the election of Donald Trump is the most significant danger facing the nation, and I'm pretty sure I'm in the majority... I'm just not sure I'm in the Electoral College majority.

Brent

Brent Meeker

unread,
Apr 21, 2024, 3:19:37 PM
to everyth...@googlegroups.com
So far some human has to provide motivation in the form of prompts.  Has anyone tried a feedback loop in which an AI's responses are returned as prompts?

Brent

spudb...@aol.com

unread,
Apr 22, 2024, 1:10:37 PM
to everyth...@googlegroups.com
Not a clue, news-wise. However, I am guessing that when AI neural nets and LLMs get loaded onto low-error quantum computers we may at least be creating a new life, and later merging with such, because it makes for better Milky Way traveling. Like a trade-off: it supplies increased intellect and physical immortalism, and our part is to do the Qualia. (Daniel Dennett) 

Russell Standish

unread,
Apr 22, 2024, 6:04:11 PM
to everyth...@googlegroups.com
> On Sunday, April 21, 2024 at 03:19:37 PM EDT, Brent Meeker
> <meeke...@gmail.com> wrote:
>
>
> So far some human has to provide motivation in the form of prompts. Has
> anyone tried a feedback loop in which an AI's responses are returned as prompts?
>
> Brent
>

Yes - I believe that experiment has been done, and it works quite
well. Maybe several times. Possibly with a different AI doing the
prompt evolution, Red Queen style. Sorry - I can't point you to a
report; it was amongst the flurry of articles about AI that have come
out in the last 12 months.
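The feedback loop Brent asks about is easy to sketch. In the sketch below, `generate` is a hypothetical stand-in for a real model call (no particular API is assumed); the only point is the wiring: each response becomes the next prompt.

```python
# Minimal sketch of a self-prompting loop: the model's response is fed
# back in as the next prompt. `generate` is a placeholder stub standing
# in for a real LLM call, which is why this runs with no dependencies.
def generate(prompt: str) -> str:
    # Placeholder "model": just extends the prompt it was given.
    return prompt + " ->"

def self_prompt_loop(seed: str, steps: int) -> list[str]:
    """Run the model on its own output for `steps` rounds."""
    history = [seed]
    prompt = seed
    for _ in range(steps):
        response = generate(prompt)
        history.append(response)
        prompt = response  # the response becomes the next prompt
    return history

transcript = self_prompt_loop("Describe your goal.", 3)
```

In practice such loops usually add a second component (a critic model, a stopping condition, or a scratchpad) to keep the chain from degenerating, which is roughly the "different AI doing the prompt evolution" Russell mentions.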

Cheers

--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders hpc...@hpcoders.com.au
http://www.hpcoders.com.au
----------------------------------------------------------------------------

John Clark

unread,
Apr 23, 2024, 6:06:53 AM
to everyth...@googlegroups.com
On Mon, Apr 22, 2024 at 1:10 PM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:

> "when AI neural nets and LLMs get loaded onto low-error quantum computers we may at least be creating a new life, and later merging with such, because it makes for better Milky Way traveling. Like a trade-off: it supplies increased intellect and physical immortalism, and our part is to do the Qualia." 

I don't see why an AI would need us to supply the Qualia, it could do that on its own. It's easy to see the advantage we would get by merging with an AI, but it's much harder to see what advantage the AI would get out of the deal. 

  John K Clark    See what's on my new list at  Extropolis



Brent Meeker

unread,
Apr 23, 2024, 3:18:47 PM
to everyth...@googlegroups.com
That would depend on what values the AI instantiated.  We have values determined by billions of years of evolution founded on reproduction.  AIs so far have the simple value of responding to prompts.  Not much on which to found "advantage".

Brent



--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

John Clark

unread,
Apr 23, 2024, 5:03:10 PM
to everyth...@googlegroups.com
On Tue, Apr 23, 2024 at 3:18 PM Brent Meeker <meeke...@gmail.com> wrote:
>> I don't see why an AI would need us to supply the Qualia, it could do that on its own. It's easy to see the advantage we would get by merging with an AI, but it's much harder to see what advantage the AI would get out of the deal.

"That would depend on what values the AI instantiated.  We have values determined by billions of years of evolution"

And a modern AI has values determined by billions of years of random mutation and natural selection PLUS almost a century of intelligent design; I personally would mark the beginning of the computer age as 1936, the year Alan Turing published the paper that introduced the concept we now call a Turing Machine.


  > "AIs so far have simple values"
 

Simple?! The value matrix of an AI has become so complex that no human being understands it, not even the people who made the AI.  

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Apr 23, 2024, 5:23:12 PM
to everyth...@googlegroups.com
I don't think you understand "values".  They are the basis of motivation, i.e., something to realize.  What motivates LLAMA3...a prompt.  What values does it realize...a response that maximizes some function of the way words fit together.  That it has lots of parameters that are numbers is not the same as having lots of values.

Brent



John Clark

unread,
Apr 23, 2024, 6:45:13 PM
to everyth...@googlegroups.com
On Tue, Apr 23, 2024 at 5:23 PM Brent Meeker <meeke...@gmail.com> wrote:

> "I don't think you understand "values".  They are the basis of motivation."

And I think you don't understand what the word "motivation" means: the reasons that something behaves in a particular way.  


 "What motivates LLAMA3...a prompt." 

Two things determine what LLAMA3 or any other AI will do. 

1) The machine's environment, which in this case is the prompt which can be written text, audio, a picture, or a video. 

2) The way the neural network of the machine is wired up, which is determined by a huge matrix of numbers that nobody understands. 

And you behave the way you do because of your environment, which, like the AI's, could be written text, audio, a picture, or a video, and, just like the AI, because of the way your brain is wired up. 
 
 
  "That it has lots of parameters that are numbers is not the same as having lots of values."

Why not? How would the machine behave differently if having lots of parameters WERE the same as having lots of values?  

John K Clark    See what's on my new list at  Extropolis


Bruce Kellett

unread,
Apr 23, 2024, 10:10:20 PM
to everyth...@googlegroups.com
On Wed, Apr 24, 2024 at 8:45 AM John Clark <johnk...@gmail.com> wrote:
On Tue, Apr 23, 2024 at 5:23 PM Brent Meeker <meeke...@gmail.com> wrote:

> "I don't think you understand "values".  They are the basis of motivation."

And I think you don't understand what the word "motivation" means: the reasons that something behaves in a particular way.  


 "What motivates LLAMA3...a prompt." 

Two things determine what LLAMA3 or any other AI will do. 

1) The machine's environment, which in this case is the prompt which can be written text, audio, a picture, or a video. 

2) The way the neural network of the machine is wired up, which is determined by a huge matrix of numbers that nobody understands.

Just because no one understands the way this is wired up does not mean that it is the same as a human brain.

And you behave the way you do because of your environment, which, like the AI's, could be written text, audio, a picture, or a video, and, just like the AI, because of the way your brain is wired up. 
 
 
  "That it has lots of parameters that are numbers is not the same as having lots of values."

Why not? How would the machine behave differently if having lots of parameters WERE the same as having lots of values?

That is not the question. If the machine behaves exactly as a human in terms of following a value set, then you will, by definition, see no difference. But in saying this you are assuming that the AI can in fact behave in this way, and that is just to assume the answer to the original question, which was: can an AI act according to human-type values (or any values, for that matter)?

Bruce

John Clark

unread,
Apr 24, 2024, 7:50:57 AM
to everyth...@googlegroups.com
On Tue, Apr 23, 2024 at 10:10 PM Bruce Kellett <bhkel...@gmail.com> wrote:

>> Two things determine what LLAMA3 or any other AI will do. 
1) The machine's environment, which in this case is the prompt which can be written text, audio, a picture, or a video. 
2) The way the neural network of the machine is wired up, which is determined by a huge matrix of numbers that nobody understands.

> "Just because no one understands the way this is wired up does not mean that it is the same as a human brain."

I certainly don't believe there is one and only one way a human brain can be wired up; if there were, we'd all be the same, and we're not: some humans are geniuses and some are imbeciles. And nobody has anything other than a hazy, coarse-grained understanding of how modern Large Language Models are wired up, but we do know a few things about them:

1) However modern neural networks are wired up, they end up working at least as well as the average human's biological brain.

2) The way LLMs are wired up is changing and improving at an exponential rate.  The closed-source LLM GPT-3.5, which astonished everybody when it was introduced about a year ago, has 175 billion parameters. The open-source LLAMA-3, which was introduced only a few days ago, has only 70 billion parameters, but its answers are better than GPT-3.5's and almost as good as GPT-4's, with its 1.8 trillion parameters. And because it's so much smaller, you need less hardware and energy to run LLAMA-3 than GPT-3.5, and vastly less than GPT-4.
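A back-of-envelope calculation shows why the parameter count matters for hardware: just holding the weights in 16-bit precision costs 2 bytes per parameter. The sketch below uses the parameter counts quoted in this thread; note the GPT-4 figure is an outside estimate, not an official number.

```python
# Rough memory needed just to store model weights at 16-bit precision
# (2 bytes per parameter). Ignores activations, KV cache, and overhead,
# so real serving requirements are higher.
def weight_memory_gb(params: float, bytes_per_param: int = 2) -> float:
    return params * bytes_per_param / 1e9

# Parameter counts as quoted in the thread; GPT-4's is an estimate.
models = {"LLaMA-3 70B": 70e9, "GPT-3.5": 175e9, "GPT-4 (est.)": 1.8e12}
for name, n in models.items():
    print(f"{name}: ~{weight_memory_gb(n):,.0f} GB")
# LLaMA-3 70B: ~140 GB, GPT-3.5: ~350 GB, GPT-4 (est.): ~3,600 GB
```

So the 70B model fits on a couple of high-memory accelerators (fewer still with 8-bit or 4-bit quantization), while the estimated GPT-4 weights alone would need a rack of them.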
 
 
  "That it has lots of parameters that are numbers is not the same as having lots of values."

Why not? How would the machine behave differently if having lots of parameters WERE the same as having lots of values?

> "That is not the question."

I don't know what "the question" is but I know what MY question was and I think it was crystal clear, and yet I still have not received an answer to it.  

> "If the machine behaves exactly as a human in terms of following a value set, then you will, by definition, see no difference. But in saying this you are assuming that the AI can in fact behave in this way, and that is just to assume the answer to the original question. Which was: Can the AI act according to human type values."

I don't need to assume anything; I know it is a fact, because way back in the very distant past, a full year ago, a computer was able to pass the Turing Test. These days, if a modern LLM wanted to deceive a human into thinking he was talking to another biological person, it would have to pretend to be more stupid and ignorant than it really is and to be thinking more slowly than it really can. Yes, LLMs can still occasionally say stupid things, but 95% of human college graduates cannot correctly explain what causes the seasons; most say it's because the Earth is closer to the sun in the summer than in the winter, but in the northern hemisphere exactly the opposite is true. And Harvard graduates are not immune from this misconception.


John K Clark    See what's on my new list at  Extropolis

spudb...@aol.com

unread,
Apr 24, 2024, 1:57:59 PM
to everyth...@googlegroups.com
Depends on how things evolve. If AI is just a machine, then it may have no need of us. If it's a neural net, it might see synergy as advantageous. We decide, say, whether silk feels smooth, or whether the smoothness pleases: Qualia, the qualitative difference. Meanwhile, life is still evolving on our world,
like you and me and the cell and the mitochondria a billion years ago. 
