to 'Brent Meeker' via Everything List
In a very recent paper, even the authors of GPT state they were surprised that a few simple techniques can dramatically improve their AI's performance, particularly asking it to show the reasoning it used to arrive at the answer it gave. Some have suggested that in addition to improving performance, this would also improve our ability to control an AI, because then we would understand how it thinks and could be sure its goals were aligned with our own. However, as this video critiquing the paper points out, there is some evidence the AI is not really telling us what it's thinking but is instead telling us what it believes we want to hear. Either way, the machine gives much better answers if you demand that it explain its reasoning process. Also, some have suggested that further AI improvement will soon run into a roadblock because GPT is already using nearly all of the available training data; however, this paper gives hints as to why that is probably not the case.
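The technique described above amounts to nothing more than a change in the prompt. A minimal sketch of the idea, assuming a generic text-completion interface (the function name and the exact wording of the instruction are my own illustration, not taken from the paper):

```python
# Hypothetical sketch of "show your reasoning" prompting.
# The wrapper function and instruction text are illustrative,
# not quoted from any particular paper or API.

def with_reasoning(question: str) -> str:
    """Wrap a question in a prompt that asks the model to show
    its reasoning step by step before stating a final answer."""
    return (
        f"{question}\n"
        "Show the reasoning you used, step by step, "
        "before giving your final answer."
    )

# The plain question and the reasoning-eliciting version:
plain = "What is 17 * 24?"
print(plain)
print(with_reasoning(plain))
```

The claim in the paper, as summarized above, is that the second prompt tends to produce markedly better answers than the first, whether or not the stated reasoning reflects what the model is actually doing internally.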
to everyth...@googlegroups.com
Interesting, but it doesn't contradict what I said earlier. Intelligent behavior implies consciousness, but the reverse is not true: a lack of intelligent behavior does not necessarily imply a lack of consciousness. Also, we could never have figured out that a certain output from an MRI machine implies a particular patient was conscious of a particular thing unless we started with the assumption that intelligent behavior implies consciousness. Without that starting assumption, nearly everything we know about consciousness falls apart and we are left only with solipsism. And nobody this side of a loony bin could function if they really believed in solipsism. Not even philosophy professors.
John K Clark
See what's on my new list at Extropolis