On Fri, Dec 5, 2025 at 11:45 PM Brent Meeker <meeke...@gmail.com> wrote:
> These data centers have been sucking up and processing data accumulated over 70yrs or more and condensing it into neural nets. Isn't there some point of diminishing returns in this process?
Companies are making a multi-trillion-dollar bet that there is no point of diminishing returns, and I think that's probably a pretty good bet.
On Sat, Dec 6, 2025 at 7:18 PM Brent Meeker <meeke...@gmail.com> wrote:
> Why? Do you think there's a lot more to be sucked up?
No, but I think there are a lot more ways to think about the facts we already know, and, even more important, I think there are a lot more ways to think about thinking and to figure out ways of learning faster.
People have been saying for at least the last two years that synthetic data doesn't work and that we're running out of real data, so AI improvement is about to hit a ceiling; but that hasn't happened, because high-quality synthetic data can work if it is used correctly. For example, in the process called "AI distillation" a very large AI model supplies synthetic data to a much smaller AI model, asks it a few billion questions about that data, and tells it when its answer is correct and when it is not. After a month or two the small model becomes much more efficient and nearly as capable as the far larger one, sometimes even more so; it achieves this not by thinking more but by thinking smarter. After that the small model is scaled up and given access to much more computing hardware, and then the process is repeated, with it teaching a still smaller model.
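To make that teacher-grades-student loop concrete, here is a minimal sketch in Python with PyTorch. Everything in it is an illustrative assumption of mine (toy classifier models, random inputs standing in for prompts, the temperature and learning rate), not anyone's actual training recipe; it only shows the basic mechanics of a large model grading a small one on soft targets.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Toy classifier; 'hidden' controls how large the model is."""
    def __init__(self, hidden, num_classes=10, in_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )
    def forward(self, x):
        return self.net(x)

teacher = MLP(hidden=4096)   # the "very large" model (assumed already trained)
student = MLP(hidden=128)    # the "much smaller" model being taught
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's answers so the student gets graded feedback

def distillation_step(x):
    # One question-and-grading round: the teacher answers, and the student is
    # nudged toward matching the teacher's (soft) answer distribution.
    with torch.no_grad():
        teacher_logits = teacher(x)          # teacher's answer (the synthetic label)
    student_logits = student(x)              # student's attempt
    loss = F.kl_div(                         # how far the student's answer is from the teacher's
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# "A few billion questions" in miniature: repeatedly show the student inputs,
# let the teacher grade its replies, and update the student.
for step in range(1000):
    x = torch.randn(64, 784)                 # stand-in for real or synthetic prompts
    distillation_step(x)

The point of the soft targets is that the student learns not just the teacher's final answer but how confident the teacher was about the alternatives, which is one reason a small model can recover much of a large model's capability.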
>>> This strikes me as a positive-feedback hallucination amplifier.
>> Then why does it work so well?
> I don't know. Do you know how it avoids amplifying hallucinations?
> Do you even know how well it works?